| column | dtype | values / lengths |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
6,602
closed
Create README.md
08-19-2020 17:56:24
08-19-2020 17:56:24
transformers
6,601
closed
Fix GPT2DoubleHeadsModel to work with model.generate()
`token_type_ids` were not passed through `model_specific_kwargs`, nor were they cut properly when `use_cache`/`past` is used (maybe this should be corrected in a bigger PR covering all models that support caching).
08-19-2020 17:27:25
08-19-2020 17:27:25
I do not know what happened on your PR, any way you can restore or rebase/force-push for the diff to be readable?<|||||>force-pushed<|||||>@julien-c Rebased onto latest commit in master (https://github.com/huggingface/transformers/commit/f72fe1f31aca235c7f675680832cc364efe4088e)<|||||>Don't think the change to `past[-1]` is applicable to all models. We should enforce the use of `return_dict=True` in TF as well to simplify this behaviour<|||||>> Don't think the change to `past[-1]` is applicable to all models. @patrickvonplaten removed this commit.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=h1) Report > Merging [#6601](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26d5475d4b6644528956df3020dbaa436b443706?el=desc) will **increase** coverage by `5.22%`. > The diff coverage is `80.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6601/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6601 +/- ## ========================================== + Coverage 75.91% 81.13% +5.22% ========================================== Files 195 168 -27 Lines 39827 32288 -7539 ========================================== - Hits 30233 26198 -4035 + Misses 9594 6090 -3504 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.66% <80.00%> (-1.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-71.56%)` | :arrow_down: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: | | [src/transformers/tokenization\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `37.03% <0.00%> (-53.13%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `36.50% <0.00%> (-39.33%)` | :arrow_down: | | [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-37.21%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.72%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.29% <0.00%> (-20.48%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | 
`79.21% <0.00%> (-10.22%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | ... and [157 more](https://codecov.io/gh/huggingface/transformers/pull/6601/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=footer). Last update [26d5475...8ffb4c0](https://codecov.io/gh/huggingface/transformers/pull/6601?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hey @LSinev, I think the `token_type_ids` would have to be increased by 1 at each generation step. With the current generation implementing this logic is quite tedious - I think we should put this PR on hold until generation is properly refactored. Will try to get to this within the next 3,4 weeks. Hope that's fine!<|||||> @LSinev I think this implementation is what I need! (haven't tried it) For my purpose, I would have a `context` (`token_type_ids`=0), a `prompt token` (`token_type_ids`=1) followed by prediction (`token_type_ids`=1). With this PR, seems like I can pass in `context` + `prompt token` with `token_type_ids` and the last `token_type_ids` will be reused in the generated, which is what I want. > I think the `token_type_ids` would have to be increased by 1 at each generation step @patrickvonplaten Do you mean `position_ids`? I just got an idea. What if we allow users to pass in a function for `prepare_inputs_for_generation`, so users can easily customize the behavior? 🤔 💥<|||||>> like That would work with `past`, but would break if you set `use_cache=False` because `token_type_ids` length's is not increased. I can see that this feature is required. However, I want to wait until the refactoring is done before adding more functionality to the general `generate()` method. Adding a function to `prepare_input_ids` is a bit tooo hacky for me as well (we would need to add a function arg to `generate()`)<|||||>Was just curious what was the status of this PR? It seems all the checks have passed<|||||>> > like > > That would work with `past`, but would break if you set `use_cache=False` because `token_type_ids` length's is not increased. I can see that this feature is required. However, I want to wait until the refactoring is done before adding more functionality to the general `generate()` method. > > Adding a function to `prepare_input_ids` is a bit tooo hacky for me as well (we would need to add a function arg to `generate()` which I don't really like Sorry to only get back you guys now. So having refactored the `generate()` method, I think we can should add an `if token_type_ids in model_kwargs` here: https://github.com/huggingface/transformers/blob/70708cca1a2f3e01d9d72a3aaa7ab078dfef639e/src/transformers/generation_utils.py#L184 to deal with this case. The way it's implemented now does not really work as explained in the message above.<|||||>Maybe I should add changes to `GPT2ForSequenceClassification` too? Or maybe `GPT2PreTrainedModel` should have widest `prepare_inputs_for_generation` for all inherited models?<|||||>Tests added. 
Not a good showcase (because the model is not finetuned to use `token_type_ids` the way https://github.com/huggingface/transfer-learning-conv-ai does), but at least it tests that `token_type_ids` are used (and expanded if `num_return_sequences` is passed) and affect the output. Not sure whether slow tests are run automatically on PR submission, though.<|||||>Tests are great! It's fine if it's not a good showcase - as long as we make sure the functionality works and will keep working in the future this is great! Great job @LSinev and thanks for bearing with me throughout this PR :-)
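For readers landing on this PR, here is a minimal sketch of the usage discussed in the thread (a transfer-learning-conv-ai-style setup with a context segment and a prompt segment). Whether `token_type_ids` is actually carried through decoding depends on this PR and the later `generate()` refactor; the checkpoint name and segment layout below are only illustrative.

```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

context = tokenizer.encode("The weather today is", return_tensors="pt")
prompt = tokenizer.encode(" sunny, so", return_tensors="pt")
input_ids = torch.cat([context, prompt], dim=-1)

# segment 0 for the context, segment 1 for the prompt; the last value is what
# would be re-used for newly generated tokens
token_type_ids = torch.cat(
    [torch.zeros_like(context), torch.ones_like(prompt)], dim=-1
)

output = model.generate(
    input_ids,
    token_type_ids=token_type_ids,
    max_length=input_ids.shape[-1] + 20,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0]))
```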
transformers
6,600
closed
[wip] Seq2SeqTrainer
cc @sshleifer
08-19-2020 16:25:36
08-19-2020 16:25:36
transformers
6,599
closed
Pegasus: IndexError: index out of range in self
@sshleifer ## Environment info - `transformers` version: 3.0.2 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using: google/pegasus-cnn_dailymail The problem arises when using: ``` import torch from transformers import AutoModelWithLMHead, AutoTokenizer torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' model = AutoModelWithLMHead.from_pretrained("google/pegasus-cnn_dailymail") tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail") src_text = [ '(CNN)James Best, best known for his portrayal of bumbling sheriff Rosco P. Coltrane on TV\'s "The Dukes of Hazzard," died Monday after a brief illness. He was 88. Best died in hospice in Hickory, North Carolina, of complications from pneumonia, said Steve Latshaw, a longtime friend and Hollywood colleague. Although he\'d been a busy actor for decades in theater and in Hollywood, Best didn\'t become famous until 1979, when "The Dukes of Hazzard\'s" cornpone charms began beaming into millions of American homes almost every Friday night. For seven seasons, Best\'s Rosco P. Coltrane chased the moonshine-running Duke boys back and forth across the back roads of fictitious Hazzard County, Georgia, although his "hot pursuit" usually ended with him crashing his patrol car. Although Rosco was slow-witted and corrupt, Best gave him a childlike enthusiasm that got laughs and made him endearing. His character became known for his distinctive "kew-kew-kew" chuckle and for goofy catchphrases such as "cuff \'em and stuff \'em!" upon making an arrest. Among the most popular shows on TV in the early \'80s, "The Dukes of Hazzard" ran until 1985 and spawned TV movies, an animated series and video games. Several of Best\'s "Hazzard" co-stars paid tribute to the late actor on social media. "I laughed and learned more from Jimmie in one hour than from anyone else in a whole year," co-star John Schneider, who played Bo Duke, said on Twitter. "Give Uncle Jesse my love when you see him dear friend." "Jimmy Best was the most constantly creative person I have ever known," said Ben Jones, who played mechanic Cooter on the show, in a Facebook post. "Every minute of his long life was spent acting, writing, producing, painting, teaching, fishing, or involved in another of his life\'s many passions." Born Jewel Guy on July 26, 1926, in Powderly, Kentucky, Best was orphaned at 3 and adopted by Armen and Essa Best, who renamed him James and raised him in rural Indiana. Best served in the Army during World War II before launching his acting career. In the 1950s and 1960s, he accumulated scores of credits, playing a range of colorful supporting characters in such TV shows as "The Twilight Zone," "Bonanza," "The Andy Griffith Show" and "Gunsmoke." He later appeared in a handful of Burt Reynolds\' movies, including "Hooper" and "The End." But Best will always be best known for his "Hazzard" role, which lives on in reruns. "Jimmie was my teacher, mentor, close friend and collaborator for 26 years," Latshaw said. "I directed two of his feature films, including the recent \'Return of the Killer Shrews,\' a sequel he co-wrote and was quite proud of as he had made the first one more than 50 years earlier." People we\'ve lost in 2015 . CNN\'s Stella Chan contributed to this story.' 
] batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) ``` The tasks I am working on is: * abstractive text summarization ## To reproduce 1. Run the script above. The traceback: ``` IndexError Traceback (most recent call last) <ipython-input-1-ac154ffeb574> in <module> 13 ] 14 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device) ---> 15 translated = model.generate(**batch) 16 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 ~/projects/transformers/src/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, **model_specific_kwargs) 394 encoder = self.get_encoder() 395 --> 396 encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask) 397 398 # Expand input ids if num_beams > 1 or num_return_sequences > 1 ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/projects/transformers/src/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, output_attentions, output_hidden_states, return_dict) 328 329 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale --> 330 embed_pos = self.embed_positions(input_ids) 331 x = inputs_embeds + embed_pos 332 x = self.layernorm_embedding(x) ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 ~/projects/transformers/src/transformers/modeling_bart.py in forward(self, input_ids, use_cache) 1337 # starts at 0, ends at 1-seq_len 1338 positions = torch.arange(seq_len, dtype=torch.long, device=self.weight.device) -> 1339 return super().forward(positions) ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input) 122 123 def forward(self, input: Tensor) -> Tensor: --> 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, 126 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1812 # remove once script supports set_grad_enabled 1813 
_no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1815 1816 IndexError: index out of range in self ``` ## Expected behavior IndexError: index out of range in self
08-19-2020 15:40:55
08-19-2020 15:40:55
Found bug, will be fix by EOD.<|||||>This is a coincidence. I found the same issue with pegasus-arxiv and pegasus-pubmed today. Glad this will be fixed<|||||>> > ## Information > Model I am using: google/pegasus-cnn_dailymail > > The problem arises when using: > > ``` > import torch > from transformers import AutoModelWithLMHead, AutoTokenizer > > torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' > > model = AutoModelWithLMHead.from_pretrained("google/pegasus-cnn_dailymail") > tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail") > > src_text = [ > '(CNN)James Best, best known for his portrayal of bumbling sheriff Rosco P. Coltrane on TV\'s "The Dukes of Hazzard," died Monday after a brief illness. He was 88. Best died in hospice in Hickory, North Carolina, of complications from pneumonia, said Steve Latshaw, a longtime friend and Hollywood colleague. Although he\'d been a busy actor for decades in theater and in Hollywood, Best didn\'t become famous until 1979, when "The Dukes of Hazzard\'s" cornpone charms began beaming into millions of American homes almost every Friday night. For seven seasons, Best\'s Rosco P. Coltrane chased the moonshine-running Duke boys back and forth across the back roads of fictitious Hazzard County, Georgia, although his "hot pursuit" usually ended with him crashing his patrol car. Although Rosco was slow-witted and corrupt, Best gave him a childlike enthusiasm that got laughs and made him endearing. His character became known for his distinctive "kew-kew-kew" chuckle and for goofy catchphrases such as "cuff \'em and stuff \'em!" upon making an arrest. Among the most popular shows on TV in the early \'80s, "The Dukes of Hazzard" ran until 1985 and spawned TV movies, an animated series and video games. Several of Best\'s "Hazzard" co-stars paid tribute to the late actor on social media. "I laughed and learned more from Jimmie in one hour than from anyone else in a whole year," co-star John Schneider, who played Bo Duke, said on Twitter. "Give Uncle Jesse my love when you see him dear friend." "Jimmy Best was the most constantly creative person I have ever known," said Ben Jones, who played mechanic Cooter on the show, in a Facebook post. "Every minute of his long life was spent acting, writing, producing, painting, teaching, fishing, or involved in another of his life\'s many passions." Born Jewel Guy on July 26, 1926, in Powderly, Kentucky, Best was orphaned at 3 and adopted by Armen and Essa Best, who renamed him James and raised him in rural Indiana. Best served in the Army during World War II before launching his acting career. In the 1950s and 1960s, he accumulated scores of credits, playing a range of colorful supporting characters in such TV shows as "The Twilight Zone," "Bonanza," "The Andy Griffith Show" and "Gunsmoke." He later appeared in a handful of Burt Reynolds\' movies, including "Hooper" and "The End." But Best will always be best known for his "Hazzard" role, which lives on in reruns. "Jimmie was my teacher, mentor, close friend and collaborator for 26 years," Latshaw said. "I directed two of his feature films, including the recent \'Return of the Killer Shrews,\' a sequel he co-wrote and was quite proud of as he had made the first one more than 50 years earlier." People we\'ve lost in 2015 . CNN\'s Stella Chan contributed to this story.' 
> ] > batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device) > translated = model.generate(**batch) > tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) > ``` > Currently I am overcoming this by passing additional parameter max_length=512 in prepare_seq2seq_batch. This is not what I want. I want to expand the length to atleast 2048 if possible. But I don't see a way to do it<|||||>The bug should be fixed. Please try again. The following max_lengths are recommended: ``` max_input_length = { "xsum": 512, "cnn_dailymail": 1024, "newsroom": 512, "wikihow": 512, "multi_news": 1024, "reddit_tifu": 512, "big_patent": 1024, "arxiv": 1024, "pubmed": 1024, "gigaword": 128, "aeslc": 512, "billsum": 1024, "large": 1024, } ``` @suchig You can probably do longer with the current code by passing `max_position_embeddings=2048`, but no model was finetuned on sequences of that length so it might (a) OOM or (b) perform poorly.<|||||>> The bug should be fixed. Please try again. > > The following max_lengths are recommended: > > ``` > max_input_length = { > "xsum": 512, > "cnn_dailymail": 1024, > "newsroom": 512, > "wikihow": 512, > "multi_news": 1024, > "reddit_tifu": 512, > "big_patent": 1024, > "arxiv": 1024, > "pubmed": 1024, > "gigaword": 128, > "aeslc": 512, > "billsum": 1024, > "large": 1024, > } > ``` > > @suchig You can probably do longer with the current code by passing `max_position_embeddings=2048`, but no model was finetuned on sequences of that length so it might (a) OOM or (b) perform poorly. At this junction, I am not even able to go beyond PegasusForConditionalGeneration.from_pretrained(mname) statement. It was working fine earlier in the morning. It feels as though something has changed in the pretrained model checkpoint this afternoon.<|||||>@suchig please stick to one issue. <|||||>Hi @sshleifer , I want to a train a model for 2048 input length. My idea is to use a distilled teacher model and change the source length to 2048. How can I change the `max_position_embeddings`? I'm using the following command to train the model on CNN-DM dataset. ```python examples/seq2seq/distillation.py --teacher sshleifer/distill-pegasus-cnn-16-4 --data_dir ./data/cnn_dm/ --student_decoder_layers 3 --student_encoder_layers 12 --freeze_embeds --learning_rate=3e-4 --do_train --do_predict --val_check_interval 0.1 --n_val 1000 --eval_beams 2 --length_penalty=0.5 --val_metric rouge2 --max_source_length 2048 --max_target_length=142 --val_max_target_length=142 --test_max_target_length=142 --model_name_or_path IGNORED --alpha_hid=3. --train_batch_size=1 --eval_batch_size=1 --gradient_accumulation_steps=256 --sortish_sampler --num_train_epochs=6 --warmup_steps 500 --output_dir distilpegasus_xsum_12_3 --gpus 1 --tokenizer_name sshleifer/distill-pegasus-cnn-16-4 --num_workers=0 --adafactor```<|||||>That command will be very slow, but you can achieve your goal by passing `max_position_embeddings=2048` [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/distillation.py#L41)
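A minimal sketch of the workaround mentioned in the thread for versions that do not yet contain the fix: truncate the source to the length the checkpoint was trained on (1024 tokens for pegasus-cnn_dailymail, per the recommended `max_input_length` table above). The article text is elided.

```python
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer

model = AutoModelWithLMHead.from_pretrained("google/pegasus-cnn_dailymail")
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-cnn_dailymail")

src_text = ["(CNN)James Best, best known for ..."]  # the long article from the report
# truncate to the length the checkpoint expects instead of feeding the full input
batch = tokenizer.prepare_seq2seq_batch(
    src_text, truncation=True, padding="longest", max_length=1024
)
with torch.no_grad():
    translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```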
transformers
6,598
closed
Create README.md
08-19-2020 15:32:26
08-19-2020 15:32:26
transformers
6,597
closed
tf2 transformers cache dir
What is the directory where the TF2 transformers weights are cached? Thanks
08-19-2020 14:32:12
08-19-2020 14:32:12
It should be `.cache/torch/transformers` I think (even though "torch" is in the name).<|||||>Probably that's the case. Thanks.
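A short sketch of how to check or override the cache location instead of guessing the default (which, around this release, lived under the user's home directory in `.cache/torch/transformers` for both PyTorch and TF2 weights). The model name and override path are only examples.

```python
from transformers import TFBertModel
from transformers.file_utils import TRANSFORMERS_CACHE

print(TRANSFORMERS_CACHE)  # resolved default cache directory

# Override per call ...
model = TFBertModel.from_pretrained("bert-base-uncased", cache_dir="/tmp/hf_cache")
# ... or globally via the environment, before importing transformers:
#   export TRANSFORMERS_CACHE=/my/cache/dir
```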
transformers
6,596
closed
Fix #6575
Make docs clearer as mentioned in #6575
08-19-2020 13:26:15
08-19-2020 13:26:15
transformers
6,595
closed
Typo fix in 04-onnx-export
08-19-2020 12:42:24
08-19-2020 12:42:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=h1) Report > Merging [#6595](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ab42d74850233cff9df87701d257d9b975435f66&el=desc) will **increase** coverage by `1.10%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6595/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6595 +/- ## ========================================== + Coverage 79.42% 80.52% +1.10% ========================================== Files 156 156 Lines 28127 28127 ========================================== + Hits 22339 22650 +311 + Misses 5788 5477 -311 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6595/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.91% <0.00%> (+72.35%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=footer). Last update [ab42d74...a49a870](https://codecov.io/gh/huggingface/transformers/pull/6595?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,594
closed
Add "Leveraging Pretrained Checkpoints for Generation" Seq2Seq models.
This PR adds the models from the following paper: https://arxiv.org/pdf/1907.12461.pdf The paper does a great job of showing how pretrained BERT & RoBERTa models can be leveraged for Seq2Seq tasks and yields good results on many of them. It fits very well with the current implementation of the EncoderDecoder framework. This PR adds code to port all pretrained encoder-decoder models listed at https://tfhub.dev/s?module-type=text-generation&subtype=module,placeholder; the converted checkpoints can be found here: https://huggingface.co/models?search=google%2Froberta and here: https://huggingface.co/models?search=google%2Fbert2 An example of how a model can be used is here: https://huggingface.co/google/roberta2roberta_L-24_bbc Big thanks to @shashiongithub for providing me with the tokenizer files and giving valuable insights on setting the correct generation parameters!
08-19-2020 12:36:56
08-19-2020 12:36:56
If I understand correctly, the BERT model used here is slightly different because: - It doesn't use token type IDs - It's tying its word embedding layer to its LM head - No pooling layer Doesn't that just mean we could use an additional architecture instead of an entire model class? Something like the following, in `modeling_bert.py`: ```py @add_start_docstrings( """Bert Model with a `language modeling` head on top that acts as a decoder in a seq2seq setting.""", BERT_START_DOCSTRING ) class CausalBertModel(BertPreTrainedModel): def __init__(self, config): super().__init__(config) if not config.is_decoder: logger.warning("If you want to use `BertLMHeadModel` as a standalone, add `is_decoder=True.`") self.bert = BertModel(config) self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) self.init_weights() ``` And this module wouldn't accept token type IDs as an input. I don't know what to do regarding the tokenizer though. This ^ approach could probably leverage @julien-c's https://github.com/huggingface/transformers/pull/6995<|||||>Naming ideas: `BertFor{Conditional}Generation`, `BertEncoder`, `BertDecoder`. Reasoning: + `CausalBert` doesn't make sense if the encoder wasn't trained with a causal mask. + I think in class naming it's more important to give someone a sense of how to use something than how that thing was trained, but that's not an opinion I hold strongly. Anyways, your signatures look super clean, easy and consistent! Excited to try these out+happy to help check metrics.<|||||>@LysandreJik, There are a couple of problems with that: 1) I also need a different `BertEmbeddings` or manually set `self.token_type_embeddings` to a zero matrix. Even if `token_type_ids` is set to `None` in Bert, the `self.token_type_embeddings` is always used. This model just does not have the embeddings (and should not have them IMO). I could set the `self.token_type_embedding` matrix just to 0, but then people using this class for training would not realize that a `self.token_type_embedding` matrix is trained which it shouldn't. So, here I think either way, I will need a separete `BertEmbeddings` class. 2) A bigger problem is the config class. Because I need both the new `CausalBertForCausalLM` and `BertLMHeadModel` in the `AUTO_MODELS_FOR_CAUSAL_LM` class (to leverage both models with the `EncoderDecoder` framework), the two models have to have different config classes. I guess we could also create a separate config class and overwrite the inherited config class from `BertPretrainedModel`, but then IMO, it's cleaner to just create a new `PretrainedModelClass` and in this case we can directly create a completely new model class So overall, it seems to me that a separate model class is the cleaner way to go - what do you think? @sshleifer - very much agree here! Think the naming should be different...`BertEncoder` is already taken though. I could go for `BertForGenerationEncoder` and `BertForGenerationDecoder` and `BertForGenerationConfig` - No need for `BertForConditionalGeneration` as the `EncoderDecoderModel will be used for this<|||||>BertForGenerationEncoder and BertForGenerationDecoder and BertForGenerationConfig 👍 I do see lysandre's point though and would be fine with you setting `token_type` matrix to 0 if it's small (which I think it is). <|||||>summarization models seem to function and are uploaded here: https://huggingface.co/models?search=google%2Froberta2roberta<|||||>**UPDATE**: PR is ready for review @sshleifer @LysandreJik @sgugger . 
Would be awesome if you could take a look<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=h1) Report > Merging [#6594](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/15478c1287a4e7b52c01730ffb0718243d153600?el=desc) will **increase** coverage by `2.11%`. > The diff coverage is `75.76%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6594/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6594 +/- ## ========================================== + Coverage 78.37% 80.49% +2.11% ========================================== Files 164 167 +3 Lines 31026 31314 +288 ========================================== + Hits 24318 25207 +889 + Misses 6708 6107 -601 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.23% <ø> (-0.05%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `91.52% <40.00%> (-4.78%)` | :arrow_down: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `88.78% <50.00%> (-3.22%)` | :arrow_down: | | [src/transformers/modeling\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0X2dlbmVyYXRpb24ucHk=) | `69.19% <69.19%> (ø)` | | | [src/transformers/tokenization\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `94.64% <94.64%> (ø)` | | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.33% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.61% <100.00%> (+0.13%)` | :arrow_up: | | [src/transformers/configuration\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnRfZ2VuZXJhdGlvbi5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <100.00%> (-0.26%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <100.00%> (-0.28%)` | :arrow_down: | | ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/6594/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=footer). Last update [15478c1...aa953cb](https://codecov.io/gh/huggingface/transformers/pull/6594?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> Looks great to me! I mostly have annoying nits about the docs, cause I'm an annoying person. Haha, no you are 100% right - sorry for being so sloppy with the docs! I should have learnt it by now ....<|||||>@sshleifer @sgugger - thanks a lot for your suggestions. I went for the name `BertGenerationEncoder` and `BertGenerationDecoder` now. I think it's the best trade-off between short and concise name that is not confusing.<|||||>Have the "share" models been implemented? In the paper, in many tasks they achieve the best results.<|||||>Yes you can find then under google/roberta2roberta<|||||>Thank you. How to tie weights in the code for training own model?<|||||>`tie_encoder_decoder=True` -> The code in this model card should show you how to do it :-) https://huggingface.co/patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16
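A minimal sketch of the weight-sharing ("share") setup referenced in the last comment, assuming the `tie_encoder_decoder` flag exposed by the EncoderDecoder framework around this release; checkpoint names and the example input are illustrative.

```python
from transformers import EncoderDecoderModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# "share" setup: encoder and decoder weights are tied
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "roberta-base", "roberta-base", tie_encoder_decoder=True
)
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("A long article to summarize ...", return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```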
transformers
6,593
closed
[Tests common] Fix flaky test
The test `test_model_outputs_equivalence` fails quite often at the moment because of a problem with `nan - nan`. This should solve the issue. Also added a more explicit error message for when the test fails. The flaky error happens here, for example: https://app.circleci.com/pipelines/github/huggingface/transformers/10798/workflows/44e689b2-f4b3-49be-88b3-a5b214eac6c5/jobs/75173
08-19-2020 12:25:13
08-19-2020 12:25:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=h1) Report > Merging [#6593](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ab42d74850233cff9df87701d257d9b975435f66&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6593/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6593 +/- ## ========================================== + Coverage 79.42% 79.43% +0.01% ========================================== Files 156 156 Lines 28127 28127 ========================================== + Hits 22339 22343 +4 + Misses 5788 5784 -4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6593/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=footer). Last update [ab42d74...d243891](https://codecov.io/gh/huggingface/transformers/pull/6593?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Long term, I would rather fill nan with a sentinel value like -11.27 (my negative birthday, dont forget!) and check equality but I am also fine with the temp fix.<|||||>Ok pinging @LysandreJik here that we should take a look next week. Will note it down as well.
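A minimal sketch of the sentinel-fill idea floated above: replace NaNs in both tensors with the same dummy value before comparing, so `nan - nan` can no longer make the equivalence test flaky. This is illustrative only, not the committed fix.

```python
import torch

def assert_close_ignoring_nan(a: torch.Tensor, b: torch.Tensor, sentinel: float = -11.27, atol: float = 1e-5):
    # Fill NaN positions with the same sentinel in both tensors, then compare
    a_filled = torch.where(torch.isnan(a), torch.full_like(a, sentinel), a)
    b_filled = torch.where(torch.isnan(b), torch.full_like(b, sentinel), b)
    max_diff = (a_filled - b_filled).abs().max().item()
    assert max_diff <= atol, f"Tensors differ by {max_diff} (ignoring NaN positions)"

x = torch.tensor([1.0, float("nan"), 3.0])
y = torch.tensor([1.0, float("nan"), 3.0])
assert_close_ignoring_nan(x, y)
```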
transformers
6,592
closed
Unable to save and load RoBERTa model using tensorflow
I have trained my model with Roberta-base and tested it; it works. But when I try to save the model I get the following error:

```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py in list_dependencies(self, obj)
    126           obj._object_identifier,  # pylint: disable=protected-access
    127           name,
--> 128           extra_dependencies.keys()))
    129       yield base.TrackableReference(name, extra_dependencies[name])
    130     else:

ValueError: Error when exporting object <tensorflow.python.keras.layers.core.Activation object at 0x7fef7d287128> with identifier=_tf_keras_layer. The object has an attribute named regularization_losses, which is reserved. List of all reserved attributes: dict_keys(['regularization_losses', 'variables', 'trainable_variables', 'keras_api'])
```

Unable to solve it. Is there some obvious mistake I am making?

# Code:
```python
path = '/path/model_1'
tf.keras.models.save_model(model_roberta_base, path, overwrite=True, include_optimizer=False, save_format='tf')
new_model = tf.keras.models.load_model(path)
result = predct_func(new_model, "Technology driven by Medical imaging and Artificial Intellignence")
plot_result(result)
```
08-19-2020 10:59:10
08-19-2020 10:59:10
Hey @kiruthiga-A, Could you post a complete code example so that we can reproduce the error? If the complete code is too long, a google colab would be super useful.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I have this same exact issue importing a model from HuggingFace
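One way around the Keras SavedModel error above is to use the library's own serialization (`save_pretrained` / `from_pretrained`) instead of `tf.keras.models.save_model`; this sidesteps rather than fixes the export issue. Model name and path below are placeholders.

```python
from transformers import TFRobertaModel

model = TFRobertaModel.from_pretrained("roberta-base")
model.save_pretrained("/path/model_1")             # writes config.json + tf_model.h5
reloaded = TFRobertaModel.from_pretrained("/path/model_1")
```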
transformers
6,591
closed
tf generation utils: remove unused kwargs
08-19-2020 05:12:41
08-19-2020 05:12:41
transformers
6,590
closed
Delete Unused TFModelTesterMixin attributes
Afaict, neither `test_torchscript` nor `test_pruning` is used. https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py#L76
08-19-2020 05:06:58
08-19-2020 05:06:58
transformers
6,589
closed
[seq2seq] finetune.sh OOMs in fp16 w torch 1.6 on colab
When trying to fine-tune either T5 or BART models for summarization I was encountering OOM repeatedly with the latest code, whereas it used to work fine for me earlier, at least on Google Colab. Checking the startup scripts and latest commits, I saw that optimizations for native PyTorch fp16 support have been added recently. After removing the fp16 parameter from the script it started working as expected. Please check whether this is a real issue or just a matter of a dangling parameter that needs to be removed. Thanks @sshleifer @patil-suraj
08-19-2020 04:07:45
08-19-2020 04:07:45
try passing `--fp16 --fp_16_opt_level=O1` that is a relevant default that has changed. I have also experienced some torch 1.6 issues, so would love to know if that helps. Semi-relatedly, a good practice is to run ``` !pip freeze | grep transformers !pip freeze | grep torch ``` at the top of colab so that when you go back you can know what version you were on.<|||||>Thanks for the quick reply! I tried this and it didn't work though :( Removing the fp16 parameter for now and fine-tuning. Will keep the colab advice in mind :)<|||||>+1 on this. Using `fp16` with level `O1` or `O2` both causes OOM even for batch size 1. Without `fp16` fine-tuning works. Torch 1.6.0, transformers 3.0.2, Linux, V100 GPU.<|||||>This is a torch 1.6 issue. I haven't gotten anything working well with torch 1.6 + fp16. torch 1.5.1 with apex installed works well for me. <|||||>I tried running fp16 training with `amp_backend=apex` and `amp_backend=native` (passing them as additional args) and the latter does much better in terms power consumption, but memory consumption is same for both (wandb GPU graphs). However, both of them OOM during the validation step. May have something to do with beam search since my validation batch size is 1. <img width="1564" alt="Screen Shot 2020-08-31 at 12 47 28" src="https://user-images.githubusercontent.com/1833708/91760533-60c7ae00-eb88-11ea-888b-550e3bb28967.png"> <|||||>Can you try torch 1.5.1 ? <|||||>Succeeds with 1.5.1, and power and temperature are in-line with native.<|||||>However, the process failing during generation for 1.6.0 suggests there's some optimization missing during the generation steps which causes OOM.<|||||>Another thing to note which might be related: Validation (400 samples) takes 5x time for 1 epoch of training (2400 samples). Even if accounting for beam size (4x), it is much slower.<|||||>Interesting! I would definitely be open to a PR here if you have a fix in mind!<|||||>Thanks! I have a couple ideas and will try them out and create a PR if any of them works.<|||||>I think the problem is that the `_generative_step` method calls `_step` in it, causing 2x forward steps within each validation step. Also, `model.generate` steps are inherently slower than an eval forward pass, even with `num_beams=1`, about 30-60x slower. But this is a different problem than the OOM issue on 1.6.0. Maybe should split this up into a different issue?<|||||>The problem is with `model.generate` that causes OOM on PyTorch 1.6. I switched out to using a custom `validation_step` that only uses `_step` and does not make a call to `model.generate`; it succeeds and is fast. The drawback is that I cannot use beam search for the validation step and keep `do_predict` set to `False` to ensure the test step does not execute. All of which are acceptable limitations to me for faster val, val not running into OOM and being able to use native fp16 with PyTorch 1.6.0. I'm happy to create a PR for it if it makes sense to check it in.<|||||>That PR would be interesting. More interesting would be figuring out why generate OOMs in these conditions.<|||||>Definitely the question for why generate OOMs is interesting but one I haven't found an answer for yet. I suggested a workaround in #7004 using the fix I described earlier.<|||||>OK, I'm gunna try to fix the underlying issue today/tomorrow and if I fail, we'll move to your PR. Thanks!<|||||>Does anyone have a snippet that replicates the OOM outside of colab? 
I have no trouble running `examples/seq2seq/test_bash_script.py` on self hosted hardware in torch 1.6. <|||||>The issue wasn't on Colab but on AWS.<|||||>What was your command/hardware?<|||||>Command: `python script.py ...` with a bunch of args (I have a custom wrapper that inherits `SummarizationModule` for initialization and adds extra args. I did not modify train / eval / test in that so should be identical to running `python fine-tune.py` from `finetune.sh`). GPU: V100-SXM2-16GB.<|||||>I can replicate on v100 with cuda 10.1, torch 1.6, python 3.7. The problem is that during the first call to `generate` (during the validation sanity check) the untrained model generates `config.max_length` tokens, causing OOM. Easiest fix is adding `--num_sanity_val_steps=0` to your command. LMK if that works. The linked PR above allows the user to limit how many tokens are generating during validation, which may be independently helpful. <|||||>Hmm, that makes sense. I'll also say that in the screenshots I had attached earlier it occurred at the end of the first epoch during validation, so setting that new flag should help with that. It is a tricky choice between setting a `max_length` for generate steps that is different from the model's expected output. I do prefer using a forward pass' output (my PR #7004) as a substitute for the runtime output when it is with the correct `max_length` instead of a shorter output that fits within the memory at that time. However, this still does not explain the avg gen time being 30-60x time per batch (with equal batch sizes for training and validation). Lastly, the same model does generate without producing an OOM at run-time on similar hardware with the model producing upto `max_length`, which continues to baffle me.<|||||>I don't have any slow down after the validation sanity check in my replication. Maybe I haven't found your bug.<|||||>I don't understand > Lastly, the same model does generate without producing an OOM at run-time on similar hardware with the model producing upto max_length, which continues to baffle me. Do you have a snippet that does not involve finetune.py (just calls `generate`) that OOMs/is way slower?<|||||>> I don't have any slow down after the validation sanity check in my replication. Maybe I haven't found your bug. Or maybe it got resolved between when I tested it and this version. No worries. > Do you have a snippet that does not involve finetune.py (just calls generate) that OOMs/is way slower? This might not matter anymore if the previous is fixed during training since this is specifically at runtime. Regardless, here's an example I have that is much slower. ```python %%timeit with torch.no_grad(): generated_ids = model.generate(tokenized_input["input_ids"].to("cuda"), skip_special_tokens=True, clean_up_tokenization_spaces=False, num_beams=3, top_p=0.9, repetition_penalty=10, decoder_start_token_id=model.config.decoder_start_token_id, max_length=model.config.max_length) ``` which produces: `10.4 s ± 7.89 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)`<|||||>The sequence produced is of length 311. While the output sequence length is long (max possible is 768), 10 seconds is still quite a lot.<|||||>can you send a full working example that I can copy paste and try in different torch versions?<|||||>Sure! I'm using a finetuned model and a custom dataset so changed the below to `bart-large` and removed the lines where a dataset is queried. Everything else is the same. 
```python from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig tokenizer = BartTokenizer.from_pretrained("facebook/bart-large") model = BartForConditionalGeneration.from_pretrained("facebook/bart-large") model = model.to("cuda") model = model.eval() tokenized_input = tokenizer(..., return_tensors="pt", max_length=model.config.max_position_embeddings) with torch.no_grad(): generated_ids = model.generate(tokenized_input["input_ids"].to("cuda"), skip_special_tokens=True, clean_up_tokenization_spaces=False, num_beams=3, top_p=0.9, repetition_penalty=10, decoder_start_token_id=model.config.decoder_start_token_id, max_length=model.config.max_length) ``` I'm running this in a notebook so I can time profile the generate step.<|||||>I've spent a quite some time today focused on trying various combinations of `generate`. The increased delay arises from a `num_beams` count that is large, which leads to the model producing longer outputs, thus compounding the generate time (`num_beams` * `max_length`). In conclusion, it doesn't appear to be a bug but a property of generate being more memory intensive.<|||||>@sshleifer , I have the same issue, and I am using the latest version that includes the PR you provided. I set eval_max_gen_length to 30 and still getting OOM during the sanity check. Do I also have to set num_sanity_val_steps=0 ?<|||||>@vikigenius : I don't think setting `num_sanity_val_steps` to 0 will solve it since it'll only delay what's bound to happen during the validation step later.<|||||>+ `--num_sanity_val_steps=0` fixes it for me. + I only have a problem in that sanity check phase when the model is untrained. Future calls to `generate` (without super high `config.min_length`/`eval_max_gen_length`) don't OOM for me. + `--eval_num_beams=2` may also help save memory. + I'd love to see somebody isolate the OOMing call to generate **outside** of training logic so that I can reproduce.
transformers
6,588
closed
all_hidden_states indentation bug in modeling_bert.py
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @lavanyashukla ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L490 ## Expected behavior https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L490 There is a small bug here, I suppose Line491 and Line491 should be inside loop chunk. <!-- A clear and concise description of what you would expect to happen. -->
08-19-2020 02:59:54
08-19-2020 02:59:54
My bad, I overlooked some code in the file.
transformers
6,587
closed
Fix bart base test
08-19-2020 00:38:21
08-19-2020 00:38:21
spurious
transformers
6,586
closed
wip: Code to add lang tags to marian model cards
08-19-2020 00:34:07
08-19-2020 00:34:07
Let me know if/when you need me to review this!<|||||>Do you think it's a good investment of time to clean up automated marian model cards/conversion? For first 1000 models from `OPUS-MT-Train` repo, the process was basically download and convert everything, if there is a '+' in the model name try to find a matching group name. Automated model card is an f string + Jorg's README. For the next 500 models, the process was: decide whether we don't want the model ("the blacklisting process"): - if it's "short pair" (as in en-es/en-zh) identical to one we have and the score (which we get from https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/models/released-models.txt) is worse than the BLEU of the previous model (which we extract disgustingly from the README.md). - if either side is "jpx" (see slack: "you can ignore jpx" Assuming we DO want the model: - identical state dict conversion to the previous iteration - Automated model card = yaml front matter with language tags (incl. group expansion for multilingual, look at all the tags on this guy: https://huggingface.co/Helsinki-NLP/opus-mt-roa-en) + f string + Jorg's README. - also write `metadata.json` to S3 so that, in the future, blacklisting need not involved extracting floats from markdown. (The `api-inference` issue is a disk space one that @mfuntowicz hopes to resolve tomorrow, curl-requests come back 👍 ) This process works well but is obviously not as simple as it could/should be. The state dict automation that other projects have is simple and done, but the automated model carding needs and blacklisting needs to be at least partially rewritten. I also suspect that if we do this again in a third repo we will need to add more model card/blacklisting logic. ### Proposed next steps - add metadata.json to every opus-mt model in the hub - strip the opus-mt model prefix from every model in the hub. What's the point of it? - some one off deletions of old capital letter group names as discussed in #fixme - get this code clean enough that it runs on future models for the current `Tatoeba-Challenge` repo. Did any of that make sense? What do you think?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=h1) Report > Merging [#6586](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `1.02%`. > The diff coverage is `n/a`. 
[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6586/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6586 +/- ## ========================================== + Coverage 76.58% 77.61% +1.02% ========================================== Files 181 181 Lines 34828 34828 ========================================== + Hits 26674 27032 +358 + Misses 8154 7796 -358 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.46% <0.00%> (-72.60%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.13% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (+0.32%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (+0.66%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (+3.24%)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6586/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=footer). Last update [28cf873...ab93341](https://codecov.io/gh/huggingface/transformers/pull/6586?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,585
closed
Failing bart-base slow test
``` =================================== FAILURES =================================== ____________ BartModelIntegrationTests.test_bart_base_mask_filling _____________ [gw0] linux -- Python 3.7.6 /home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/python self = <tests.test_modeling_bart.BartModelIntegrationTests testMethod=test_bart_base_mask_filling> @slow def test_bart_base_mask_filling(self): pbase = pipeline(task="fill-mask", model="facebook/bart-base") src_text = [" I went to the <mask>."] results = [x["token_str"] for x in pbase(src_text)] expected_results = ["Ġbathroom", "Ġrestroom", "Ġhospital", "Ġkitchen", "Ġcar"] > self.assertListEqual(results, expected_results) E AssertionError: Lists differ: ['Ġlibrary', 'Ġhospital', 'Ġbathroom', 'Ġmovies', 'Ġpolice'] != ['Ġbathroom', 'Ġrestroom', 'Ġhospital', 'Ġkitchen', 'Ġcar'] E E First differing element 0: ``` https://github.com/huggingface/transformers/runs/996031882?check_suite_focus=true
08-18-2020 23:22:42
08-18-2020 23:22:42
transformers
6,584
closed
Failing ONNX Slow Test
``` ___________________ OnnxExportTestCase.test_quantize_pytorch ___________________ [gw0] linux -- Python 3.7.6 /home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/python self = <tests.test_onnx.OnnxExportTestCase testMethod=test_quantize_pytorch> @require_torch @slow def test_quantize_pytorch(self): for model in OnnxExportTestCase.MODEL_TO_TEST: path = self._test_export(model, "pt", 12) quantized_path = quantize(path) # Ensure the actual quantized model is not bigger than the original one > if quantized_path.stat().st_size >= Path(path).stat().st_size: E AttributeError: 'NoneType' object has no attribute 'stat' tests/test_onnx.py:76: AttributeError =============================== warnings summary =============================== ``` https://github.com/huggingface/transformers/runs/996031882?check_suite_focus=true
08-18-2020 23:21:59
08-18-2020 23:21:59
@sshleifer Is it still a failing test? I can have a look<|||||>Still failing! See https://github.com/huggingface/transformers/runs/1024137280?check_suite_focus=true<|||||>On it 💪 <|||||>attempt at: #6716
transformers
6,583
closed
add intro to nlp lib & dataset links to custom datasets tutorial
Being a bit more explicit with the recommendation to use NLP where possible. Adds a link to the hub for each dataset used as well as a brief introduction to the NLP lib at the bottom.
08-18-2020 21:03:19
08-18-2020 21:03:19
transformers
6,582
closed
BatchEncoding interacts poorly with apex.amp
## Environment info - `transformers` version: 3.0.0 - Platform: Linux - Python version: 3.6 - PyTorch version (GPU?): 1.6.0 GPU ### Who can help tokenizers: @mfuntowicz ## Information We are building a multi-task transformer on top of the transformers API, using Nvidia's Apex to get the best speed out of the GPUs. When using Apex with `O2` optimisations (which are necessary to get speedups on our model), it inserts casts around the inputs to `forward`, which in our case is a `BatchEncoding`. This cast blindly calls `BatchEncoding.to` with a `dtype` argument, which in turn is passed through to the values' `to` method because there is only a type hint `str` on the `BatchEncoding.to` input argument, not an `isinstance` check. This causes our tensor of BPE ids to be cast from a `LongTensor` into a `HalfTensor`, and then the embedding layer raises an error because it's not expecting floating point values (also, we rounded away most of our vocabulary terms). ## To reproduce Run amp.initialize with the `O2` opt level on a model whose forward method accepts BatchEncoding as the input. This [line](https://github.com/NVIDIA/apex/blob/4ef930c1c884fdca5f472ab2ce7cb9b505d26c1a/apex/amp/_initialize.py#L46) in apex calls the `to` method blindly on all the input types, which in this case calls `BatchEncoding.to`, which doesn't support the full scope of a pytorch `to` because it's designed for moving things between devices and not for casting them. We're currently guarding it this way: ```python def patched_to(self, device: Union[str, torch.device]): """Send all values to device by calling v.to(device)""" if isinstance(device, str) or isinstance(device, torch.device): self.data = {k: v.to(device=device) for k, v in self.data.items()} return self BatchEncoding.to = patched_to ``` ## Expected behavior The `BatchEncoding.to` method should not pass the cast through (as the documentation indicates). Though it's not clear if this is because the `to` method is underspecified, or because Apex is being quite aggressive (and a little silly) in its blind casting of everything. Either way, if the `BatchEncoding.to` method isn't supposed to be able to cast things (and it doesn't look like it) then it probably shouldn't pass casts down to the tensors it contains.
08-18-2020 20:08:21
08-18-2020 20:08:21
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Any interest in fixing this?<|||||>Hello! Indeed, there doesn't seem to be any type cast. Do you want to open a PR with a fix?<|||||>Do you think the fix we have is appropriate? I'm worried it'll break something elsewhere which is expecting the to method to do casting.<|||||>I think here we could do something better than what is currently implemented. We could do it like it's done in the PyTorch implementation, as it takes all `args` and `kwargs`. Something like this: ```py def to(self, *args, **kwargs) -> "BatchEncoding": """ *Appropriate docstring here* Args: *Appropriate Args here* Returns: :class:`~transformers.BatchEncoding`: The same instance of :class:`~transformers.BatchEncoding` after modification. """ self.data = {k: v.to(*args, **kwargs) for k, v in self.data.items()} return self ``` What do you think?<|||||>Won't that pass the casts down? It looks like it would directly pass everything through, but we wouldn't usually want to cast the `LongTensor` that lives in a `BatchEncoding`.<|||||>Ah, then I misunderstood. Let me try to understand the issue better to see how to work it out: as Apex casts from `LongTensor` to `HalfTensor` when using `BatchEncoding` which raises this issue, doesn't the issue happen if the inputs are `torch.Tensor`s instead of a dict/`BatchEncoding`? These inputs would be converted to `HalfTensor`s as well, resulting in the error, wouldn't it?<|||||>Apex.amp blindly calls a `to()` method on anything that gets passed into a module if it's a type it doesn't know about, passing it arguments that cast a tensor to `HalfTensor`. `BatchEncoding` exposes a `to()` method which is designed for moving things between devices, and not for casting, yet it passes the casts down to the tensors despite the docs saying it's only for moving between devices. We don't want to cast the tensors inside `BatchEncoding` to `HalfTensor` as that truncates and rounds the vocabulary to 65k and every 16th word in the high end, even if we cast the indices back to `LongTensor` at the point they are looked up in the embedding. This is a mostly silent failure as if you blindly insert the cast back in that the embedding error suggests you lose a lot of vocabulary items. The workaround I posted makes `BatchEncoding` ignore any casts that are passed into it. Honestly the issue is that pytorch overloaded the `to` method with two completely different functions, casting and device transfer, and python doesn't have a type system that allows you to select an appropriate method overloading based on the inbound method arguments.<|||||>Thanks for taking the time to write such a detailed explanation. I think your proposal makes complete sense. The way the `to` method is implemented in `BatchEncoding` is only intended for use with devices, not casting, so it won't break the current implementation. On top of what you have proposed, I think logging a warning in the case that it's not a device would be appropriate (if it doesn't get caught in the `if` statement). We would want to warn users that their cast did not go through if they attempt doing something like this, rather than silently doing nothing.<|||||>Sure. I can log a warning in there and make up a PR.
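A minimal sketch of the guarded `to` discussed above — accept only devices and warn (instead of casting) on anything else; the helper name, the exact warning text, and the monkey-patching are illustrative rather than the final implementation.

```python
import logging
import torch
from transformers import BatchEncoding

logger = logging.getLogger(__name__)

def guarded_to(self, device):
    """Move all tensor values to `device`; ignore and warn about dtype casts such as the ones Apex O2 injects."""
    if isinstance(device, (str, torch.device, int)):
        self.data = {k: v.to(device=device) for k, v in self.data.items()}
    else:
        logger.warning("Attempting to cast a BatchEncoding to %s. This is not supported and will be ignored.", device)
    return self

BatchEncoding.to = guarded_to  # monkey-patch shown only to mirror the workaround above
```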
transformers
6,581
closed
Error using pipeline with official docker image (transformers 3.0.2)
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.19.76-linuxkit-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ## To reproduce Steps to reproduce the behavior: 1. Start docker image: `docker run -it huggingface/transformers-pytorch-cpu /bin/bash` 2. Start python 3: `python3` 3. Import transfomers and create the pipeline: ``` >>> from transformers import pipeline >>> nlp = pipeline('question-answering') ``` 4. Define question and context: ``` >>> question='What is the GOC at Zinia-1 ?' >>> context='s. However numerous types of data tend to prove presence of oil i.e. mud gas data show C3 and traces of C4, shows on cuttings and oil was seen by LFA during MDT pump out The middle unit U4 1844-1880.5 MD is oil bearing. No gas is seen from logs at the assumed GOC depth within this unit flat spot at 1849 MD or -1830 TVDSS The lower unit U2 1880.5-1909 mMD is also oil bearing. Pressure data indicate that the 3 units are on-trend although the GWD gas while drilling result suggest barriers between the U2, U4 and U5 in addition to the ultimate seal. The oil sampled in the UM4 has higher viscosity than the oil collected. s. However numerous types of data tend to prove presence of oil i.e. mud gas data show C3 and traces of C4, shows on cuttings and oil was seen by LFA during MDT pump out The middle unit U4 1844-1880.5 MD is oil bearing. No gas is seen from logs at the assumed GOC depth within this unit flat spot at 1849 MD or -1830 TVDSS The lower unit U2 1880.5-1909 mMD is also oil bearing. Pressure data indicate that the 3 units are on-trend although the GWD gas while drilling result suggest barriers between the U2, U4 and U5 in addition to the ultimate seal. The oil sampled in the UM4 has higher viscosity than the oil collected' ``` 5. Call the prediction for question answering: ``` >>> nlp(question=question, context=context, verbose=False) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py", line 1316, in __call__ for s, e, score in zip(starts, ends, scores) File "/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py", line 1316, in <listcomp> for s, e, score in zip(starts, ends, scores) KeyError: 0 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Get the answer like this (this result comes from 2.11.0): `{'score': 0.00025710496321816774, 'start': 300, 'end': 322, 'answer': '1849 MD or -1830 TVDSS'}`
08-18-2020 19:17:51
08-18-2020 19:17:51
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,580
closed
[examples/text-classification] update xnli-mt url
This PR updates the XNLI-MT 1.0 file url, the zip at the old url is corrupted. @sgugger
08-18-2020 16:41:14
08-18-2020 16:41:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=h1) Report > Merging [#6580](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7516bcf27319a2aea9bbe927f8e4d8e501e23c99&el=desc) will **decrease** coverage by `0.57%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6580/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6580 +/- ## ========================================== - Coverage 80.53% 79.95% -0.58% ========================================== Files 156 156 Lines 28130 28130 ========================================== - Hits 22654 22492 -162 - Misses 5476 5638 +162 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `61.95% <0.00%> (-34.79%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6580/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=footer). Last update [7516bcf...3ececb0](https://codecov.io/gh/huggingface/transformers/pull/6580?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,579
closed
[Pegasus Doc] minor typo
Minor typo correction @sshleifer
08-18-2020 16:18:35
08-18-2020 16:18:35
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=h1) Report > Merging [#6579](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7516bcf27319a2aea9bbe927f8e4d8e501e23c99&el=desc) will **decrease** coverage by `0.52%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6579/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6579 +/- ## ========================================== - Coverage 80.53% 80.00% -0.53% ========================================== Files 156 156 Lines 28130 28130 ========================================== - Hits 22654 22506 -148 - Misses 5476 5624 +148 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6579/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=footer). Last update [7516bcf...1e290b6](https://codecov.io/gh/huggingface/transformers/pull/6579?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,578
closed
Latest source has bug in pipelines for short inputs (regardless of padding)
## Environment info - `transformers` version: 3.0.2 - Platform: Darwin-19.2.0-x86_64-i386-64bit - Python version: 3.6.8 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @mfuntowicz @LysandreJik ## Information Model I am using (Bert, XLNet ...): bert-large-uncased-whole-word-masking-finetuned-squad The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Run the latest source (do not get the error from pip) Steps to reproduce the behavior: 1. install master from source 2. run question answering pipeline 3. Use a document shorter than max_model_length (with or without a padding_strategy) Traceback: ``` Traceback (most recent call last): .... output = pipeline(question=question, context=doc, max_seq_len=512, max_answer_len=50) File "/Users/justingrace/Google Drive/healx_root/code/transformers/src/transformers/pipelines.py", line 1673, in __call__ fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} File "/Users/justingrace/Google Drive/healx_root/code/transformers/src/transformers/pipelines.py", line 1673, in <dictcomp> fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} ValueError: expected sequence of length 512 at dim 1 (got 481) ``` ## Expected behavior the module should not have an error for the non-padded tensors stored in the squad feature set. These are expected to not conform to the max_model_length and need to be handled alternatively -- for instance I am not sure these objects are required at all so they may be removed and masks can be used on the padded tensors.
08-18-2020 16:09:55
08-18-2020 16:09:55
Hey @jusjosgra, Can you post a complete code snippet to produce the error. I'm using `master` and running this code does not give me any errors: ```python from transformers import pipeline qa = pipeline("question-answering", model="bert-large-uncased-whole-word-masking-finetuned-squad", tokenizer="bert-large-uncased-whole-word-masking-finetuned-squad") output = qa(question=question, context=context, max_seq_len=512, max_answer_len=50) ``` Do you use batches as an input in your example? <|||||>Using @patrickvonplaten code I am experiencing the same problem here. Is truncation & padding of sqad examples not working correctly? In my case, I'm using batches as an input, therefore I introduce a list of questions (the same repeated n times) and n contexts for the model to search the answer in. <|||||>@alexvaca0 are you running on a pypi version or on the current `master` branch?<|||||>@LysandreJik on the current master branch<|||||>Okay, pinging @mfuntowicz to see if he has any insights.<|||||>I'm starting to suspect this has something to do with certain tokens... Look, this is one of the texts that the pipeline tokenizes to 485 instead of 512: **For a city containing multiple hospitals such as NYC, we defined a 102 proximity metric in this study as population normalized number of hospital beds within Prior to analysis of potential predictors, we considered multiple base regression models. 110 Given the significant spatial correlation in the present case data as evidenced by the 111 Moran Index, I (176) = 0.481, p < 0.0005 [24] , we explored potential regression models 112 both with and without spatial effects. We compared four base models (no predictors): 113 1) a Poisson model with random intercept; 2) a Poisson Besag-York-Mollié (BYM) by Eq 1: where υ i has an intrinsic conditional autoregressive (ICAR) structure [27] . We used the 119 reparameterization of the BYM model proposed by Riebler et al. [28] , known as the 120 BYM2 model and shown in Eq 2: where τ γ is the overall precision hyperparameter, ϕ ∈ [0, 1] is the mixing hyperparameter 122 representing the proportional division of variance between the spatial and nonspatial 123 effects, υ * is the spatial (ICAR) effect with a scaling factor such that Var (υ * ) ≈ 1, and 124 ν * is the nonspatial random-effect with ν * ∼ N (0, 1). Penalized complexity (PC) priors 125 are applied to hyperparameters τ γ and ϕ (compared to log-gamma priors in the random 126 intercept model) [29] . All four models used ZCTA population as the exposure and a 127 log-link function. We selected the model with the lowest Deviance Information Criterion 128 (DIC) [30] , representing the best trade-off between model fit and complexity. Characteristics for the four base models examined, including hyperparameters, are 130 shown in i , scaled nonspatial random-effect; υ * i , scaled spatial random-effect with intrinsic conditional autoregressive structure; τ ν , precision for nonspatial random effect, log-Gamma prior; τ γ , overall precision, penalized complexity (PC) prior; ϕ, mixing parameter, PC prior; n, overdispersion parameter, PC Gamma prior. Multiple regression models were built using a method adjusted from Nikolopoulos et 136 al.** I don't know if it's a coincidence that it has some mathematical signs that the tokenizer might not be processing correctly...<|||||>Facing the same issue when dealing with multiple examples and padding strategy="longest". 
I think it might have to do with the fact that the features are created per example and hence the padding is done per example. The padding code does not get to see all examples to find the longest example. https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L1753-L1764 Same issue with `squad.py` https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L354-L362<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
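As a small sketch of the per-batch padding idea from the comment above: let the tokenizer see every (question, context) pair at once so that `padding="longest"` is computed over the whole batch rather than per example. The `contexts` list is assumed to be defined elsewhere, and the argument names follow the 3.x tokenizer API.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

questions = ["What is the GOC at Zinia-1 ?"] * len(contexts)  # `contexts` assumed defined elsewhere
batch = tokenizer(questions, contexts, padding="longest", truncation="only_second", max_length=512)
# Every row of batch["input_ids"] now has the same length, so torch.tensor(batch["input_ids"]) no longer fails.
```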
transformers
6,577
closed
CamembertForCausalLM
This PR adds `CamembertForCausalLM` by subclassing `RobertaForCausalLM`, so that it can be used with the `EncoderDecoderModel`. @patrickvonplaten
08-18-2020 15:46:48
08-18-2020 15:46:48
@patrickvonplaten didn't add any tests since it subclasses `RobertaForCausalLM` which is already tested.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=h1) Report > Merging [#6577](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6&el=desc) will **increase** coverage by `1.14%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6577/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6577 +/- ## ========================================== + Coverage 79.18% 80.33% +1.14% ========================================== Files 156 156 Lines 28129 28132 +3 ========================================== + Hits 22275 22599 +324 + Misses 5854 5533 -321 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | | | [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <100.00%> (ø)` | | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6577/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=footer). Last update [12d7624...1e68a94](https://codecov.io/gh/huggingface/transformers/pull/6577?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>hi @patrickvonplaten , could you take a look ? Thanks !<|||||>Yes, right. But this intended for the `EncoderDecoder` model class which can use pre-trained encoder for seq2seq tasks. See PR #6411 and #6538 . Also Patrick shared this awesome paper which explores this idea and has shown some good results on seq2seq tasks. https://arxiv.org/pdf/1907.12461.pdf<|||||>Thanks for the explanation! Yes it makes perfect sense to use these models for seq2seq but I still think maybe we should add a note somewhere in the document to explain this?<|||||>Definitely! Will update the docs to explicitly state that this is intended for seq2seq and might not perform well on just causal modelling. Will also link the paper in `EncoderDecoder` doc page, so people will know what to choose.<|||||>Btw, @patil-suraj did you start doing some Roberta2Roberta experiments? Wanted to start running some experiments next week - wanted to check if you already had some interesting results<|||||>@patrickvonplaten Just started one experiment for qg and going to run cnn/dm after that, will let you know the results. Since Roberta has `bos`, what's the `decoder_start_token_id` for `Roberta2Roberta`, is it `bos` or `pad` token ?<|||||>In my experiments with bert2bert, I used the same token for encoder bos and decoder bos, but it's up to you! `bos` makes more sense to me in the case of Roberta2Roberta.<|||||>Okay, Thanks Patrick!
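A minimal sketch of the subclassing approach from this PR, plus how the resulting decoder would typically be used with `EncoderDecoderModel`; treat it as an illustration of the idea rather than the exact merged code.

```python
from transformers import CamembertConfig, EncoderDecoderModel
from transformers.modeling_roberta import RobertaForCausalLM

class CamembertForCausalLM(RobertaForCausalLM):
    """Camembert decoder: reuse the Roberta causal-LM head, only the config class changes."""
    config_class = CamembertConfig

# Typical seq2seq usage: warm-start both sides of an encoder-decoder from camembert-base.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("camembert-base", "camembert-base")
```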
transformers
6,576
closed
Add hyperparameter search to Trainer
This PR introduces an access point to easily use optuna or Ray Tune hyperparameter search with `Trainer`. Here is an example of use on MRPC (requires nlp installed from source on branch cloudpickle): ``` from nlp import load_dataset, load_metric from transformers import AutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, Trainer, TrainingArguments tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') dataset = load_dataset('glue', 'mrpc') metric = load_metric('glue', 'mrpc') def encode(examples): outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True) return outputs encoded_dataset = dataset.map(encode, batched=True) # Won't be necessary when this PR is merged with master since the Trainer will do it automatically encoded_dataset.set_format(columns=['attention_mask', 'input_ids', 'token_type_ids', 'label']) def model_init(): return AutoModelForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) # Evaluate during training and a bit more often than the default to be able to prune bad trials early. # Disabling tqdm is a matter of preference. training_args = TrainingArguments("test", evaluate_during_training=True, eval_steps=500, disable_tqdm=True) trainer = Trainer( args=training_args, data_collator=DataCollatorWithPadding(tokenizer), train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) # Default objective is the sum of all metrics when metrics are provided, so we have to maximize it. trainer.hyperparameter_search(direction="maximize") ``` This will use optuna or Ray Tune depending on which platform is installed (defaulting to optuna if both are there). You can force a backend by passing `backend="ray"` (or `backend="optuna"`).
Here is another example on a simple regression problem (will convert to test in a follow-up PR): ``` import torch import numpy as np from transformers import Trainer, TrainingArguments class RegressionDataset: def __init__(self, a=2, b=3, length=64, seed=42): np.random.seed(seed) self.length = length self.x = np.random.normal(size=(length,)).astype(np.float32) self.y = a * self.x + b + np.random.normal(scale=0.1, size=(length,)).astype(np.float32) def __len__(self): return self.length def __getitem__(self, i): return {'input_ids': self.x[i], 'label': self.y[i]} class RegressionModel(torch.nn.Module): def __init__(self, a=0, b=0): super().__init__() self.a = torch.nn.Parameter(torch.tensor(a).float()) self.b = torch.nn.Parameter(torch.tensor(b).float()) def forward(self, input_ids=None, labels=None): y = input_ids * self.a + self.b if labels is None: return (y,) loss = torch.nn.functional.mse_loss(y, labels) return (loss, y) train_set = RegressionDataset() eval_set = RegressionDataset() model = RegressionModel() training_args = TrainingArguments("test", evaluate_during_training=True, eval_steps=8, disable_tqdm=True) def compute_metrics(eval_pred): predictions, labels = eval_pred true = np.abs(predictions - labels) <= 0.25 return {'accuracy': true.astype(np.float32).mean().item()} trainer = Trainer(args=training_args, train_dataset=train_set, eval_dataset=eval_set, model_init=lambda: RegressionModel(), compute_metrics=compute_metrics) best_trial = trainer.hyperparameter_search(direction="maximize") ``` To customize the hyperparameters searched, pass along an `hp_space` function like this (for optuna): ``` def my_hp_space(trial): return { "learning_rate": trial.suggest_float("learning_rate", 1e-2, 1, log=True), "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 5), "seed": trial.suggest_int("seed", 1, 40), "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [4, 8, 16, 32, 64]), } best_trial = trainer.hyperparameter_search(direction="maximize", hp_space=my_hp_space) ``` To customize the objective, pass along a `compute_objective` function.
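For completeness, a hedged sketch of an equivalent search space when the Ray Tune backend is used; the particular sampling helpers chosen here are an assumption, not a prescribed API.

```python
from ray import tune

def my_ray_hp_space(trial):
    return {
        "learning_rate": tune.loguniform(1e-2, 1),
        "num_train_epochs": tune.choice(range(1, 6)),
        "seed": tune.choice(range(1, 41)),
        "per_device_train_batch_size": tune.choice([4, 8, 16, 32, 64]),
    }

best_trial = trainer.hyperparameter_search(direction="maximize", hp_space=my_ray_hp_space, backend="ray")
```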
08-18-2020 15:30:32
08-18-2020 15:30:32
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=h1) Report > Merging [#6576](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f?el=desc) will **decrease** coverage by `0.92%`. > The diff coverage is `62.88%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6576/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6576 +/- ## ========================================== - Coverage 80.37% 79.45% -0.93% ========================================== Files 156 156 Lines 28058 28380 +322 ========================================== - Hits 22552 22548 -4 - Misses 5506 5832 +326 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `48.91% <0.00%> (-0.18%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/hf\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `67.74% <0.00%> (-1.49%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `95.55% <ø> (ø)` | | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <ø> (ø)` | | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (ø)` | | | ... and [43 more](https://codecov.io/gh/huggingface/transformers/pull/6576/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=footer). 
Last update [d329c9b...44e9543](https://codecov.io/gh/huggingface/transformers/pull/6576?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Neat implem! I was initially wondering whether we should just have a `HyperTrainer` whose only job would be to spawn multiple instances of Trainers, given that the adherence between hp search and everything else in this class is rather small. Thoughts? Others feel free to chime in <|||||>> I was initially wondering whether we should just have a `HyperTrainer` whose only job would be to spawn multiple instances of Trainers We can also take that road, my reasoning was that since the `HyperTrainer` would need to take all the same arguments as `Trainer` and then some, it made sense to just add it to the class. I think it's easier for a user to just have one class as an access point.
transformers
6,575
closed
Tokenizer further tokenizes pretokenized input
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: current master - Platform: MacOS - Python version: 3.7 ### Who can help @mfuntowicz ## Information It seems that passing pretokenized input to the Tokenizer and setting `is_pretokenized=True` doesn't prevent the Tokenizer from further tokenizing the input. This issue already came up in #6046 and the reason for this seems to be #6573 . A workaround is to set `is_pretokenized=False`. What hasn't been reported yet is that this issue also arises with FastTokenizers where we see the same behavior. However, there is no workaround for FastTokenizers (or at least I haven't found one...). Setting `is_pretokenized=False` will raise a ValueError. ## To reproduce ```python from transformers.tokenization_auto import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased") fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased", use_fast=True) text = "Schartau sagte dem Tagesspiegel, dass Fischer ein Idiot ist" pretokenized_text = ['Schar', '##tau', 'sagte', 'dem', 'Tages', '##spiegel', ',', 'dass', 'Fischer', 'ein', 'Id', '##iot', 'ist'] tokenized = tokenizer.encode(text) # returns list of len 15 -> 13 tokens + 2 special tokens pretokenized_tok = tokenizer.encode(pretokenized_text, is_pretokenized=True) # returns list of len 23 -> too large pretokenized_tok_2 = tokenizer.encode(pretokenized_text, is_pretokenized=False) # returns list of len 15 -> 13 tokens + 2 special tokens fast_tokenized = fast_tokenizer.encode(text) # returns list of len 15 -> 13 tokens + 2 special tokens fast_pretokenized_tok = fast_tokenizer.encode(pretokenized_text, is_pretokenized=True) # returns list of len 23 -> too large # fast_pretokenizer_tok2 = fast_tokenizer.encode(pretokenized_text, is_pretokenized=False) # would raise: 'ValueError: TextInputSequence must be str' tokenized_decoded = tokenizer.decode(tokenized) # returns '[CLS] Schartau sagte dem Tagesspiegel, dass Fischer ein Idiot ist [SEP]' pretokenized_tok_decoded = tokenizer.decode(pretokenized_tok) # returns '[CLS] Schar # # tau sagte dem Tages # # spiegel, dass Fischer ein Id # # iot ist [SEP]' pretokenized_tok_2_decoded = tokenizer.decode(pretokenized_tok_2) # returns '[CLS] Schartau sagte dem Tagesspiegel, dass Fischer ein Idiot ist [SEP]' fast_tokenized_decoded = fast_tokenizer.decode(fast_tokenized) # returns '[CLS] Schartau sagte dem Tagesspiegel, dass Fischer ein Idiot ist [SEP]' fast_pretokenized_tok_decoded = fast_tokenizer.decode(fast_pretokenized_tok) # returns '[CLS] Schar # # tau sagte dem Tages # # spiegel, dass Fischer ein Id # # iot ist [SEP]' ```
08-18-2020 14:23:54
08-18-2020 14:23:54
Hi, `is_pretokenized=True` actually means that you are providing a list of **words** as strings (not sub-words) instead of a full sentence or paragraph. The step which is skipped in this case is the *pre*-tokenization step, not the tokenization step. This is useful for NER or token classification, for instance, but I understand that the wording can be confusing; we will try to make it clearer in the docstring and on the doc page ([here](https://huggingface.co/transformers/preprocessing.html#pre-tokenized-inputs)) cc @sgugger and @LysandreJik <|||||>Adding this to my TODO.<|||||>Thanks for making this clear! :)
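A short illustration of the distinction explained above, using the same German checkpoint as in the issue: pass *words* (not sub-word pieces) and the tokenizer still performs the sub-word split itself.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")

words = ["Schartau", "sagte", "dem", "Tagesspiegel", ",", "dass", "Fischer", "ein", "Idiot", "ist"]
encoded = tokenizer.encode(words, is_pretokenized=True)
print(len(encoded))  # 15 -> 13 sub-word tokens + 2 special tokens, same as encoding the full sentence
```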
transformers
6,574
closed
[docs] Fix number of 'ug' occurrences in tokenizer_summary
08-18-2020 13:57:36
08-18-2020 13:57:36
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=h1) Report > Merging [#6574](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfa26d2b412ac3494eef06506004ca857c115ad9&el=desc) will **decrease** coverage by `1.09%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6574/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6574 +/- ## ========================================== - Coverage 79.24% 78.15% -1.10% ========================================== Files 156 156 Lines 28130 28130 ========================================== - Hits 22292 21985 -307 - Misses 5838 6145 +307 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-32.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: | | ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6574/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=footer). Last update [cfa26d2...5e35d96](https://codecov.io/gh/huggingface/transformers/pull/6574?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks!
transformers
6,573
closed
[docs] Fix wrong newline in the middle of a paragraph
08-18-2020 13:55:09
08-18-2020 13:55:09
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=h1) Report > Merging [#6573](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfa26d2b412ac3494eef06506004ca857c115ad9&el=desc) will **increase** coverage by `0.14%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6573/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6573 +/- ## ========================================== + Coverage 79.24% 79.38% +0.14% ========================================== Files 156 156 Lines 28130 28130 ========================================== + Hits 22292 22332 +40 + Misses 5838 5798 -40 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6573/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6573/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+3.52%)` | :arrow_up: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6573/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.16% <0.00%> (+32.50%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=footer). Last update [cfa26d2...0a20942](https://codecov.io/gh/huggingface/transformers/pull/6573?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks!
transformers
6,572
closed
Dataset and DataCollator for BERT Next Sentence Prediction (NSP) task.
I have been working on some pre-training tasks recently and found that transformers does not yet support the NSP task. So I wrote the code for a **Dataset and DataCollator for the BERT NSP task**, and I am now trying to contribute it. This is my first contribution to an open source project. If there is something wrong, please point it out, thank you.
08-18-2020 13:34:03
08-18-2020 13:34:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=h1) Report > Merging [#6572](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfa26d2b412ac3494eef06506004ca857c115ad9?el=desc) will **increase** coverage by `1.01%`. > The diff coverage is `13.15%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6572/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6572 +/- ## ========================================== + Coverage 79.24% 80.26% +1.01% ========================================== Files 156 156 Lines 28130 28243 +113 ========================================== + Hits 22292 22668 +376 + Misses 5838 5575 -263 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `56.32% <10.52%> (-35.52%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `62.80% <13.33%> (-28.11%)` | :arrow_down: | | [src/transformers/data/datasets/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6572/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=footer). Last update [cfa26d2...e924e84](https://codecov.io/gh/huggingface/transformers/pull/6572?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for the reply and the careful review! I understand what you and @JetRunner mean now. The purpose of submitting this script is to provide a tool that can test `DataCollator` and `TextDataset` (I don't know if I am doing this right). I agree with you and think it is not elegant to add a `run_bert_nsp.py` script alone. I will remove this script and support NSP together with the MLM objective later.<|||||>Mmmm, your rebase makes the PR touch way too many files now and is quite unreadable. You should close and reopen it (once ready) so we can see the diff a little bit better.<|||||>ok i will reopen the pr. thanks~<|||||>I’m very sorry, I didn’t know that the pr should be closed before a rebase, which resulted in the PR touching way too many files. I didn't notice you had already implemented `BertForPreTraining` in the modeling_bert file, so I modified `BertForNextSentencePrediction` to support both mlm and nsp objectives. These changes have now been reverted.<|||||>@sgugger `DataCollatorForNextSentencePrediction` and `TextDatasetForNextSentencePrediction` now support mlm and nsp together. <|||||>You need to reopen a new PR, this one will forever have the diff of all those files from the old commits you rebased, so we can't review it.<|||||>got it.
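For context, a rough sketch of how the dataset and collator from this PR could be wired into `Trainer` with `BertForPreTraining`; the class names and constructor arguments below are assumptions based on the PR description, not a confirmed API.

```python
from transformers import (
    BertForPreTraining,
    BertTokenizer,
    DataCollatorForNextSentencePrediction,  # class proposed in this PR (name/signature assumed)
    TextDatasetForNextSentencePrediction,   # class proposed in this PR (name/signature assumed)
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# `corpus.txt` is a hypothetical file with one sentence per line and blank lines between documents.
dataset = TextDatasetForNextSentencePrediction(tokenizer=tokenizer, file_path="corpus.txt", block_size=128)
collator = DataCollatorForNextSentencePrediction(tokenizer=tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nsp_output"),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```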
transformers
6,571
closed
token-classification: update url of GermEval 2014 dataset
Hi, the URL to the GermEval 2014 dataset was recently changed as the dataset is stored on a Google Drive now. This PR updates the URL in main documentation and in the example shell scripts.
08-18-2020 13:09:58
08-18-2020 13:09:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=h1) Report > Merging [#6571](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7719ecd19f63876fdc2d31699977c7ced3643417?el=desc) will **decrease** coverage by `0.28%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6571/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6571 +/- ## ========================================== - Coverage 82.26% 81.98% -0.29% ========================================== Files 172 172 Lines 33077 33077 ========================================== - Hits 27211 27118 -93 - Misses 5866 5959 +93 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.52% <0.00%> (-34.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.31% <0.00%> (-6.73%)` | :arrow_down: | | [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `91.89% <0.00%> (-5.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.19% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.17% <0.00%> (-2.70%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.72%)` | :arrow_down: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6571/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=footer). Last update [5c1d5ea...debfb81](https://codecov.io/gh/huggingface/transformers/pull/6571?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This one is obsolete, correct @stefan-it ? Please confirm and close if so.<|||||>Hi @vblagoje , the PR would also update the readme example 😅 I will resolve the merge conflicts now!<|||||>Resolved 🤓
transformers
6,570
closed
Can not use the convert_graph_to_onnx.py to convert the pytorch model to onnx model
When I convert the PyTorch model to ONNX as follows: ``` import os import torch from pytorch_pretrained_bert import BertModel model = BertModel.from_pretrained('bert-base-uncased') base_path = os.path.dirname(__file__) pt_path = os.path.join(base_path, 'bert.pt') torch.save(model, pt_path) ``` Then I use convert_graph_to_onnx.py to convert the PyTorch model to an ONNX model, as follows: `python src/transformers/convert_graph_to_onnx.py --framework pt --model bert.pt bert.onnx` **Error:** `Error while converting the model: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte` I don't know what caused this. - `transformers` version: 3.0.1 - Platform: pycharm - Python version: 3.7.2 - PyTorch version (GPU?): 1.6.0 - Tensorflow version (GPU?): 2.0.0 - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO Thank you for the help :)
08-18-2020 12:47:18
08-18-2020 12:47:18
same issues here. please help!<|||||>same issue<|||||>Same issue with Camembert torch model python transformers/convert_graph_to_onnx.py --model camembert-base --framework pt /tmp/camembert-base.onnx ====== Converting model to ONNX ====== ONNX opset version set to: 11 Loading pipeline (model: camembert-base, tokenizer: camembert-base) Using framework PyTorch: 1.4.0 Found input input_ids with shape: {0: 'batch', 1: 'sequence'} Found input attention_mask with shape: {0: 'batch', 1: 'sequence'} Found output output_0 with shape: {0: 'batch', 1: 'sequence'} Found output output_1 with shape: {0: 'batch'} Ensuring inputs are in correct order token_type_ids is not present in the generated input list. Generated inputs order: ['input_ids', 'attention_mask'] Error while converting the model: export() got an unexpected keyword argument 'use_external_data_format' <|||||>My issue was due to an older version of pytorch v1.4 where export function arguments are not the same as in version v1.6<|||||>Any Solutions for this issue ?<|||||>Hello @junkgear, we're doing a rework of the ONNX implementation, you can take a look at the proposal here: https://github.com/huggingface/transformers/pull/11786 Let me know if that works for you!<|||||>I dumped the vocab and config file in model folder and was able to convert Gave model folder as input instead of bin file to convert_graph_to_onnx.py script ``` from transformers import (WEIGHTS_NAME, AdamW, get_linear_schedule_with_warmup, RobertaConfig, RobertaModel, RobertaTokenizer) MODEL_CLASSES = {'roberta': (RobertaConfig, RobertaModel, RobertaTokenizer)} config_class, model_class, tokenizer_class = MODEL_CLASSES['roberta'] config = config_class.from_pretrained('microsoft/codebert-base') tokenizer = tokenizer_class.from_pretrained('microsoft/codebert-base') #convert config file to json and save in model folder config.to_json_file('model/config.json') #save the vocabulary tokenizer.save_vocabulary('model/') ```<|||||>> I dumped the vocab and config file in model folder and was able to convert > Gave model folder as input instead of bin file to convert_graph_to_onnx.py script > > ``` > from transformers import (WEIGHTS_NAME, AdamW, get_linear_schedule_with_warmup, > RobertaConfig, RobertaModel, RobertaTokenizer) > > MODEL_CLASSES = {'roberta': (RobertaConfig, RobertaModel, RobertaTokenizer)} > > config_class, model_class, tokenizer_class = MODEL_CLASSES['roberta'] > > config = config_class.from_pretrained('microsoft/codebert-base') > > tokenizer = tokenizer_class.from_pretrained('microsoft/codebert-base') > > #convert config file to json and save in model folder > config.to_json_file('model/config.json') > > #save the vocabulary > tokenizer.save_vocabulary('model/') > ``` This reply saved me an hour or two. If you look into convert_graph_to_onnx.py, --model parameter can only be HuggingFace's model id or a path. So anything like my_model.ckpt won't work. It should be a path containing vocab.txt, config.json, and pytorch_model.bin. Also the tokenizer should be either a tokenizer/model id or a folder containing the tokenizer files. Thanks a lot @junkgear !
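A minimal sketch of that folder-based approach (the `model/` folder name is only illustrative, and this uses the current `transformers` API rather than `pytorch_pretrained_bert`):

```python
from transformers import BertModel, BertTokenizer

# save_pretrained() writes pytorch_model.bin + config.json; the tokenizer adds vocab.txt
model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model.save_pretrained("model/")
tokenizer.save_pretrained("model/")

# then point the conversion script at the folder, not at a raw .pt/.bin file:
# python src/transformers/convert_graph_to_onnx.py --framework pt --model model/ bert.onnx
```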
transformers
6,569
closed
[Model card] Bert2GPT2 EncoderDecoder model
08-18-2020 12:03:26
08-18-2020 12:03:26
transformers
6,568
closed
KeyError with DPR reader in Question Answering pipeline
## Environment info - `transformers` version: 3.0.2 (I have also tried with current `master`) - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic (with Google Colab) - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information Model I am using: DPR reader (i.e. https://huggingface.co/facebook/dpr-reader-single-nq-base) to perform a Question Answering task. ## To reproduce Steps to reproduce the behavior: ```from transformers import pipeline model_name="facebook/dpr-reader-single-nq-base" tokenizer_name="facebook/dpr-reader-single-nq-base" my_pipeline = pipeline("question-answering", model=model_name, tokenizer=tokenizer_name) ``` Error message: ``` KeyError Traceback (most recent call last) <ipython-input-2-b3a2784cffcf> in <module>() 4 model_name="facebook/dpr-reader-single-nq-base" 5 tokenizer_name="facebook/dpr-reader-single-nq-base" ----> 6 my_pipeline = pipeline("question-answering", model=model_name, tokenizer=tokenizer_name) 2 frames /usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 208 209 if "model_type" in config_dict: --> 210 config_class = CONFIG_MAPPING[config_dict["model_type"]] 211 return config_class.from_dict(config_dict, **kwargs) 212 else: KeyError: 'dpr' ```
08-18-2020 11:28:43
08-18-2020 11:28:43
Hi @LysandreJik and @lhoestq , FYI, this error still occurs on Transformers. Do you plan to support DPR in the pipeline API? Thanks<|||||>Thanks for reporting! The reader is indeed not currently configured as a model for question answering, but that's definitely something we'll change. We'll also make it loadable as an open-domain question answering Pipeline object.<|||||>Hi @lhoestq , I got it. Is there an approximate release date? I believe that an open-domain QA Pipeline will be really useful, thank you. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
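Until the pipeline supports DPR, a rough workaround sketch is to call the reader classes directly (this assumes the `DPRReader`/`DPRReaderTokenizer` classes available in recent releases; the question and passage strings are only examples):

```python
from transformers import DPRReader, DPRReaderTokenizer

tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base")

encoded = tokenizer(
    questions=["Who walked on the moon?"],
    titles=["Apollo 11"],
    texts=["Neil Armstrong walked on the moon in 1969."],
    return_tensors="pt",
)
outputs = model(**encoded)
# the first three outputs are start_logits, end_logits and relevance_logits
start_logits, end_logits, relevance_logits = outputs[0], outputs[1], outputs[2]
print(start_logits.shape, end_logits.shape, relevance_logits.shape)
```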
transformers
6,567
closed
XLNet Bug when training with apex 16-bit precision
XLNet training fails when using 16-bit precision because a tensor is created with an explicit `dtype=torch.float` in the `relative_positional_encodings` function.
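A small self-contained illustration of the problem and the kind of fix (the variable names are made up; the real code lives in `relative_positional_encodings`):

```python
import torch

d_model = 8
half_param = torch.randn(4, d_model).half()  # stand-in for an fp16 (apex) model parameter

# Hard-coding dtype=torch.float always produces float32, even when the model runs in half precision:
freq_seq_fp32 = torch.arange(0, d_model, 2.0, dtype=torch.float)

# Deriving the dtype from an existing parameter keeps the new tensor consistent with the model:
freq_seq = torch.arange(0, d_model, 2.0).to(half_param.dtype)

print(freq_seq_fp32.dtype, freq_seq.dtype)  # torch.float32 torch.float16
```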
08-18-2020 11:13:02
08-18-2020 11:13:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=h1) Report > Merging [#6567](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6?el=desc) will **decrease** coverage by `0.77%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6567/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6567 +/- ## ========================================== - Coverage 79.18% 78.41% -0.78% ========================================== Files 156 156 Lines 28129 28129 ========================================== - Hits 22275 22056 -219 - Misses 5854 6073 +219 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `83.30% <100.00%> (ø)` | | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: | | ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6567/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=footer). Last update [12d7624...1c18ddf](https://codecov.io/gh/huggingface/transformers/pull/6567?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Of course, @JetRunner, here it is ![Screenshot from 2020-08-20 11-47-50](https://user-images.githubusercontent.com/22281936/90748799-516c7900-e2db-11ea-993f-7f38642b155f.png) <|||||>@JetRunner, done<|||||>Merging since the CI error looks unrelated.
transformers
6,566
closed
Can not convert the pytorch pretrained bert model to onnx model
When I convert the pretrained PyTorch BERT model to an ONNX model as follows: ``` import os import torch from pytorch_pretrained_bert import BertTokenizer, BertModel model = BertModel.from_pretrained('bert-base-uncased') input_1 = torch.LongTensor(1, 14) input_2 = torch.LongTensor(1, 14) input_3 = torch.LongTensor(1, 14) base_path = os.path.dirname(__file__) onnx_path = os.path.join(base_path, 'bert.onnx') torch.onnx.export(model, (input_1, input_2, input_3), onnx_path) ``` **it raises the error**: `IndexError: index out of range in self` - `transformers` version: 3.0.2 - Platform: pycharm - Python version: 3.8 - PyTorch version (GPU?): 1.6.0 no GPU - onnx version: 1.7.0 - pytorch-pretrained-bert version: 0.6.2 Thank you for the help :)
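The `IndexError` most likely comes from `torch.LongTensor(1, 14)` allocating uninitialized memory, so the ids can fall outside the embedding table. A hedged sketch using real token ids and the current `transformers` API (instead of `pytorch_pretrained_bert`):

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

# real token ids, guaranteed to be inside the vocabulary and position range
inputs = tokenizer("This is a short example sentence.", return_tensors="pt")

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"], inputs["token_type_ids"]),
    "bert.onnx",
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    opset_version=11,
)
```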
08-18-2020 10:16:11
08-18-2020 10:16:11
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,565
closed
New Feature: Best-First Beam Search
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Best-First Beam Search is a faster beam search algorithm for decoding. https://arxiv.org/abs/2007.03909 (By Clara Meister, Tim Vieira, Ryan Cotterell) ## Motivation This technique can be essential for production-purpose inference and of great interest to our users. It would be a good idea to integrate Best-First Beam Search to Hugging Face transformers (for GPT, BART, T5, etc.).
08-18-2020 09:29:30
08-18-2020 09:29:30
The first step would be refactoring our current implementation with the priority queue. I am on it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
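For illustration only, a toy priority-queue formulation of the idea (a simplified sketch, not the paper's exact algorithm; `score_next` and `is_complete` are hypothetical callbacks wrapping the model's next-token log-probabilities and the EOS check):

```python
import heapq

def best_first_beam_search(score_next, is_complete, start, beam_size, max_len):
    # heapq is a min-heap, so cumulative log-probabilities are negated to pop the best hypothesis first
    heap = [(0.0, start)]
    finished = []
    while heap and len(finished) < beam_size:
        neg_logp, seq = heapq.heappop(heap)
        if is_complete(seq) or len(seq) >= max_len:
            finished.append((-neg_logp, seq))
            continue
        # expand only the currently best hypothesis and push its top continuations
        for logp, token in sorted(score_next(seq), reverse=True)[:beam_size]:
            heapq.heappush(heap, (neg_logp - logp, seq + [token]))
    return finished
```

The search stops as soon as `beam_size` hypotheses are complete, which is where the speed-up over standard beam search comes from.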
transformers
6,564
closed
T5 Gradient Checkpointing
# 🚀 Feature request Currently, only Bert supports gradient checkpointing which allow the model to be fine-tuned on GPUs with small memory. It will be great to make T5 also support gradient checkpointing. Code: https://github.com/huggingface/transformers/blob/0735def8e1200ed45a2c33a075bc1595b12ef56a/src/transformers/modeling_bert.py#L461 ## Motivation T5 has very big models with 3B and 11B parameters which make it impossible to be fine-tuned on most GPUs. Gradient checkpointing will allow these huge models to be fine-tuned on GPUs. This will lead to much better results on downstream tasks using on house GPUs without the need to fine-tuned it on TPUs. ## Your contribution If I am not mistaken all what need to be change is the following block: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L752 ``` for i, (layer_module, past_key_value_state) in enumerate(zip(self.block, past_key_value_states)): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if getattr(self.config, "gradient_checkpointing", False): def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, output_attentions) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(layer_module), hidden_states, extended_attention_mask, position_bias, encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, head_mask[i], past_key_value_state, use_cache, output_attentions, ) else: layer_outputs = layer_module( hidden_states, attention_mask=extended_attention_mask, position_bias=position_bias, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, encoder_decoder_position_bias=encoder_decoder_position_bias, head_mask=head_mask[i], past_key_value_state=past_key_value_state, use_cache=use_cache, output_attentions=output_attentions, ) # layer_outputs is a tuple with: # hidden-states, key-value-states, (self-attention weights), (self-attention position bias), (cross-attention weights), (cross-attention position bias) hidden_states, present_key_value_state = layer_outputs[:2] if i == 0: # We share the position biases between the layers - the first layer store them # layer_outputs = hidden-states, key-value-states (self-attention weights), (self-attention position bias), (cross-attention weights), (cross-attention position bias) position_bias = layer_outputs[3 if output_attentions else 2] if self.is_decoder and encoder_hidden_states is not None: encoder_decoder_position_bias = layer_outputs[5 if output_attentions else 3] # append next layer key value states if use_cache: present_key_value_states = present_key_value_states + (present_key_value_state,) if output_attentions: all_attentions = all_attentions + (layer_outputs[2],) # We keep only self-attention weights for now ``` @patrickvonplaten thanks in advance for looking into it.
08-18-2020 09:10:48
08-18-2020 09:10:48
Also pinging @LysandreJik for notification in case this is easy to implement<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Keep it a live :)<|||||>That's an important feature indeed! Will try to tackle this with @LysandreJik @VictorSanh in the new year :-) <|||||>Hi, I'm not too familiar with T5 internals but I crudely tried modifying `modeling_t5.py` as OP suggested, but I ran into some issues with unsupported return values for `torch.utils.checkpoint.checkpoint`, so it seems like there might be something else other than that block that needs changing? ``` File "/data/sauravkadavath/miniconda3/envs/transformers-4.0.0/lib/python3.7/site-packages/torch/utils/checkpoint.py", line 163, in checkpoint return CheckpointFunction.apply(function, preserve, *args) TypeError: CheckpointFunctionBackward.forward: expected Tensor or tuple of Tensor (got NoneType) for return value 1 ```<|||||>Hey @ssss1029, Thanks for playing around with the feature! Would you mind using your code to open a PR? I'll help you get it merged. It is very well possible that we might have to change some more code in T5 to make it work. Ideally, I'd try to base the T5 gradient checkpointing's code as much as possible on how Bart does it.<|||||>Lots of people have been asking for T5 checkpointing, so your PR would be a great contribution if you want to give it a try :-) <|||||>Hi Patrick, unfortunately, I'm pretty new to Huggingface internals and I won't have the bandwidth to implement this.<|||||>@patrickvonplaten @ssss1029 Just a straightforward workaround, but not for PR. I modify the torch.utils.checkpoint file to overcome its limitation. See the code below, all the modificaitons are with comments. Training with t5-base, I obverse the loss is droping down as same as with gradient_checkpointing off and the memory usage drops down as well. But don't have time to do full verification now. ### 1. 
checkpoint.CheckpointFunction ```python class CheckpointFunction(torch.autograd.Function): @staticmethod def forward(ctx, run_function, preserve_rng_state, *args): check_backward_validity(args) ctx.run_function = run_function ctx.preserve_rng_state = preserve_rng_state if preserve_rng_state: ctx.fwd_cpu_state = torch.get_rng_state() ctx.had_cuda_in_fwd = False if torch.cuda._initialized: ctx.had_cuda_in_fwd = True ctx.fwd_gpu_devices, ctx.fwd_gpu_states = get_device_states(*args) ctx.save_for_backward(*args) with torch.no_grad(): outputs = run_function(*args) # return outputs # # Lie to torch we have no None items, to avoid the assert # result = [] for o in outputs: if o is None: o = torch.zeros(0).cuda() result.append(o) return tuple(result) @staticmethod def backward(ctx, *args): if not torch.autograd._is_checkpoint_valid(): raise RuntimeError("Checkpointing is not compatible with .grad(), please use .backward() if possible") inputs = ctx.saved_tensors rng_devices = [] if ctx.preserve_rng_state and ctx.had_cuda_in_fwd: rng_devices = ctx.fwd_gpu_devices with torch.random.fork_rng(devices=rng_devices, enabled=ctx.preserve_rng_state): if ctx.preserve_rng_state: torch.set_rng_state(ctx.fwd_cpu_state) if ctx.had_cuda_in_fwd: set_device_states(ctx.fwd_gpu_devices, ctx.fwd_gpu_states) detached_inputs = detach_variable(inputs) with torch.enable_grad(): outputs = ctx.run_function(*detached_inputs) if isinstance(outputs, torch.Tensor): outputs = (outputs,) # # Skip None items and tensors which requires_grad are False when do backward # backward_outputs = [] backward_args = [] for o, a in zip(outputs, args): if o is not None and o.requires_grad: backward_outputs.append(o) backward_args.append(a) torch.autograd.backward(backward_outputs, backward_args) # torch.autograd.backward(outputs, args) grads = tuple(inp.grad if isinstance(inp, torch.Tensor) else inp for inp in detached_inputs) return (None, None) + grads ``` ### 2. checkpoint.checkpoint() ```python def checkpoint(function, *args, **kwargs): preserve = kwargs.pop('preserve_rng_state', True) if kwargs: raise ValueError("Unexpected keyword arguments: " + ",".join(arg for arg in kwargs)) outputs = CheckpointFunction.apply(function, preserve, *args) # # Resotre None items to result # result = [] for o in outputs: if len(o) == 0: o = None result.append(o) return tuple(result) ``` ### 3. modeling_t5.T5Stack.forward(), just the common way ```python if getattr(self.config, "gradient_checkpointing", False): def create_custom_forward(module): def custom_forward(*inputs): return tuple(module(*inputs, use_cache, output_attentions)) return custom_forward layer_outputs = checkpoint( create_custom_forward(layer_module), hidden_states, extended_attention_mask, position_bias, encoder_hidden_states, encoder_extended_attention_mask, encoder_decoder_position_bias, head_mask[i], past_key_value, ) else: layer_outputs = layer_module( hidden_states, attention_mask=extended_attention_mask, position_bias=position_bias, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, encoder_decoder_position_bias=encoder_decoder_position_bias, head_mask=head_mask[i], past_key_value=past_key_value, use_cache=use_cache, output_attentions=output_attentions, ) ``` <|||||>Hey @xFinal , Your 3rd approach is definitely the one we'd be super happy to integrate into Transformers. Thanks a mille for you contribution already. 
If anyone in the community wants to give it a shot to add @xFinal's 3rd proposed solution to `modeling_t5.py` that would be awesome :-)<|||||>Hi @patrickvonplaten , Glad to hear it's helpful! But I have two worries about the integration: 1. It's not tested by a full train yet. 2. The approch now is to modify the **torch.utils.checkpoint** file which is a part of Pytorch. Maybe not suitable for integration I think. Maybe there will be more elegant way, like adjust t5 itself? <|||||>Hi @xFinal , I tried your solution and got the following error: `TypeError('CheckpointFunctionBackward.forward: expected Tensor or tuple of Tensor (got tuple) for return value 1') > /share/home/dwaydwaydway/t5/src/transformers/src/transformers/models/t5/modified_gradient_ckpt.py(124)checkpoint() 123 --> 124 outputs = CheckpointFunction.apply(function, preserve, *args) 125 ` May I ask which pytorch version did you use?<|||||>@dwaydwaydway, The verison is 1.7.1 Make sure return tuple type in CheckpointFunction.forward()<|||||>This issue has been stale for 1 month.<|||||>Inspired by @xFinal's solution, I implemented [another workaround](https://github.com/veritable-tech/transformers/commit/cdbc6d52ff07fe41c9585fdae017279ea1a4cf6b) that doesn't require modifying the `Checkpoint` class (by returning a dummy Tensor instead of None in `T5Block.forward`). It seems to work, but my tests might not be comprehensive enough.<|||||>Hey @ceshine - do you mind opening a PR for it? :-)<|||||>> Hey @ceshine - do you mind opening a PR for it? :-) Not at all. I'll open a PR after a bit more polishing.<|||||>@ceshine that's great! :)
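For reference, a tiny sketch of that dummy-tensor idea (just the conversion helpers, not the actual patch to `T5Block.forward`):

```python
import torch

def none_to_dummy(outputs):
    # torch.utils.checkpoint cannot handle None entries, so swap them for empty tensors
    return tuple(torch.zeros(0) if o is None else o for o in outputs)

def dummy_to_none(outputs):
    # map the empty placeholders back to None after checkpointing
    return tuple(None if isinstance(o, torch.Tensor) and o.numel() == 0 else o for o in outputs)

print(dummy_to_none(none_to_dummy((torch.ones(2), None))))  # (tensor([1., 1.]), None)
```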
transformers
6,563
closed
Pretokenized text handling mistake in tokenization_utils.py file
Hi, I'm wondering whether it is a mistake that, in the `tokenization_utils.py` file, pretokenized text is further tokenized while raw text is not. The code is as follows: ``` if is_pretokenized: tokens = list(itertools.chain(*(self.tokenize(t, is_pretokenized=True, **kwargs) for t in text))) return self.convert_tokens_to_ids(tokens) else: return self.convert_tokens_to_ids(text) ```
08-18-2020 07:55:36
08-18-2020 07:55:36
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,562
closed
[Urgent] Word embedding initialization documentation and code might mismatch
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> https://huggingface.co/transformers/main_classes/model.html ```resize_token_embeddings``` this documentation says it returns ```torch.nn.Embeddings``` The source code also used ```nn.Embedding``` (https://huggingface.co/transformers/_modules/transformers/modeling_utils.html#PreTrainedModel.resize_token_embeddings). But I checked the resized ```embedding.weight``` that the added embedding weight std() is about 0.01 ~ 0.02 and mean is around 0. While pytorch ```nn.Embedding``` is initialized from **N(0, 1)** (https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html). Is there any gap between documentation and implementation? May I know ```resize_token_embeddings``` initialize weights from ```uniform(-0.05, 0.05)``` or other distributions ?? which might not be N(0, 1). Though the source code really used ```nn.Embedding``` ..... - `transformers` version: 2.5.1 - Platform: linux - Python version: 3.7.4 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') tokenizer.add_tokens('<MEME>') bert_model = BertModel.from_pretrained("bert-base-uncased") bert_model.resize_token_embeddings(len(tokenizer)) print(bert_model.embeddings.word_embeddings.weight[-1].std()) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
08-18-2020 07:23:48
08-18-2020 07:23:48
the new embeddings are initialized with N(0, 0.02) by default in `_get_resized_embeddings` https://github.com/huggingface/transformers/blob/9c2b2db2cdf0af968aae58d6075b6654224fb760/src/transformers/modeling_utils.py#L650-L651 calling `_init_weights` https://github.com/huggingface/transformers/blob/9c2b2db2cdf0af968aae58d6075b6654224fb760/src/transformers/modeling_bert.py#L592-L597<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
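A quick way to check this in practice (the printed values assume the default `initializer_range` of 0.02):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens("<MEME>")

model = BertModel.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))

print(model.config.initializer_range)                            # 0.02 by default
print(model.embeddings.word_embeddings.weight[-1].std().item())  # roughly 0.02, not 1.0
```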
transformers
6,561
closed
Create my own language model
I'm new to NLP and I want to give BERT a try. I have a Wikipedia corpus (in Indonesian, of course, and in .txt format) and want to train it with BERT multilingual cased. For further use, I expect that BERT can "adapt" well to the Indonesian language and can perform a specific task, namely text similarity, or if possible automated scoring based on the two texts given. Can I use [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) to fine-tune and create my own language model? If it is possible, what are the exact steps to achieve this? Thank you in advance.
08-18-2020 07:23:08
08-18-2020 07:23:08
For open-ended questions like this you should try https://discuss.huggingface.co Did you read https://huggingface.co/blog/how-to-train?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,560
closed
Huggingface create_optimizer method not working
## Environment info - `transformers` version: 3.0.2 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.6.6 - PyTorch version (GPU?): 1.5.0+cpu (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sgugger @jplu ## Information Model I am using (Bert, XLNet ...): Roberta The problem arises when using: * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run this code: ``` import tensorflow as tf from transformers import RobertaConfig, TFRobertaForMaskedLM, create_optimizer config = RobertaConfig() optimizer,lr = create_optimizer(1e-4,1000000,10000,0.1,1e-6,0.01) training_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model = TFRobertaForMaskedLM(config) model.compile(optimizer=optimizer, loss=training_loss) input = tf.random.uniform(shape=[1,25], maxval=100, dtype=tf.int32) hist = model.fit(input, input, epochs=1, steps_per_epoch=1,verbose=0) ``` 2. I am getting an error: > TypeError: apply_gradients() got an unexpected keyword argument 'experimental_aggregate_gradients' ## Expected behavior optimizer should be created
08-18-2020 07:14:52
08-18-2020 07:14:52
It seems that keras is passing `experimental_aggregate_gradients` to `apply_gradients`, but the *transformers* TF2 optimizer does not have this argument (see https://github.com/huggingface/transformers/blob/master/src/transformers/optimization_tf.py#L224). One workaround right now is to set `optimizer._HAS_AGGREGATE_GRAD = False`, which prevents keras from passing this argument.<|||||>Thanks for the analysis @volker42maru. @jplu when you're back from vacation, we should fix this optimizer to accept this argument.<|||||>Hello! I was aware of this, and it was on purpose, to keep the trainer compliant with all the TF 2.X versions. Now that the trainer requires v2.2 at minimum, I will modify the method accordingly. Thanks @volker42maru for raising this and reminding me that I had to update it.<|||||>The PR #6717 should fix the problem.
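Until that fix lands, a sketch of the workaround applied to the snippet from the issue (only the `_HAS_AGGREGATE_GRAD` line is new):

```python
import tensorflow as tf
from transformers import RobertaConfig, TFRobertaForMaskedLM, create_optimizer

config = RobertaConfig()
optimizer, lr_schedule = create_optimizer(1e-4, 1000000, 10000, 0.1, 1e-6, 0.01)
# workaround: stop Keras from passing experimental_aggregate_gradients to this optimizer
optimizer._HAS_AGGREGATE_GRAD = False

model = TFRobertaForMaskedLM(config)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)

batch = tf.random.uniform(shape=[1, 25], maxval=100, dtype=tf.int32)
model.fit(batch, batch, epochs=1, steps_per_epoch=1, verbose=0)
```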
transformers
6,559
closed
Finetuning GPT2 produces IndexError: index out of range in self error
## System Info Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Tokenizers: 0.8.1rc1 Python: 3.7.6 Pretrained Model: GPT2 Pretrained Tokenizer: GPT2 ## Question I am finetuning the pretrained GPT2 model on my dataset, using a custom defined loss function. I am getting this error below, but don't really have an idea as to what the issue might be. Here is the error: ```python Traceback (most recent call last): File "run_finetune_gpt2.py", line 143, in <module> main() File "run_finetune_gpt2.py", line 130, in main trainer.train() File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/data/ric-2020/textgen/finetune_gpt2.py", line 90, in forward top_p=0.95 File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/generation_utils.py", line 483, in generate model_specific_kwargs=model_specific_kwargs File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/generation_utils.py", line 524, in _generate_no_beam_search outputs = self.generate_text_while_finetuning(**model_inputs) File "/data/ric-2020/textgen/finetune_gpt2.py", line 44, in generate_text_while_finetuning output_hidden_states=output_hidden_states, File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/anaconda3/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 470, in forward position_embeds = self.wpe(position_ids) File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1724, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self ``` I save many of the inputs that go into the training process: ```python TRAINING ARGS output_dir: models/textgen/out overwrite_output_dir: False do_train: True do_eval: False do_predict: False evaluate_during_training: False per_device_train_batch_size: 8 per_device_eval_batch_size: 8 per_gpu_train_batch_size: None per_gpu_eval_batch_size: None gradient_accumulation_steps: 1 learning_rate: 5e-05 weight_decay: 0.0 adam_epsilon: 1e-08 max_grad_norm: 1.0 num_train_epochs: 3.0 max_steps: -1 warmup_steps: 0 logging_dir: models/textgen/logs logging_first_step: False logging_steps: 500 save_steps: 500 save_total_limit: None no_cuda: False seed: 42 fp16: False fp16_opt_level: O1 local_rank: -1 tpu_num_cores: None tpu_metrics_debug: False debug: False dataloader_drop_last: False eval_steps: 1000 past_index: -1 --------------------------------------------- MODEL ARGS activation_function: gelu_new architectures: ['GPT2LMHeadModel'] attn_pdrop: 0.1 bos_token_id: 50256 embd_pdrop: 0.1 eos_token_id: 50256 initializer_range: 0.02 layer_norm_epsilon: 1e-05 
model_type: gpt2 n_ctx: 1024 n_embd: 768 n_head: 12 n_layer: 12 n_positions: 1024 resid_pdrop: 0.1 summary_activation: None summary_first_dropout: 0.1 summary_proj_to_labels: True summary_type: cls_index summary_use_proj: True task_specific_params: {'text-generation': {'do_sample': True, 'max_length': 50}} vocab_size: 50257 --------------------------------------------- TRAINER ARGS args: TrainingArguments( output_dir='models/textgen/out', overwrite_output_dir=False, do_train='True', do_eval=False, do_predict=False, evaluate_during_training=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='models/textgen/logs', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1) data_collator: <function sd_data_collator at 0x7ffaba8f8e18> train_dataset: <custom_dataset.SDAbstractsDataset object at 0x7ffa18c8c400> eval_dataset: None compute_metrics: None prediction_loss_only: False optimizers: None tb_writer: <torch.utils.tensorboard.writer.SummaryWriter object at 0x7ff9f79e45c0> ``` The interesting thing is that the dataset that generates this error consists of 5426 examples. However, if I use another dataset that has 1024 examples, the training completes. So I'm not really sure what's going on. I'm happy to provide any other information. Thanks in advance for the help!
08-18-2020 06:38:47
08-18-2020 06:38:47
looks similar to #6192<|||||>Hey @aclifton314, Could you post a complete code example here so that we can reproduce the error?<|||||>@patil-suraj thanks for your reply. I checked the vocab and input embed size and they are the same: ```python model.transformer.wte.weight.shape[0] = 50257 len(model.tokenizer) = 50257 ``` The OP in the issue you linked to figured out that, for him, the issue was coming from the cross entropy loss: > I figured it out, it was due to the change in ignore_index=-100 instead of -1 in the cross entropy loss which was causing the issue. I'll close this. For me, I calculate a loss with respect to an n-grams model I made so I'm not entirely sure that is the source of the problem. What's more, doing a test on the training with a dataset that has 1024 examples completes, while using a dataset with 5426 examples produces the `IndexError: index out of range in self` error.<|||||>@patrickvonplaten sure, let me see what I can put together.<|||||>@patrickvonplaten I'm having problems coming up with some sample code that reproduces the problem exactly. Since I am generating text during the training process, my hunch is that I generate a sequence that is longer than what GPT2 allows. It looks like another user posted some code that might serve as a sample to reproduce the error here: https://github.com/huggingface/transformers/issues/6599<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
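If the out-of-range index really comes from generating past GPT-2's 1024 position embeddings, a hedged sketch of capping the generation length would look like this (the prompt and the 2000 target length are only examples):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Some prompt text", return_tensors="pt")
# GPT-2 only has config.n_positions (1024) position embeddings, so never ask for more
max_len = min(2000, model.config.n_positions)
output_ids = model.generate(input_ids, max_length=max_len, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```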
transformers
6,558
closed
Can we resize embedding with embedding weighted initialized differently??
# ❓ Questions & Help When we add new tokens, this method automatically adds embedding using torch nn.Embedding. https://huggingface.co/transformers/_modules/transformers/modeling_utils.html#PreTrainedModel.resize_token_embeddings The documentation says the resized embeddings are nn. Embedding, which said they by default initialize weights from N(0, 1) (https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html). But I have checked that the resized embedding weights are almost N(0, 0.01) or N(0, 0.02)? Can I check **the true distribution of resized embedding weights** ? ## If I want the embedding weights initialized differently, how can I achieve that efficiently? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
08-18-2020 06:33:08
08-18-2020 06:33:08
One way is to use `config.initializer_range` #6562<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
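One efficient option, sketched here for BERT (the N(0, 1) choice is only an example), is to resize first and then overwrite just the newly appended rows:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
num_added = tokenizer.add_tokens(["<MEME>"])

model = BertModel.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))

# re-initialize only the freshly added rows with a custom distribution, e.g. N(0, 1)
with torch.no_grad():
    model.embeddings.word_embeddings.weight[-num_added:].normal_(mean=0.0, std=1.0)
```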
transformers
6,557
closed
Create README.md
Good morning, @julien-c ! Have a nice day! =)
08-18-2020 06:18:53
08-18-2020 06:18:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=h1) Report > Merging [#6557](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6&el=desc) will **increase** coverage by `1.14%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6557/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6557 +/- ## ========================================== + Coverage 79.18% 80.33% +1.14% ========================================== Files 156 156 Lines 28129 28129 ========================================== + Hits 22275 22597 +322 + Misses 5854 5532 -322 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+3.79%)` | :arrow_up: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6557/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=footer). Last update [12d7624...ce57708](https://codecov.io/gh/huggingface/transformers/pull/6557?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Cool dataset from AllenAI
transformers
6,556
closed
Create README.md
08-18-2020 06:10:18
08-18-2020 06:10:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=h1) Report > Merging [#6556](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6&el=desc) will **decrease** coverage by `0.77%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6556/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6556 +/- ## ========================================== - Coverage 79.18% 78.41% -0.78% ========================================== Files 156 156 Lines 28129 28129 ========================================== - Hits 22275 22057 -218 - Misses 5854 6072 +218 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.11% <0.00%> (-0.84%)` | :arrow_down: | | ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6556/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=footer). Last update [12d7624...6a0e0c9](https://codecov.io/gh/huggingface/transformers/pull/6556?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,555
closed
Small typo fixes for model card: electra-base-german-uncased
08-18-2020 05:45:49
08-18-2020 05:45:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=h1) Report > Merging [#6555](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12d7624199e727f37bef7f53d527df7fabdb1fd6&el=desc) will **increase** coverage by `0.81%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6555/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6555 +/- ## ========================================== + Coverage 79.18% 79.99% +0.81% ========================================== Files 156 156 Lines 28129 28129 ========================================== + Hits 22275 22503 +228 + Misses 5854 5626 -228 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+3.79%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.22% <0.00%> (+9.72%)` | :arrow_up: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.16% <0.00%> (+32.50%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6555/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=footer). Last update [12d7624...b2a7699](https://codecov.io/gh/huggingface/transformers/pull/6555?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,554
closed
The squad processor's multi-threading crashes the script / causes large models to reload every call
## Environment info - `transformers` version: 3.0.2 - Platform: Windows-10-10.0.17763-SP0 - Python version: 3.6.8 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: NA ### Who can help @mfuntowicz ## Information When using the `question-answering` pipeline, my script is run multiple times due to the thread pool within `squad_convert_examples_to_features`. In combination with other large models that take minutes to load, this causes them to reload every time the pipeline is called - or error outright if the code is not encapsulated. I've temporarily patched the `transformers\data\processors\squad.py` file on my end, forcing it to run in the current thread rather than multi-threading: ```python # with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p: # annotate_ = partial( # squad_convert_example_to_features, # max_seq_length=max_seq_length, # doc_stride=doc_stride, # max_query_length=max_query_length, # is_training=is_training, # ) # features = list( # tqdm( # p.imap(annotate_, examples, chunksize=32), # total=len(examples), # desc="convert squad examples to features", # disable=not tqdm_enabled, # ) # ) squad_convert_example_to_features_init(tokenizer) for example in examples: features.append(squad_convert_example_to_features( example, max_seq_length=max_seq_length, doc_stride=doc_stride, max_query_length=max_query_length, is_training=is_training )) ``` I'm wondering if there's a better solution here, though. Is this a bug? Or maybe an option could added in to allow single-threaded processing of the `examples`? I may be doing something wrong on my end too; not sure. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. For a simplified example, run the following python script from terminal: ```python #!/usr/bin/env python3 from transformers import pipeline print('The script has been imported.') nlp = pipeline('question-answering', framework='pt') print(nlp(question='Who walked on the moon?', context='Niel Armstrong walked on the moon.')) ``` ``` (environment) C:\Users\Dennis\Desktop\New folder>python runner.py The script has been imported. The script has been imported. 
Traceback (most recent call last): File "<string>", line 1, in <module> File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 105, in spawn_main Traceback (most recent call last): exitcode = _main(fd) File "runner.py", line 6, in <module> File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 114, in _main print(nlp(question='Who walked on the moon?', context='Niel Armstrong walked on the moon.')) prepare(preparation_data) File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\pipelines.py", line 1264, in __call__ File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 225, in prepare _fixup_main_from_path(data['init_main_from_path']) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path for example in examples File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\pipelines.py", line 1264, in <listcomp> run_name="__mp_main__") File "c:\users\dennis\appdata\local\programs\python\python36\lib\runpy.py", line 263, in run_path for example in examples pkg_name=pkg_name, script_name=fname) File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\data\processors\squad.py", line 325, in squad_convert_examples_to_features File "c:\users\dennis\appdata\local\programs\python\python36\lib\runpy.py", line 96, in _run_module_code with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p: mod_name, mod_spec, pkg_name, script_name) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\context.py", line 119, in Pool File "c:\users\dennis\appdata\local\programs\python\python36\lib\runpy.py", line 85, in _run_code context=self.get_context()) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\pool.py", line 174, in __init__ exec(code, run_globals) File "C:\Users\Dennis\Desktop\New folder\runner.py", line 6, in <module> self._repopulate_pool() print(nlp(question='Who walked on the moon?', context='Niel Armstrong walked on the moon.')) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\pool.py", line 239, in _repopulate_pool File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\pipelines.py", line 1264, in __call__ w.start() File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\context.py", line 322, in _Popen for example in examples File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\pipelines.py", line 1264, in <listcomp> return Popen(process_obj) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__ reduction.dump(process_obj, to_child) for example in examples File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\reduction.py", line 60, in dump File "C:\Users\Dennis\Desktop\New folder\environment\lib\site-packages\transformers\data\processors\squad.py", line 325, in squad_convert_examples_to_features ForkingPickler(file, protocol).dump(obj) with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p: BrokenPipeError: [Errno 32] Broken pipe 
File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\context.py", line 119, in Pool context=self.get_context()) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\pool.py", line 174, in __init__ self._repopulate_pool() File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\pool.py", line 239, in _repopulate_pool w.start() File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__ prep_data = spawn.get_preparation_data(process_obj._name) File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 143, in get_preparation_data _check_not_importing_main() File "c:\users\dennis\appdata\local\programs\python\python36\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. ``` ## Expected behavior The script doesn't crash / reload the script every pipeline call.
08-18-2020 04:17:03
08-18-2020 04:17:03
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,553
closed
fix incorrect codecov reports
As discussed at https://github.com/huggingface/transformers/issues/6317, codecov currently sends an invalid report when it fails to find a code coverage report for the base it checks against. When this happens it goes looking for the nearest hash with a report, which often leads to a report that is not representative of the true impact of the proposed PR. This PR fixes it by adding: `require_base: yes` # don't report if there is no base coverage report And then: - let's add this for clarity, since it is supposedly already the default: `require_head: yes` # don't report if there is no head coverage report - and since there is perhaps no point reporting on doc changes, as they don't make any difference to the coverage and the 0% change comment just generates noise: `require_changes: true` # only comment if there was a change in coverage These options are documented here: https://docs.codecov.io/docs/codecovyml-reference#comment
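Grouped together (under the `comment:` section of `codecov.yml`, per the codecov reference linked above), the proposed settings would look roughly like this; treat the exact placement as an assumption to be checked against that reference:

```yaml
comment:
  require_base: yes       # don't report if there is no base coverage report
  require_head: yes       # don't report if there is no head coverage report
  require_changes: true   # only comment if there was a change in coverage
```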
08-18-2020 03:23:34
08-18-2020 03:23:34
Great, thanks for diving into it @stas00!<|||||>Alas, it solved only part of the problem - the main issue is still there https://github.com/huggingface/transformers/pull/6553 :(
transformers
6,552
closed
Add model card for facebook/mbart-large-en-ro
Suggested content: ___ tags: - translation language: en ___ link to fairseq readme link to docs.
08-18-2020 01:33:34
08-18-2020 01:33:34
``` --- language: - en - ro --- ``` no?<|||||>I didn't know you can have two! Will inference api still populate the text box with "My name is wolfgang"?<|||||>Yes it will use the first one to populate the widget if there are several<|||||>Done: new config/moon-landing bug ![image](https://user-images.githubusercontent.com/6045025/90800838-2c7c0400-e2e3-11ea-8d4d-40e0aeac0850.png)
transformers
6,551
closed
TFTrainer Example
Is there an example that uses TFTrainer to fine-tune a model with more than one input type? Encountering some difficulty in figuring out how TFTrainer wants the tensorflow dataset structured. It doesn't seem to like one constructed from conventional numpy slices, e.g. `train_dataset = tf.data.Dataset.from_tensor_slices((input_ids, attention_mask, token_type_ids))`. I've dug through the documentation and a couple dozen notebooks and can't find an example of what an appropriate dataset input looks like.
08-18-2020 00:53:39
08-18-2020 00:53:39
I am facing issue with : /usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py in __init__(self, model, args, train_dataset, eval_dataset, compute_metrics, prediction_loss_only, tb_writer, optimizers) 87 self.tb_writer = tb_writer 88 else: ---> 89 self.tb_writer = tf.summary.create_file_writer(self.args.logging_dir) 90 91 if is_wandb_available(): AttributeError: 'dict' object has no attribute 'logging_dir' One good working example of TFTrainer would be very helpful. Thanks<|||||>We're working on the examples and there should be one for every task soon (in PyTorch and TensorFlow). For your specific problem, I think it's missing a dictionary. There is a brand new tutorial from @joeddav on how to fine-tune a model on your custom dataset that should be helpful to you [here](https://huggingface.co/transformers/master/custom_datasets.html). Click on the TensorFlow button on the code examples to switch the code from PyTorch to TensorFlow, or on the open in colab button at the top where you can select the TensorFlow notebook that goes with the tutorial.<|||||>Yes, you want to pass a tuple to `from_tensor_slices` where the first element is a dict of `kwarg:input` and the second is the labels. So in your case: ``` train_dataset = tf.data.Dataset.from_tensor_slices(( {"input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids}, labels # just a tensor, or a dict if multiple outputs as with question answering )) ``` The minibatches in the format of the inputs dict will by passed as kwargs to the model at each train step. The [tutorial](https://huggingface.co/transformers/master/custom_datasets.html) @sgugger recommended has some more examples.<|||||>Thanks Joe and Sylvain! - Transformers 3.0.2 - Windows - Tensorflow 2.3 - Model: TFGPT2LMHeadModel @joeddav Hmmm... there might be an issue with parsing inputs for `TFGPT2LMHeadModel` or their might be problems with `_training_steps` (I remember reading that it was being deprecated or rewritten somewhere). I tried implementing the solution you indicated above, an extrapolation from the example that Sylvain linked to, and other variations, all with the same effect `ValueError: too many values to unpack (expected 2)` which triggers on this line in TFTrainer `for step, training_loss in enumerate(self._training_steps(train_ds, optimizer))`. Tried: - `input_ids`, `attention_mask`, `token_type_ids` each as a list of lists - `input_ids`, `attention_mask`, `token_type_ids` each as a tf tensor - `input_ids`, `attention_mask`, `token_type_ids` each as list of numpy arrays - `input_ids`, `attention_mask`, `token_type_ids` each as a single numpy array - Where `train_encodings` is a dictionary of the above input types (e.g. `train_encodings['input_ids'] = input_ids`, neither of these worked when passed to `TFTrainer`: -- `train_dataset = tf.data.Dataset.from_tensor_slices(train_encodings)` -- `train_dataset = tf.data.Dataset.from_tensor_slices((train_encodings))` Since labels is not a recognized argument for `TFGPT2LMHeadModel`, presumably labels would be be just another key in `train_encodings` (e.g. `train_encodings['labels'] = labels)`. @sgugger I encountered an encoding error when I was testing the inputs from IMDb reviews example. 
Here's a potential replacement that worked for me: ``` def read_imdb_split(split_dir): split_dir = Path(split_dir) texts = [] labels = [] for label_dir in ["pos", "neg"]: for text_file in (split_dir/label_dir).iterdir(): f = open(text_file, encoding = 'utf-8') texts.append(f.read()) f.close() labels.append(0 if label_dir is "neg" else 1) return texts, labels ``` <|||||>@alexorona ahh, I believe this is an issue with TensorFlow LM-head models that we recently resolved – previously these models didn't take `labels` and didn't calculate the loss, so they didn't work with Trainer. Try building transformers from source and see if you still have the issue. (You can install from source by cloning the repo or just doing `pip install --upgrade git+https://github.com/huggingface/transformers.git`). Then you'll want to prepare your dataset so that the `labels` are the encoded `input_ids`: ``` tf.data.Dataset.from_tensor_slices((dict(train_encodings), train_encodings.input_ids)) ``` If `train_encodings` are of type `BatchEncoding`, I believe you'll have to explicitly cast them as a dict as I do above. We should document this in `TFTrainer`.<|||||>@joeddav Thanks! So I kind of got this to work, but could use some clarification on your last comment. After building from source, this will run until eval if inputs are already tf tensors: ``` input_ids = tf.convert_to_tensor(input_ids) attention_mask = tf.convert_to_tensor(attention_mask) token_type_ids = tf.convert_to_tensor(token_types) labels = tf.convert_to_tensor(labels) train_encodings = {"input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids} train_dataset = tf.data.Dataset.from_tensor_slices((train_encodings, labels)) ``` I'm getting a warning that says `Converting sparse IndexedSlices to a dense Tensor of unknown shape` and an error that it can't find _prediction_loop -- `'TFTrainer' object has no attribute '_prediction_loop'` -- the latter of which is probably just a result of the changes to TFTrainer. I'm not sure how to interpret `train_encodings.input_ids`. Are you saying that we should make `train_encodings `an object with the labels set to input_ids? Labels are usually in the range `[-100, 0, ..., config.vocab_size] `with `-100` indicating its not part of the target. There's a lot of situations and setups where you want a token in the `input_ids`, but you don't want to calculate loss on it (for example when distinguishing between the target input and the history).<|||||>> an error that it can't find _prediction_loop -- 'TFTrainer' object has no attribute '_prediction_loop' -- the latter of which is probably just a result of the changes to TFTrainer. Yep, that's just a bug. `TFTrainer._prediction_step` is deprecated and it looks like we missed a reference to it. I think line 415 of `trainer_tf.py` just needs to be changed to call `self.prediction_step`. > Are you saying that we should make train_encodings an object with the labels set to input_ids? No, sorry. You're right there are lots of situations where you would need something more complex, I was just using that as the most basic example of passing in labels for LM training. You just want the labels to be of the same shape as `input_ids` with the range exactly as you described. > I'm getting a warning that says Converting sparse IndexedSlices to a dense Tensor of unknown shape What format are your `labels` in? I'm not sure why they'd be sparse. 
TFTrainer will calculate the loss by calling `model(batch_encodings, labels=batch_labels)` which returns the loss as the first element. Try just passing one instance to the model and see if you get any errors and check that the returned loss looks reasonable (i.e. not NaN or something). In your case, that'd look like, ```python outputs = model({key: val[:1] for key, val in train_encodings.items()}, labels=labels[:1]) loss = outputs[0] ```<|||||>I got this working, thanks for the help. ``` def example_to_features(input_ids,attention_masks,token_type_ids,y): return {"input_ids": input_ids, "attention_mask": attention_masks, "token_type_ids": token_type_ids},y train_ds = tf.data.Dataset.from_tensor_slices((input_ids_train,attention_masks_train,token_ids_train,label_ids_train)).map(example_to_features) and use train_ds for training trainer = TFTrainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds ) ``` One question, when I do trainer.train(), it's not displaying progress, but I see in logs it's training. Is there some verbose option I am missing?<|||||>Yeah the TFTrainer is not using any progress bar. Will add them soonish (with an option to disable for people who prefer not to see them), like in the PyTorch Trainer.<|||||>Thank you, Also if chose to train native Keras way: `optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)` why is model.train() missing? I thought without it it still be eval mode right?<|||||>@astromad You can edit the `TFTrainer` file directly (or copy it from GitHub and add create your own variation, which is what I did). You can add a basic progress bar at about line 500: ``` train_iterator = tqdm(range(epochs_trained, int(epochs + 1)), desc="Epoch") for epoch_iter in train_iterator: # Reset the past mems state at the beginning of each epoch if necessary. if self.args.past_index >= 0: self._past = None epoch_iterator = tqdm(train_ds, desc="Iteration") for step, batch in enumerate(epoch_iterator): ``` Additionally, there's a way to display training loss, but my progress is not that far. I built a custom variation of `Trainer` that does that, but haven't yet incorporated all the changes into `TFTrainer` because the structure is different. Here's my progress so far in introducing continuous display (note: it won't be accurate because there's a number I need to divide by): training_loss = self.train_loss.result() / ((step + 1) * self.total_train_batch_size) loss_update = 'Train Epoch ' + str(epochs_trained) + " (loss " + str(round(training_loss.numpy(), 3)) + ")" epoch_iterator.set_description(loss_update) @joeddav Thanks again, Joe! Astromad's map function creates a `batch` inside of `TFTrainer` that is fed to `self.distributed_training_steps`. This is the same batch structure that results when you instead use `train_dataset = tf.data.Dataset.from_tensor_slices((train_encodings, labels))`, as outlined above. In both cases, what is fed to `self.distributed_training_steps` is a tuple containing: 1) a dictionary object with `input_ids`, `attention_mask` and `token_type_ids` as keys and tf tensors as values, and 2) tf tensor for `labels`. 
When testing model inputs outside of the context of `TFTrainer` like this: ``` outputs = model({key: val[:1] for key, val in train_encodings.items()}, labels=labels[:1]) loss = outputs[0] ``` It seems that the `labels` are not being registered correctly. Here are the outputs: - `output[0]` is `shape=(381,)` instead of `shape=(1,)`. `381` is the number of values in the `labels` tensor that do not equal `-100`, so it looks like this is a per token loss. - `output[1]` is `shape=(1, 511, 50257)` -- shouldn't this be `shape(1, 512, 50257)`? - `output[2]` is `shape=(2, 1, 16, 512, 64)` Strangely, inside of `TFTrainer` when I print out `training_loss = self.train_loss.result() / ((step + 1) * self.total_train_batch_size)`, it's correctly a `shape=(1,)` tensor. <|||||>just wanna share if this is useful, to construct a prediction from arbitrary sentence this is what I am using: ``` sequence = "my name is firstname lastname and my phone number is 111-222-3333" tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence))) print('Tokens are',tokens) inputs = tokenizer.encode_plus(sequence, return_tensors="tf") print('Inputs are',inputs) outputs = model(inputs) print('output shape',np.shape(outputs)) **And the output** Tokens are ['[CLS]', 'my', 'name', 'is', 'first', '##name', 'last', '##name', 'and', 'my', 'phone', 'number', 'is', '111', '##-', '##22', '##-', '##33', '##33', '[SEP]'] Inputs are {'input_ids': <tf.Tensor: shape=(1, 20), dtype=int32, numpy= array([[ 101, 2026, 2171, 2003, 2034, 18442, 2197, 18442, 1998, 2026, 3042, 2193, 2003, 11118, 29624, 19317, 29624, 22394, 22394, 102]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(1, 20), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(1, 20), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>} output shape (1, 1, 20, 6) ``` <|||||>@joeddav @astromad Very useful examples! It's training correctly using the methods outlined above. Just some kinks to work out. It also looks like the `model.generate` method does not currently support the use of `token_type_ids`. `temperature`, `top_k` and `top_p` do not seem to have any effect on outputs. The example provided in the documentation will not work. Here's an example of one that will work. It's a `gpt2-medium` model fine-tuned on Jane Austen's _Pride and Prejudice_: ``` tokenizer = tokenizer input_sequence = "Mr. Darcy was acting strangely, she thought to herself. What had happened?" inputs = tokenizer.encode_plus(input_sequence , return_tensors="tf") outputs = model.generate(inputs['input_ids'], attention_mask = inputs['attention_mask'], max_length = 200, repetition_penalty = 2.0, pad_token_id = pad_token_id, eos_token_id = eos_token_id) print(tokenizer.decode(outputs.numpy().tolist()[0])) 'Mr. Darcy was acting strangely, she thought to herself. What had happened? She looked at the clock and saw that it must be half-past ten; but as soon afterwards came a knock on her door from Miss Lucas\'s father, who said he would come in an hour or two with his daughter. "Come here," cried Elizabeth, when they entered into Mrs.—Lennox Hall ;—"come now." Mr—Darcy followed them through their dining room towards another large hall where there were many other ladies of distinction seated round tables which formed one side by themselves before each table. 
The gentlemen sat down opposite him all together, while Jane stood nearby looking very pale for some reason unknown till then only known among those present. When Lydia heard this news, however,,she immediately began to speak aloud:\n <|endoftext|>' ``` <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||> I am currently attempting to train a TF model (using Keras) using a dataset pipeline constructed with tf.data where I use a tf.py_function that tokenizes batches of data _on the fly_. ```python train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).batch(2) def py_func(x): x = x.numpy() x = [i.decode("utf-8") for i in x] d = tokenizer(x, truncation=True, padding=True) return list(d.values()) def ds_map_fn(x,y): flattened_output = tf.py_function(py_func, [x], [tf.int32, tf.int32]) return {"input_ids": flattened_output[0], "attention_mask": flattened_output[1]},y train_ds = train_ds.map(ds_map_fn) for x,y in train_ds.take(2): print(x) ``` When I sample from this pipeline, the result looks sensible ``` {'input_ids': <tf.Tensor: shape=(2, 20), dtype=int32, numpy= array([[ 101, 19443, 23698, 7710, 1195, 1274, 189, 1138, 1609, 184, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 101, 5837, 8752, 4163, 4654, 26063, 5077, 1611, 11644, 22234, 1394, 2822, 5077, 22449, 1606, 1482, 1106, 4609, 4039, 102]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 20), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>} {'input_ids': <tf.Tensor: shape=(2, 43), dtype=int32, numpy= array([[ 101, 170, 2944, 2312, 7925, 16234, 11275, 1109, 7085, 13830, 142, 12323, 1658, 1144, 1508, 1164, 1479, 4252, 15304, 1116, 1107, 3315, 13826, 1120, 1103, 12548, 1104, 139, 23698, 7710, 6716, 18630, 189, 1884, 5539, 2240, 2924, 1775, 4426, 2064, 1559, 20923, 102], [ 101, 146, 182, 1136, 2157, 1128, 5380, 189, 6730, 190, 1197, 4438, 1106, 2992, 1252, 178, 1329, 1917, 178, 1138, 1106, 13803, 1128, 1335, 4847, 1358, 1110, 1103, 1211, 18630, 189, 1884, 124, 5822, 2036, 1186, 1658, 1545, 12649, 1403, 102, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 43), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>} ``` However, when I try to train a model using this dataset pipeline, I get an error. Training code ```python model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2 ) optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) model.compile(optimizer=optimizer, loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=tf.metrics.SparseCategoricalAccuracy() ) # can also use any keras loss fn model.fit(train_ds, epochs=3 ) ``` error ``` OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. ``` Any pointers as to what I am doing wrong would be appreciated. In the mean time, I have used the workaround where I first tokenize my data before I create a tf.data pipeline. 
```python train_encodings = tokenizer(X_train, truncation=True, padding=True) train_ds = tf.data.Dataset.from_tensor_slices(( dict(train_encodings), y_train )) ```
transformers
6,550
closed
504 Gateway Time-out when trying to access Uploaded Model page
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I recently uploaded a model available at `https://huggingface.co/paulowoicho/t5-podcast-summarisation`. I made a few changes to the model card and reuploaded. Now, the model's page does not load at all. Instead, it shows a 504 Gateway Time-out error. ![image](https://user-images.githubusercontent.com/28223751/90789131-71616400-e2fe-11ea-928c-070da9ffb449.png) <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. -->
08-17-2020 23:27:22
08-17-2020 23:27:22
cc @julien-c I'm also experiencing this.<|||||>Yes, the reason is that your model card's metadata is not valid YAML. I had fixed it on S3 but you overrode it with the previous version – this is a known limitation of our model card system right now and why: - we recommend submitting the model cards to this git repo for now - we are working on a way better system to be released in the coming weeks/months (cc @sgugger @JetRunner and others for context ^^) I've fixed our website to handle those model pages by displaying an error notice<|||||>https://huggingface.co/paulowoicho/t5-podcast-summarisation
transformers
6,549
closed
fixed fast tokenizer use in QA pipeline and added corresponding test
This PR fixes https://github.com/huggingface/transformers/issues/6545 The problem is that the behavior of the python tokenizer and the rust-based fast tokenizer is very different. The python tokenizer handles cases where inputs come in different formats (str tokens and int tokens and vice versa), whereas the fast tokenizer is unable to do so. My modifications pretokenize the query alongside the context instead of encoding it up front, as is done now. Both the query and context are then encoded together by the tokenizer's `encode_plus` method with `is_pretokenized` set to True. Handling for some edge cases where the fast tokenizer fails because of limited functionality (compared to the python tokenizer) has been added, and I've included comments in those places.
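A rough sketch of the encoding strategy described above (the checkpoint, variable names and extra flags here are illustrative, not the actual pipeline internals):

```python
from transformers import AutoTokenizer

# Illustrative inputs; the QA pipeline receives these from SquadExample objects.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad", use_fast=True)
question = "Who walked on the moon?"
context = "Neil Armstrong walked on the moon."

# Pre-tokenize both sides, then encode them together, so the fast tokenizer
# never receives a mix of int ids (query) and str tokens (context).
question_tokens = tokenizer.tokenize(question)
context_tokens = tokenizer.tokenize(context)
encoded = tokenizer.encode_plus(
    question_tokens,
    context_tokens,
    is_pretokenized=True,
    return_token_type_ids=True,
    truncation=True,
)
```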
08-17-2020 23:06:29
08-17-2020 23:06:29
The failed test seems to be failing for merged PRs too. I'll fix up the formatting with black
transformers
6,548
closed
Model clipping to reduce the size of the model
# ❓ Questions & Help Is it possible to just reduce the size of a large model to by removing weights and preserving others. For example we can apply this concept to different dimensions Depth: Layer dropping this is well known an have been shown to work with huggingface repo. Width: For scenarios where we need small size of the model, is it possible to just drop all the weights beyond all the seq length L. For GPT2 kind of model with only left to right flow it shouldn't affect the model perf. How to implement this without disturbing other weights? Thanks in advance for help! <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
08-17-2020 21:38:28
08-17-2020 21:38:28
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,547
closed
There is an error in the run_tf_ner.py script with the get_labels fun…
There is an error in the run_tf_ner.py script with the get_labels function (which does not exist). The script also does not work with TPU. Both problems are fixed.
08-17-2020 21:27:03
08-17-2020 21:27:03
Thanks for the PR. The issue with the `get_labels` has been fixed in a previous PR. Can you rebase on master and remove the `get_labels` part? We will keep the TPU fix :)<|||||>Closing as already implemented. Thanks a lot for your contribution @RodSernaPerez, looking forward to the next one!
transformers
6,546
closed
[model_cards] update jimregan/BERTreach card with #s of sentences/tokens
08-17-2020 20:41:39
08-17-2020 20:41:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=h1) Report > Merging [#6546](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63144701ed46d6423b5968db40b6d4469d7d9b87&el=desc) will **increase** coverage by `0.93%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6546/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6546 +/- ## ========================================== + Coverage 78.43% 79.37% +0.93% ========================================== Files 156 156 Lines 28129 28129 ========================================== + Hits 22062 22326 +264 + Misses 6067 5803 -264 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-0.76%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (+0.18%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: | | ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6546/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=footer). Last update [6314470...3ceca88](https://codecov.io/gh/huggingface/transformers/pull/6546?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,545
closed
QA pipeline fails when using fast tokenizer
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1032-gcp-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu documentation: @sgugger --> @LysandreJik @mfuntowicz ## Information Model I am using (Bert, XLNet ...): "sshleifer/tiny-distilbert-base-cased-distilled-squad" The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Modify unit test https://github.com/huggingface/transformers/blob/master/tests/test_pipelines.py#L644-L647 to use fast tokenizer `nlp = pipeline(task="question-answering", model=model_name, tokenizer=(model_name, {"use_fast": True}))` 2. Run modified unit test <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> The test fails with error `ValueError: TextInputSequence must be str` Complete failure result: ``` Testing started at 16:06 ... Launching pytest with arguments test_pipelines.py::QAPipelineTests::test_torch_question_answering in /home/lttazz99/transformers_perf_tests/transformers/tests ============================= test session starts ============================== platform linux -- Python 3.7.7, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /home/lttazz99/miniconda3/envs/qna/bin/python cachedir: .pytest_cache rootdir: /home/lttazz99/transformers_perf_tests/transformers collected 1 item test_pipelines.py::QAPipelineTests::test_torch_question_answering FAILED [100%]HuggingFace None was None founded None in None Paris. 
None tests/test_pipelines.py:642 (QAPipelineTests.test_torch_question_answering) self = <tests.test_pipelines.QAPipelineTests testMethod=test_torch_question_answering> @require_torch def test_torch_question_answering(self): for model_name in QA_FINETUNED_MODELS: nlp = pipeline(task="question-answering", model=model_name, tokenizer=(model_name, {"use_fast": True})) > self._test_qa_pipeline(nlp) test_pipelines.py:647: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_pipelines.py:626: in _test_qa_pipeline mono_result = nlp(valid_inputs[0]) ../src/transformers/pipelines.py:1675: in __call__ tqdm_enabled=False, ../src/transformers/data/processors/squad.py:369: in squad_convert_examples_to_features is_training=is_training)) ../src/transformers/data/processors/squad.py:165: in squad_convert_example_to_features return_token_type_ids=True, ../src/transformers/tokenization_utils_base.py:2043: in encode_plus **kwargs, ../src/transformers/tokenization_utils_fast.py:458: in _encode_plus **kwargs, ../src/transformers/tokenization_utils_fast.py:369: in _batch_encode_plus is_pretokenized=is_pretokenized, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Tokenizer(vocabulary_size=28996, model=BertWordPiece, unk_token=[UNK], sep_token=[SEP], cls_token=[CLS], pad_token=[PA...sk_token=[MASK], clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=False, wordpieces_prefix=##) sequence = [2777, 1108, 20164, 10932, 2271, 7954, ...] pair = ['Hu', '##gging', '##F', '##ace', 'was', 'founded', ...] is_pretokenized = False, add_special_tokens = True def encode( self, sequence: InputSequence, pair: Optional[InputSequence] = None, is_pretokenized: bool = False, add_special_tokens: bool = True, ) -> Encoding: """ Encode the given sequence and pair. This method can process raw text sequences as well as already pre-tokenized sequences. Args: sequence: InputSequence: The sequence we want to encode. This sequence can be either raw text or pre-tokenized, according to the `is_pretokenized` argument: - If `is_pretokenized=False`: `InputSequence` is expected to be `str` - If `is_pretokenized=True`: `InputSequence` is expected to be `Union[List[str], Tuple[str]]` is_pretokenized: bool: Whether the input is already pre-tokenized. add_special_tokens: bool: Whether to add the special tokens while encoding. 
Returns: An Encoding """ if sequence is None: raise ValueError("encode: `sequence` can't be `None`") > return self._tokenizer.encode(sequence, pair, is_pretokenized, add_special_tokens) E ValueError: TextInputSequence must be str ../../../miniconda3/envs/qna/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py:212: ValueError Assertion failed Assertion failed =================================== FAILURES =================================== ________________ QAPipelineTests.test_torch_question_answering _________________ self = <tests.test_pipelines.QAPipelineTests testMethod=test_torch_question_answering> @require_torch def test_torch_question_answering(self): for model_name in QA_FINETUNED_MODELS: nlp = pipeline(task="question-answering", model=model_name, tokenizer=(model_name, {"use_fast": True})) > self._test_qa_pipeline(nlp) test_pipelines.py:647: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_pipelines.py:626: in _test_qa_pipeline mono_result = nlp(valid_inputs[0]) ../src/transformers/pipelines.py:1675: in __call__ tqdm_enabled=False, ../src/transformers/data/processors/squad.py:369: in squad_convert_examples_to_features is_training=is_training)) ../src/transformers/data/processors/squad.py:165: in squad_convert_example_to_features return_token_type_ids=True, ../src/transformers/tokenization_utils_base.py:2043: in encode_plus **kwargs, ../src/transformers/tokenization_utils_fast.py:458: in _encode_plus **kwargs, ../src/transformers/tokenization_utils_fast.py:369: in _batch_encode_plus is_pretokenized=is_pretokenized, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Tokenizer(vocabulary_size=28996, model=BertWordPiece, unk_token=[UNK], sep_token=[SEP], cls_token=[CLS], pad_token=[PA...sk_token=[MASK], clean_text=True, handle_chinese_chars=True, strip_accents=None, lowercase=False, wordpieces_prefix=##) sequence = [2777, 1108, 20164, 10932, 2271, 7954, ...] pair = ['Hu', '##gging', '##F', '##ace', 'was', 'founded', ...] is_pretokenized = False, add_special_tokens = True def encode( self, sequence: InputSequence, pair: Optional[InputSequence] = None, is_pretokenized: bool = False, add_special_tokens: bool = True, ) -> Encoding: """ Encode the given sequence and pair. This method can process raw text sequences as well as already pre-tokenized sequences. Args: sequence: InputSequence: The sequence we want to encode. This sequence can be either raw text or pre-tokenized, according to the `is_pretokenized` argument: - If `is_pretokenized=False`: `InputSequence` is expected to be `str` - If `is_pretokenized=True`: `InputSequence` is expected to be `Union[List[str], Tuple[str]]` is_pretokenized: bool: Whether the input is already pre-tokenized. add_special_tokens: bool: Whether to add the special tokens while encoding. Returns: An Encoding """ if sequence is None: raise ValueError("encode: `sequence` can't be `None`") > return self._tokenizer.encode(sequence, pair, is_pretokenized, add_special_tokens) E ValueError: TextInputSequence must be str ../../../miniconda3/envs/qna/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py:212: ValueError ----------------------------- Captured stdout call ----------------------------- HuggingFace None was None founded None in None Paris. 
None =============================== warnings summary =============================== tests/test_pipelines.py::QAPipelineTests::test_torch_question_answering /home/lttazz99/transformers_perf_tests/transformers/src/transformers/tokenization_utils_base.py:1319: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. FutureWarning, -- Docs: https://docs.pytest.org/en/stable/warnings.html =========================== short test summary info ============================ FAILED test_pipelines.py::QAPipelineTests::test_torch_question_answering - Va... ========================= 1 failed, 1 warning in 2.64s ========================= Process finished with exit code 1 Assertion failed Assertion failed ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Test should pass. --- NOTES: This error appears to arise because [here](https://github.com/huggingface/transformers/blob/98ee802023a4db76879c761e7ce3677eb4555871/src/transformers/tokenization_utils_fast.py#L366) the inputs are the query tokens' ids and the context token text. This in turn is passed to the [fast tokenizer](https://github.com/huggingface/tokenizers/blob/5d8728a26b654784572612ae119edc451205c2ff/bindings/python/py_src/tokenizers/implementations/base_tokenizer.py#L211) which expects `str` for both the sequence and the pair but instead gets a sequence of ints for the query and the text for the context. The pipeline fails when any Bert-like fast tokenizer is used, irrespective of the model. The unit test is the easiest way to reproduce this error.
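A standalone reproduction equivalent to the modified unit test, using the same checkpoint and the fast-tokenizer option from the steps above (the question and context strings are illustrative):

```python
from transformers import pipeline

# use_fast=True is what triggers the "TextInputSequence must be str" error.
model_name = "sshleifer/tiny-distilbert-base-cased-distilled-squad"
nlp = pipeline(
    task="question-answering",
    model=model_name,
    tokenizer=(model_name, {"use_fast": True}),
)

print(nlp(question="Where was HuggingFace founded?", context="HuggingFace was founded in Paris."))
```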
08-17-2020 20:14:57
08-17-2020 20:14:57
I also have run into this issue. @bdalal did you find a fix? I see the above PR, but seems you closed it. <|||||>The fix is not very straightforward because it seems like both the tokenizers expose vastly different APIs and support very different functionalities. You can check this in `tokenization_utils_base.py` and `tokenization_utils_fast.py`. In the context of the pipeline therefore, the tokenizer change is not transparent and requires a fair bit of work to integrate correctly. I'll probably have a better fix up sometime next month if I do end up including the fast tokenizer in my project.<|||||>I've got a hacky working version here: https://github.com/bdalal/transformers Also contains additional performance optimizations<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@bdalal How does the link you provided help solving the issue?<|||||>@MHDBST you can navigate to the [QA pipeline](https://github.com/bdalal/transformers/blob/36a19915ea4fc3dc337a310e4a1af43eb3c81c9a/src/transformers/pipelines.py#L1656) and look at the changes I made to get them working. This was a while ago, so I don't exactly recall what changes I made. That being said, I believe my code is now redundant because HF has come a long way in integrating fast tokenizers and they're now the default I believe. So you'll be better off just playing around with their latest release.
transformers
6,544
closed
[model_cards] Add a new model for Irish
08-17-2020 19:02:20
08-17-2020 19:02:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=h1) Report > Merging [#6544](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c9564f53433fb637cbd8eec0d902e80e30f91814&el=desc) will **decrease** coverage by `1.13%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6544/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6544 +/- ## ========================================== - Coverage 80.52% 79.39% -1.14% ========================================== Files 156 156 Lines 28108 28129 +21 ========================================== - Hits 22633 22332 -301 - Misses 5475 5797 +322 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `96.73% <100.00%> (+0.96%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6544/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=footer). Last update [407da12...d9608f8](https://codecov.io/gh/huggingface/transformers/pull/6544?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Really cool! If you get a chance, do you think you could add sample inputs for Irish to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts?
transformers
6,543
closed
[BartTokenizerFast] add prepare_seq2seq_batch
This PR adds the `prepare_seq2seq_batch` method to `BartTokenizerFast`, as per the proposal in #6080 @sshleifer
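A rough usage sketch; the argument names follow the #6080 proposal, and the placeholder texts and the exact set of returned keys should be treated as assumptions rather than the final API:

```python
from transformers import BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")

src_texts = ["This is the article to summarize."]  # illustrative inputs
tgt_texts = ["This is the reference summary."]

batch = tokenizer.prepare_seq2seq_batch(
    src_texts,
    tgt_texts=tgt_texts,
    max_length=1024,
    return_tensors="pt",
)
# `batch` is a BatchEncoding holding the encoder inputs, the attention mask
# and the decoder-side tensors, ready to feed into a Bart model for training.
```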
08-17-2020 16:46:33
08-17-2020 16:46:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=h1) Report > Merging [#6543](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0c2389f485a9defdc856871e8362add9b1377a3&el=desc) will **decrease** coverage by `2.36%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6543/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6543 +/- ## ========================================== - Coverage 80.51% 78.14% -2.37% ========================================== Files 156 156 Lines 28094 28120 +26 ========================================== - Hits 22619 21975 -644 - Misses 5475 6145 +670 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.43%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-32.38%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6543/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=footer). Last update [7ca6ab6...fbfe89e](https://codecov.io/gh/huggingface/transformers/pull/6543?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,542
closed
WNUT17 TF example stuck at first epoch
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux Ubuntu 16.04 - Python version: 3.6.9 - PyTorch version (GPU?): / - Tensorflow version (GPU?): 2.2.0 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: not that I know ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu documentation: @sgugger --> @jplu ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X, WNUT] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce ``` # Token Classification with W-NUT Emerging Entities """ Next we will look at token classification. Rather than classifying an entire sequence, this task classifies token by token. We'll demonstrate how to do this with [Named Entity Recognition](http://nlpprogress.com/english/named_entity_recognition.html), which involves identifying tokens which correspond to a predefined set of "entities". Specifically, we'll use the [W-NUT Emerging and Rare entities](http://noisy-text.github.io/2017/emerging-rare-entities.html) corpus. The data is given as a collection of pre-tokenized documents where each token is assigned a tag. Let's start by downloading the data. ! wget http://noisy - text.github.io / 2017 / files / wnut17train.conll """ """In this case, we'll just download the train set, which is a single text file. Each line of the file contains either (1) a word and tag separated by a tab, or (2) a blank line indicating the end of a document. Let's write a function to read this in. We'll take in the file path and return `token_docs` which is a list of lists of token strings, and `token_tags` which is a list of lists of tag strings. 
""" from pathlib import Path import re import numpy as np from sklearn.model_selection import train_test_split import tensorflow as tf from transformers import BertTokenizer, TFBertModel, BertConfig from transformers import DistilBertTokenizerFast from transformers import TFDistilBertForTokenClassification from tensorflow.keras.layers import Input, Dense, Activation, Dropout, LSTM, GlobalMaxPool1D from tensorflow.keras.models import Model from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing.sequence import pad_sequences def read_wnut(file_path): file_path = Path(file_path) raw_text = file_path.read_text().strip() raw_docs = re.split(r'\n\t?\n', raw_text) token_docs = [] tag_docs = [] for doc in raw_docs: tokens = [] tags = [] for line in doc.split('\n'): token, tag = line.split('\t') tokens.append(token) tags.append(tag) token_docs.append(tokens) tag_docs.append(tags) return token_docs, tag_docs texts, tags = read_wnut('wnut17train.conll') """Just to see what this data looks like, let's take a look at a segment of the first document.""" print(texts[0][10:17], tags[0][10:17], sep='\n') """`location` is an entity type, `B-` indicates the beginning of an entity, and `I-` indicates consecutive positions of the same entity ("Empire State Building" is considered one entity). `O` indicates the token does not correspond to any entity. Now that we've read the data in, let's create a train/validation split: """ train_texts, val_texts, train_tags, val_tags = train_test_split(texts, tags, test_size=.2) """Next, let's create encodings for our tokens and tags. For the tags, we can start by just create a simple mapping which we'll use in a moment: """ unique_tags = set(tag for doc in tags for tag in doc) tag2id = {tag: id for id, tag in enumerate(unique_tags)} id2tag = {id: tag for tag, id in tag2id.items()} """To encode the tokens, we'll use a pre-trained DistilBert tokenizer. We can tell the tokenizer that we're dealing with ready-split tokens rather than full sentence strings by passing `is_pretokenized=True`. We'll also pass `padding=True` and `truncation=True` to pad the sequences to be the same length. Lastly, we can tell the model to return information about the tokens which are split by the wordpiece tokenization process, which we will need in a moment. """ tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-cased') train_encodings = tokenizer(train_texts, is_pretokenized=True, return_offsets_mapping=True, padding=True, truncation=True) val_encodings = tokenizer(val_texts, is_pretokenized=True, return_offsets_mapping=True, padding=True, truncation=True) """Great, so now our tokens are nicely encoded in the format that they need to be in to feed them into our DistilBert model below. Now we arrive at a common obstacle with using pre-trained models for token-level classification: many of the tokens in the W-NUT corpus are not in DistilBert's vocabulary. Bert and many models like it use a method called WordPiece Tokenization, meaning that single words are split into multiple tokens such that each token is likely to be in the vocabulary. For example, DistilBert's tokenizer would split the Twitter handle `@huggingface` into the tokens `['@', 'hugging', '##face']`. This is a problem for us because we have exactly one tag per token. If the tokenizer splits a token into multiple sub-tokens, then we will end up with a mismatch between our tokens and our labels. 
One way to handle this is to only train on the tag labels for the first subtoken of a split token. We can do this in 🤗 Transformers by setting the labels we wish to ignore to `-100`. In the example above, if the label for `@HuggingFace` is `3` (indexing `B-corporation`), we would set the labels of `['@', 'hugging', '##face']` to `[3, -100, -100]`. Let's write a function to do this. This is where we will use the `offset_mapping` from the tokenizer as mentioned above. For each sub-token returned by the tokenizer, the offset mapping gives us a tuple indicating the sub-token's start position and end position relative to the original token it was split from. That means that if the first position in the tuple is anything other than `0`, we will set its corresponding label to `-100`. While we're at it, we can also set labels to `-100` if the second position of the offset mapping is `0`, since this means it must be a special token like `[PAD]` or `[CLS]`. > **NOTE:** Due to a recently fixed bug, -1 must be used instead of -100 when using TensorFlow in 🤗 Transformers <= 3.02. """ def encode_tags(tags, encodings): labels = [[tag2id[tag] for tag in doc] for doc in tags] encoded_labels = [] for doc_labels, doc_offset in zip(labels, encodings.offset_mapping): # create an empty array of -100 doc_enc_labels = np.ones(len(doc_offset), dtype=int) * -1 # 00 arr_offset = np.array(doc_offset) # set labels whose first offset position is 0 and the second is not 0 doc_enc_labels[(arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)] = doc_labels encoded_labels.append(doc_enc_labels.tolist()) return encoded_labels train_labels = encode_tags(train_tags, train_encodings) val_labels = encode_tags(val_tags, val_encodings) train_encodings.pop("offset_mapping") # we don't want to pass this to the model val_encodings.pop("offset_mapping") train_dataset = tf.data.Dataset.from_tensor_slices(( dict(train_encodings), train_labels )) val_dataset = tf.data.Dataset.from_tensor_slices(( dict(val_encodings), val_labels )) """Now load in a token classification model and specify the number of labels:""" model = TFDistilBertForTokenClassification.from_pretrained('distilbert-base-cased', num_labels=len(unique_tags)) optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=["accuracy"]) # can also use any keras loss fn model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16, verbose=3) ``` ## Expected behavior the example runs and does not get stuck at the first epoch :)
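To make the label-alignment step concrete, here is a small self-contained sketch (toy labels and offsets, not the real W-NUT data) of how the offset mapping decides which sub-tokens keep a label and which get the ignore index used above:

```python
import numpy as np

# toy example: one document with three word-level tags; the tokenizer split
# the second word into two sub-tokens and added special tokens at both ends
doc_labels = [3, 0, 0]                       # e.g. B-corporation, O, O
doc_offset = [(0, 0),                        # special token -> ignore
              (0, 5), (0, 4), (4, 8),        # word pieces; (4, 8) is a continuation piece
              (0, 3), (0, 0)]                # last word, then special token

doc_enc_labels = np.ones(len(doc_offset), dtype=int) * -1   # -1 here; -100 on newer versions
arr_offset = np.array(doc_offset)
# keep a label only for the first sub-token of each word
doc_enc_labels[(arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)] = doc_labels
print(doc_enc_labels.tolist())   # [-1, 3, 0, -1, 0, -1]
```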
08-17-2020 16:44:00
08-17-2020 16:44:00
Hello! Can you try with the https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_tf_ner.py script example, with a usual CoNLL format for your dataset? Then let me know if you still get this issue. Thanks!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,541
closed
replace _ with __ rst links
Apply the correction suggested in https://github.com/huggingface/transformers/pull/6509#issuecomment-674960166. I fixed it for the other links too.
08-17-2020 16:09:29
08-17-2020 16:09:29
Failure is not linked to the PR. Thanks for replacing these!
transformers
6,540
closed
bart-base config.*attention_heads (should be 12 was 16)
[Bart-base configuration](https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-base/config.json) has the wrong values for `decoder_attention_heads` and `encoder_attention_heads`. This messes up the self-attention computation and makes the model totally unusable.

### Who can help
@sshleifer
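For anyone who hit this before the hosted config was corrected, a small sketch (using the public `facebook/bart-base` checkpoint) of how to verify the values and override them locally as a stop-gap:

```python
from transformers import BartConfig

config = BartConfig.from_pretrained("facebook/bart-base")
print(config.encoder_attention_heads, config.decoder_attention_heads)  # both should be 12 after the fix

# stop-gap before the hosted config was fixed: override the bad values at load time
config = BartConfig.from_pretrained(
    "facebook/bart-base", encoder_attention_heads=12, decoder_attention_heads=12
)
```

The corrected `config` object can then be passed to `BartForConditionalGeneration.from_pretrained("facebook/bart-base", config=config)`.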
08-17-2020 16:05:40
08-17-2020 16:05:40
thx!<|||||>Should it be 12?<|||||>yes<|||||>Fixed, great catch, sorry for the annoyance.<|||||>cc @VictorSanh if you are still using.
transformers
6,539
closed
Widget can't load model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.27 - Python version: 3.8.2 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu documentation: @sgugger --> @julien-c @mfuntowicz (not sure I tagged correctly because related to inference widget, not listed category) ## Information Model I am using (Bert, XLNet ...): huggingtweets/borisdayma (GPT-2) The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Try to do inference from [huggingtweets/borisdayma model page](https://huggingface.co/huggingtweets/borisdayma) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Error displayed in interface: > ⚠️ Can't load weights for 'huggingtweets/borisdayma'. Make sure that: - 'huggingtweets/borisdayma' is a correct model identifier listed on 'https://huggingface.co/models' - or 'huggingtweets/borisdayma' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. ![image](https://user-images.githubusercontent.com/715491/90415619-f9cdd380-e076-11ea-879b-7dde7e1601cf.png) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> This works locally: ```python generator = pipeline('text-generation', model='huggingtweets/borisdayma') generator("<|endoftext|>I really like") ```
08-17-2020 15:49:17
08-17-2020 15:49:17
@julien-c @mfuntowicz Any idea of what is going wrong? I'm just wondering if it's anything from my side.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,538
closed
[EncoderDecoder] Add functionality to tie encoder decoder weights
This PR adds the ability to tie encoder and decoder weights for encoder-decoder models via a new configuration flag, `tie_encoder_decoder`. If this parameter is set to `True` and the model is an encoder-decoder model, every encoder weight is tied to its corresponding decoder weight. This can save a lot of memory for big models, such as shared `t5-3b` models. Tests are added to `T5Model`, `T5ForConditionalGeneration`, and the `EncoderDecoderModel`. Because BART uses different parameter names for the encoder and decoder, the generic weight-tying function defined in modeling_utils.py cannot be used there; a customized function should be implemented in a further PR if needed. The API looks as follows:

```python
# T5
from transformers import T5ForConditionalGeneration, T5Config

share_t5 = T5ForConditionalGeneration(T5Config(tie_encoder_decoder=True))
t5 = T5ForConditionalGeneration(T5Config())
assert sum(p.numel() for p in share_t5.parameters()) < sum(p.numel() for p in t5.parameters())

# Encoder Decoder
from transformers import EncoderDecoderModel

share_bert2bert = EncoderDecoderModel.from_pretrained_encoder_decoder("bert-base-cased", "bert-base-cased", tie_encoder_decoder=True)
bert2bert = EncoderDecoderModel.from_pretrained_encoder_decoder("bert-base-cased", "bert-base-cased")
assert sum(p.numel() for p in share_bert2bert.parameters()) < sum(p.numel() for p in bert2bert.parameters())
```
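A quick way to sanity-check the tying (a sketch, reusing the `share_bert2bert` and `bert2bert` variables from the snippet above) is to count parameters while de-duplicating tensors that share storage:

```python
def unique_param_count(model):
    # count each underlying tensor storage only once, so tied weights are not double-counted
    seen, total = set(), 0
    for p in model.parameters():
        if p.data_ptr() not in seen:
            seen.add(p.data_ptr())
            total += p.numel()
    return total

print(unique_param_count(share_bert2bert), unique_param_count(bert2bert))
# expected: the tied model reports noticeably fewer unique parameters
```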
08-17-2020 13:45:09
08-17-2020 13:45:09
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=h1) Report > Merging [#6538](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/37709b59099bd984858ca1884c6c70403420347d&el=desc) will **increase** coverage by `0.81%`. > The diff coverage is `98.18%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6538/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6538 +/- ## ========================================== + Coverage 79.26% 80.07% +0.81% ========================================== Files 156 156 Lines 28073 28124 +51 ========================================== + Hits 22252 22521 +269 + Misses 5821 5603 -218 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <97.43%> (+0.69%)` | :arrow_up: | | [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.62% <100.00%> (+0.70%)` | :arrow_up: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.39% <100.00%> (+0.72%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <100.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+0.97%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6538/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=footer). 
Last update [37709b5...1c77a34](https://codecov.io/gh/huggingface/transformers/pull/6538?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Are the bart differences just layernorms? [`embed_positions`, 'embed_tokens`, `self_attn`] all the same afaict.<|||||>@sshleifer - great review, thanks a lot! Regarding Bart, yeah the naming is different for some layers "final_layer_norm" for example and the objects are not of the same type, so I think the best idea is to add a `_tie_encoder_decoder_weights` function to `BartPretrainedModel` that overwrites the default function - what do you think? <|||||>great idea<|||||>> Look good to me and a great addition! > > Should the `tie_encoder_to_decoder_recursively` method be also used in the general encoder-decoder models? Yes, it's added for the encoder-decoder models as well :-) See: ```python # Encoder Decoder from transformers import EncoderDecoderModel share_bert2bert = EncoderDecoderModel.from_pretrained_encoder_decoder("bert-base-cased", "bert-base-cased", tie_encoder_decoder=True) bert2bert = EncoderDecoderModel.from_pretrained_encoder_decoder("bert-base-cased", "bert-base-cased") assert sum(p.numel() for p in share_bert2bert.parameters()) < sum(p.numel() for p in bert2bert.parameters()) ```
transformers
6,537
closed
tokenizers/tokenizers.cpython-36m-darwin.so, 2): Symbol not found: ____chkstk_darwin
Hello, I am using a Jupyter notebook and installed transformers with: `!{sys.executable} -m pip install transformers`. I am on macOS 10.13. When trying to import from transformers: `from transformers import BertTokenizer, BertModel` I get the error:

```
ImportError: dlopen(/Users/user1/anaconda3/lib/python3.6/site-packages/tokenizers/tokenizers.cpython-36m-darwin.so, 2): Symbol not found: ____chkstk_darwin
  Referenced from: /Users/user1/anaconda3/lib/python3.6/site-packages/tokenizers/tokenizers.cpython-36m-darwin.so (which was built for Mac OS X 10.15)
  Expected in: /usr/lib/libSystem.B.dylib
 in /Users/user1/anaconda3/lib/python3.6/site-packages/tokenizers/tokenizers.cpython-36m-darwin.so
```

Any idea how to fix it?
08-17-2020 12:25:31
08-17-2020 12:25:31
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,536
closed
[model_cards] Add model cards for Urduhack model (roberta-urdu-small)
Model card added for roberta-urdu-small.
08-17-2020 11:39:14
08-17-2020 11:39:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=h1) Report > Merging [#6536](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/37709b59099bd984858ca1884c6c70403420347d&el=desc) will **decrease** coverage by `1.10%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6536/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6536 +/- ## ========================================== - Coverage 79.26% 78.16% -1.11% ========================================== Files 156 156 Lines 28073 28073 ========================================== - Hits 22252 21942 -310 - Misses 5821 6131 +310 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-32.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: | | ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6536/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=footer). Last update [37709b5...4cba09e](https://codecov.io/gh/huggingface/transformers/pull/6536?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for sharing! First model on https://huggingface.co/languages for Urdu :) If you'd like, would you mind adding sample inputs for Urdu to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts ? (they don't have to be translations of the existing ones)
transformers
6,535
closed
Passing inputs_embeds into GenerationMixin.generate()
# 🚀 Feature request

Currently `GenerationMixin.generate()` only accepts `input_ids` but not `inputs_embeds`. Therefore this method is not usable when custom input embeddings are required. In contrast, many models do accept `inputs_embeds` as input. Additionally, for models that have both an encoder and a decoder, it is not possible to run `encoder.forward()` and `decoder.generate()` separately, because `generate()` does not accept `encoder_outputs` as input.

## Motivation

Having the flexibility to input `inputs_embeds` or `encoder_outputs` is essential for many tasks. For example, the input can be the concatenation of a sequence of word embeddings and an image embedding or style embedding (of the same embedding size). I want to use `generate()` with a T5 model fine-tuned for such a task, where the input sequence contains both word and non-word embeddings.
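For context, the kind of forward pass that should already work (a rough sketch with `t5-small` and a random stand-in for the image/style embedding), which is what makes the missing `generate()` support noticeable:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("summarize: a short example", return_tensors="pt")["input_ids"]
word_embeds = model.get_input_embeddings()(input_ids)         # (1, seq_len, d_model)
style_embed = torch.randn(1, 1, model.config.d_model)         # stand-in for an image/style vector
inputs_embeds = torch.cat([style_embed, word_embeds], dim=1)  # mixed word / non-word sequence

labels = tokenizer("a target", return_tensors="pt")["input_ids"]
outputs = model(inputs_embeds=inputs_embeds, labels=labels)   # forward() accepts inputs_embeds
print(outputs[0])                                             # loss; generate() has no such entry point
```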
08-17-2020 10:57:45
08-17-2020 10:57:45
Hey @ymfa, thanks for the feature request :-) I'll put it on the To-Do list. Not sure how soon we will work on this though. If you have a good idea of how to design this new feature, feel free to open a PR :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi @patrickvonplaten, I'm interested in this feature as well as I'm using GPT-2 with custom input embedding. Is there currently a way to pass the inputs_embeds to the generate function instead of input_ids?<|||||>I've started working on a small PR that provides the flexibility of passing _encoder outputs_ into GenerationMixin.generate(). I chose `encoder_outputs` over `inputs_embeds` because they are more fundamental, thus the fix would be more generally useful. However, it might not satisfy @umbertopietroni's need as GPT-2 is not an encoder-decoder model.<|||||>Is there any update on any kind of solution to it yet or any work around to pass encoder_outputs to generate ? <|||||>Any update on this issue?<|||||>Any update on this issue?<|||||>It is possible to run `inputs_embeds` for an encoder-decoder framework. See https://github.com/huggingface/transformers/pull/14443 . This does assume however that we know the word embedding matrix of the decoder. However for models like GPT2 this is not as straight-forward - see: https://github.com/huggingface/transformers/pull/14443#discussion_r753167493 In general, what is the exact use-case people are interested in here? <|||||>@patrickvonplaten for example in the recent NeurIPS paper ["Multimodal Few-Shot Learning with Frozen Language Models"](https://papers.nips.cc/paper/2021/file/01b7575c38dac42f3cfb7d500438b875-Paper.pdf), the output of a non-trained CNN is directly fed into a pre-trained and frozen language model. In this scenario, the CNN learns how to generate input embeddings such that the pre-trained language model can generate the right caption.<|||||>I see - this makes sense! We should probably adapt the generate function then to allow this scenario. I'll put it on my TODO! <|||||>> I am trying to generate with a decoder-only model using inputs_embeds. Does anyone know useful resources on how to achieve this? <|||||>This should already be possible - will try to put it in the big generate doc refactor that I'm working on at the moment - see https://github.com/huggingface/transformers/issues/15552<|||||>Hi @patrickvonplaten, I am glad to hear there will be doc refactor for generation, thanks for working on this! > This should already be possible I am using version 4.16.2, and when I try to generate with DialoGPT (a decoder only model) as follows `outputs = model.generate(inputs_embeds=inputs_embeds)` I get the following error: ` ValueError: If inputs_embeds is passed as model-specific keyword input then model has to be an encoder-decoder and not a GPT2LMHeadModel. ` <|||||>Hi @patrickvonplaten, I would like to know if there is any updates. I just really need the generate function with parameter `inputs_embeds` for GPT model > I see - this makes sense! We should probably adapt the generate function then to allow this scenario. I'll put it on my TODO! Thank you<|||||>@Tuan-Lee-23 - would you like to open a PR for this to give it a try? :-) This would also help me understand the use case better<|||||>This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I would also really love to see this. Just tried to generate from inputs_embeds on OPT and got the error message. Thanks! <|||||>@chaddech , could you explain your use-case in a bit more detail here? 1) Why do you want to use word embeddings? 2) Are you not using at all the word embeddings of OPT? 3) Are your OPT model's input embeddings tied to the output embeddings? In general I'm not completely against adding this feature, but only if the use case is solid since it requires lots of changes to `generate()`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @patrickvonplaten An example of use case (for me) is an open-ended text generation after **soft-prompt tuning**. During the tuning, only the embeddings of n_tokens prompt is learnable. Other parameters are being frozen. So the input of forward() function is the concatenated embeddings of n_tokens prompt and the embeddings of actual input (discrete tokens). Prompt is represented as dummy -- no actual discrete token (word) linked to it. See [https://github.com/corolla-johnson/mkultra](https://github.com/corolla-johnson/mkultra ) or [https://github.com/kipgparker/soft-prompt-tuning](https://github.com/kipgparker/soft-prompt-tuning) for practicality. It would be a lot easier if [generation_utils](https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101) allows for **input_embs** Thank you. > @chaddech , could you explain your use-case in a bit more detail here? > > 1. Why do you want to use word embeddings? > 2. Are you not using at all the word embeddings of OPT? > 3. Are your OPT model's input embeddings tied to the output embeddings? > > In general I'm not completely against adding this feature, but only if the use case is solid since it requires lots of changes to `generate()` <|||||>@ymfa could you maybe open a PR to show how a solution could look like (maybe just a super quick no dirty PR?) Sorry I sadly won't have the time to dive into the other codebases or paper, but would be super happy to guide through a PR! Also cc @patil-suraj @gante <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm running into the same `ValueError` above when trying to replicate the paper "[Locating and Editing Factual Associations in GPT](https://arxiv.org/abs/2202.05262)". This technique relies on injecting noise into the word embeddings to corrupt them. Having this feature added would be very useful. Thanks! 
**Edit**: I found a workaround, allowing me to extract the next word from GPT2 given a custom embedding: ``` tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained("gpt2") custom_embeds = torch.randn(1, 5, 768) outputs = model(inputs_embeds=custom_embeds) probs = outputs.logits[:, -1, :].flatten() next_token = probs.argmax() tokenizer.decode(next_token) ```<|||||>Hi @mkyl (and other participants in this thread) 👋 As written above, passing `inputs_embeds` with decoder-only models is not possible at the moment. I see from the number of comments and likes above that this would be a somewhat appreciated functionality, so I want to help the folks here. Here's the issue -- `generate()` does a LOT of lifting to keep its interface simple. To enable calls with `inputs_embeds` we would need to greatly increase the complexity of an already complex piece of code, hurting everyone in the long run 🙅 Thankfully, there is an alternative: we can manually prepare a few inputs and call the generation methods directly, which support passing `inputs_embeds`. The catch is that a critical component of the models, `prepare_inputs_for_generation`, is not expecting `inputs_embeds`, so we will have to monkey patch it. But it works, as you can see in the example below 🙌 (I hope this example helps!) The monkey patch is inconvenient, but I'm not entirely convinced that adding this feature is worth modifying tens of models. I make the following pact with y'all: ⚠️ if this obscure comment in a closed issue reaches 10 reactions, I will implement the change to `prepare_inputs_for_generation` on all text-generation models. (Whoever does the 10th reaction, please tag me; cc @patrickvonplaten ) ____________________________________________________ (Note: prior to v4.26, you have to replace `past_key_values` by `past` in the code below) ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, MaxLengthCriteria, StoppingCriteriaList model = AutoModelForCausalLM.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") text = "Hello world" input_ids = tokenizer.encode(text, return_tensors="pt") # Traditional way of generating text outputs = model.generate(input_ids) print("\ngenerate + input_ids:", tokenizer.decode(outputs[0], skip_special_tokens=True)) # Generating with decoder models from inputs_embeds # Step 1: monkey patch "prepare_inputs_for_generation" to pass inputs_embeds when they are available def prepare_inputs_for_generation(input_ids, past_key_values=None, **kwargs): token_type_ids = kwargs.get("token_type_ids", None) # only last token for inputs_ids if past_key_values is defined in kwargs if past_key_values: input_ids = input_ids[:, -1].unsqueeze(-1) if token_type_ids is not None: token_type_ids = token_type_ids[:, -1].unsqueeze(-1) attention_mask = kwargs.get("attention_mask", None) position_ids = kwargs.get("position_ids", None) if attention_mask is not None and position_ids is None: # create position_ids on the fly for batch generation position_ids = attention_mask.long().cumsum(-1) - 1 position_ids.masked_fill_(attention_mask == 0, 1) if past_key_values: position_ids = position_ids[:, -1].unsqueeze(-1) else: position_ids = None # !!!!!!!!!!!!!!!!!!! 
start: modified vs original, to pass inputs_embeds when they are available if "inputs_embeds" in kwargs and past_key_values is None: # we only want to use them in the 1st generation step model_inputs = {"inputs_embeds": inputs_embeds} else: model_inputs = {"input_ids": input_ids} model_inputs.update({ "past_key_values": past_key_values, "use_cache": kwargs.get("use_cache"), "position_ids": position_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids, }) return model_inputs # !!!!!!!!!!!!!!!!!!! end: modified vs original, to pass inputs_embeds when they are available model.prepare_inputs_for_generation = prepare_inputs_for_generation # Step 2: prepare the inputs for the generation method manually and call it inputs_embeds = model.transformer.wte(input_ids) # empty input ids -> the output will NOT include the input prompt, but will generate the same text (because of # inputs_embeds) input_ids = torch.LongTensor([[model.config.bos_token_id]]) stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)]) outputs = model.greedy_search( input_ids, inputs_embeds=inputs_embeds, stopping_criteria=stopping_criteria, pad_token_id=model.config.eos_token_id ) print("\ngreedy + inputs_embeds:", tokenizer.decode(outputs[0], skip_special_tokens=True)) ```<|||||>@gante We hit ~10~ 11!<|||||>@gante In this way, it seems that only the first token is based on inputs_embeds.<|||||>Oh damn, this exceeded my expectations 🙈 Added to my todo list! Keep in mind that my queue is long at the moment, so this might take a few months.<|||||>@BugApe in the example above, only the first forward pass will have `inputs_embeds` as input, but you can have more than one token there. If your target application requires manipulating `inputs_embeds` at each generation step, then you'd need to monkey-patch `prepare_inputs_for_generation` to embed the newly generated tokens and then manipulate it as you wish. That will not be included in the planned changes. However, in theory, I could make `generate` accept a dictionary of arbitrary functions to be applied to each input in `prepare_inputs_for_generation` (e.g. a function that embeds `input_ids` and then add some noise, to be applied at each step before the forward pass). I'll make the same pact as above: if this comment reaches 10 reactions, I will add the functionality to my todo list. (whoever does the 10th reaction, please tag me)<|||||>Hi, how's the state of this feature? It would add a lot of flexibility to model construction<|||||>@gante The monkey patch works nice for `model.greedy_search` and `model.contrastive_search`, but cannot work with `model.generate`, which has more utilities. Could you provide a monkey patch that works with `model.generate`? Many thanks! <|||||>Hi @mkyl, @Ryul0rd, @XuanVuNguyen, and other participants in this thread 👋 As promised, `inputs_embeds` can be passed to `.generate()` with decoder-only models 💪 See the example below for reference. The caveat is that if you want it on other models (in addition to GPT2), you'll have to open a PR that makes the same changes as [the ones to GPT2 in this PR](https://github.com/huggingface/transformers/pull/21405) to your model of choice :) To access it, install the latest version: `pip install --upgrade git+https://github.com/huggingface/transformers.git` EDIT: as of 2023-Mar-16, you can access this feature by installing `v4.27`. There are a few models with soft-prompting enabled -- try running it and, if the model lacks support, you'll get an exception with instructions. 
____________________________ ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") text = "Hello world" input_ids = tokenizer.encode(text, return_tensors="pt") # Traditional way of generating text outputs = model.generate(input_ids) print("\ngenerate + input_ids:", tokenizer.decode(outputs[0], skip_special_tokens=True)) # From inputs_embeds -- exact same output if you also pass `input_ids`. If you don't # pass `input_ids`, you will get the same generated content but without the prompt inputs_embeds = model.transformer.wte(input_ids) outputs = model.generate(input_ids, inputs_embeds=inputs_embeds) print("\ngenerate + inputs_embeds:", tokenizer.decode(outputs[0], skip_special_tokens=True)) ```<|||||>@BugApe I'm closing this issue for now, as your request is kind of an advanced functionality. However, if it hits the 10 reacts, let me know -- I'll reopen this issue then! 🤗 <|||||>@gante Hi, could you please add this feature to chatglm2 too, thanks a lot.<|||||>Hey @ws1993109 👋 I have low bandwidth at the moment. Would you like to open a PR with it and test it? It should be mostly like copying the [changes on GPT2 from this PR](https://github.com/huggingface/transformers/pull/21405), and confirming that it works.
transformers
6,534
closed
How to fine-tune GPT2 on Arithmetic Problem
I want to use GPT-2 to solve simple arithmetic problems. How should I build the dataset?
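It is not an official recipe, but one simple approach is to render the problems as plain text (one example per line) and fine-tune GPT-2 with a standard causal language-modeling script; a minimal sketch of building such a file:

```python
import random

# write synthetic addition problems as plain text, one per line,
# so the file can be used as a language-modeling training corpus
random.seed(0)
with open("arithmetic_train.txt", "w") as f:
    for _ in range(10_000):
        a, b = random.randint(0, 99), random.randint(0, 99)
        f.write(f"{a} + {b} = {a + b}\n")
```

At inference time, prompting the fine-tuned model with e.g. `"12 + 34 ="` lets it complete the answer.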
08-17-2020 10:30:27
08-17-2020 10:30:27
Hey @KYRIEZX, we are trying to move "non-bug" questions to our forum - would you mind posting the question there again: https://discuss.huggingface.co/ ? Thanks :-)
transformers
6,533
closed
[Docs Pegasus] Correct Pegasus Link in doc
@sshleifer - Link to Bart paper was accidentally used. Correcting it here.
08-17-2020 10:24:13
08-17-2020 10:24:13
thx!
transformers
6,532
closed
Remove deprecated assertEquals
`assertEquals` is deprecated (see https://stackoverflow.com/questions/930995/assertequals-vs-assertequal-in-python/931011). This PR replaces the deprecated calls with `assertEqual`.
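For reference, a minimal illustration (not taken from the PR itself) of the deprecated alias versus the preferred spelling:

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_sum(self):
        # self.assertEquals(1 + 1, 2)   # deprecated alias, emits a DeprecationWarning
        self.assertEqual(1 + 1, 2)      # preferred spelling
```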
08-17-2020 08:41:46
08-17-2020 08:41:46
transformers
6,531
closed
Fix flaky ONNX tests
A bad indent was causing the `_test_export()` method to return `None` instead of a path. This PR also switches `_test_export()` to use `pathlib.Path` everywhere. Should fix #6336 and #6529
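A simplified illustration (not the actual test code) of the failure mode, where a `return` indented under a conditional makes a helper hand back `None` to most callers:

```python
from pathlib import Path

def _export(output_dir, use_external_format=False):
    path = Path(output_dir) / "model.onnx"
    if use_external_format:
        # ... extra handling ...
        return path   # only this branch returns; other callers silently get None

def _export_fixed(output_dir, use_external_format=False):
    path = Path(output_dir) / "model.onnx"
    if use_external_format:
        pass  # ... extra handling ...
    return path       # return at function level, so callers always get a pathlib.Path

print(_export("/tmp"), _export_fixed("/tmp"))  # None /tmp/model.onnx
```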
08-17-2020 07:31:54
08-17-2020 07:31:54
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=h1) Report > Merging [#6531](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c6c6139fbb2881ef16ac5d8afb6287467bf66e&el=desc) will **increase** coverage by `0.42%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6531/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6531 +/- ## ========================================== + Coverage 80.13% 80.55% +0.42% ========================================== Files 156 156 Lines 28073 28073 ========================================== + Hits 22495 22615 +120 + Misses 5578 5458 -120 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.83% <0.00%> (+72.25%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=footer). Last update [48c6c61...bc493a2](https://codecov.io/gh/huggingface/transformers/pull/6531?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Cool, thanks for taking care of it!
transformers
6,530
closed
Added first model card
Added a model card which lists the tags used in the model.
08-17-2020 07:19:04
08-17-2020 07:19:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=h1) Report > Merging [#6530](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/48c6c6139fbb2881ef16ac5d8afb6287467bf66e&el=desc) will **decrease** coverage by `0.69%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6530/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6530 +/- ## ========================================== - Coverage 80.13% 79.43% -0.70% ========================================== Files 156 156 Lines 28073 28073 ========================================== - Hits 22495 22301 -194 - Misses 5578 5772 +194 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.83% <0.00%> (+72.25%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=footer). Last update [48c6c61...0820568](https://codecov.io/gh/huggingface/transformers/pull/6530?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@onepointconsulting Thanks for sharing! You can control the default inputs to https://huggingface.co/gilf/french-camembert-postag-model with the `widget:` metadata attribute<|||||>@julien-c Thank you for pointing out the `widget:` metadata attribute. I was not aware of that.
transformers
6,529
closed
skip onnx test until morgan comes back
temporary solution for #6181
08-17-2020 00:49:17
08-17-2020 00:49:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=h1) Report > Merging [#6529](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/72add6c98f2c0607f088fa0c78d40f11e2efa4c4&el=desc) will **decrease** coverage by `1.14%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6529/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6529 +/- ## ========================================== - Coverage 80.38% 79.23% -1.15% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22554 22233 -321 - Misses 5504 5825 +321 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.24% <0.00%> (-3.53%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6529/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=footer). Last update [72add6c...885985a](https://codecov.io/gh/huggingface/transformers/pull/6529?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,528
closed
attempt to fix onnx test
08-17-2020 00:41:14
08-17-2020 00:41:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=h1) Report > Merging [#6528](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/72add6c98f2c0607f088fa0c78d40f11e2efa4c4&el=desc) will **increase** coverage by `0.20%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6528/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6528 +/- ## ========================================== + Coverage 80.38% 80.59% +0.20% ========================================== Files 156 156 Lines 28058 28058 ========================================== + Hits 22554 22612 +58 + Misses 5504 5446 -58 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6528/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6528/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6528/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6528/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (+29.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=footer). Last update [72add6c...821c990](https://codecov.io/gh/huggingface/transformers/pull/6528?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,527
closed
Update bert-base-portuguese-cased and bert-large-portuguese-cased cards
08-16-2020 22:32:00
08-16-2020 22:32:00
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=h1) Report > Merging [#6527](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/00bb0b25ed66a4878f2e0ffdd1ca65b7684db57e&el=desc) will **decrease** coverage by `0.27%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6527/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6527 +/- ## ========================================== - Coverage 80.24% 79.97% -0.28% ========================================== Files 149 149 Lines 27680 27680 ========================================== - Hits 22211 22136 -75 - Misses 5469 5544 +75 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6527/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `87.73% <0.00%> (+63.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=footer). Last update [00bb0b2...1d585e3](https://codecov.io/gh/huggingface/transformers/pull/6527?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,526
closed
add BartConfig.force_bos_token_to_be_generated
This PR adds a config flag that makes a generation hack from bart's `adjust_logits_during_generation`, optional. This change (setting the flag to False) improves metrics for bart-large-xsum and mbart-large-en-ro, but not bart-large-cnn (hence the need for the flag). I remember @patrickvonplaten asked me about this in February, and I ran on CNN and then decided the hack needed to stay, could have been more careful. cc @patil-suraj ### Todo [x] test xsum [x] test cnn [x] test en-ro [x] update cnn config [x] test pegasus [x] update distilbart configs [] should max_length be changed in xsum config? ### TODO: - check pegasus - should max_length be 1 lower for configs where hack is removed? Yes. ### Metrics (all on val.source vs. val.target) the first number is runtime over the whole val set. "noforce" means this PR without any config change. (so BOS token is not forced to be generated.) **bart-large-cnn** **master: 86:23 {"rouge1": 44.79, "rouge2": 21.64, "rougeL": 31.18}** noforce: 87:40 {'rouge1': 44.26, 'rouge2': 21.22, 'rougeL': 30.72} **bart-large-xsum** master: 41:59, {'rouge1': 45.16, 'rouge2': 21.77, 'rougeL': 36.35} **noforce: 34:12, {'rouge1': 45.45, 'rouge2': 22.38, 'rougeL': 37.25}** **mbart-large-en-ro** master: 04:58 BLEU=27.83 **noforce: 04:42, BLEU=28.15** **pegasus-xsum** master: 56:12 {'rouge1': 46.69, 'rouge2': 24.13, 'rougeL': 38.79} **noforce: 54:15 {'rouge1': 46.98, 'rouge2': 24.43, 'rougeL': 39.11}** #### Commands ```bash export DATA_DIR=wmt_en_ro python run_eval.py facebook/mbart-large-en-ro \ $DATA_DIR/val.source gens/mbart-enro-branch-gens.txt \ --reference_path $DATA_DIR/val.target \ --score_path gens/mbart-enro-master-bleu.json \ --task translation_en_to_ro \ --device cuda \ --fp16 \ --bs 32 export DATA_DIR=$CNN_DIR python run_eval.py facebook/bart-large-cnn \ $DATA_DIR/val.source gens/cnn_val_generations_no_force.txt \ --reference_path $DATA_DIR/val.target \ --score_path gens/cnn_gen_no_force_rouge.txt \ --device cuda \ --fp16 \ --bs 32 export DATA_DIR=$CNN_DIR python run_eval.py facebook/bart-large-cnn \ $DATA_DIR/val.source gens/cnn_val_generations_master.txt \ --reference_path $DATA_DIR/val.target \ --score_path gens/cnn_gen_master_rouge.txt \ --device cuda \ --fp16 \ --bs 32 ```
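For anyone wanting to try the new flag directly, a quick sketch of toggling it at inference time (using the public `facebook/bart-large-xsum` checkpoint; the flag name comes from this PR):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large-xsum")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-xsum")
model.config.force_bos_token_to_be_generated = False  # skip the BOS-forcing hack during generation

batch = tok(["PG&E scheduled the blackouts in response to forecasts for high winds."], return_tensors="pt")
summary_ids = model.generate(batch["input_ids"], num_beams=4, max_length=60)
print(tok.batch_decode(summary_ids, skip_special_tokens=True))
```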
08-16-2020 20:24:49
08-16-2020 20:24:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=h1) Report > Merging [#6526](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **decrease** coverage by `2.39%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6526/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6526 +/- ## ========================================== - Coverage 80.59% 78.19% -2.40% ========================================== Files 156 156 Lines 28058 28055 -3 ========================================== - Hits 22612 21939 -673 - Misses 5446 6116 +670 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <100.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.56% <100.00%> (-0.20%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <100.00%> (ø)` | | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.43%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-32.38%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: | | ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6526/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=footer). Last update [2060181...5e951e1](https://codecov.io/gh/huggingface/transformers/pull/6526?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,525
closed
[s2s] docs, document desired filenames nicely
08-16-2020 20:24:36
08-16-2020 20:24:36
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=h1) Report > Merging [#6525](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **decrease** coverage by `0.55%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6525/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6525 +/- ## ========================================== - Coverage 80.59% 80.03% -0.56% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22612 22457 -155 - Misses 5446 5601 +155 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6525/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=footer). Last update [2060181...f5a5373](https://codecov.io/gh/huggingface/transformers/pull/6525?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,524
closed
[wip] experiment with mbart special tokens
08-16-2020 20:08:48
08-16-2020 20:08:48
transformers
6,523
closed
[testing] replace hardcoded paths to allow running tests from anywhere
Currently, some tests can be run only from the root of the project, since they hardcode relative paths. With this PR, it's now possible to run tests from the root of the project, from inside `tests`, or really anywhere. the gist of the change: ``` - PATH_SAMPLE_TEXT = "./tests/fixtures/sample_text.txt" + PATH_SAMPLE_TEXT = f"{get_tests_dir()}/fixtures/sample_text.txt" ``` now can do: ``` pytest tests/test_trainer.py ``` and: ``` cd tests pytest ./test_trainer.py ``` or even: ``` cd examples pytest ../tests/test_trainer.py ``` i.e. you no longer need to go to the root of the project to run tests (I'm trying to figure out the codecov issue, running multiple tests, so it's easier to run tests from within `tests` dir) p.s. CI failures are unrelated.
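For illustration, one way such a `get_tests_dir()` helper could be written - a sketch only, the actual utility added by the PR may differ:

```python
import os
from pathlib import Path

def get_tests_dir() -> str:
    # Resolve the tests/ directory from this file's location instead of the
    # current working directory, so tests can be launched from anywhere.
    return str(Path(__file__).resolve().parent)

# Usage inside a test module that lives under tests/:
PATH_SAMPLE_TEXT = os.path.join(get_tests_dir(), "fixtures", "sample_text.txt")
```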
08-16-2020 18:18:22
08-16-2020 18:18:22
CI failure is unrelated to this PR<|||||>so dope!
transformers
6,522
closed
Create model cards for indonesian models
Added model cards for indonesian gpt2-small, bert-base and roberta-base models. Thanks.
08-16-2020 17:54:26
08-16-2020 17:54:26
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=h1) Report > Merging [#6522](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **decrease** coverage by `1.35%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6522/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6522 +/- ## ========================================== - Coverage 80.59% 79.23% -1.36% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22612 22233 -379 - Misses 5446 5825 +379 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `89.97% <0.00%> (-3.80%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6522/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=footer). Last update [fe61c05...c056197](https://codecov.io/gh/huggingface/transformers/pull/6522?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,521
closed
allow spaces in bash args with "$@"
extends the bugfix for #6477 to more scripts.
08-16-2020 17:35:45
08-16-2020 17:35:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=h1) Report > Merging [#6521](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6521/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6521 +/- ## ======================================= Coverage 80.59% 80.59% ======================================= Files 156 156 Lines 28058 28058 ======================================= Hits 22612 22612 Misses 5446 5446 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=footer). Last update [2060181...4032ff5](https://codecov.io/gh/huggingface/transformers/pull/6521?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,520
closed
Can't load pegasus models.
Hi, I've tried loading Pegasus from PreTrainedModel and PreTrainedTokenizer but ran into a KeyError. I have transformers 3.0.2 - any idea why that might be happening?
08-16-2020 15:27:50
08-16-2020 15:27:50
Not sure what you mean here. Can you please post the traceback and the code that resulted in the error ?<|||||>This is what I got: ``` KeyError Traceback (most recent call last) <ipython-input-18-eb1fb8795ed4> in <module>() 2 from transformers import AutoTokenizer, AutoModelWithLMHead 3 ----> 4 tokenizer = AutoTokenizer.from_pretrained("google/pegasus-multi_news") 5 6 cla = AutoModelWithLMHead.from_pretrained("google/pegasus-multi_news") 1 frames /usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 204 config = kwargs.pop("config", None) 205 if not isinstance(config, PretrainedConfig): --> 206 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 207 208 if "bert-base-japanese" in str(pretrained_model_name_or_path): /usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 204 205 if "model_type" in config_dict: --> 206 config_class = CONFIG_MAPPING[config_dict["model_type"]] 207 return config_class.from_dict(config_dict, **kwargs) 208 else: KeyError: 'pegasus' ```<|||||>Yep, I've got this too. ![image](https://user-images.githubusercontent.com/17177967/90961867-b83c8e80-e4a3-11ea-8167-fe045b731c0c.png) <|||||>@HenryDashwood , @Kejia I can load both of these models on master branch. What version of transformers are you using, try doing this with master as it's not available in 3.0.2 release. `pip install -U git+https://github.com/huggingface/transformers.git`<|||||>Ah of course. Cheers!<|||||>@patil-suraj I tried installing the master version using the shared URL but still it wasn't updated to master version. Can you share more details for installing Transformers Master version?<|||||>Sorry, typo. should be `-U` and not `-u` `pip install -U git+https://github.com/huggingface/transformers.git`<|||||>I can't load the `PegasusTokenizer` for the checkpoint `google/pegasus-pubmed`: ```Python tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-pubmed") ``` Error: ``` Traceback (most recent call last): File "src/download_model.py", line 17, in <module> tokenizer = PegasusTokenizer.from_pretrained(config['model_name']) File "/home/rafael/miniconda3/envs/torch/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1584, in from_pretrained raise EnvironmentError( OSError: Model name 'google/pegasus-pubmed' was not found in tokenizers model name list (google/pegasus-xsum). We assumed 'google/pegasus-pubmed' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url. ``` When using the `AutoTokenizer`: ``` tokenizer = AutoTokenizer.from_pretrained("google/pegasus-pubmed") Traceback (most recent call last): File "/home/rafael/miniconda3/envs/torch/lib/python3.8/site-packages/transformers/configuration_utils.py", line 368, in get_config_dict resolved_config_file = cached_path( File "/home/rafael/miniconda3/envs/torch/lib/python3.8/site-packages/transformers/file_utils.py", line 957, in cached_path raise EnvironmentError("file {} not found".format(url_or_filename)) OSError: file google/pegasus-pubmed/config.json not found ``` ping @sshleifer I just installed it from master, still not working for me. 
Environment Info: - `transformers` version: 3.4.0 - Platform: Linux-5.4.0-51-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no<|||||>Can't replicate on master. Please post `transformers-cli env` info when having download issues, and try solutions above in the thread before posting.<|||||>Ok I solved my issue. **FYI:** The problem was that I saved the loaded model in the directory `google/pegesus-pubmed` once in an invalid way and from now on the `from_pretrained` method tried to load it from the local path first which did not work. Sorry for bothering you!
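To summarize the thread, a short sketch of the working path once a Pegasus-capable version of the library is installed (Pegasus is not in the 3.0.2 release; also note that a local folder with the same name as the model id will shadow the hub checkpoint in `from_pretrained`):

```python
import transformers
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Check the installed version first; Pegasus support landed after 3.0.2.
print(transformers.__version__)

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-pubmed")
model = AutoModelForSeq2SeqLM.from_pretrained("google/pegasus-pubmed")

text = "Medical researchers reported promising results from a new trial."
batch = tokenizer([text], return_tensors="pt", truncation=True)
summary = model.generate(batch["input_ids"], num_beams=4, max_length=64)
print(tokenizer.batch_decode(summary, skip_special_tokens=True))
```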
transformers
6,519
closed
[WIP] Create SequenceClassification MultipleChoice and TokenClassification for tf_longformer
Tests for the new classes are not passing and there is no test yet for TFLongformerClassificationHead (and I am not sure it's needed). @patrickvonplaten Resolves #6401
08-16-2020 14:28:13
08-16-2020 14:28:13
Hey @Groskilled - I added a couple of comments. Let's try to first make the torch tests pass. Let me know if you need more help after implementing the comments.<|||||>Closing PR due to inactivity
transformers
6,518
closed
[docs] Copy code button misses '...' prefixed code
Tested in a local build of the docs. e.g. Just above https://huggingface.co/transformers/task_summary.html#causal-language-modeling Copy will not copy the full code, e.g. `for token in top_5_tokens:` Instead we should get it all: ``` for token in top_5_tokens: print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token]))) ``` >>> for token in top_5_tokens: ... print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token]))) Distilled models are smaller than the models they mimic. Using them instead of the large versions would help reduce our carbon footprint. Distilled models are smaller than the models they mimic. Using them instead of the large versions would help increase our carbon footprint. Distilled models are smaller than the models they mimic. Using them instead of the large versions would help decrease our carbon footprint. Distilled models are smaller than the models they mimic. Using them instead of the large versions would help offset our carbon footprint. Distilled models are smaller than the models they mimic. Using them instead of the large versions would help improve our carbon footprint. Docs for the option fix: https://sphinx-copybutton.readthedocs.io/en/latest/
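A sketch of the fix: sphinx-copybutton can strip the prompts instead of skipping continuation lines. The option names below follow its documentation and should be double-checked against the version pinned in the docs requirements; they would go in the docs' `conf.py`:

```python
# docs/source/conf.py (excerpt) - strip ">>> " and "... " prompts when copying,
# so multi-line examples are copied in full.
copybutton_prompt_text = r">>> |\.\.\. "
copybutton_prompt_is_regexp = True
```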
08-16-2020 13:41:33
08-16-2020 13:41:33
Feels more like a false positive failure?
transformers
6,517
closed
Can't load t5-11b from pre-trained
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: - Python version: 3.8.2 - PyTorch version 1.6 ### Who can help T5: @patrickvonplaten ## Information The model I am using: T5 ## To reproduce Steps to reproduce the behavior: ``` import transformers transformers.T5ForConditionalGeneration.from_pretrained("t5-11b") ``` ``` OSError: Can't load weights for 't5-11b'. Make sure that: - 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models' - or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior the model should be loaded.
08-16-2020 12:14:23
08-16-2020 12:14:23
Hey @saareliad, can you try: ```python t5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-11b', use_cdn = False) ``` Also, see: https://github.com/huggingface/transformers/issues/5423 But the model cannot really be run before we take a closer look at: https://github.com/huggingface/transformers/pull/3578.<|||||>@patrickvonplaten mind adding a big disclaimer to the model card for this particular checkpoint? About what you just said (CDN limitation + model parallelism)<|||||>Thanks @patrickvonplaten , Our work successfully adds (several types of) model parallelism and trains T5 and several other large transformers, and has been integrated with HF for quite a while. Will open-source it soon :)
transformers
6,516
closed
How to enable sampling when using unigram tokenizers?
# ❓ Questions & Help ## Details It is mentioned on [Tokenizer summary][1] that when using unigram tokenizers, "you could sample one of the tokenization according to their probabilities". It however did not detail how to do so. Looking at the source-code, all of `AlbertTokenizer`, `T5Tokenizer` and `XLNetTokenizer` have an argument "sample" in their `_tokenize()` method, which in turns call the encoding method from sentencepiece with sampling enabled. ``` def _tokenize(self, text, sample=False): """ Tokenize a string. """ text = self.preprocess_text(text) if not sample: pieces = self.sp_model.EncodeAsPieces(text) else: pieces = self.sp_model.SampleEncodeAsPieces(text, 64, 0.1) ... ``` But when I check the usages of _tokenize, it seems to be only called without a "sample" argument, in `PreTrainedTokenizer`. ``` def split_on_tokens(tok_list, text): if not text.strip(): return [] if not tok_list: return self._tokenize(text) ``` I tried calling the `encode()` method of the tokenizer class with `sample=True`, but the keyword was not recognized. ``` albert_tokenizer = AlbertTokenizer('m.model') albert_tokenizer.encode(msg, sample=True) >> Keyword arguments {'sample': True} not recognized. ``` How should I enable sampling when using unigram tokenizers? [1]: https://huggingface.co/transformers/tokenizer_summary.html#unigram **A link to original question on the forum/Stack Overflow**: [https://stackoverflow.com/questions/63436152/how-to-enable-sampling-when-using-unigram-tokenizers-in-huggingface](url)
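One possible workaround, sketched below: bypass `encode()` and call the underlying SentencePiece processor directly, which is what `_tokenize(..., sample=True)` does internally. The checkpoint name is only an example, and the nbest/alpha values mirror the ones hard-coded in `_tokenize()`:

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
text = "Distilled models are smaller than the models they mimic."

# Deterministic (default) segmentation.
print(tokenizer.sp_model.EncodeAsPieces(text))

# Sampled segmentation; repeated calls may return different tokenizations.
pieces = tokenizer.sp_model.SampleEncodeAsPieces(text, 64, 0.1)
print(pieces)

# Convert the sampled pieces to ids if model input is needed.
print(tokenizer.convert_tokens_to_ids(pieces))
```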
08-16-2020 11:58:53
08-16-2020 11:58:53
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,515
closed
Support additional dictionaries for BERT Japanese tokenizers
This PR is to support additional dictionaries for BERT Japanese tokenizers. Specifically, we add support for `unidic_lite` and `unidic` dictionaries. Both dictionaries are pip-installable like `ipadic` and compatible with the `fugashi` package introduced in #6086 by @polm. (We are going to release newly pre-trained BERT models using these dictionaries as well.)
08-16-2020 08:18:58
08-16-2020 08:18:58
Thank you, @JetRunner and @polm. I've fixed the test-related issues and it should be OK now.
transformers
6,514
closed
Unexpected output(prediction) for TokenClassification, using pipeline
I trained the language model from scratch on my language. fine-tuned it but while predicting the results using "pipeline" but, i am not getting a proper tag for each token. it looks like it is not tokenizing the words properly and giving results on subword tokens, i also tried grouped_entities=True, but not working, my code - ``` import torch from transformers import AutoModelForTokenClassification, AutoTokenizer from transformers import TokenClassificationPipeline # Named entity recognition pipeline, passing in a specific model and tokenizer model = AutoModelForTokenClassification.from_pretrained("./sumerianRoBERTo-finetune") tokenizer = AutoTokenizer.from_pretrained("./sumerianRoBERTo-finetune") nlp_grouped = TokenClassificationPipeline( model=model, grouped_entities=True, tokenizer=tokenizer, ) print(nlp_grouped('szu-nigin 1(u) 7(disz) 1/3(disz) gin2 ku3-babbar')) ``` Results - ``` [{'entity_group': 'N', 'score': 0.7584937413533529, 'word': '<s>szu-'}, {'entity_group': 'V', 'score': 0.7493271827697754, 'word': 'nigin'}, {'entity_group': 'NU', 'score': 0.9881511330604553, 'word': ' 1'}, {'entity_group': 'N', 'score': 0.8397139310836792, 'word': 'u'}, {'entity_group': 'NU', 'score': 0.7238532304763794, 'word': ') 7'}, {'entity_group': 'N', 'score': 0.6140500903129578, 'word': 'disz)'}, {'entity_group': 'NU', 'score': 0.9929361343383789, 'word': ' 1'}, {'entity_group': 'N', 'score': 0.993495523929596, 'word': '/'}, {'entity_group': 'NU', 'score': 0.9997004270553589, 'word': '3'}, {'entity_group': 'N', 'score': 0.7956433892250061, 'word': 'disz) gin'}, {'entity_group': 'NU', 'score': 0.9885044693946838, 'word': '2'}, {'entity_group': 'NE', 'score': 0.6853057146072388, 'word': ' ku'}, {'entity_group': 'N', 'score': 0.9291318953037262, 'word': '3-'}, {'entity_group': 'AJ', 'score': 0.5223987698554993, 'word': 'babbar'}, {'entity_group': 'N', 'score': 0.8513995409011841, 'word': '</s>'}] ``` and when grouped_entities=False, I am getting ``` [{'word': '<s>', 'score': 0.5089993476867676, 'entity': 'N', 'index': 0}, {'word': 'szu', 'score': 0.9983197450637817, 'entity': 'N', 'index': 1}, {'word': '-', 'score': 0.7681621313095093, 'entity': 'N', 'index': 2}, {'word': 'nigin', 'score': 0.7493271827697754, 'entity': 'V', 'index': 3}, {'word': 'Ġ1', 'score': 0.9881511330604553, 'entity': 'NU', 'index': 4}, {'word': 'u', 'score': 0.8397139310836792, 'entity': 'N', 'index': 6}, {'word': ')', 'score': 0.4481121897697449, 'entity': 'NU', 'index': 7}, {'word': 'Ġ7', 'score': 0.9995942711830139, 'entity': 'NU', 'index': 8}, {'word': 'disz', 'score': 0.6592599749565125, 'entity': 'N', 'index': 10}, {'word': ')', 'score': 0.5688402056694031, 'entity': 'N', 'index': 11}, {'word': 'Ġ1', 'score': 0.9929361343383789, 'entity': 'NU', 'index': 12}, {'word': '/', 'score': 0.993495523929596, 'entity': 'N', 'index': 13}, {'word': '3', 'score': 0.9997004270553589, 'entity': 'NU', 'index': 14}, {'word': 'disz', 'score': 0.6896834969520569, 'entity': 'N', 'index': 16}, {'word': ')', 'score': 0.6974959969520569, 'entity': 'N', 'index': 17}, {'word': 'Ġgin', 'score': 0.9997506737709045, 'entity': 'N', 'index': 18}, {'word': '2', 'score': 0.9885044693946838, 'entity': 'NU', 'index': 19}, {'word': 'Ġku', 'score': 0.6853057146072388, 'entity': 'NE', 'index': 20}, {'word': '3', 'score': 0.901140570640564, 'entity': 'N', 'index': 21}, {'word': '-', 'score': 0.9571232199668884, 'entity': 'N', 'index': 22}, {'word': 'babbar', 'score': 0.5223987698554993, 'entity': 'AJ', 'index': 23}, {'word': '</s>', 'score': 0.8513995409011841, 
'entity': 'N', 'index': 24}] ``` while I am just looking for one label per space-tokenized word.
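As a rough, pipeline-free workaround (a sketch only - it ignores sentence context because each word is encoded separately, and it reuses the user's local checkpoint paths), one could tag each whitespace token with the prediction for its first sub-token:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained("./sumerianRoBERTo-finetune")
tokenizer = AutoTokenizer.from_pretrained("./sumerianRoBERTo-finetune")
model.eval()

sentence = "szu-nigin 1(u) 7(disz) 1/3(disz) gin2 ku3-babbar"

word_tags = []
for word in sentence.split():
    # Encode one whitespace token at a time; position 0 is <s>, position 1 is
    # the first real sub-token, whose prediction we take as the word-level tag.
    input_ids = tokenizer.encode(word, return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids)[0]
    label_id = logits[0, 1].argmax().item()
    word_tags.append((word, model.config.id2label[label_id]))

print(word_tags)
```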
08-16-2020 07:23:14
08-16-2020 07:23:14
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,513
closed
Longformer pretrained weights are not really pretrained?
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Ubuntu 18.04 - Python version: 3.7 - PyTorch version (GPU?): 1.6 Yes - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No @patrickvonplaten Model I am using (Bert, XLNet ...): Longformer The problem arises when using: * [V] my own modified scripts The tasks I am working on is: * [V] my own task or dataset: (give details below) ## To reproduce As a preprocessing step in my pipeline I train a pre-trained model on a subset of the Wikipedia dataset. When using RoBERTa the results of my fine-tuning steps are as follows (LM task): ![image](https://user-images.githubusercontent.com/31047807/90328740-71ccc880-dfa7-11ea-9df5-0163ae48533b.png) Unfortunately, when simply plugging in Longformer (and pointing the pretrained path to `allenai/longformer-base-4096`): ![image](https://user-images.githubusercontent.com/31047807/90328734-5e216200-dfa7-11ea-888b-53e759fd2e29.png) which suggests the weights are simply randomly initialized. In both cases (RoBERTa and Longformer) I get the message stating that the weights have been initialized. What went wrong?
08-16-2020 07:04:17
08-16-2020 07:04:17
Hey @dvirginz, Can you post a code snippet here so that we can reproduce your results or at least see what might have been falsely configured?<|||||>@patrickvonplaten Hi! I'm sorry to have troubled you, the problem was with how I initialized and loaded the weights for the model. It is now working, Thanks!
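A minimal sketch of loading the published checkpoint so the pretrained weights are actually picked up (masked-LM head used here, matching the LM task described above); the sanity-check sentence is just an illustration:

```python
from transformers import LongformerForMaskedLM, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")

# Quick sanity check: a randomly initialized model would give implausible,
# near-uniform predictions for the masked token.
inputs = tokenizer("Paris is the <mask> of France.", return_tensors="pt")
outputs = model(**inputs)
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = outputs[0][0, mask_index].argmax().item()
print(tokenizer.decode([predicted_id]))
```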
transformers
6,512
closed
[doc] lighter 'make test'
`make test` on my desktop with 12 CPU cores leads to `pytest -n 12`, which quickly starts swapping - I had to add a huge swap file, and the load is still extreme. So I'm proposing to document a lighter option. And, if you're open to `make test-light` or something like that, it would be most welcome. I tested that `-n 2` and `-n 4` make little difference to how long the full test suite takes when the GPU(s) are the bottleneck. I have been using `-n 3` so far for a balanced, not-too-high load that still finishes reasonably fast.
08-16-2020 05:14:56
08-16-2020 05:14:56
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=h1) Report > Merging [#6512](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **decrease** coverage by `0.89%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6512/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6512 +/- ## ========================================== - Coverage 80.37% 79.48% -0.90% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22552 22302 -250 - Misses 5506 5756 +250 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <ø> (ø)` | | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <ø> (-1.01%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6512/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=footer). Last update [24107c2...ca9b5e9](https://codecov.io/gh/huggingface/transformers/pull/6512?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,511
closed
[doc] Summary of the models fixes
* improve readability - typos, punctuation, clearer sentences 2 questions: 1. https://huggingface.co/transformers/model_summary.html#t5 I think the example is incorrect in the target part of the example. I think it's missing "cute". Before: > For instance, if we have the sentence “My dog is very cute .”, and we decide to remove the token dog, is and cute, the input becomes “My <x> very <y> .” and the target is “<x> dog is <y> . <z>” Proposed change: > For instance, if we have the sentence “My dog is very cute .”, and we decide to remove the tokens: "dog", "is" and "cute", the encoder input becomes “My <x> very <y> .” and the target input becomes “<x> dog is <y> cute .<z>” 2. At https://huggingface.co/transformers/model_summary.html#full-vs-sparse-attention - in the "LSH attention" section: > "The attention mask is modified to mask the current token (except at the first position) because it will give a query and key equal (so very similar to each other). " It's missing a word at the end of the sentence. Is it "equal attention"? - Also in the "Local attention" just after the previous section, it goes: "This is shown in Figure 2d of the paper" but there is no link or name of the paper it's referring to. @sgugger
08-16-2020 04:51:30
08-16-2020 04:51:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=h1) Report > Merging [#6511](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **decrease** coverage by `0.43%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6511/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6511 +/- ## ========================================== - Coverage 80.37% 79.94% -0.44% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22552 22431 -121 - Misses 5506 5627 +121 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <ø> (-0.69%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <ø> (-0.26%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `45.31% <0.00%> (-50.00%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `51.66% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6511/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=footer). Last update [24107c2...78bd9e3](https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,510
closed
new Makefile target: docs
- add a new "docs" target to validate docs and document it. This is needed to avoid `build_doc` CI failures when editing docs. Not sure why GitHub reports whitespace differences - odd.
08-16-2020 02:51:35
08-16-2020 02:51:35
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=h1) Report > Merging [#6510](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84d33317aec4e07ff2bc60721c81c9d519cefd3a&el=desc) will **increase** coverage by `2.74%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6510/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6510 +/- ## ========================================== + Coverage 77.77% 80.52% +2.74% ========================================== Files 156 156 Lines 28094 28094 ========================================== + Hits 21850 22622 +772 + Misses 6244 5472 -772 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (+0.18%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <0.00%> (+0.83%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.25%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+1.36%)` | :arrow_up: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <0.00%> (+1.63%)` | :arrow_up: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <0.00%> (+2.46%)` | :arrow_up: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6510/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=footer). Last update [84d3331...bd93fc8](https://codecov.io/gh/huggingface/transformers/pull/6510?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Yes, it does. Please let me know if it's a problem for future PRs. It surely adds to the reviewer's efforts I can see :( There must be a way to configure github ui to ignore whitespace difference in diff, right? On user side this can be done via adding &w=1: https://github.com/huggingface/transformers/pull/6510/files?diff=split&w=1 or via UI: https://stackoverflow.com/a/51755490/9201239 It doesn't look as if there is a project-wide setting to do that. I looked but I don't see this option. It looks like it was discussed here https://github.com/sindresorhus/refined-github/issues/191, but hasn't been resolved to a completion, only providing a UI to remove whitespace manually for each PR.
transformers
6,509
closed
[doc] multiple corrections to "Summary of the tasks"
- improve readability - typos, punctuation, clearer sentences, numerical bullets where needed - fix incorrect script names - add missing scripts and add hyperlinks where missing - a minor code improvement to aid understanding in the very last section (replaced output ids with translated text) One thing I didn't know how to fix is the outdated reference to a script that no longer exists: > If you would like to fine-tune a model on a summarization task, you may leverage the ``examples/summarization/bart/run_train.sh`` (leveraging pytorch-lightning) script. @sgugger
08-16-2020 02:33:36
08-16-2020 02:33:36
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=h1) Report > Merging [#6509](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **decrease** coverage by `1.16%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6509/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6509 +/- ## ========================================== - Coverage 80.37% 79.21% -1.17% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22552 22225 -327 - Misses 5506 5833 +327 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <ø> (-0.69%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <ø> (-2.51%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6509/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=footer). Last update [24107c2...bd1f4c5](https://codecov.io/gh/huggingface/transformers/pull/6509?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good to go thanks. (side-note: we tend to default to two underscores for links as otherwise, sphinx will always associate the description to the same link).<|||||>> side-note: we tend to default to two underscores for links as otherwise, sphinx will always associate the description to the same link I will know now - thank you: fixed here https://github.com/huggingface/transformers/pull/6541
transformers
6,508
closed
[doc] make the text more readable, fix some typos, add some disambiguation
@sgugger
08-15-2020 22:26:58
08-15-2020 22:26:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=h1) Report > Merging [#6508](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **increase** coverage by `0.16%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6508/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6508 +/- ## ========================================== + Coverage 80.37% 80.54% +0.16% ========================================== Files 156 156 Lines 28058 28058 ========================================== + Hits 22552 22599 +47 + Misses 5506 5459 -47 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <ø> (-0.69%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <ø> (ø)` | | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6508/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=footer). Last update [24107c2...76e7b5c](https://codecov.io/gh/huggingface/transformers/pull/6508?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,507
closed
trainer.train() fails on 'fmikaelian/flaubert-base-uncased-squad' fine-tuning SQuAD
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> transformers version: 3.0.2 Platform: Windows 10 Home v2004, Conda 4.8.3, Anaconda CLI 1.7.2 Python version: 3.7.1 PyTorch version (GPU?): 1.6.0 CUDA GPU Tensorflow version (GPU?): none Using GPU in script?: torch.set_default_tensor_type('torch.cuda.FloatTensor') Using distributed or parallel set-up in script?: dunno ### Who can help @fmikaelian @julien-c ## Information Model I am using : FlauBERT The problem arises when using the official example script. The tasks I am working on is an official GLUE/SQUaD task: fine-tuning 'fmikaelian/flaubert-base-uncased-squad' ## To reproduce ``` french_squad_train = load_dataset('json', data_files="""./French-Squad/SQuAD-v1.1-train_fr_ss999_awstart2_net.json""", field='data') french_squad_dev = load_dataset('json', data_files="""./French-Squad/SQuAD-v1.1-dev_fr_ss999_awstart2_net.json""", field='data') print (french_squad_train) ``` > {'train': Dataset(features: {'title': Value(dtype='string', id=None), 'paragraphs': [{'context': Value(dtype='string', id=None), 'qas': [{'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': [{'answer_start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}]}]}]}, num_rows: 442)} ``` model_name = 'fmikaelian/flaubert-base-uncased-squad' config = AutoConfig.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name, config=config) model.train() model.to('cuda') training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1, # total # of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=french_squad_train, # training dataset eval_dataset=french_squad_dev # evaluation dataset ) trainer.train() ``` > HBox(children=(FloatProgress(value=0.0, description='Epoch', max=1.0, style=ProgressStyle(description_width='i… > HBox(children=(FloatProgress(value=0.0, description='Iteration', max=1.0, style=ProgressStyle(description_widt… > --------------------------------------------------------------------------- > KeyError Traceback (most recent call last) > <ipython-input-13-3435b262f1ae> in <module> > ----> 1 trainer.train() > > C:\Anaconda3\envs\xxx\lib\site-packages\transformers\trainer.py in train(self, model_path) > 490 self._past = None > 491 > --> 492 for step, inputs in enumerate(epoch_iterator): > 493 > 494 # Skip past any already trained steps if resuming training > > C:\Anaconda3\envs\xxx\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs) > 226 def __iter__(self, *args, **kwargs): > 227 try: > --> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): > 229 # return super(tqdm...) will not catch exception > 230 yield obj > > C:\Anaconda3\envs\xxx\lib\site-packages\tqdm\std.py in __iter__(self) > 1128 > 1129 try: > -> 1130 for obj in iterable: > 1131 yield obj > 1132 # Update and possibly print the progressbar. 
> > C:\Anaconda3\envs\xxx\lib\site-packages\torch\utils\data\dataloader.py in __next__(self) > 361 > 362 def __next__(self): > --> 363 data = self._next_data() > 364 self._num_yielded += 1 > 365 if self._dataset_kind == _DatasetKind.Iterable and \ > > C:\Anaconda3\envs\xxx\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self) > 401 def _next_data(self): > 402 index = self._next_index() # may raise StopIteration > --> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration > 404 if self._pin_memory: > 405 data = _utils.pin_memory.pin_memory(data) > > C:\Anaconda3\envs\xxx\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index) > 42 def fetch(self, possibly_batched_index): > 43 if self.auto_collation: > ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] > 45 else: > 46 data = self.dataset[possibly_batched_index] > > C:\Anaconda3\envs\xxx\lib\site-packages\torch\utils\data\_utils\fetch.py in <listcomp>(.0) > 42 def fetch(self, possibly_batched_index): > 43 if self.auto_collation: > ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] > 45 else: > 46 data = self.dataset[possibly_batched_index] > > KeyError: 0 ## Expected behavior Ongoing training
08-15-2020 20:38:46
08-15-2020 20:38:46
I don't see how you preprocessed your dataset. From the code you pasted, there is no point where you actually tokenize the elements of `french_squad_train` and `french_squad_dev` (assuming `load_dataset` comes from the nlp library?)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
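For reference, a sketch of the missing step: `load_dataset(...)` returns a dict-like object keyed by split (hence the `KeyError: 0` when the DataLoader indexes it with an integer), and `Trainer` expects already-tokenized examples with answer spans expressed as token positions. The class below is illustrative only, and the conversion from character answer spans to token start/end positions is omitted:

```python
import torch
from torch.utils.data import Dataset

class SquadLikeDataset(Dataset):
    """Wraps tokenized question/context pairs into what Trainer expects."""

    def __init__(self, questions, contexts, start_positions, end_positions, tokenizer):
        # Tokenize pairs up front; start/end positions must already be token indices.
        self.encodings = tokenizer(questions, contexts, truncation=True, padding=True)
        self.start_positions = start_positions
        self.end_positions = end_positions

    def __len__(self):
        return len(self.start_positions)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["start_positions"] = torch.tensor(self.start_positions[idx])
        item["end_positions"] = torch.tensor(self.end_positions[idx])
        return item
```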
transformers
6,506
closed
Loading 'fmikaelian/flaubert-base-uncased-squad' throws unexpected, difficult to comprehend warning
## Environment info - `transformers` version: 3.0.2 - Platform: Windows 10 Home v2004, Conda 4.8.3, Anaconda CLI 1.7.2 - Python version: 3.7.1 - PyTorch version (GPU?): 1.6.0 CUDA GPU - Tensorflow version (GPU?): none - Using GPU in script?: torch.set_default_tensor_type('torch.cuda.FloatTensor') - Using distributed or parallel set-up in script?: dunno ### Who can help @fmikaelian @julien-c ## Information Model I am using : FlauBERT The problem arises when using the official example script. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: loading 'fmikaelian/flaubert-base-uncased-squad' ## To reproduce Steps to reproduce the behavior: ``` model_name = 'fmikaelian/flaubert-base-uncased-squad' #flaubert/flaubert_base_cased #flaubert/flaubert_large_cased config = AutoConfig.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name, config=config) ``` > Some weights of the model checkpoint at fmikaelian/flaubert-base-uncased-squad were not used when initializing FlaubertForQuestionAnsweringSimple: ['qa_outputs.start_logits.dense.weight', 'qa_outputs.start_logits.dense.bias', 'qa_outputs.end_logits.dense_0.weight', 'qa_outputs.end_logits.dense_0.bias', 'qa_outputs.end_logits.LayerNorm.weight', 'qa_outputs.end_logits.LayerNorm.bias', 'qa_outputs.end_logits.dense_1.weight', 'qa_outputs.end_logits.dense_1.bias', 'qa_outputs.answer_class.dense_0.weight', 'qa_outputs.answer_class.dense_0.bias', 'qa_outputs.answer_class.dense_1.weight'] > - This IS expected if you are initializing FlaubertForQuestionAnsweringSimple from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). > - This IS NOT expected if you are initializing FlaubertForQuestionAnsweringSimple from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). > Some weights of FlaubertForQuestionAnsweringSimple were not initialized from the model checkpoint at fmikaelian/flaubert-base-uncased-squad and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias'] > You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ## Expected behavior Not throwing a warning.
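The unused keys (`qa_outputs.start_logits.*`, `qa_outputs.end_logits.*`, `qa_outputs.answer_class.*`) look like the beam-search SQuAD head, so the checkpoint was presumably saved from `FlaubertForQuestionAnswering` rather than the `Simple` variant that `AutoModelForQuestionAnswering` resolves to. A sketch of loading it with the matching class, as an untested guess:

```python
from transformers import FlaubertForQuestionAnswering, FlaubertTokenizer

model_name = "fmikaelian/flaubert-base-uncased-squad"
tokenizer = FlaubertTokenizer.from_pretrained(model_name)
# Using the beam-search QA head should match the saved qa_outputs.* weights
# and avoid the "newly initialized" warning.
model = FlaubertForQuestionAnswering.from_pretrained(model_name)
```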
08-15-2020 20:30:56
08-15-2020 20:30:56
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,505
closed
[doc] typos
@sgugger
08-15-2020 19:42:29
08-15-2020 19:42:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=h1) Report > Merging [#6505](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **decrease** coverage by `0.44%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6505/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6505 +/- ## ========================================== - Coverage 80.38% 79.93% -0.45% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22554 22428 -126 - Misses 5504 5630 +126 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6505/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=footer). Last update [24107c2...be6a262](https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,504
closed
[doc] fix invalid env vars
- remove invalid `ENV_` prefix. - add a few `:` while at it @sgugger
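As a hedged illustration only (the PR diff is not shown here, so the variable below is an example of the kind of setting the docs describe, not necessarily one touched by this PR): cache-related settings are plain environment variables with no `ENV_` prefix.

```python
# Illustrative sketch: configure the cache location via an environment variable.
import os

os.environ["TRANSFORMERS_CACHE"] = "/path/to/cache"  # note: no "ENV_" prefix

from transformers import AutoModel  # import after setting the variable

model = AutoModel.from_pretrained("bert-base-cased")  # weights land in the cache above
```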
08-15-2020 19:35:04
08-15-2020 19:35:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=h1) Report > Merging [#6504](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **decrease** coverage by `2.17%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6504/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6504 +/- ## ========================================== - Coverage 80.38% 78.20% -2.18% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22554 21943 -611 - Misses 5504 6115 +611 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `66.00% <0.00%> (-3.06%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6504/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=footer). Last update [24107c2...8dda746](https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,503
closed
tf BERT model produced by convert_graph_to_onnx has unclear or wrong input shapes
I tried to create a minimal notebook to reproduce [this example nb](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb), but for TF. when I run ```python convert(framework="tf", model="bert-base-cased", output="onnx-test-tf/bert-base-cased.onnx", opset=11) ``` the output is ``` ONNX opset version set to: 11 Loading pipeline (model: bert-base-cased, tokenizer: bert-base-cased) HBox(children=(FloatProgress(value=0.0, description='Downloading', max=433.0, style=ProgressStyle(description_… HBox(children=(FloatProgress(value=0.0, description='Downloading', max=230.0, style=ProgressStyle(description_… HBox(children=(FloatProgress(value=0.0, description='Downloading', max=526681800.0, style=ProgressStyle(descri… Some weights of the model checkpoint at bert-base-cased were not used when initializing TFBertModel: ['mlm___cls', 'nsp___cls'] - This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the weights of TFBertModel were initialized from the model checkpoint at bert-base-cased. If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFBertModel for predictions without further training. Creating folder onnx-test-tf /!\ Please note TensorFlow doesn't support exporting model > 2Gb /!\ Using framework TensorFlow: 2.1.0, keras2onnx: 1.7.0 Found input input_ids with shape: {0: 'batch', 1: 'sequence'} Found input token_type_ids with shape: {0: 'batch', 1: 'sequence'} Found input attention_mask with shape: {0: 'batch', 1: 'sequence'} Found output output_0 with shape: {0: 'batch', 1: 'sequence'} Found output output_1 with shape: {0: 'batch'} WARNING:tensorflow:AutoGraph could not transform <bound method TFBertModel.call of <transformers.modeling_tf_bert.TFBertModel object at 0x7f3ddec46810>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4 WARNING: AutoGraph could not transform <bound method TFBertModel.call of <transformers.modeling_tf_bert.TFBertModel object at 0x7f3ddec46810>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4 WARNING:tensorflow:AutoGraph could not transform <bound method TFBertMainLayer.call of <transformers.modeling_tf_bert.TFBertMainLayer object at 0x7f3f0427cb50>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Num' WARNING: AutoGraph could not transform <bound method TFBertMainLayer.call of <transformers.modeling_tf_bert.TFBertMainLayer object at 0x7f3f0427cb50>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: module 'gast' has no attribute 'Num' [the same AutoGraph warning pair, each ending in `Cause: Bad argument number for Name: 3, expecting 4`, is then repeated verbatim for every TFBertSelfOutput, TFBertIntermediate and TFBertOutput sub-layer of the encoder and is omitted here] WARNING: AutoGraph could not transform <bound method TFBertOutput.call of <transformers.modeling_tf_bert.TFBertOutput object at 0x7f3ddd35eb50>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: Bad argument number for Name: 3, expecting 4 WARNING:tensorflow:AutoGraph could not transform <bound method TFBertPooler.call of <transformers.modeling_tf_bert.TFBertPooler object at 0x7f3ddcddda90>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4 WARNING: AutoGraph could not transform <bound method TFBertPooler.call of <transformers.modeling_tf_bert.TFBertPooler object at 0x7f3ddcddda90>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4 tf executing eager_mode: True tf.keras model eager_mode: False The ONNX operator number change on the optimization: 2575 -> 1670 ``` when I run inference on input ``` {'input_ids': [array([ 101, 146, 1169, 1631, 1103, 3974, 117, 1169, 1128, 136, 102], dtype=int32)], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)]} ``` I got the error ```python Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/nix/store/xws61xnjc03fjiwfh7ci5cwgg1chmp3l-python3.7-onnxruntime-1.4.0/lib/python3.7/site-packages/onnxruntime/capi/session.py", line 110, in run return self._sess.run(output_names, input_feed, run_options) onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: token_type_ids for the following indices index: 1 Got: 11 Expected: 7 Please fix either the inputs or the model. ``` When I run it via the C API of onnxruntime, I got this error message: ``` Error: OrtError(Status { error_code: InvalidArgument, error_msg: "Got invalid dimensions for input: attention_mask for the following indices\n index: 1 Got: 418 Expected: 7\n Please fix either the inputs or the model." }) ``` When I print out the model input shapes, I see ``` input 0: "attention_mask" ["N", 7] Int32 input 1: "input_ids" ["N", 7] Int32 input 2: "token_type_ids" ["N", 7] Int32 ``` It is not clear to me how I should prepare the input to run it with onnxruntime. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! 
--> - `transformers` version: 3.0.2 - Platform: Linux-5.4.57-x86_64-with-glibc2.2.5 - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.1.0 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [x] the official example scripts: (give details below) https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb ## To reproduce ```python from transformers.convert_graph_to_onnx import convert output_model_path = "onnx-test-tf/bert-base-cased.onnx" convert(framework="tf", model="bert-base-cased", output=output_model_path, opset=11) from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') encoded = tokenizer("I can feel the magic, can you?", add_special_tokens=True) import numpy inputs_onnx = {k_: [numpy.array(v_, dtype=numpy.int32)] for k_, v_ in encoded.items()} from onnxruntime import InferenceSession, SessionOptions, get_all_providers sess_options = SessionOptions() session = InferenceSession(output_model_path, sess_options, providers=['CPUExecutionProvider']) results = session.run(None, inputs_onnx) ```
08-15-2020 17:29:34
08-15-2020 17:29:34
It works when I use "this is a test." as the input text because it results in 7 tokens. Now the question is: why does the model expect a fixed number of tokens (7) as input? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@Zhen-hao it seems I have the same problem. Has your problem been fixed? <|||||>> @Zhen-hao it seems I have the same problem. Has your problem been fixed? I didn't fix it. I also didn't check with the latest released version.
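A plausible explanation — an assumption, since the keras2onnx trace is not shown here — is that the TF export baked in the sequence length of the dummy input used during conversion (7 tokens), even though dynamic axes were requested. Under that assumption, a minimal workaround sketch is to pad or truncate every input to that fixed length (truncating longer sentences loses tokens; re-exporting with truly dynamic axes would be the proper fix):

```python
# Sketch under the assumption that the exported graph is fixed at sequence length 7.
import numpy as np
from onnxruntime import InferenceSession, SessionOptions
from transformers import BertTokenizerFast

EXPORTED_SEQ_LEN = 7  # read off the model's reported input shapes ["N", 7]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
encoded = tokenizer(
    "I can feel the magic, can you?",
    truncation=True,              # longer inputs are cut down to the fixed length
    padding="max_length",
    max_length=EXPORTED_SEQ_LEN,
)
# Add the batch dimension explicitly so each input is shaped (1, EXPORTED_SEQ_LEN).
inputs_onnx = {k: np.array([v], dtype=np.int32) for k, v in encoded.items()}

session = InferenceSession(
    "onnx-test-tf/bert-base-cased.onnx",
    SessionOptions(),
    providers=["CPUExecutionProvider"],
)
outputs = session.run(None, inputs_onnx)
```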