Dataset columns:
- repo: string (1 distinct value)
- number: int64 (1 to 25.3k)
- state: string (2 distinct values)
- title: string (length 1 to 487)
- body: string (length 0 to 234k)
- created_at: string (length 19)
- closed_at: string (length 19)
- comments: string (length 0 to 293k)
transformers
6,202
closed
Cannot fine tune my distilbart-cnn-12-6 model because of cuda memory
I'm trying to fine tune my model like this:
```
import os
os.environ['PYTHONPATH'] += ":/content/transformers/examples"
%cd "/content/transformers/examples"
!python /content/transformers/examples/seq2seq/finetune.py \
    --learning_rate=3e-5 \
    --fp16 \
    --gpus 1 \
    --do_train \
    --do_predict \
    --n_val 1000 \
    --val_check_interval 0.1 \
    --sortish_sampler \
    --data_dir '/content/dataset' \
    --train_batch_size=1 \
    --eval_batch_size=1 \
    --output_dir=distilbart_multi_news \
    --num_train_epochs 1 \
    --model_name_or_path /content/model/best_tfmr
```
But even with a batch size of 1 I get this error:
File "/content/transformers/examples/seq2seq/finetune.py", line 344, in <module>
    main(args)
File "/content/transformers/examples/seq2seq/finetune.py", line 322, in main
    logger=logger,
File "/content/transformers/examples/lightning_base.py", line 330, in generic_train
    trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit
    self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train
    self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1076, in run_pretrain_routine
    False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 279, in _evaluate
    output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 452, in evaluation_forward
    output = model.validation_step(*args)
File "/content/transformers/examples/seq2seq/finetune.py", line 136, in validation_step
    return self._generative_step(batch)
File "/content/transformers/examples/seq2seq/finetune.py", line 163, in _generative_step
    generated_ids = self.model.generate(input_ids=source_ids, attention_mask=source_mask, use_cache=True,)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 459, in generate
    model_specific_kwargs=model_specific_kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 638, in _generate_beam_search
    outputs = self(**model_inputs)  # (batch_size * num_beams, cur_len, vocab_size)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py", line 1005, in forward
    lm_logits = F.linear(outputs[0], self.model.shared.weight, bias=self.final_logits_bias)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1676, in linear
    output = input.matmul(weight.t())
RuntimeError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 11.17 GiB total capacity; 10.59 GiB already allocated; 91.81 MiB free; 10.66 GiB reserved in total by PyTorch)
Any idea what to do?
08-02-2020 15:12:47
08-02-2020 15:12:47
Hi - it looks like the memory blowup occurs during beam search (presumably during eval). In `_generative_step` in `finetune.py`, feel free to play around with the `num_beams` parameter in the `self.model.generate` call. I can confirm that explicitly setting `num_beams=1` works on a V100 16GB; I believe the default is 5, and beam search is very memory- and computation-intensive. For your final model evaluations, you can find a bigger GPU or just run them on CPU; greedy decoding should be OK during model development. Also, if you look at the available distilled BART models (https://huggingface.co/sshleifer/distilbart-xsum-12-1), you'll see some options with fewer params than distilbart-cnn-12-6 (e.g., distilbart-cnn-12-1). Please let me know if this works!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
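To make the advice above concrete, here is a minimal sketch (not taken from `finetune.py`; the checkpoint and input text are placeholders) of how dropping `num_beams` to 1 turns the validation-time `generate` call into greedy decoding, which is far lighter on GPU memory:
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Placeholder checkpoint; swap in your own fine-tuned model path.
model_name = "sshleifer/distilbart-cnn-12-6"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

batch = tokenizer(["a long article to summarize ..."], return_tensors="pt", truncation=True)

with torch.no_grad():
    # num_beams=1 is greedy decoding; the default beam search keeps num_beams copies
    # of the decoder state alive and is what blew up memory in the traceback above.
    generated_ids = model.generate(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        use_cache=True,
        num_beams=1,
    )

print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```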
transformers
6,201
closed
Update model card
08-02-2020 11:44:33
08-02-2020 11:44:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=h1) Report > Merging [#6201](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82a0e2b67ec94d28b20e24b3393644002bbd0d4b&el=desc) will **decrease** coverage by `0.06%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6201/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6201 +/- ## ========================================== - Coverage 79.65% 79.58% -0.07% ========================================== Files 146 146 Lines 26607 26607 ========================================== - Hits 21194 21176 -18 - Misses 5413 5431 +18 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.55% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+5.76%)` | :arrow_up: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=footer). Last update [82a0e2b...e752ac3](https://codecov.io/gh/huggingface/transformers/pull/6201?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,200
closed
Update model card
08-02-2020 11:44:13
08-02-2020 11:44:13
transformers
6,199
closed
Update model card
08-02-2020 11:43:46
08-02-2020 11:43:46
transformers
6,198
closed
Update model card
08-02-2020 11:43:34
08-02-2020 11:43:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=h1) Report > Merging [#6198](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82a0e2b67ec94d28b20e24b3393644002bbd0d4b&el=desc) will **decrease** coverage by `0.80%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6198/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6198 +/- ## ========================================== - Coverage 79.65% 78.85% -0.81% ========================================== Files 146 146 Lines 26607 26607 ========================================== - Hits 21194 20981 -213 - Misses 5413 5626 +213 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=footer). Last update [82a0e2b...937eea4](https://codecov.io/gh/huggingface/transformers/pull/6198?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,196
closed
cleanup torch unittests
A large group of tests has been modified according to the request in https://github.com/huggingface/transformers/issues/5973. If this is what's needed, then running this magic perl sequence should take care of most of them:
```
perl -pi -e 's|^\s+self.check_loss_output\(result\)\n||' tests/test_modeling_bert.py
perl -0777 -pi -e 's|^\s+def check_loss_output\(self, result\):[\s\n]+ self.parent.assertListEqual\(list\(result\["loss"\].size\(\)\), \[]\)\s*\n|\n|msg' tests/test_modeling_bert.py
perl -0777 -pi -e 's#self.parent.assertListEqual\( [\s\n]* list\((result\w*)\[" ([^"]+) "\].(?:shape|size\(\))\),[\s\n]+\[ ( [^\]]* ) \],? [\s\n]* \) #self.parent.assertEqual($1.$2.shape, ($3))#xmsg' tests/test_modeling_bert.py
```
(edit: adjusted for multiple various inputs) Then add:
```
make style
```
to fix the style. One problem: not all results are objects, some are plain `dict` and can't be accessed with .key_name. See my comment below. @sshleifer
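For readers who do not want to decode the perl, here is a hypothetical before/after of the assertion pattern those regexes rewrite (the tester stand-in and shapes are made up for illustration, not copied from the test suite):
```python
import torch
from types import SimpleNamespace


class FakeParent:
    # Minimal stand-in for the unittest.TestCase that model testers delegate to.
    def assertEqual(self, a, b):
        assert a == b, f"{a} != {b}"

    def assertListEqual(self, a, b):
        assert list(a) == list(b), f"{a} != {b}"


parent = FakeParent()
batch_size, seq_length, vocab_size = 2, 7, 99
logits = torch.zeros(batch_size, seq_length, vocab_size)

# Old style (what the perl commands match): dict-style indexing plus a comparison
# of Python lists of sizes.
result = {"logits": logits}
parent.assertListEqual(list(result["logits"].size()), [batch_size, seq_length, vocab_size])

# New style (what they rewrite to): attribute access on the output object and a
# direct shape comparison. Plain-dict results are exactly the cases that still
# need manual attention, as noted above.
result = SimpleNamespace(logits=logits)
parent.assertEqual(result.logits.shape, (batch_size, seq_length, vocab_size))
```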
08-02-2020 06:15:28
08-02-2020 06:15:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=h1) Report > Merging [#6196](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82a0e2b67ec94d28b20e24b3393644002bbd0d4b&el=desc) will **decrease** coverage by `1.15%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6196/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6196 +/- ## ========================================== - Coverage 79.65% 78.49% -1.16% ========================================== Files 146 146 Lines 26607 26607 ========================================== - Hits 21194 20886 -308 - Misses 5413 5721 +308 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.30% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+5.76%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+23.67%)` | :arrow_up: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=footer). Last update [82a0e2b...f559e11](https://codecov.io/gh/huggingface/transformers/pull/6196?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I added one more `check_loss_output` that was missing, no changes otherwise. CI is randomly failing again...
transformers
6,195
closed
Encoder decoder config docs
As discussed on #5826, this PR adds more details on how to load encoder/decoder config objects from pretrained folders and how to instantiate encoder_decoder pretrained models given their corresponding configuration objects (useful for loading pre-trained models and modifying some config members for fine-tuning).
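As a rough illustration of the kind of snippet the documentation gains (the checkpoint names here are placeholders, not necessarily the ones used in the PR), a sketch of building the configuration first and the model from it:
```python
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel

# Load the per-model configs from pretrained checkpoints/folders and tweak members
# before fine-tuning.
config_encoder = BertConfig.from_pretrained("bert-base-uncased")
config_decoder = BertConfig.from_pretrained("bert-base-uncased")
config_decoder.is_decoder = True
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)

# Instantiate the paired model either freshly initialized from that config...
model = EncoderDecoderModel(config=config)

# ...or with both halves loaded from pretrained weights.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```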
08-02-2020 06:03:43
08-02-2020 06:03:43
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=h1) Report > Merging [#6195](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d8dbf3b75d58667e2ecaf42b4aa076e83d034d26&el=desc) will **increase** coverage by `0.32%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6195/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6195 +/- ## ========================================== + Coverage 79.47% 79.80% +0.32% ========================================== Files 146 146 Lines 26607 26607 ========================================== + Hits 21146 21233 +87 + Misses 5461 5374 -87 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ø> (ø)` | | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <0.00%> (+25.66%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+34.61%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=footer). Last update [d8dbf3b...4a03156](https://codecov.io/gh/huggingface/transformers/pull/6195?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@afcruzs - thanks a lot for your PR. I changed the examples a bit trying to make sure: 1) Examples in `config` only concern the EncoderDecoderConfig 2) Examples in `model` only concern the EncoderDecoderModel sorry for meddling in your PR.<|||||>Thanks for improving the examples! I've fixed the whitespace issue
transformers
6,194
closed
LongformerTokenizerFast gives error
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer tensorflow: @jplu documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ X ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X ] my own task or dataset: (give details below) ## To reproduce using LongformerTokenizerFast gives error. but using LongformerTokenizer works without any issues keeping everything same ``` --------------------------------------------------------------------------- Exception Traceback (most recent call last) <ipython-input-39-263240bbee7e> in <module> ----> 1 main() <ipython-input-37-2c27a8a4db79> in main() 99 ) 100 --> 101 train_dataset = CustomDataset(data_args, tokenizer=tokenizer) if training_args.do_train else None 102 eval_dataset = CustomDataset(data_args, tokenizer=tokenizer, mode="test") if training_args.do_eval else None 103 <ipython-input-36-85278feb74ec> in __init__(self, args, tokenizer, limit_length, mode) 184 max_length=args.max_seq_length, 185 label_list=label_list, --> 186 output_mode=self.output_mode, 187 ) 188 start = time.time() /opt/conda/lib/python3.7/site-packages/transformers/data/processors/glue.py in glue_convert_examples_to_features(examples, tokenizer, max_length, task, label_list, output_mode) 63 return _tf_glue_convert_examples_to_features(examples, tokenizer, max_length=max_length, task=task) 64 return _glue_convert_examples_to_features( ---> 65 examples, tokenizer, max_length=max_length, task=task, label_list=label_list, output_mode=output_mode 66 ) 67 /opt/conda/lib/python3.7/site-packages/transformers/data/processors/glue.py in _glue_convert_examples_to_features(examples, tokenizer, max_length, task, label_list, output_mode) 133 max_length=max_length, 134 padding="max_length", --> 135 truncation=True, 136 ) 137 /opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 1918 
return_length=return_length, 1919 verbose=verbose, -> 1920 **kwargs, 1921 ) 1922 else: /opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2103 return_length=return_length, 2104 verbose=verbose, -> 2105 **kwargs, 2106 ) 2107 /opt/conda/lib/python3.7/site-packages/transformers/tokenization_gpt2.py in _batch_encode_plus(self, *args, **kwargs) 385 ) 386 --> 387 return super()._batch_encode_plus(*args, **kwargs) 388 389 def _encode_plus(self, *args, **kwargs) -> BatchEncoding: /opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 378 else: 379 encodings = self._tokenizer.encode_batch( --> 380 batch_text_or_text_pairs, add_special_tokens=add_special_tokens, is_pretokenized=is_pretokenized 381 ) 382 /opt/conda/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py in encode_batch(self, inputs, is_pretokenized, add_special_tokens) 247 raise ValueError("encode_batch: `inputs` can't be `None`") 248 --> 249 return self._tokenizer.encode_batch(inputs, is_pretokenized, add_special_tokens) 250 251 def decode(self, ids: List[int], skip_special_tokens: Optional[bool] = True) -> str: Exception: Truncation error: Specified max length is too low to respect the various constraints ```
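The reporter's script was not posted, so as a hedged guess this is the kind of call implied by the stack trace (glue_convert_examples_to_features batch-encodes text pairs with `padding="max_length"` and `truncation=True`); the sentences and `max_length` below are placeholders:
```python
from transformers import LongformerTokenizer, LongformerTokenizerFast

text_pairs = [("a premise sentence", "a hypothesis sentence")]

slow = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
fast = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")

# The slow tokenizer reportedly works...
slow_out = slow.batch_encode_plus(text_pairs, max_length=128, padding="max_length", truncation=True)

# ...while the fast tokenizer is reported to raise
# "Truncation error: Specified max length is too low to respect the various constraints".
fast_out = fast.batch_encode_plus(text_pairs, max_length=128, padding="max_length", truncation=True)
```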
08-02-2020 04:10:00
08-02-2020 04:10:00
@patrickvonplaten please see, if relevant.<|||||>Hi @manishiitg, can you post the command/code you used to run this example? We won't be able to reproduce it from the stack trace alone.<|||||>Thanks for answering @patil-suraj. As @patil-suraj said, we need some code and also the environment info (it's empty above) to better answer here :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,193
closed
Some weights not initialized in pre-trained RobertaForMaskedLM
The bug is similar to #2202. I am trying to evaluate MLM perplexity (without training/finetuning) using Roberta with `run_language_modeling.py` (from the [official example](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)). However, some weights seem to be reinitialized instead of being loaded from the pretrained Roberta checkpoint.

## To Reproduce (~~with master branch~~):
```
import logging
logging.basicConfig(level=logging.INFO)
from transformers import RobertaForMaskedLM
_ = RobertaForMaskedLM.from_pretrained('roberta-base')
```
It gives the following warning message:
```
WARNING:transformers.modeling_utils:Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.embeddings.position_ids', 'lm_head.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
The perplexities I get on direct evaluation on Wikitext-2/103 datasets are also much higher than with the official Roberta implementation from fairseq. I suspect this could be the reason.
08-01-2020 23:07:00
08-01-2020 23:07:00
Hello! These warnings are not important, as these weights are not necessary (the position IDs is a buffer that is initialized if not defined, and the lm head decoder bias already exists in the lm head decoder linear layer). On the `master` branch we've updated the warnings to only list those that could have an impact, so running your code on the current master branch results in: ```py WARNING:transformers.modeling_utils:Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['lm_head.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` So nothing wrong here! Did you make sure you were using the exact same implementation of perplexity calculation than Fairseq?<|||||>Thanks for the clarification! The difference in `fairseq` and `transformers` ppl is coming from different implementation - `e^cross_entropy` ([in transformers](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L274)) vs `2^cross_entropy` ([in fairseq](https://github.com/pytorch/fairseq/blob/master/fairseq/utils.py#L418)). Nothing wrong here! : )<|||||>@LysandreJik Hi, did not understand "the lm head decoder bias already exists in the lm head decoder linear layer", but it still warnings that lm_head.decoder.bias is newly initialized? I'm confused, sorry. Could you elaborate more about why this is not a problem?<|||||>Hi @cloudygoose, I recommend you take a look at the following class: https://github.com/huggingface/transformers/blob/b8462b5b2ac84f63293900ae168dbde039443a22/src/transformers/models/roberta/modeling_roberta.py#L1065-L1087 The error tells you that the following weight: `lm_head.decoder.bias` was not initialized from the model checkpoint: the model checkpoint did not contain that weight. However, the weight `lm_head.bias` isn't in the error because it was correctly initialized. If you take a look at the last line of the initialization of the class above, you'll see: ```py # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` self.decoder.bias = self.bias ``` Therefore, the `lm_head.decoder.bias` weight that was not initialized is now set to the value of `self.bias`, which is correctly initialized. Let me know if something isn't clear.<|||||>Hi @LysandreJik, I am trying to make a custom model based on Roberta. I use `RobertaModel` internally. The warning is: ```bash Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModelForTokenAndSpans: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight'] - This IS expected if you are initializing RobertaModelForTokenAndSpans from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaModelForTokenAndSpans from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of RobertaModelForTokenAndSpans were not initialized from the model checkpoint at roberta-large and are newly initialized: ['roberta.embeddings.position_ids', 'classifier.weight', 'classifier.bias', 'qa_outputs.weight', 'qa_outputs.bias'] ``` `classifier` and `qa_outputs` are my own layers. I assume from the previous release (4.2.2), this warning has not been removed? Will having `roberta.embeddings.position_ids` not initialized from roberta affect things in any way?<|||||>This warning shouldn't be removed, it's telling you what it initializes randomly, and what isn't used. Apparently it's not an issue in your case since you're aware of it, so that's great! Not an issue for the position IDs, this warning should have been removed in version v4.3.2, though!<|||||>I meant the warning about for position IDs. Thanks a lot @LysandreJik :)<|||||>Happy to help!
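The base mismatch mentioned a few comments up is easy to check numerically; a small sketch with a made-up loss value (not taken from either codebase):
```python
import math

loss_nats = 3.2                      # hypothetical cross-entropy in nats, as PyTorch's CrossEntropyLoss returns it
loss_bits = loss_nats / math.log(2)  # the same quantity expressed in bits (log base 2)

print(math.exp(loss_nats))  # perplexity as computed in run_language_modeling.py: e ** loss
print(2 ** loss_bits)       # perplexity as fairseq computes it, 2 ** loss, once the loss is in bits -- same value
print(2 ** loss_nats)       # mixing the conventions (2 ** a nats-based loss) gives a misleadingly smaller number
```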
transformers
6,192
closed
GPT2 crashing at loss.backward()
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Ubuntu - Python version: 3.6 - PyTorch version (GPU?): 1.5.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes @LysandreJik ## Information Trying to finetune GPT2 model but the GPU is crashing after `loss.backward()`. I thought it might be just my code but I ran some different code involving finetuning GPT2 and that as well crashed in the same manner. Getting this warning as well. ``` WARNING - transformers.modeling_utils - Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` A week or 2 back, everything was working fine but now the same code is crashing on `loss.backward()`.
08-01-2020 19:01:41
08-01-2020 19:01:41
Hi @vibhavagarwal5 , you can safely ignore this warning, that issue is resolved in this PR #5922 . Do you think you can post the stack-trace after the crash, and also the version and memory of the GPU used<|||||>**GPU DETAILS:** NVIDIA 2080 TI (12GB) NVIDIA-SMI 440.95.01 Driver Version: 440.95.01 CUDA Version: 10.2 **STACK TRACE:** ``` Epoch: 0%| | 0/3 [00:00<?, ?it/sTraceback (most recent call last): | 0/87472 [00:00<?, ?it/s] File "finetune_lm.py", line 553, in <module> main() File "finetune_lm.py", line 507, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "finetune_lm.py", line 157, in train loss.backward() File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` (createCublasHandle at /opt/conda/conda-bld/pytorch_1587428266983/work/aten/src/ATen/cuda/CublasHandlePool.cpp:8) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x4e (0x7f6cca012b5e in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: <unknown function> + 0xdba405 (0x7f6ccaff9405 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so) frame #2: at::cuda::getCurrentCUDABlasHandle() + 0x94c (0x7f6ccaffa1ec in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so) frame #3: <unknown function> + 0xdafb01 (0x7f6ccafeeb01 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so) frame #4: <unknown function> + 0x1263db7 (0x7f6ccb4a2db7 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so) frame #5: THCudaTensor_addmm + 0x5c (0x7f6ccb4a84ac in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so) frame #6: <unknown function> + 0xea5f28 (0x7f6ccb0e4f28 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so) frame #7: <unknown function> + 0xdc92e8 (0x7f6ccb0082e8 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so) frame #8: <unknown function> + 0xe224d0 (0x7f6cf5c264d0 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #9: <unknown function> + 0x29f9d0e (0x7f6cf77fdd0e in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #10: <unknown function> + 0xe224d0 (0x7f6cf5c264d0 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #11: at::Tensor::mm(at::Tensor const&) const + 0xf0 (0x7f6cf57ea180 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #12: <unknown function> + 0x264517c (0x7f6cf744917c in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #13: torch::autograd::generated::MmBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x151 (0x7f6cf7449f81 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #14: <unknown function> + 
0x2ae8215 (0x7f6cf78ec215 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #15: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 0x16f3 (0x7f6cf78e9513 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #16: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) + 0x3d2 (0x7f6cf78ea2f2 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #17: torch::autograd::Engine::thread_init(int) + 0x39 (0x7f6cf78e2969 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #18: torch::autograd::python::PythonEngine::thread_init(int) + 0x38 (0x7f6cfac29558 in /home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #19: <unknown function> + 0xc819d (0x7f6d1200b19d in /mnt/c7cfa338-89cd-4d15-b0b9-f1befc9a2c68/vibhav/anaconda3/envs/vesnli/bin/../lib/libstdc++.so.6) frame #20: <unknown function> + 0x76db (0x7f6d269a16db in /lib/x86_64-linux-gnu/libpthread.so.0) frame #21: clone + 0x3f (0x7f6d266caa3f in /lib/x86_64-linux-gnu/libc.so.6) ```<|||||>Seems like memory error, you can try running with batch size 1 and see if it still crashes.<|||||>Nope not a memory error. Still crashing with batch size 1<|||||>Which GPT-2 model are you using ? <|||||>Tried both 'gpt2' and 'gpt2-medium'. Same issue<|||||>Could you run it in on CPU, erros will be more readable.<|||||>``` raceback (most recent call last): | 0/174944 [00:00<?, ?it/s] File "finetune_lm.py", line 553, in <module> main() File "finetune_lm.py", line 507, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "finetune_lm.py", line 144, in train outputs = model(inputs, labels=labels) File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 601, in forward output_hidden_states=output_hidden_states, File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 469, in forward inputs_embeds = self.wte(input_ids) File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/nn/functional.py", line 1724, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self ```<|||||>So looks like error is form the embedding layer, what's the shape of your `inputs`<|||||>bs x seq_len (4x63 or 4*52 ... anything)<|||||>could you be more specific, what is your seq_len ? 
<|||||>Input: ``` torch.Size([4, 47]) Premise: Children smiling and waving at camera Hypothesis: There are children present [EXP] The children must be present to see them smiling and waving. [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] [EOS] ``` label: ``` torch.Size([4, 47]) [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, 'The', 'Ġchildren', 'Ġmust', 'Ġbe', 'Ġpresent', 'Ġto', 'Ġsee', 'Ġthem', 'Ġsmiling', 'Ġand', 'Ġwaving', '.', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]', '[EOS]'] ```<|||||>Did you add any new tokens to the tokenizer ? get the shape of embeddings using `model.transformer.wte.weight.shape` the first dim of shape and len of tokenizer should match. See if this asserts is True ```python3 assert modle.transformer.wte.weight.shape[0] == len(tokenizer) ``` if not then that means, your vocab size and embed input size are not matching. If you added new tokens to the vocab, you'll need to resize the token embeddings of the model. You can resize it using ```python3 model.resize_token_embeddings(len(tokenizer)) ``` <|||||>I'm doing this already.. ```python tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT) model.resize_token_embeddings(len(tokenizer)) ```<|||||>I was able to reprouce the bug only when embedding size and vocab len didn't match. `assert modle.transformer.wte.weight.shape[0] == len(tokenizer)` did this assert result in `False` ?<|||||>No, its True because I did the model.resize so this should have anyways asserted True.<|||||>For whatever reason, `transformers v2.3` is working and the latest `3.x`<|||||>Hi @vibhavagarwal5, could you provide a sample script so that we may reproduce on our side? Something with a sample text that makes it crash would be wonderful, if you have one.<|||||>I figured it out, it was due to the change in ignore_index=-100 instead of -1 in the cross entropy loss which was causing the issue. I'll close this. <|||||>Glad you could find the source of the issue!
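Putting the two pitfalls from this thread together, a hedged sketch of a working setup (the special tokens, text, and masked positions are made up for illustration): resize the embeddings after adding tokens, and mark ignored label positions with -100 rather than -1:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["[EXP]", "[EOS]"]})  # hypothetical special tokens

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))  # without this, the new token ids index past the embedding table

enc = tokenizer("Premise: ... Hypothesis: ... [EXP] an explanation [EOS]", return_tensors="pt")
input_ids = enc["input_ids"]

labels = input_ids.clone()
labels[:, :5] = -100  # positions to exclude from the loss must be -100, not -1, per the resolution above

outputs = model(input_ids, labels=labels)
loss = outputs[0]
loss.backward()
```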
transformers
6,191
closed
How to integrate the Pyro module with HuggingFace Transformers?
Hello, I am trying to convert the HuggingFace Transformer into a Bayesian neural network by using the `Pyro` module. I provided my code below. Everything works well except I am stuck at the line `svi_loss = svi.step(input_ids = input_ids, attention_mask = attention_mask, labels = label)`. At that line an error is generated, because after converting the HuggingFace Transformer into a Pyro model, the new model does not have any set parameter (since it is a Bayesian model...so the weights for a Pyro model are not fixed, meaning the weights are sampled from a statistical distribution). Is there any way that I can get around this issue? I have also posted the similar question on Pyro forum. Thank you, CODE: ```python import torch from torch import distributions from transformers import RobertaTokenizer, RobertaForMultipleChoice, AdamW, get_constant_schedule import pyro import pyro.infer import pyro.optim import pyro.distributions as dist import pyro.nn.module as module import pyro.infer.autoguide.guides as guides from torch import nn from pyro.optim import Adam from pyro.infer import SVI from pyro.infer import Trace_ELBO from pyro.infer import Predictive # get the pre-trained HuggingFace RobertaForMultipleChoice and resize the token embeddings # after adding the special token model_RobertaForMultipleChoice = RobertaForMultipleChoice.from_pretrained('Roberta-base') # convert the HuggingFace model into a pyro model module.to_pyro_module_(model_RobertaForMultipleChoice) for m in model_RobertaForMultipleChoice.modules(): for name, value in list(m.named_parameters(recurse=False)): setattr(m, name, module.PyroSample(prior=dist.Normal(0, 1) .expand(value.shape) .to_event(value.dim()))) # define parameters for training guide_delta = guides.AutoDelta(model_RobertaForMultipleChoice) optimizer_2 = Adam({"lr": 0.000000055}) scheduler_2 = pyro.optim.StepLR({'optimizer': optimizer_2, 'optim_args': {'lr': 0.000000055}}) svi_delta = SVI(model_RobertaForMultipleChoice, guide_delta, optimizer_2, loss=Trace_ELBO()) # training loop for m in range(num_iter): # calculate the loss and take a gradient step for svi # ERRORS OCCUR HERE svi_loss = svi.step(input_ids = input_ids, attention_mask = attention_mask, labels = label) # update the with the calculated loss total_svi_loss = total_svi_loss + svi_loss if m % log_interval == 0 and m > 0: cur_svi_loss = total_svi_loss / log_interval print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.9f} | loss {:5.4f} | ppl {:8.4f}'.format( epoch, m, int(num_lines_train/4), scheduler.get_lr()[0], cur_svi_loss, math.exp(cur_svi_loss))) total_svi_loss = 0 ```
08-01-2020 18:24:51
08-01-2020 18:24:51
Please let me know how this progresses; I am also interested in doing this.<|||||>Hello, would it be possible for your team to also work on GPT2LMHeadModel for this same issue? Thank you.<|||||>Hey @h56cho - did you find a solution here? It would be great if you could post code :-) <|||||>Likewise, I would be very interested in what came of this.
transformers
6,190
closed
Add support for truncation argument when calling a Pipeline
# 🚀 Feature request

Currently, only the `padding` argument [is supported](https://github.com/huggingface/transformers/blob/a39dfe4fb122c11be98a563fb8ca43b322e01036/src/transformers/pipelines.py#L500) when calling a pipeline, and it's not possible to pass a `truncation` argument. For example, running the following code sample would raise an error:
```python
import transformers as trf

model = trf.pipeline(task='feature-extraction', model='bert-base-cased')
output = model('a sample text', padding=False, truncation=True)
```
## Motivation

If toggling padding is supported, why shouldn't truncation be?

## Your contribution

I think that to achieve this, a `truncation` argument only needs to be added to the `_parse_and_tokenize` method (in the same way as `padding`) and passed on when calling the tokenizer. If that's the case, I would be willing to work on a PR.
08-01-2020 16:04:58
08-01-2020 16:04:58
Is there any workaround for this? I'm seeing `Token indices sequence length is longer than the specified maximum sequence length for this model (... > 512). Running this sequence through the model will result in indexing errors` when using TextClassificationPipeline. (This is preventing me from upgrading to 3.x.)<|||||>Routing this to @mfuntowicz @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>+1 on this<|||||>Hi, even though this has been closed as stale without comment or a fix, it seems that in recent versions you can in fact pass both `truncation` and `padding` arguments to the pipeline's `__call__` method, and it will correctly use them when tokenizing. I've tested it with long texts that fail without the truncation argument, and it seems to work as expected.
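Following the last comment, a hedged sketch of the workaround on recent versions (the exact minimum version is not pinned down here; the model and text are placeholders):
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")

long_text = "word " * 5000  # far longer than the model's 512-token limit
result = classifier(long_text, padding=True, truncation=True)  # the extra kwargs are forwarded to the tokenizer
print(result)
```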
transformers
6,189
closed
Support new tokenizers in distillation example
I'm creating a distilled model based on a new transformers model, and needed these two lines of the examples changed to make that process easier (a rough sketch of both changes is shown below):
- Change the filename of the output binarized text vectors to replace '/' with '-'; for example, the tokenizer 'monsoon-nlp/hindi-bert' will then output to a file instead of creating a new directory
- Load max_model_input_size / max_position_embeddings from teacher_config_class instead of from a hardcoded list of common tokenizers in the tokenizer class
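A rough sketch of both changes (the variable names are illustrative, not the exact ones in the distillation scripts):
```python
from transformers import AutoConfig, AutoTokenizer

tokenizer_name = "monsoon-nlp/hindi-bert"

# 1) Make the binarized-output filename safe for namespaced model ids:
#    'monsoon-nlp/hindi-bert' would otherwise be treated as a directory.
output_file = f"binarized_text.{tokenizer_name.replace('/', '-')}.pickle"

# 2) Read the maximum sequence length from the teacher's config instead of a
#    hardcoded per-tokenizer table.
teacher_config = AutoConfig.from_pretrained(tokenizer_name)
max_model_input_size = teacher_config.max_position_embeddings

tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
print(output_file, max_model_input_size)
```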
08-01-2020 15:56:47
08-01-2020 15:56:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=h1) Report > Merging [#6189](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc&el=desc) will **increase** coverage by `1.42%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6189/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6189 +/- ## ========================================== + Coverage 78.29% 79.71% +1.42% ========================================== Files 146 146 Lines 26607 26607 ========================================== + Hits 20832 21210 +378 + Misses 5775 5397 -378 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-3.51%)` | :arrow_down: | | [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `81.00% <0.00%> (+14.00%)` | :arrow_up: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.14% <0.00%> (+24.04%)` | :arrow_up: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <0.00%> (+25.66%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6189/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=footer). Last update [8edfaaa...6a2d21e](https://codecov.io/gh/huggingface/transformers/pull/6189?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,188
closed
taeminlee/kogpt2 not working
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - None: hosted inference API ### Who can help @LysandreJik @julien-c @TevenLeScao ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [O] the official example scripts: Hosted Inference API testing page * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [O] my own task or dataset: Text generation, or anything ## To reproduce Steps to reproduce the behavior: 1. https://huggingface.co/taeminlee/kogpt2?text=제+이름은+홍길동 2. Or type any text 3. Model returns just the text ## Expected behavior Generated text should be returned, but isn't.
08-01-2020 15:34:30
08-01-2020 15:34:30
~It seems to work? What's happening on your end?~ Indeed, the generated text seems to be truncated.<|||||>@ksjae, I can't seem to make that model work in my environment, without relying on the inference API. ```py from transformers import AutoModelWithLMHead, pipeline from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained("taeminlee/kogpt2", do_lower_case=False) print(tokenizer.tokenize("제 이름은 홍길동")) # ['ì', 'ł', 'ľ', 'Ġ', 'ì', 'Ŀ', '´', 'ë', '¦', 'Ħ', 'ì', 'Ŀ', 'Ģ', 'Ġ', 'í', 'Ļ', 'į', 'ê', '¸', '¸', 'ë', 'ı', 'Ļ'] # probably not what we're looking for print(tokenizer.decode(tokenizer.encode("제 이름은 홍길동"))) # <unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk><unk>�<unk><unk><unk><unk><unk><unk><unk><unk> # Definitely not what we're looking for ``` Do you know the author?<|||||>No, I don't. I'll try to add a new one(in training right now) though.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,187
closed
Add new model ProphetNet
# Add new model structure [ProphetNet](https://arxiv.org/abs/2001.04063).

## Description:
ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction. ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is the Fairseq version at the [github repo](https://github.com/microsoft/ProphetNet). xProphetNet has the same model structure, but is pretrained on the Wikipedia 100-languages dataset as described in [xGLUE](https://arxiv.org/abs/2004.01401). xGLUE is a benchmark for cross-lingual NLU and NLG tasks. xProphetNet also serves as a baseline model for the cross-lingual generation tasks in xGLUE, NTG and QG.

## Usage:
Take the xGLUE NTG task as an example: the cross-lingual pretrained model is fine-tuned with English news title generation data, but inference is run on both English and other zero-shot language data. A quick usage example:
```
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig

model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg')

EN_SENTENCE_TO_QUESTION = "Microsoft Corporation intends to officially end free support for the Windows 7 operating system after January 14, 2020, according to the official portal of the organization. From that day, users of this system will not be able to receive security updates, which could make their computers vulnerable to cyber attacks."
RU_SENTENCE_TO_QUESTION = "орпорация Microsoft намерена официально прекратить бесплатную поддержку операционной системы Windows 7 после 14 января 2020 года, сообщается на официальном портале организации . С указанного дня пользователи этой системы не смогут получать обновления безопасности, из-за чего их компьютеры могут стать уязвимыми к кибератакам."
ZH_SENTENCE_TO_QUESTION = "根据该组织的官方门户网站,微软公司打算在2020年1月14日之后正式终止对Windows 7操作系统的免费支持。从那时起,该系统的用户将无法接收安全更新,这可能会使他们的计算机容易受到网络攻击。"

inputs = tokenizer([EN_SENTENCE_TO_QUESTION, RU_SENTENCE_TO_QUESTION, ZH_SENTENCE_TO_QUESTION], padding=True, max_length=256, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=100, early_stopping=True)
print([tokenizer.decode(g) for g in summary_ids])
```
The model will generate news titles like:
```
['[SEP] Microsoft to end Windows 7 free support after January 14, 2020[SEP][PAD][PAD][PAD][PAD]',
 '[SEP] Microsoft намерена прекратить бесплатную поддержку Windows 7 после 14 января 2020 года[SEP]',
 '[SEP]微软打算终止对Windows 7操作系统的免费支持[SEP][PAD][PAD][PAD][PAD][PAD][PAD]']
```

## Released checkpoints:
pretrained:
```
microsoft/prophetnet-large-uncased
microsoft/xprophetnet-large-wiki100-cased
```
fine-tuned:
```
microsoft/prophetnet-large-uncased-cnndm
microsoft/xprophetnet-large-wiki100-cased-xglue-ntg
microsoft/xprophetnet-large-wiki100-cased-xglue-qg
```
08-01-2020 12:43:52
08-01-2020 12:43:52
> @sshleifer Is there anything else needed to do in order to make ProphetNet work with your seq2seq example?
>
> Also @mfuntowicz for pipelines

I tried examples/seq2seq/finetune.py and it works with `python finetune.py --do_train` and `--do_predict`.<|||||>I will try to complete the documentation and unit tests by this week.<|||||>@patrickvonplaten Thanks for your review! I learned a lot, too. @qiweizhen Please feel free to contact me for discussion via WeChat if you have trouble understanding Patrick's comments or you want to have another person double-check! Thanks for your great work!
transformers
6,186
closed
Remove inconsistency between BertTokenizer and BertTokenizerFast
# 🚀 Feature request

`BertTokenizerFast` has the option to specify `strip_accents=False`. `BertTokenizer` does not have this option. This inconsistency should be removed by adding the `strip_accents` parameter to `BertTokenizer`.

## Motivation

Without this, `BertTokenizer` cannot be used for language models that are lowercase but keep accents. In the case of a lowercase language model with accents, you are forced to load the tokenizer like this:
```python
tokenizer = AutoTokenizer.from_pretrained("<model_name_or_path>", use_fast=True, strip_accents=False)
```
This will NOT work:
`tokenizer = AutoTokenizer.from_pretrained("<model_name_or_path>")`
And even this would not work:
`tokenizer = AutoTokenizer.from_pretrained("<model_name_or_path>", strip_accents=False)`

## Your contribution

With some hints I am willing to contribute.
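To make the inconsistency concrete, a small sketch (the model path is the same placeholder as above, standing in for a lowercase checkpoint that keeps accents; the accented word is just an example):
```python
from transformers import BertTokenizer, BertTokenizerFast

# The fast tokenizer exposes the switch...
fast = BertTokenizerFast.from_pretrained("<model_name_or_path>", strip_accents=False)
print(fast.tokenize("héros"))  # accents can be preserved

# ...but the slow tokenizer has no equivalent argument, so accents are always
# stripped whenever do_lower_case=True.
slow = BertTokenizer.from_pretrained("<model_name_or_path>", do_lower_case=True)
print(slow.tokenize("héros"))
```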
08-01-2020 10:24:56
08-01-2020 10:24:56
Fixed with #6280
transformers
6,185
closed
Fix docstring for `BertTokenizerFast`.
- remove duplicate doc-entry for `tokenize_chinese_chars` - add doc for `strip_accents` and `wordpieces_prefix`
08-01-2020 10:06:21
08-01-2020 10:06:21
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=h1) Report > Merging [#6185](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a39dfe4fb122c11be98a563fb8ca43b322e01036&el=desc) will **increase** coverage by `1.25%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6185/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6185 +/- ## ========================================== + Coverage 78.34% 79.59% +1.25% ========================================== Files 146 146 Lines 26607 26607 ========================================== + Hits 20844 21178 +334 + Misses 5763 5429 -334 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.32% <ø> (ø)` | | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `70.32% <0.00%> (-26.66%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.30% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+34.61%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=footer). Last update [a39dfe4...f44324f](https://codecov.io/gh/huggingface/transformers/pull/6185?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,184
closed
[s2s] clean up + doc
This is a follow-up to https://github.com/huggingface/transformers/pull/6149 - there was no need to add the newly added options to finetune.sh - reverted that change - added a hint for users on how to get all the options (--help) @sshleifer
08-01-2020 05:45:39
08-01-2020 05:45:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=h1) Report > Merging [#6184](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc&el=desc) will **increase** coverage by `0.20%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6184/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6184 +/- ## ========================================== + Coverage 78.29% 78.50% +0.20% ========================================== Files 146 146 Lines 26607 26607 ========================================== + Hits 20832 20887 +55 + Misses 5775 5720 -55 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.30% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+2.50%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `81.00% <0.00%> (+14.00%)` | :arrow_up: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+23.67%)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6184/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=footer). Last update [8edfaaa...566b357](https://codecov.io/gh/huggingface/transformers/pull/6184?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,183
closed
Fix tokenizer saving/loading with custom token objects
## Summary This PR fixes issue #5571. Pre-trained tokenizers might wrap tokens in custom types (e.g. `AddedToken` from **🤗/tokenizers**), which makes (de)serialization difficult without additional meta information. This PR uses the `jsonpickle` library, which solves exactly this object (de)serialization problem. A small drawback of this approach is that the type information is subject to change, and such changes would break backward compatibility once they happen. But this amounts to an agreement to make such changes carefully and in a backward-compatible way.
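A minimal sketch of the mechanism jsonpickle provides; note that the `Token` class below is a hypothetical stand-in for a custom token object such as `AddedToken`, and the PR applies the same idea inside the tokenizer config save/load path:

```python
import jsonpickle

class Token:
    """Hypothetical stand-in for a custom token type like AddedToken."""
    def __init__(self, content, lstrip=False, rstrip=False):
        self.content, self.lstrip, self.rstrip = content, lstrip, rstrip

config = {"additional_special_tokens": [Token("<special>", lstrip=True)]}
encoded = jsonpickle.encode(config)    # type information is embedded in the JSON string
restored = jsonpickle.decode(encoded)  # objects come back as Token instances, not plain dicts
assert isinstance(restored["additional_special_tokens"][0], Token)
```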
07-31-2020 23:28:33
07-31-2020 23:28:33
Investigating the issues 🤓 <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=h1) Report > Merging [#6183](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b25cec13c57656941aac3b920eeb488c1915df18&el=desc) will **increase** coverage by `0.57%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6183/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6183 +/- ## ========================================== + Coverage 79.08% 79.66% +0.57% ========================================== Files 149 147 -2 Lines 27685 26603 -1082 ========================================== - Hits 21894 21192 -702 + Misses 5791 5411 -380 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_io.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX2lvLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <100.00%> (-0.14%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.51%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.65% <0.00%> (-23.08%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (-14.29%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `76.47% <0.00%> (-11.88%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.05% <0.00%> (-2.14%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.51%)` | :arrow_down: | | ... and [54 more](https://codecov.io/gh/huggingface/transformers/pull/6183/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=footer). Last update [cdf1f7e...d159584](https://codecov.io/gh/huggingface/transformers/pull/6183?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Looks much better now 🎉 @n1t0 @thomwolf could you please take a glance? :)<|||||>I did a bit of refactoring to have load/dump methods reusable across the library. These changes introduce a new scope for the common File IO methods (like json pickle load/dump). Although there is an already existing `file_utils` scope, I believe it's not supposed to contain the common IO operations as the docs say (`Utilities for working with the local dataset cache.`). @LysandreJik, could you please review the changes? :)<|||||>It looks like #6026 fixed the issue. But nevertheless this PR is still relevant since it guarantees tokenizers saving/loading with objects of arbitrary types. <|||||>Hi, thanks a lot for your contribution! Unfortunately we're very strict about adding new dependencies, and this doesn't really change the existing behavior. I'm not sure I see the pros of introducing this new scope vs the cons of integrating a new dependencies + updating existing code.<|||||>Hi, thanks for replying! I respect your policy regarding the third-party dependencies, this makes sense. Anyway, the current approaches compromise tokenizers (de)serialization with introducing new objects to include (besides `AddedToken` instances). This might be a subject of future improvements once new types are introduced into the tokenizer configs, but I think it makes sense to consider generalizing the behavior. If it makes sense, I could wrap saving/loading configs into the new scope without a third-party library, though it's far from generalizing, at least adding new cases to handle will be easier. What do you think? :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,182
closed
Failing XLMModelTest
https://github.com/huggingface/transformers/runs/929962952?check_suite_focus=true FAILED tests/test_modeling_xlm.py::XLMModelTest::test_inputs_embeds - RuntimeError Failure introduced somewhere in here: ``` * d951c14a Sylvain Gugger: Model output test (#6155) - (8 hours ago) * 86caab1e Sylvain Gugger: Harmonize both Trainers API (#6157) - (8 hours ago) * 603cd81a Mehrdad Farahani: readme m3hrdadfi/albert-fa-base-v2 (#6153) - (11 hours ago) * 838dc06f Suraj Patil: parse arguments from dict (#4869) - (13 hours ago) * cf3cf304 Paul O'Leary McCann: Replace mecab-python3 with fugashi for Japanese tokenization (#6086) - (13 hours ago) * f250beb8 Stas Bekman: enable easy checkout switch (#5645) - (13 hours ago) * 7d50af4b kolk: Create README.md (#6169) - (13 hours ago) * 0034a1d2 Prajjwal Bhargava: Add Pytorch Native AMP support in Trainer (#6151) - (13 hours ago) * 7231f7b5 Funtowicz Morgan: Enable ONNX/ONNXRuntime optimizations through converter script (#6131) - (14 hours ago) * c0b93a1c Stas Bekman: correct the correction (#6163) - (23 hours ago) * a2f6d521 Stas Bekman: typos (#6162) - (24 hours ago) * f3065abd Sylvain Gugger: Doc tokenizer (#6110) - (26 hours ago) * e642c789 guillaume-be: Addition of a DialoguePipeline (#5516) - (27 hours ago) * ec026747 Lysandre Debut: Fix FlauBERT GPU test (#6142) - (30 hours ago) * 91cb9546 Sylvain Gugger: Switch from return_tuple to return_dict (#6138) - (32 hours ago) * 562b6369 Sylvain Gugger: Tf trainer cleanup (#6143) - (32 hours ago) * c127d055 Oren Amsalem: add another e.g. to avoid confusion (#6055) - (32 hours ago) * d24ea708 Oren Amsalem: Actually the extra_id are from 0-99 and not from 1-100 (#5967) - (35 hours ago) * 3212b885 Stas Bekman: [s2s] add support for overriding config params (#6149) - (2 days ago) * 54f9fbef Julien Plu: Rework TF trainer (#6038) - (2 days ago) * 3f94170a Lysandre Debut: [WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC… (#5614) - (2 days ago) * 8a8ae276 Sylvain Gugger: Use google style to document properties (#6130) - (2 days ago) * fc64559c Julien Plu: Fix TF CTRL model naming (#6134) - (2 days ago) * 641b873c Lysandre Debut: XLNet PLM Readme (#6121) - (2 days ago) * 8d157c93 Timo Moeller: add deepset/xlm-roberta-large-squad2 model card (#6128) - (2 days ago) * 6c002853 Funtowicz Morgan: Added capability to quantize a model while exporting through ONNX. (#6089) - (2 days ago) * 25de74cc Sylvain Gugger: Use FutureWarning to deprecate (#6111) - (3 days ago) * 640550fc Funtowicz Morgan: ONNX documentation (#5992) - (3 days ago) ``` Any idea on this @sgugger or @LysandreJik ? Otherwise I'll dig in.
07-31-2020 21:19:34
07-31-2020 21:19:34
transformers
6,181
closed
Failing ONNX Export test
https://github.com/huggingface/transformers/runs/929962952?check_suite_focus=true ``` FAILED tests/test_onnx.py::OnnxExportTestCase::test_quantize_pytorch - TypeError ```
07-31-2020 21:13:37
07-31-2020 21:13:37
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,180
closed
Fixed typo in Longformer
07-31-2020 21:11:03
07-31-2020 21:11:03
transformers
6,179
closed
HANS Dataset: Incorrect `label_list` and `label`.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: n/a - Python version: n/a - PyTorch version (GPU?): n/a - Tensorflow version (GPU?): n/a - Using GPU in script?: n/a - Using distributed or parallel set-up in script?: n/a ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer tensorflow: @jplu documentation: @sgugger --> @VictorSanh ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ### Incorrect `label_list` See line: https://github.com/huggingface/transformers/blob/master/examples/adversarial/utils_hans.py#L259 ```python def get_labels(self): """See base class.""" return ["contradiction", "entailment", "neutral"] ``` HANS dataset has only two labels, non-entailment, and entailment, but here three are given. Similarly, when mapping from text label to label-id, the label "non-entailment" (which exists in the task but not in the afore-defined labels), the below line is used. I'm curious if this is intentional? If so, would be great to add a warning/comment as those might cause subtle errors in the future. https://github.com/huggingface/transformers/blob/master/examples/adversarial/utils_hans.py#L311 ```python label = label_map[example.label] if example.label in label_map else 0 ``` ### Incorrect `label` index The below line uses the last column as the label. However, the HANS dataset uses the first column for `label` https://github.com/huggingface/transformers/blob/master/examples/adversarial/utils_hans.py#L271 ```python label = line[-1] ```
07-31-2020 21:05:02
07-31-2020 21:05:02
Yes indeed! Somehow, I was sure it was corrected on the master but I guess it was only on my private branch. Let me open a PR, thanks for pointing that out @HanGuo97!<|||||>Great, appreciate the help!
transformers
6,178
closed
Why are the `device()` and `dtype()` functions in `modeling_utils.py` needed?
Hello, For the BERT and RoBERTa HuggingFace pre-trained models, why are the `device()` and `dtype()` functions in `modeling_utils.py` needed? See: https://github.com/huggingface/transformers/blob/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc/src/transformers/modeling_utils.py#L158 https://github.com/huggingface/transformers/blob/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc/src/transformers/modeling_utils.py#L177 Would it be possible for my RoBERTa model to function without an error if I modify these `device()` and `dtype()` functions so that they always return `cpu` and `torch.float32` (or `torch.float64`), respectively? Also, while it is easy to modify the original code to do this, I am not sure how to get my HuggingFace RoBERTa model to pick up those modified functions. How can I do this? Thank you (sorry for asking so many questions),
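For context, a hedged sketch of what such properties effectively report, namely the placement of the module's parameters rather than any stored state; this is why hard-coding them to `cpu`/`torch.float32` is only safe if the model genuinely never leaves the CPU or changes precision:

```python
import torch
from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base")

# Roughly what the device/dtype properties look up: where the parameters actually live.
first_param = next(model.parameters())
print(first_param.device, first_param.dtype)  # e.g. cpu torch.float32 for a freshly loaded model
```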
07-31-2020 20:52:09
07-31-2020 20:52:09
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,177
closed
RoBERTa for QuestionAnswering
I am trying to replicate the example in this link https://github.com/huggingface/transformers/pull/1502/files, but I get the following error : `` ValueError Traceback (most recent call last) <ipython-input-23-823cc70a5d4f> in <module> ----> 1 token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))] 2 start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids])) 3 all_tokens = tokenizer.convert_ids_to_tokens(input_ids) 4 print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])) <ipython-input-23-823cc70a5d4f> in <listcomp>(.0) ----> 1 token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))] 2 start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids])) 3 all_tokens = tokenizer.convert_ids_to_tokens(input_ids) 4 print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])) ValueError: 102 is not in list `` I see the same error discussed in https://github.com/huggingface/transformers/issues/2261 Any ideas how I could resolve this issue ? Thanks in advance.
07-31-2020 20:49:29
07-31-2020 20:49:29
Hey @mchari, could you please post your environment information and a code sample that we can run to reproduce your error?<|||||>@patrickvonplaten , thanks for your reply. I am using transformers 3.0.2 that I got from a pip install. Here is the code ` from transformers import RobertaTokenizer, RobertaForQuestionAnswering import torch import tensorflow tokenizer = RobertaTokenizer.from_pretrained('roberta-base') print("Loaded tokenizer !!!") model = RobertaForQuestionAnswering.from_pretrained('roberta-base') print("Loaded QA model !!!") question = "Who was Jim Henson?" context = "Jim Henson was a nice puppet" input_text = "[CLS] " + question + " [SEP] " + context + " [SEP]" #input_text = question + " [SEP] " + context #print(tokenizer(input_text)) input_ids = tokenizer.encode(input_text) start_scores, end_scores = model(torch.tensor([input_ids])) print(input_ids) token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))] start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids])) all_tokens = tokenizer.convert_ids_to_tokens(input_ids) print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])) `<|||||>I was able to work around the issue by using the following code that doesn't look for id 102... not sure if it is equivalent.... encoding = tokenizer.encode_plus(question,context) input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"] start_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask]))<|||||>your workaround is correct. The link you posted above points to a very old example. Please take a look at the example for the updated `BertForQuestionAnswering` model: https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForQuestionAnswering
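A cleaned-up, self-contained version of that workaround as a hedged sketch (roberta-base is not fine-tuned for question answering, so the extracted span is not expected to be meaningful; the point is that RoBERTa needs no `token_type_ids` and no `[SEP]` id lookup):

```python
import torch
from transformers import RobertaTokenizer, RobertaForQuestionAnswering

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForQuestionAnswering.from_pretrained("roberta-base")

question, context = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer.encode_plus(question, context, return_tensors="pt")

# RoBERTa has no segment embeddings, so only input_ids and attention_mask are passed
with torch.no_grad():
    start_scores, end_scores = model(encoding["input_ids"], attention_mask=encoding["attention_mask"])

tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
print(" ".join(tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1]))
```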
transformers
6,176
closed
Adds comet_ml to the list of auto-experiment loggers
This PR does three things: * abstracts the auto-experiment-loggers' is_available() functions into an "integrations.py" file * adds comet_ml to the list of auto-loggers available * updates and reorganizes the docs slightly
07-31-2020 20:18:26
07-31-2020 20:18:26
@LysandreJik Will do! When I run `make style` it makes corrections to many other files, and also (for example) changes the import sort order on code that I didn't touch in the code that I am editing. Am I doing something wrong? Or should I commit those changes (in the files I am editing)?<|||||>Sounds weird, maybe you're missing some packages in your environment. Let me know when you're finished and I'll push the styling on your branch.<|||||>@sgugger Should have all of the review comments addressed. Thanks to everyone!<|||||>Great, thanks for iterating @dsblank!<|||||>You're welcome, and thanks for all of the work on this project! Looking forward to productive ML!
transformers
6,175
closed
Doc pipelines
Continue the improvement of the main classes documentation with pipelines. [Preview](https://67155-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/pipelines.html) of the new pipeline page. [Preview](https://67155-155220641-gh.circle-artifacts.com/0/docs/_build/html/internal/pipelines_utils.html) of the new pipeline utils page.
07-31-2020 20:15:33
07-31-2020 20:15:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=h1) Report > Merging [#6175](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d951c14ae46ee36b76981588ed6d03ab353ad766&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6175/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6175 +/- ## ======================================= Coverage 79.51% 79.52% ======================================= Files 146 146 Lines 26607 26618 +11 ======================================= + Hits 21156 21167 +11 Misses 5451 5451 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.79% <100.00%> (+0.42%)` | :arrow_up: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.71% <0.00%> (-1.13%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.05% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=footer). Last update [d951c14...1ec7448](https://codecov.io/gh/huggingface/transformers/pull/6175?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,174
closed
t
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
07-31-2020 16:14:28
07-31-2020 16:14:28
transformers
6,173
closed
My finetuned gpt2 model is taking wayy too long to generate samples, like 5-8 minutes
I fine-tuned the GPT-2 model using transformers and trained it on a lyrics dataset. After successful training, when I call model.generate(args), it takes a very long time to generate results. What should I do?
07-31-2020 11:59:52
07-31-2020 11:59:52
Hi @Krish-Nerkar , [this](https://discuss.huggingface.co/t/speeding-up-gpt2-generation/470) might help<|||||>@patil-suraj Thanks a lot! Will check those methods out
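Beyond the thread linked above, a hedged sketch of the usual first things to try: run generation on a GPU if one is available, cap `max_length`, and keep the key/value cache on (the model name and prompt below are placeholders for the fine-tuned checkpoint):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")                  # swap in your fine-tuned path
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()  # swap in your fine-tuned path

input_ids = tokenizer.encode("Verse 1:", return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(input_ids, max_length=100, do_sample=True, top_k=50, use_cache=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```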
transformers
6,172
closed
🐛 Not adding `token_type_ids` when the model is `electra` (pytorch_lightning example)
### Who can help @sshleifer (examples issue) ## Information Model I am using (Bert, XLNet ...): `ELECTRA` The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## About Issue https://github.com/huggingface/transformers/blob/838dc06ff5a438159ac25f531d622e8f344476f5/examples/text-classification/run_pl_glue.py#L38-L39 As shown above, `token_type_ids` is not included when `model_type == 'electra'`, even though ELECTRA also uses `token_type_ids`. I think this code should be changed.
07-31-2020 10:22:25
07-31-2020 10:22:25
Hi @monologg , yes, `token_type_ids` should be there for ELECTRA; an easy fix would be something like ```python3 if self.config.model_type not in ["xlm", "roberta", "distilbert", "camembert", "longformer"]: inputs["token_type_ids"] = batch[2] ``` This is how it's done in the SQuAD dataset. @LysandreJik , if that sounds good, I can open a PR<|||||>@patil-suraj, I also think that is the best way to fix this issue :) https://github.com/huggingface/transformers/blob/838dc06ff5a438159ac25f531d622e8f344476f5/examples/text-classification/run_pl_glue.py#L98-L102 And not only in `training_step()`; `validation_step()` also has to be fixed.<|||||>Yes, totally forgot about that, thanks! <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
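One hedged way to apply that check in both steps without duplicating it (the helper name is an assumption, and the model-type set simply mirrors the list quoted above):

```python
SEGMENT_FREE_MODELS = {"xlm", "roberta", "distilbert", "camembert", "longformer"}

def build_inputs(config, batch):
    """Shared by training_step and validation_step: only pass token_type_ids
    to models that actually use segment embeddings (ELECTRA does)."""
    inputs = {"input_ids": batch[0], "attention_mask": batch[1], "labels": batch[3]}
    if config.model_type not in SEGMENT_FREE_MODELS:
        inputs["token_type_ids"] = batch[2]
    return inputs
```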
transformers
6,171
closed
Update convert_pytorch_checkpoint_to_tf2.py
07-31-2020 08:37:45
07-31-2020 08:37:45
Hello! What do you want to do with this PR?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=h1) Report > Merging [#6171](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c0b93a1c7a961e30b30d02d641c9d22120ef5d73&el=desc) will **decrease** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6171/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6171 +/- ## ========================================== - Coverage 79.82% 79.76% -0.06% ========================================== Files 146 146 Lines 26597 26597 ========================================== - Hits 21231 21216 -15 - Misses 5366 5381 +15 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=footer). Last update [c0b93a1...37a803b](https://codecov.io/gh/huggingface/transformers/pull/6171?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> Hello! What do you want to do with this PR? Hi! The original code has the following problems: ![image](https://user-images.githubusercontent.com/61798996/89022269-51b8ca80-d354-11ea-89e0-bebe043ce0db.png) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,170
closed
[Benchmark]
# 🖥 Benchmarking `transformers` ## Benchmark Which part of `transformers` did you benchmark? ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? ## Results Put your results here!
07-31-2020 07:38:28
07-31-2020 07:38:28
`j`<|||||>Y<|||||>Grgj6r<|||||>Yes mi bro<|||||>Yes mi bro <|||||>K<|||||>J
transformers
6,169
closed
Create README.md
README for MiniLM-L12-H384-uncased for QA
07-31-2020 06:54:19
07-31-2020 06:54:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=h1) Report > Merging [#6169](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c0b93a1c7a961e30b30d02d641c9d22120ef5d73&el=desc) will **decrease** coverage by `0.87%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6169/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6169 +/- ## ========================================== - Coverage 79.82% 78.94% -0.88% ========================================== Files 146 146 Lines 26597 26597 ========================================== - Hits 21231 20997 -234 - Misses 5366 5600 +234 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6169/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=footer). Last update [c0b93a1...ea1f76a](https://codecov.io/gh/huggingface/transformers/pull/6169?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks @kolk!
transformers
6,168
closed
Albert pretrain datasets/ datacollator
Partially fixes #5984. Adds support for Albert model pretraining: - Add `AlbertTextDataset` class - Create `segment_ids` and `sentence_order_labels` attributes for the sentence order prediction task - Add `DataCollatorForAlbertPretrain` class - inherited from the `DataCollatorForLanguageModeling` class - creates the `attention_mask` for both masked and padding tokens @sgugger
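A hedged sketch of the sentence-order-prediction part of that collation (names and exact tensor layout are assumptions; the real `DataCollatorForAlbertPretrain` in this PR also handles token masking and attention-mask padding):

```python
import random
import torch

def make_sop_example(segment_a, segment_b, tokenizer, max_length=512):
    """Swap the two segments half of the time; label 0 = in order, 1 = swapped."""
    sentence_order_label = 0
    if random.random() < 0.5:
        segment_a, segment_b = segment_b, segment_a
        sentence_order_label = 1
    encoding = tokenizer(segment_a, segment_b, truncation=True, max_length=max_length)
    return {
        "input_ids": torch.tensor(encoding["input_ids"]),
        "token_type_ids": torch.tensor(encoding["token_type_ids"]),
        "sentence_order_label": torch.tensor(sentence_order_label),
    }
```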
07-31-2020 06:29:52
07-31-2020 06:29:52
@sgugger @LysandreJik thanks for the advice, will update soon!<|||||>Great, let us know when this PR is ready to review again!<|||||>@LysandreJik @sgugger Ready for reviewing again! Thanks for all the suggestions. I noticed that the check_code_quality test is failed however I can't see the actual failing part, please let me know if this matters. And also please let me know if there are addition modification required, thanks guys!<|||||>@LysandreJik @sgugger tests added and style check modification were done. Please help me to review this if got time, thanks! Besides, I did black reformat all the check_code_quality required files. However it could not pass the tests in CI, have no idea why this happen. ``` black --line-length 119 --target-version py35 src/transformers/data/datasets/language_modeling.py All done! ✨ 🍰 ✨ 1 file left unchanged. ```<|||||>This is because your black/isort versions aren't up to date. This is not a problem, I just pushed to your branch with the fix, but there's a remaining issue with flake8 that you will have to fix: ``` src/transformers/data/data_collator.py:253:13: F841 local variable 'attention_padding_mask' is assigned to but never used src/transformers/data/datasets/language_modeling.py:153:21: F541 f-string is missing placeholders ```<|||||>Thanks for adding the test, it's great!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=h1) Report > Merging [#6168](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ed71c21d6afcbfa2d8e5bb03acbb88ae0e0ea56a?el=desc) will **decrease** coverage by `0.38%`. > The diff coverage is `96.58%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6168/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6168 +/- ## ========================================== - Coverage 79.51% 79.13% -0.39% ========================================== Files 164 164 Lines 31022 31137 +115 ========================================== - Hits 24668 24641 -27 - Misses 6354 6496 +142 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.31% <ø> (ø)` | | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.94% <95.23%> (+2.13%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <100.00%> (+0.88%)` | :arrow_up: | | [src/transformers/data/datasets/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: | | 
[src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.66% <0.00%> (-0.55%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6168/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=footer). Last update [ed71c21...911b5b4](https://codecov.io/gh/huggingface/transformers/pull/6168?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> This is because your black/isort versions aren't up to date. This is not a problem, I just pushed to your branch with the fix, but there's a remaining issue with flake8 that you will have to fix: > > ``` > src/transformers/data/data_collator.py:253:13: F841 local variable 'attention_padding_mask' is assigned to but never used > src/transformers/data/datasets/language_modeling.py:153:21: F541 f-string is missing placeholders > ``` resolved. @LysandreJik Please help to review again, thanks!<|||||>Thanks for all your efforts on this!<|||||>> Thanks for all your efforts on this! thanks for reviewing!<|||||>> Very cool, I updated the style again. > > Thanks for iterating! thanks for your help!<|||||>First of all I have to admit, I am new here, so still trying to understand how different modalities work in hugging face. Going through the documents, it seems that the modifications proposed by @yl-to are only for PyTorch. Right ? Pretraining of ALBERT with TF is still not supported ? <|||||>That is correct @UmarSpa! However, models trained in PyTorch can easily be ported to TensorFlow if you're looking to serve a model using TensorFlow.
transformers
6,167
closed
fix the slow tests doc
remove unnecessary duplication wrt `RUN_SLOW=yes` @sgugger
07-31-2020 00:36:16
07-31-2020 00:36:16
Nope. They appear again, 2 paras later. Moreover, the 2 deleted instructions are themselves a problem as they are identical ;)
transformers
6,166
closed
[wip] diagnose MT metrics regression from pl 0.8.5 upgrade
**Base Command:** ```bash BS=8 GAS=4 MAX_LEN=128 python finetune.py \ --learning_rate=3e-5 \ --do_train \ --val_check_interval=0.25 \ --adam_eps 1e-06 \ --num_train_epochs 6 --src_lang en_XX --tgt_lang ro_RO \ --data_dir $ENRO_DIR \ --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \ --train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps=$GAS \ --task translation \ --warmup_steps 500 \ --freeze_embeds \ --model_name_or_path=facebook/mbart-large-cc25 \ --label_smoothing 0.1 --freeze_embeds --gpus 1 --logger_name wandb --sortish_sampler \ $@ ``` **Clues:** - many more steps per epoch in wandb on `distillmbart` branch - lr reasonable on both branches. - loss much **higher** on `distillmbart` branch. - val_avg_bleu after ¼ epoch much higher (23 vs 19) - fp32 loss goes to NaN after .25 epochs. ( `bru_baseline_pl85_fp32`) **Suspects:** - not lr scheduler, though lr schedules differ (because steps differs I presume) - **early stopping** - maybe fp16_opt_level being used regardless of `--fp16`? (maybe scaler line in PL?) - optimizer_step unchanged besides `lr_scheduler.step` and the scheduler is clearly stepping. Feels wrong. - dataloader shuffle/`setup` - src_lens change in LineByLine ds - just a change in the way val metrics are computed? **TLDR**: can get test BLEU = 26.27 with gradient accumulation steps=1 and no early stopping: ```bash ./train_mbart_cc25_enro.sh --output_dir bru_pl85_long --label_smoothing 0.1 --freeze_embeds --logger_name wandb --sortish_sampler --fp16_opt_level O1 --gpus 1 ```
07-30-2020 23:46:57
07-30-2020 23:46:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=h1) Report > Merging [#6166](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c0b93a1c7a961e30b30d02d641c9d22120ef5d73&el=desc) will **decrease** coverage by `1.38%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6166/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6166 +/- ## ========================================== - Coverage 79.82% 78.44% -1.39% ========================================== Files 146 146 Lines 26597 26597 ========================================== - Hits 21231 20863 -368 - Misses 5366 5734 +368 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+23.67%)` | :arrow_up: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=footer). Last update [c0b93a1...68319f0](https://codecov.io/gh/huggingface/transformers/pull/6166?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,165
closed
update min tf requirements
The whole test suite fails without this update - need to re-run `pip install -e .[dev]`. Note that the failing tests don't show `"You need to run the TensorFlow trainer with at least the version 2.2.0, your version is {` anywhere, so the test fixtures perhaps need some extra tweaks, but since non-TF tests fail too, the problem is in the core. Not sure if perhaps this code needs to be replaced with an assert - then the error message will always be there regardless of where it's used. ``` if parse(tf.__version__).release < (2, 2, 0): logger.info( "You need to run the TensorFlow trainer with at least the version 2.2.0, your version is {}".format( tf.__version__ ) ) sys.exit(1) ``` Currently, if I run **any** test, including pytorch-only tests, with tf < 2.2 I get: ``` ____________________________________________________________ ERROR collecting tests/test_benchmark.py ____________________________________________________________ tests/test_benchmark.py:6: in <module> from transformers import AutoConfig, is_torch_available src/transformers/__init__.py:659: in <module> from .trainer_tf import TFTrainer src/transformers/trainer_tf.py:34: in <module> sys.exit(1) E SystemExit: 1 ======================================================================== warnings summary ======================================================================== ``` To reproduce: ``` pip install tensorflow==2.0.1 pytest -ra tests/test_benchmark.py ``` @jplu
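For reference, a hedged sketch of the assert-based variant floated above, so an explicit error message surfaces instead of a bare `SystemExit` (whether the guard lives at import time or inside `TFTrainer.__init__` is a separate design choice that this snippet does not decide):

```python
from packaging.version import parse
import tensorflow as tf

# Sketch of an assert-style guard: same condition as the original check,
# but it raises with a readable message instead of calling sys.exit(1).
assert parse(tf.__version__).release >= (2, 2, 0), (
    "You need to run the TensorFlow trainer with at least version 2.2.0, "
    f"your version is {tf.__version__}"
)
```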
07-30-2020 23:13:28
07-30-2020 23:13:28
Hello ! Which tests do not pass precisely? They are all green for me. For the log that is not displayed, this is because no logger is set in `benchmark_test.py`. I will review that later, thanks!<|||||>> Which tests do not pass precisely? They are all green for me. Most (all?) tests. Try: ``` pip install tensorflow==2.0.1 pytest -ra tests/test_benchmark.py ``` But unrelated to the tests: if the runtime requires `foo>=x.y.z`, then the requirements/setup need to require that exact version. A user may (1) already have `foo` installed, and then the runtime fails, or (2) a different package may require a lower version of `foo`, and thus `pip` won't upgrade it to the latest version available. > For the log that is not displayed this is because no logger is set in `benchmark_test.py`. I will review that for later, thanks! That was just an example; as I said, most, if not all, tests fail with the same cryptic error.<|||||>OK, I see better now. Thanks!! I have fixed this in another PR by replacing the piece of code you raised with an assert inside the `__init__` of the TFTrainer, and now your example works fine. We do not want to pin the TensorFlow version for the entire lib but only for the trainer, at least for now.<|||||>I understand. Please let me know which PR if you'd like me to re-test, or when it gets merged and I will re-test then. Thank you, @jplu <|||||>Now the fix is merged :)<|||||>Thank you for remembering to ping me. I re-tested with master and the tests now work with tensorflow==2.0.1 - thank you very much, @jplu
transformers
6,164
closed
RoBERTa ``tokenizer.decode`` does not produce the same sentence.
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.15.0-74-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No. ### Who can help @mfuntowicz <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer tensorflow: @jplu documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce This code example should reproduce the issue: ```python3 from transformers import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained('roberta-base') s = """Meanwhile, Tucci's 'straight guy', the emphatic doctor Seger, is not developed into a more interesting character, like the fallible 'straight guys' Cuddy and Wilson.""" outputs = tokenizer(s) input_ids = outputs['input_ids'] ss = tokenizer.decode(input_ids, skip_special_tokens=True) print('s='+s) print('ss='+ss) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I expect ``s`` and ``ss`` should be exactly the same. However, they are not. The outputs are: ```bash s=Meanwhile, Tucci's 'straight guy', the emphatic doctor Seger, is not developed into a more interesting character, like the fallible 'straight guys' Cuddy and Wilson. ss=Meanwhile, Tucci's'straight guy', the emphatic doctor Seger, is not developed into a more interesting character, like the fallible'straight guys' Cuddy and Wilson. ``` There are two spaces missing before ``'straight guy'``. I am not sure if this behavior is expected or it is a bug. The thing is I want to use the sentence produced by the ``decode`` function and I find the output is not exactly the same as the original sentence. Thanks for the help! <!-- A clear and concise description of what you would expect to happen. -->
07-30-2020 22:32:05
07-30-2020 22:32:05
Seems to be an edge case from cleaning up tokenization on decoding: https://github.com/huggingface/transformers/blob/c0b93a1c7a961e30b30d02d641c9d22120ef5d73/src/transformers/tokenization_utils_base.py#L2688 --- For this specific case, a work-around can be: `ss = tokenizer.decode(input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)` --- But I think it's a bug. Is there any way to improve the `clean_up_tokenization()` function? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
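For context, a minimal sketch showing the effect of the cleanup step and the work-around above. The rule at fault appears to be a plain string replacement of `" 's"` with `"'s"` (intended for possessives) inside `clean_up_tokenization`, which also fires on `" 'straight"`; treat that description as an approximation of the v3.0.2 code rather than a quote of it:

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

s = "like the fallible 'straight guys' Cuddy and Wilson."
input_ids = tokenizer(s)["input_ids"]

# Default decode applies the cleanup, whose possessive rule also matches " 'straight".
cleaned = tokenizer.decode(input_ids, skip_special_tokens=True)
# Disabling the cleanup round-trips the original string.
raw = tokenizer.decode(input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(cleaned)  # ...the fallible'straight guys'...
print(raw)      # ...the fallible 'straight guys'...
```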
transformers
6,163
closed
correct the correction
The fix proved to belong in a different file, so this adds the extra path corrections. @sgugger
07-30-2020 21:43:34
07-30-2020 21:43:34
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=h1) Report > Merging [#6163](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a2f6d521c1d7ebd7e079bc62bee014c8d00b2547&el=desc) will **increase** coverage by `1.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6163/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6163 +/- ## ========================================== + Coverage 78.59% 79.61% +1.02% ========================================== Files 146 146 Lines 26597 26597 ========================================== + Hits 20904 21176 +272 + Misses 5693 5421 -272 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (ø)` | | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.18% <0.00%> (-34.62%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.65% <0.00%> (-23.68%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.19% <0.00%> (+63.97%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6163/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=footer). Last update [a2f6d52...4d3f303](https://codecov.io/gh/huggingface/transformers/pull/6163?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,162
closed
typos
07-30-2020 20:20:06
07-30-2020 20:20:06
transformers
6,161
closed
Padding Strategy Code missing an else case (maybe?)
## Environment info - `transformers` version: 3.0.2 - Platform: macOS 10.15.5 - Python version: 3.7 - PyTorch version (GPU?): 1.5 GPU-Yes - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help tokenizers: @mfuntowicz Summarization: @sshleifer T5: @patrickvonplaten ## Information Model I am using (T5 via Autotokenizer): The problem arises when using: `tokenizer([line], max_length=max_length, padding='max_length' if pad_to_max_length else False, truncation=True, return_tensors=return_tensors, **extra_kw)` In batch encoding, the latest code decides on a padding strategy: `_get_padding_truncation_strategies( self, padding=False, truncation=False, max_length=None, pad_to_multiple_of=None, verbose=True, **kwargs ):` ` elif padding is not False: if padding is True: padding_strategy = PaddingStrategy.LONGEST # Default to pad to the longest sequence in the batch elif not isinstance(padding, PaddingStrategy): padding_strategy = PaddingStrategy(padding)` While calling the tokenizer, instead of 'max_length' I first gave the actual PaddingStrategy.MAX_LENGTH Enum as argument, but the above code throws an error as 'padding_strategy' is not defined. ## To reproduce Call the tokenizer as: `tokenizer([line], max_length=max_length, padding=PaddingStrategy.MAX_LENGTH if pad_to_max_length else False, truncation=True, return_tensors=return_tensors, **extra_kw)` ## Expected behavior The PaddingStrategy enum should be assigned no issue. ##Suggested Solution ` elif padding is not False: if padding is True: padding_strategy = PaddingStrategy.LONGEST # Default to pad to the longest sequence in the batch elif not isinstance(padding, PaddingStrategy): padding_strategy = PaddingStrategy(padding) else: padding_strategy = padding` It's a one line fix basically, I can raise a PR for the same, unless PaddingStrategy wasn't designed to be used directly?
07-30-2020 20:11:14
07-30-2020 20:11:14
This issue also applies to the `truncation` parameter. I assumed the enums are supposed to be used directly because the release notes (https://github.com/huggingface/transformers/releases/tag/v3.0.0) explicitly mention the `TensorType` enum, which is defined right below the `PaddingStrategy` and `TruncationStrategy` enums. I agree that this is a problem that should be fixed, if the enums are meant to be used.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I think we should fix this, wdyt @LysandreJik ?<|||||>I believe this was already fixed by https://github.com/huggingface/transformers/pull/7610<|||||>Nice, thanks!
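For reference, a small sketch contrasting the two call styles discussed in this thread. The import path for the enum is the one used in v3.x (`transformers.tokenization_utils_base`) and is an assumption here; on v3.0.2 the enum call reproduces the missing-`else` error and only works once the fix from #7610 is in:

```python
from transformers import AutoTokenizer
from transformers.tokenization_utils_base import PaddingStrategy  # import path assumed for v3.x

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
line = "Submit a bug report to help us improve transformers."

# String alias: converted via PaddingStrategy(padding) in the elif branch quoted above.
enc_str = tokenizer([line], max_length=8, padding="max_length", truncation=True)

# Enum passed directly: on v3.0.2 this falls through the elif chain without
# assigning padding_strategy, which is what the suggested `else` branch fixes.
enc_enum = tokenizer([line], max_length=8, padding=PaddingStrategy.MAX_LENGTH, truncation=True)

print(enc_str["input_ids"] == enc_enum["input_ids"])  # True once the fix is in
```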
transformers
6,160
closed
run_squad.py eval metrics meaning
I am having difficulty understanding what exactly the best_f1 and best_exact scores outputted by the run_squad.py evaluation mean. (The scores are computed in the squad_metrics script, found [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py)). What are the "scores" the best-threshold calculations work with, what do the metrics represent, and is the best_threshold value employed during training, and if so, when? Thank you!
07-30-2020 20:10:19
07-30-2020 20:10:19
The model assigns a correctness probability to every answer it produces. If this probability crosses the threshold, it means the model predicts that there is no answer to the question. The threshold is picked as the one that achieves the best f1/exact score on the dev set. The best_f1/best_exact values are the results achieved with that best threshold. Unfortunately, there is a [bug](https://github.com/huggingface/transformers/pull/7319) in the run_squad.py code which was used for the training and evaluation of most of the models in the library, so the results you see in their model cards are incorrect. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
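To make the threshold search concrete, here is an illustrative sketch of the idea behind best_exact/best_f1 (schematic only; the real implementation lives in `find_best_thresh` in squad_metrics.py and differs in details): sort questions by the model's no-answer probability, sweep the threshold across them, and keep the value that maximises the dev-set score.

```python
# Schematic sketch, not the library code: names and the exact scoring of
# unanswerable questions are simplified here.
def best_score_and_threshold(span_scores, na_probs, has_answer):
    """span_scores[qid]: exact/f1 of the predicted span; na_probs[qid]: model's
    no-answer probability; has_answer[qid]: True if a gold answer exists."""
    qids = sorted(na_probs, key=na_probs.get)
    # Threshold below everything = predict "no answer" for all questions:
    # correct exactly on the unanswerable ones.
    current = sum(1.0 for q in qids if not has_answer[q])
    best, best_thresh = current, 0.0
    for qid in qids:
        # Raising the threshold past this question keeps its span prediction instead.
        current += span_scores[qid] if has_answer[qid] else -1.0
        if current > best:
            best, best_thresh = current, na_probs[qid]
    return 100.0 * best / len(qids), best_thresh


score, thresh = best_score_and_threshold(
    span_scores={"q1": 1.0, "q2": 0.0},
    na_probs={"q1": 0.1, "q2": 0.9},
    has_answer={"q1": True, "q2": False},
)
print(score, thresh)  # 100.0 0.1
```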
transformers
6,159
closed
OSError: Unable to load weights from pytorch checkpoint file.
Hello, When I am trying to load the `Roberta-large` pre-trained model, I get the following error: ```python model_RobertaForMultipleChoice = RobertaForMultipleChoice.from_pretrained('roberta-large', output_hidden_states = True) OUT: OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ``` How can I solve this issue? Thank you,
07-30-2020 18:35:08
07-30-2020 18:35:08
Hi! Do you mind pasting your environment information here so that we may take a look?<|||||>Try to delete the cache directory files.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hey, I trained my model on GPT2-small but I am not able to load it! It gives the following error: Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' OSError: Unable to load weights from pytorch checkpoint file for '/mounted/models/train-on-test1/' at '/mounted/models/train-on-test1/pytorch_model.bin' If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. @leeivan @LysandreJik <|||||>Please open a new issue with your specific problem, alongside all the information related to your environment as asked in the template. Thank you.<|||||>Same as <https://github.com/huggingface/transformers/issues/6620>; I guess it is because the checkpoint file is not a true checkpoint file, for example a git lfs pointer file.<|||||>#6159, #6970, and #6620 are all the same issue. In my case I cloned the checkpoint file using git lfs and the issue was resolved. Earlier I had used the pointer file to avoid git lfs; however, it gave this error. For some people, changing the torch, transformers, and tokenizers versions helped. You can also go through the docs and check some from_pretrained parameters like force_download and from_tf.<|||||>For future visitors: [check this](https://discuss.pytorch.org/t/getting-an-error-unpicklingerror-invalid-load-key-v-in-pytorch-model-deploying-in-streamlit/107768/4)
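For anyone else who lands here with a corrupted cache or a git-lfs pointer instead of real weights, a small sketch of the usual checks (the local path is just the example path from the thread):

```python
import os
from transformers import RobertaForMultipleChoice

# Re-download the weights, bypassing a possibly corrupted cache entry.
model = RobertaForMultipleChoice.from_pretrained(
    "roberta-large", output_hidden_states=True, force_download=True
)

# For a local checkpoint directory: a real pytorch_model.bin is hundreds of MB,
# while a git-lfs pointer file is only a few hundred bytes.
print(os.path.getsize("/mounted/models/train-on-test1/pytorch_model.bin"))  # example path
```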
transformers
6,158
closed
Add CircleCI config to run TPU tests.
For every incoming commit, this PR will create a Docker image containing the commit's latest code and will run that Docker image on Google Kubernetes Engine on a TPU.
07-30-2020 18:18:07
07-30-2020 18:18:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=h1) Report > Merging [#6158](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ec0267475c16a1913e64cb4f81fd54d153e3d815&el=desc) will **decrease** coverage by `1.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6158/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6158 +/- ## ========================================== - Coverage 79.38% 78.36% -1.03% ========================================== Files 146 146 Lines 26454 26454 ========================================== - Hits 21001 20730 -271 - Misses 5453 5724 +271 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.75%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+12.87%)` | :arrow_up: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <0.00%> (+25.66%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+29.90%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=footer). Last update [ec02674...a662ce9](https://codecov.io/gh/huggingface/transformers/pull/6158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). 
<|||||>`check_code_quality` failed with: ``` torch from https://files.pythonhosted.org/packages/38/53/914885a93a44b96c0dd1c36f36ff10afe341f091230aad68f7228d61db1e/torch-1.6.0-cp36-cp36m-manylinux1_x86_64.whl#sha256=7669f4d923b5758e28b521ea749c795ed67ff24b45ba20296bc8cff706d08df8 (from transformers==3.0.2): Expected sha256 7669f4d923b5758e28b521ea749c795ed67ff24b45ba20296bc8cff706d08df8 Got 36bbf4ab202de410d764b9156f3925b7d7037ad046f20690e576725a3826a2ac ``` I don't see how this latest commit could have caused this. I'll retry later<|||||>See https://github.com/huggingface/transformers/pull/6219
transformers
6,157
closed
Harmonize both Trainers API
As discussed after the latest rework of TFTrainer. Also removed references to "master" processes in our API to go to main with deprecation warnings.
07-30-2020 18:07:04
07-30-2020 18:07:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=h1) Report > Merging [#6157](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ec0267475c16a1913e64cb4f81fd54d153e3d815&el=desc) will **increase** coverage by `0.40%`. > The diff coverage is `67.47%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6157/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6157 +/- ## ========================================== + Coverage 79.38% 79.79% +0.40% ========================================== Files 146 146 Lines 26454 26607 +153 ========================================== + Hits 21001 21230 +229 + Misses 5453 5377 -76 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <ø> (ø)` | | | [src/transformers/tokenization\_bert\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `32.05% <0.00%> (+1.56%)` | :arrow_up: | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `13.09% <ø> (-3.05%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.14% <23.94%> (-1.83%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.36% <84.34%> (+0.86%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <88.88%> (+0.44%)` | :arrow_up: | | [src/transformers/hf\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `69.23% <100.00%> (+2.96%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.83% <100.00%> (+1.39%)` | :arrow_up: | | ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6157/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=footer). Last update [603cd81...d95e283](https://codecov.io/gh/huggingface/transformers/pull/6157?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,156
closed
should mBART-large-en-ro have decoder_start_token_id by default?
Hypothesis: since the argument `prepend_bos` is set to "False" in fairseq/examples/README.md, mbart-large-en-ro does not need `decoder_start_token_id`. TODO: - create branch that deletes `decoder_start_token_id`. Setting it to None in the config might not be enough. - verify that decoder_start_token_id is in fact not being used by setting a breakpoint in `generate`. - run_eval.py on wmt-en-ro/test and see if BLEU is >= 26.46, the score with decoder_start_token_id=250020.
07-30-2020 16:35:52
07-30-2020 16:35:52
Hi @sshleifer, I'd like to contribute and help out here if still needed. My thinking is to remove ```decoder_start_token_id``` from run_eval.py and generation_utils.py and change the following code: https://github.com/huggingface/transformers/blob/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb/src/transformers/generation_utils.py#L403-L409 to: input_ids = torch.full( (effective_batch_size * num_beams, 1), 250020, dtype=torch.long, device=next(self.parameters()).device, )<|||||>I dont think that change will do anything since decoder_start_token_id = 250020. What I would do is change the 250020 to a bos_token_id (0, I think) or a pad_token_id (1) and see what the BLEU score is. <|||||>Ah yes that makes sense. I tried those two and the eos_token_id and got the following results: ID | BLEU Score -- | -- eos_token_id (2) | 28.22 decoder_start_token_id (250020) | 28.06 pad_token_id (1) | 26.79 bos_token_id (0) | 26.01 <|||||>Super interesting, thanks for running that. It seems like I should change decoder_start_token_id in the mbart-large-en-ro config to 2. Do you have opinions on mbart-large-cc25?<|||||>No problem! Yes I think configuring decoder_start_token_id to 2 is a good idea. Unfortunately, I'm getting the same issues you're getting with mbart-large-cc25 (output's in English not Romanian and missing the first word when I use bos_token_id or 250020 and gibberish with eos/pad_token_id) and don't understand why that's the case. I'll investigate and post any useful findings. <|||||>I think I fixed this another way in #6526 on master ``` python run_eval.py facebook/mbart-large-en-ro $ENRO_DIR/test.source eos_baseline_enro_test_generations.txt \ --reference_path $ENRO_DIR/test.target \ --score_path baseline_test_bleu_eos.json --bs 32 --task translation --fp16 ``` => {'bleu': 26.81} ``` python run_eval.py facebook/mbart-large-en-ro $ENRO_DIR/test.source \ eos_baseline_enro_test_generations.txt --reference_path $ENRO_DIR/test.target \ --score_path baseline_test_bleu_eos.json --bs 32 --task translation --fp16 \ --decoder_start_token_id 2 ``` {'bleu': 11.57} (and takes 40 mins!) in the original fairseq I get 26.83.<|||||>Gunna close this since the score is now basically the same as fairseq. Thanks for your help!
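For anyone re-running this comparison on a current install, a minimal sketch of overriding the start token at generation time; the token ids (2 for `</s>`, 250020 for the `ro_RO` language code) follow the numbers in this thread, and the exact BLEU will still depend on the eval-script flags:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-en-ro")

batch = tokenizer(["UN Chief Says There Is No Military Solution in Syria"], return_tensors="pt")

# generate() accepts decoder_start_token_id directly, overriding the config value.
for start_id in (2, 250020):
    out = model.generate(**batch, decoder_start_token_id=start_id, num_beams=5)
    print(start_id, tokenizer.batch_decode(out, skip_special_tokens=True))
```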
transformers
6,155
closed
Model output test
Step 2 of the strategy for the new model outputs as outlined on the [forum](https://discuss.huggingface.co/t/new-model-output-types/195/8). Use the `return_dict` argument introduced in #6138 in all tests and remove all unpacking from the tests.
07-30-2020 15:59:51
07-30-2020 15:59:51
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=h1) Report > Merging [#6155](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/91cb95461e438dc57555c4f57f8ce95a56328036&el=desc) will **increase** coverage by `0.10%`. > The diff coverage is `75.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6155/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6155 +/- ## ========================================== + Coverage 78.35% 78.46% +0.10% ========================================== Files 146 146 Lines 26454 26454 ========================================== + Hits 20729 20758 +29 + Misses 5725 5696 -29 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ø> (ø)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <66.66%> (-0.38%)` | :arrow_down: | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `95.49% <100.00%> (-0.21%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <0.00%> (-2.24%)` | :arrow_down: | | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (ø)` | | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.75% <0.00%> (ø)` | | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6155/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=footer). Last update [91cb954...33ebdb9](https://codecov.io/gh/huggingface/transformers/pull/6155?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,154
closed
Hidden State Embedding-Transformers
Hi everybody, I want to use a BERT model to get the embedding for a sentence after I have fine-tuned it on raw texts. I was wondering whether that is possible or not, and whether anybody can help me with that?
07-30-2020 14:52:16
07-30-2020 14:52:16
Hi! Did you have a look at [this older issue](https://github.com/huggingface/transformers/issues/1950)?<|||||>> Hi! Did you have a look at [this older issue](https://github.com/huggingface/transformers/issues/1950)? Yes I did, but my concern is: if I want to fine-tune it on my raw text data (language model with LM head), how should I then use it for sentence embeddings? Can I just remove its LM head?<|||||>I think this is not necessary since, according to the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm), you can access every hidden state of your model without removing the LM head.<|||||>> I think this is not necessary since, according to the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm), you can access every hidden state of your model without removing the LM head. Yes, thank you for your reply. I figured out that the language model network can be split into two parts, which are ``` self.bert = BertModel(config) self.cls = BertOnlyMLMHead(config) ``` so I just need to take the output from self.bert if I want to access the hidden states.
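A small sketch of that split, for reference: keep the LM head for fine-tuning, then read sentence embeddings from the encoder sub-module. The mean pooling at the end is just one common choice, not the only one:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
lm_model = BertForMaskedLM.from_pretrained("bert-base-uncased")  # or your fine-tuned checkpoint

inputs = tokenizer("A sentence to embed.", return_tensors="pt")
with torch.no_grad():
    # The encoder lives in lm_model.bert; no need to strip the LM head.
    encoder_outputs = lm_model.bert(**inputs)

last_hidden = encoder_outputs[0]              # (batch, seq_len, hidden_size)
sentence_embedding = last_hidden.mean(dim=1)  # simple mean pooling over tokens
print(sentence_embedding.shape)               # torch.Size([1, 768])
```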
transformers
6,153
closed
readme m3hrdadfi/albert-fa-base-v2
model_card readme for m3hrdadfi/albert-fa-base-v2
07-30-2020 11:04:31
07-30-2020 11:04:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=h1) Report > Merging [#6153](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d24ea708d742263efe4f4b8d525402f2d916c96c&el=desc) will **increase** coverage by `1.88%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6153/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6153 +/- ## ========================================== + Coverage 77.19% 79.08% +1.88% ========================================== Files 146 146 Lines 26403 26403 ========================================== + Hits 20382 20880 +498 + Misses 6021 5523 -498 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (+5.76%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `81.00% <0.00%> (+14.00%)` | :arrow_up: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6153/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=footer). Last update [d24ea70...1adb2ce](https://codecov.io/gh/huggingface/transformers/pull/6153?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>That's great, thanks for sharing this very detailed model card 🤗 ➡️ **[model page](https://huggingface.co/m3hrdadfi/albert-fa-base-v2)** Would you like to add sample inputs for Persian, either to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts (open a Pull request) or to your specific model card?<|||||>> That's great, thanks for sharing this very detailed model card 🤗 > > ➡️ **[model page](https://huggingface.co/m3hrdadfi/albert-fa-base-v2)** > > Would you like to add sample inputs for Persian, either to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts (open a Pull request) or to your specific model card? Yes, sure, why not! I have added a couple of samples to `DefaultWidget.ts` and opened a PL!
transformers
6,152
closed
Using BertWordPiece Tokenizer
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details The Bert WordPiece tokenizer only saves a vocab file on saving the model while other tokenizers such as Byte Level BPE also save a merges file. When I try to call the model after saving it `RobertaTokenizerFast.from_pretrained("./EsperBERTo_italian", max_len=512)` I get the following error `OSError: Model name './EsperBERTo_italian' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed './EsperBERTo_italian' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. `
07-30-2020 09:36:22
07-30-2020 09:36:22
Hi! Are you trying to load a BERT vocabulary in a RoBERTa tokenizer? This unfortunately won't work, as the mechanisms behind WordPiece and byte-level BPE are inherently different.<|||||>Hi, I could not find documentation for it so I thought I'd try. Another thing I wanted to ask: I am working on a dataset of recipes, so I have a list of ingredients in order. I remove random ingredients and predict them using various models. I have tried Seq2Seq models and RoBERTa to solve this problem but both give poor results. In my opinion, this problem is somewhat similar to NLP problems but significantly different, because tokenizing like BERT does not give any advantage and creates more problems. Do you have any architecture in mind that might be better suited to tackle this problem?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
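To close the loop on the original question: a WordPiece vocabulary trained with the `tokenizers` library has to be reloaded through the BERT tokenizer classes, not the RoBERTa ones. A minimal sketch, assuming `./EsperBERTo_italian` contains the `vocab.txt` that was written when the WordPiece tokenizer was saved:

```python
from transformers import BertTokenizerFast

# Loads vocab.txt from the directory; merges.txt is only needed for BPE-style tokenizers.
tokenizer = BertTokenizerFast.from_pretrained("./EsperBERTo_italian", model_max_length=512)
print(tokenizer.tokenize("Un esempio di frase."))
```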
transformers
6,151
closed
Add Pytorch Native AMP support in Trainer
PyTorch 1.6 introduces native AMP support. This eliminates the need to build and install Apex, addresses the problems highlighted in [Apex #818](https://github.com/NVIDIA/apex/issues/818), and provides more flexibility. This is the recommended way to use AMP. With this PR, Trainer will automatically use PyTorch's native AMP if version 1.6 is installed; otherwise, it will use Apex. This PR will close [#6115](https://github.com/huggingface/transformers/issues/6115).
07-30-2020 07:45:36
07-30-2020 07:45:36
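For readers unfamiliar with the native path, a self-contained sketch of the autocast/GradScaler pattern this PR wires into Trainer (a toy model on CUDA, not the Trainer code itself):

```python
import torch
from torch import nn

device = "cuda"  # native AMP needs a GPU
model = nn.Linear(16, 2).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    x = torch.randn(8, 16, device=device)
    y = torch.randint(0, 2, (8,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():      # forward pass runs in mixed precision
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()        # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)               # unscales gradients, then steps the optimizer
    scaler.update()
print("final loss:", loss.item())
```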
transformers
6,150
closed
🐛 T5 Tokenizer ignores \n \t characters and more than one whitespace together
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 (master) - Platform: Linux-4.9.0-12-amd64-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten @sshleifer ## Information T5 Tokenizer is based out of SentencePiece and in sentencepiece Whitespace is treated as a basic symbol. But huggingface tokenizers just ignores more than one whitespace. Consider all the following examples tokenize to the same thing. ``` from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("t5-base") print(tokenizer.tokenize("Hi there I'm good")) >> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good'] print(tokenizer.tokenize("Hi there I'm good")) >> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good'] print(tokenizer.tokenize("Hi there I'm good\n")) >> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good'] print(tokenizer.tokenize("Hi there \n I'm good\n")) >> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good'] print(tokenizer.tokenize("Hi there \n I'm good\n")) >> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good'] print(tokenizer.tokenize("Hi there \n \t I'm good\n")) >> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good'] print(tokenizer.tokenize("Hi there\nI'm good")) >> ['▁Hi', '▁there', '▁I', "'", 'm', '▁good'] ``` All these examples should tokenize to different representations. Also ignoring newline outright means that all applications that use newlines fail. Model I am using (Bert, XLNet ...): T5 <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> All the whitespaces have different tokenizations
07-30-2020 04:53:04
07-30-2020 04:53:04
@TevenLeScao @sshleifer @patrickvonplaten This looks like a serious problem with the T5 Tokenizer. Is this behavior expected?<|||||>Closing this issue, as the sentencepiece model for T5 removes consecutive whitespace by design; see https://github.com/google-research/text-to-text-transfer-transformer/issues/390#issuecomment-688417703
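For anyone who still needs newlines and tabs preserved, a hedged work-around sketch: map the whitespace you care about to sentinel tokens before tokenizing. The sentinel strings below are an assumption for illustration, not a library feature, and if you train with them you also need to resize the model embeddings after `add_tokens`:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
tokenizer.add_tokens(["<n>", "<t>"])  # sentinel strings chosen here for illustration

def tokenize_keeping_whitespace(text: str):
    # Replace characters the sentencepiece model would otherwise drop with visible sentinels.
    return tokenizer.tokenize(text.replace("\n", " <n> ").replace("\t", " <t> "))

print(tokenize_keeping_whitespace("Hi there \n \t I'm good\n"))
```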
transformers
6,149
closed
[s2s] add support for overriding config params
Add support for overriding model params: ``` python finetune.py --encoder_layerdrop 0.1 --decoder_layerdrop 0.1 --dropout 0.1 --attention_dropout 0.1 ``` as requested at https://github.com/huggingface/transformers/issues/6018 The `README.md` diff is mostly the editor removing superfluous whitespace - not sure why github shows it; normally it doesn't. The only added doc section is https://github.com/stas00/transformers/blob/seq2seq-train_params-1/examples/seq2seq/README.md#finetuning-training-params
07-30-2020 03:01:12
07-30-2020 03:01:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=h1) Report > Merging [#6149](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54f9fbeff822ec0547fd23d0338654456925f6b7&el=desc) will **increase** coverage by `1.32%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6149/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6149 +/- ## ========================================== + Coverage 78.35% 79.68% +1.32% ========================================== Files 146 146 Lines 26403 26403 ========================================== + Hits 20689 21039 +350 + Misses 5714 5364 -350 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.51%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+34.61%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6149/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=footer). Last update [54f9fbe...5476a9e](https://codecov.io/gh/huggingface/transformers/pull/6149?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>The code looks perfect.<|||||>> I'm surprised you didn't need to change the `CHEAP_ARGS` constant in the tests. because the new args are optional? Unless you mean something else. ...Working on the tests. <|||||>Added tests as suggested. <|||||>good alias ```bash sty () { make style flake8 examples templates tests src utils } ```
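The mechanism itself is small: copy matching command-line arguments onto the model config before instantiating the model. A schematic sketch (the helper name is made up; the attribute names follow the flags listed in the PR description):

```python
from argparse import Namespace
from transformers import AutoConfig

OVERRIDABLE = ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout")

def apply_config_overrides(config, hparams: Namespace):
    # Copy only the overrides the user actually passed on the command line.
    for key in OVERRIDABLE:
        value = getattr(hparams, key, None)
        if value is not None:
            assert hasattr(config, key), f"config has no attribute {key}"
            setattr(config, key, value)
    return config

config = apply_config_overrides(AutoConfig.from_pretrained("facebook/bart-large"), Namespace(dropout=0.1))
print(config.dropout)  # 0.1
```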
transformers
6,148
closed
tokenize cache for examples/language-modeling
# 🚀 Feature request I find that transformers already has a cache for tokenized results in examples/token-classification. I think examples/language-modeling, which works with much larger datasets, also needs that.
07-30-2020 02:56:11
07-30-2020 02:56:11
It takes me about 7 minutes to tokenize the training set (about 4 million examples) every time I start training. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,147
closed
the documents for transformer don't work
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## i find the new document for transformer can't work, https://huggingface.co/transformers/v2.5.0/model_doc/bert.html#bertmodel ; the class link don't work <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
07-30-2020 02:44:55
07-30-2020 02:44:55
The link works, but the documentation was not properly built for this version. You should check a more recent version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,146
closed
🌟 Mirostat decoding algorithm
# 🌟 Mirostat: A Perplexity-Controlled Neural Text Decoding Algorithm ## Description Paper: https://arxiv.org/pdf/2007.14966.pdf Abstract: > [...] We use this analysis to design a feedback-based adaptive top-k text decoding algorithm called mirostat that generates text (of any length) with a predetermined value of perplexity, and thereby high-quality text without any tuning. [...] Mirostat avoids both traps: experiments show that cross-entropy has a near-linear relation with repetition in generated text. This relation is almost independent of the sampling method but slightly dependent on the model used. Hence, for a given language model, control over perplexity also gives control over repetitions. ## Open source status * [x] the model implementation is available: https://github.com/basusourya/mirostat * [x] the model weights are available: _Not applicable_ * [x] who are the authors: @basusourya
07-30-2020 01:45:13
07-30-2020 01:45:13
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,145
closed
TOKENIZER: truncation not working for batch
## Environment info - `transformers` version: 2.11.0 - Platform: Linux-4.15.0-106-generic-x86_64-with-debian-buster-sid - Python version: 3.7.4 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz examples/distillation: @VictorSanh ## Information Model I am using: DistilBertForSequenceClassification. The tokenizer does not truncate when I pass a list of strings. It only works when I pass a single string. ## To reproduce Copy/paste (or just read) code below. ```python from transformers import DistilBertTokenizer tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased") sentence = "Submit a bug report to help us improve transformers." output = tokenizer(sentence, padding=True, truncation=True, max_length=4) print(output['input_ids']) # 4 tokens as expected from *max_length=4* # out: [101, 12040, 1037, 102] # now let's test with multiple sentences sentences = [ "Submit a bug report to help us improve transformers.", "Benchmark a part of this library and share your results" ] output = tokenizer(sentences, padding=True, truncation=True, max_length=4) print(output['input_ids']) # output is returning all tokens, it is not truncating to max_length! # out: [[101, 12040, 1037, 11829, 3189, 2000, 2393, 2149, 5335, 19081, 1012, 102, 0], # [101, 6847, 10665, 1037, 2112, 1997, 2023, 3075, 1998, 3745, 2115, 3463, 102]] ``` ## Expected ```python # output truncated to max_length (4 as in the example) # out: [[101, 12040, 1037, 102], # [101, 6847, 10665, 102]] ```
07-30-2020 00:59:28
07-30-2020 00:59:28
I have updated to version 3.0.2 now and the tokenizer is working properly. I am closing this issue.
transformers
6,144
closed
Question-Answering pipeline doesn't work anymore with long text
Transformers version: 3.0.2 The question-answering models don't seem to work anymore with long text, any reason why this is happening? I have tried with the default model in `pipeline` as well as with specific models. e.g __Sample Code:__ ``` from transformers import pipeline nlp_qa = pipeline('question-answering') # 1st try nlp_qa = pipeline('question-answering', model='deepset/roberta-base-squad2') # 2nd try context = """ Coronaviruses are a large family of viruses which may cause illness in animals or humans. In humans, several coronaviruses are known to cause respiratory infections ranging from the common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS). The most recently discovered coronavirus causes coronavirus disease COVID-19. COVID-19 is the infectious disease caused by the most recently discovered coronavirus. This new virus and disease were unknown before the outbreak began in Wuhan, China, in December 2019. COVID-19 is now a pandemic affecting many countries globally. The most common symptoms of COVID-19 are fever, dry cough, and tiredness. Other symptoms that are less common and may affect some patients include aches and pains, nasal congestion, headache, conjunctivitis, sore throat, diarrhea, loss of taste or smell or a rash on skin or discoloration of fingers or toes. These symptoms are usually mild and begin gradually. Some people become infected but only have very mild symptoms. Most people (about 80%) recover from the disease without needing hospital treatment. Around 1 out of every 5 people who gets COVID-19 becomes seriously ill and develops difficulty breathing. Older people, and those with underlying medical problems like high blood pressure, heart and lung problems, diabetes, or cancer, are at higher risk of developing serious illness. However, anyone can catch COVID-19 and become seriously ill. People of all ages who experience fever and/or cough associated with difficulty breathing/shortness of breath, chest pain/pressure, or loss of speech or movement should seek medical attention immediately. If possible, it is recommended to call the health care provider or facility first, so the patient can be directed to the right clinic. People can catch COVID-19 from others who have the virus. The disease spreads primarily from person to person through small droplets from the nose or mouth, which are expelled when a person with COVID-19 coughs, sneezes, or speaks. These droplets are relatively heavy, do not travel far and quickly sink to the ground. People can catch COVID-19 if they breathe in these droplets from a person infected with the virus. This is why it is important to stay at least 1 meter) away from others. These droplets can land on objects and surfaces around the person such as tables, doorknobs and handrails. People can become infected by touching these objects or surfaces, then touching their eyes, nose or mouth. This is why it is important to wash your hands regularly with soap and water or clean with alcohol-based hand rub. Practicing hand and respiratory hygiene is important at ALL times and is the best way to protect others and yourself. When possible maintain at least a 1 meter distance between yourself and others. This is especially important if you are standing by someone who is coughing or sneezing. 
Since some infected persons may not yet be exhibiting symptoms or their symptoms may be mild, maintaining a physical distance with everyone is a good idea if you are in an area where COVID-19 is circulating. """ nlp_qa(context=context, question='What is a coronavirus ?') ``` __Error Message:__ ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-15-ddac1f9cb68e> in <module>() ----> 1 nlp_qa(context=context, question='What is a coronavirus ?') 1 frames /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0) 1314 ), 1315 } -> 1316 for s, e, score in zip(starts, ends, scores) 1317 ] 1318 KeyError: 0 ``` This used to work before version 3 I remember, would really appreciate some help on this.
07-29-2020 21:48:05
07-29-2020 21:48:05
Also if I look back at my code, ``` !pip install transformers==2.11.0 ``` ![image](https://user-images.githubusercontent.com/3448263/88862614-609d6100-d21e-11ea-951e-a2903b0be693.png) Still works for me with a larger context (same code as above). Any idea which is the default model being used there and if that would still work for transfomers 3.x ? <|||||>@LysandreJik , @sshleifer would be great if you could look into this, assign this to the right folks.<|||||>Assigned @mfuntowicz, the master of pipelines. He's in holidays right now, so I'll try to look into it in the coming days.<|||||>It isn't just long contexts. I was running some QA on SQuAD2.0 and came across an instance where I received that error for a given context and question but the context is not that long. ``` from transformers import pipeline model_path = "twmkn9/distilbert-base-uncased-squad2" hfreader = pipeline('question-answering', model=model_path, tokenizer=model_path, device=0) context = """ The Norman dynasty had a major political, cultural and military impact on medieval Europe and even the Near East. The Normans were famed for their martial spirit and eventually for their Christian piety, becoming exponents of the Catholic orthodoxy into which they assimilated. They adopted the Gallo-Romance language of the Frankish land they settled, their dialect becoming known as Norman, Normaund or Norman French, an important literary language. The Duchy of Normandy, which they formed by treaty with the French crown, was a great fief of medieval France, and under Richard I of Normandy was forged into a cohesive and formidable principality in feudal tenure. The Normans are noted both for their culture, such as their unique Romanesque architecture and musical traditions, and for their significant military accomplishments and innovations. Norman adventurers founded the Kingdom of Sicily under Roger II after conquering southern Italy on the Saracens and Byzantines, and an expedition on behalf of their duke, William the Conqueror, led to the Norman conquest of England at the Battle of Hastings in 1066. Norman cultural and military influence spread from these new European centres to the Crusader states of the Near East, where their prince Bohemond I founded the Principality of Antioch in the Levant, to Scotland and Wales in Great Britain, to Ireland, and to the coasts of north Africa and the Canary Islands. """ question2 = "Who assimilted the Roman language?" hfreader(question=question2, context=context) ``` ### Error Message: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-144-45135f680e80> in <module>() ----> 1 hfreader(question=question2, context=context) 1 frames /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0) 1314 ), 1315 } -> 1316 for s, e, score in zip(starts, ends, scores) 1317 ] 1318 KeyError: 0 ``` But if I changed the question and keep the same context, the pipeline completes the execution. ``` question1 = "Who was famed for their Christian spirit?" hfreader(question=question1, context=context) ``` ### Output ``` {'answer': 'Normans', 'end': 127, 'score': 0.5337043597899815, 'start': 120} ```<|||||>Thanks @melaniebeck for this, even I encountered this just earlier today. 
Would definitely be great if the team can figure out how these could be resolved in v3.x for transformers.<|||||>i also encountered this issue (keyerror : 0) it's not even long text (about 8-12 words length) sometime it occured when i'm changing some word in the question with oov word ``` rv = self.dispatch_request() 0|QA | File "/home/samsul/.local/lib/python3.6/site-packages/flask/app.py", line 1935, in dispatch_request 0|QA | return self.view_functions[rule.endpoint](**req.view_args) 0|QA | File "/home/samsul/question-answering/app.py", line 23, in search 0|QA | answer = nlp({'question': question,'context': context}) 0|QA | File "/home/samsul/.local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__ 0|QA | for s, e, score in zip(starts, ends, scores) 0|QA | File "/home/samsul/.local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp> 0|QA | for s, e, score in zip(starts, ends, scores) 0|QA | KeyError: 0 ```<|||||>Hello! There has been a few fixes on the pipelines since version v3.0.2 came out. I can reproduce this issue on v3.0.1 and v3.0.2, but not on the master branch, as it has probably been fixed already. Could you try installing from source (`pip install git+https://github.com/huggingface/transformers`) and let me know if that fixes your issue?<|||||>hi @LysandreJik seems the problem still occurred but now its keyerror 17 **input** ``` !pip install git+https://github.com/huggingface/transformers from transformers import pipeline nlp = pipeline('question-answering',model='a-ware/xlmroberta-squadv2',device=0) nlp({'question': "siapa istri samsul?",'context': "nama saya samsul, saya adalah suami raisa"}) ``` **Error** ``` /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in __call__(self, *args, **kwargs) 1676 ), 1677 } -> 1678 for s, e, score in zip(starts, ends, scores) 1679 ] 1680 /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0) 1676 ), 1677 } -> 1678 for s, e, score in zip(starts, ends, scores) 1679 ] 1680 KeyError: 17 ``` i also try the case from @dipanjanS (the first post) still got some error: ``` /usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <dictcomp>(.0) 1636 with torch.no_grad(): 1637 # Retrieve the score for the context tokens only (removing question tokens) -> 1638 fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} 1639 start, end = self.model(**fw_args)[:2] 1640 start, end = start.cpu().numpy(), end.cpu().numpy() ValueError: expected sequence of length 384 at dim 1 (got 317) ``` <|||||>https://github.com/huggingface/transformers/blob/f6cb0f806efecb64df40c946dacaad0adad33d53/src/transformers/pipelines.py#L1618 is causing this issue. Padding to max_length solves this problem. Currently, if the text is long, the final span is not padded to the max_seq_len of the model.<|||||>Yes agreed I think that is related to the recent code push based on the PR linked earlier. Would be great if this could be looked into HF team! On Tue, Aug 11, 2020 at 11:18 PM Binoy Dalal <[email protected]> wrote: > > https://github.com/huggingface/transformers/blob/f6cb0f806efecb64df40c946dacaad0adad33d53/src/transformers/pipelines.py#L1618 > <https://mailtrack.io/trace/link/26fa516997f20e87e713b4c04065c74bbadf3226?url=https%3A%2F%2Fgithub.com%2Fhuggingface%2Ftransformers%2Fblob%2Ff6cb0f806efecb64df40c946dacaad0adad33d53%2Fsrc%2Ftransformers%2Fpipelines.py%23L1618&userId=3535544&signature=c1f087ce57177138> > is causing this issue. 
Padding to max_length solves this problem. > Currently, if the text is long, the final span is not padded to the > max_seq_len of the model.<|||||>Solved by https://github.com/huggingface/transformers/issues/6875<|||||>Awesome thanks folks!
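As a hedged illustration of the padding fix described in the comments above (a sketch only, not the pipeline's actual internals), encoding every question/context span with `padding="max_length"` keeps a short final span the same size as the rest; the model name and the 384 max length are taken from the reports above:

```python
# Sketch only: pad each encoded span to a fixed length so a short final span
# cannot cause a shape mismatch downstream. Assumes the transformers >= 3.0 tokenizer API.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("twmkn9/distilbert-base-uncased-squad2")

question = "Who was famed for their Christian spirit?"
context = "The Normans were famed for their martial spirit and eventually for their Christian piety."

encoded = tokenizer(
    question,
    context,
    max_length=384,            # max_seq_len reported in the traceback above
    truncation="only_second",  # only truncate the context, never the question
    padding="max_length",      # pad the (possibly short) final span as well
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # torch.Size([1, 384]) regardless of context length
```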
transformers
6,143
closed
Tf trainer cleanup
New version of #6015. Will harmonize the public customization hooks and document everything properly once this is merged.
07-29-2020 20:37:01
07-29-2020 20:37:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=h1) Report > Merging [#6143](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54f9fbeff822ec0547fd23d0338654456925f6b7&el=desc) will **increase** coverage by `1.02%`. > The diff coverage is `25.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6143/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6143 +/- ## ========================================== + Coverage 78.35% 79.38% +1.02% ========================================== Files 146 146 Lines 26403 26416 +13 ========================================== + Hits 20689 20970 +281 + Misses 5714 5446 -268 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.14% <25.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=footer). Last update [54f9fbe...46b6cb4](https://codecov.io/gh/huggingface/transformers/pull/6143?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,142
closed
Fix FlauBERT GPU test
07-29-2020 18:59:21
07-29-2020 18:59:21
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=h1) Report > Merging [#6142](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54f9fbeff822ec0547fd23d0338654456925f6b7&el=desc) will **decrease** coverage by `0.03%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6142/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6142 +/- ## ========================================== - Coverage 78.35% 78.32% -0.04% ========================================== Files 146 146 Lines 26403 26403 ========================================== - Hits 20689 20679 -10 - Misses 5714 5724 +10 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6142/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `86.61% <100.00%> (ø)` | | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6142/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6142/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=footer). Last update [54f9fbe...7034997](https://codecov.io/gh/huggingface/transformers/pull/6142?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,141
closed
Bug in language_modeling.py calling tokenizer.num_special_tokens_to_add
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.4.0-1081-aws-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.1.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. tokenizers: @mfuntowicz --> ## Information Model I am using GPT-2 The class TextDataset call tokenizer.num_special_tokens_to_add(pair=False) but the correct argument is called is_pair. I assume the bugfix corresponds to transformers repo. ## To reproduce https://github.com/huggingface/transformers/blob/e49393c3617e877f0370f7bad7c7e823808c5bfb/src/transformers/data/datasets/language_modeling.py#L27 https://github.com/huggingface/tokenizers/blob/master/bindings/python/tokenizers/implementations/base_tokenizer.py#L20 I'm using transformers 3.0.2 and tokenizers 0.8.1rc1 (try to update but says its incompatible) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
07-29-2020 18:41:26
07-29-2020 18:41:26
I was using a wrong approach; it works if I train the tokenizer, save the params, and load them into a FastTokenizer implementation.
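To make that resolution concrete, here is a hedged sketch (the `vocab.json`/`merges.txt` file names are assumptions based on the issue) of loading a separately trained BPE tokenizer into a transformers fast tokenizer, whose `num_special_tokens_to_add` uses the `pair` keyword that `TextDataset` expects:

```python
# Sketch only: wrap the trained vocab/merges files in a transformers fast tokenizer
# instead of passing a raw `tokenizers` object (whose signature uses `is_pair`).
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast(vocab_file="vocab.json", merges_file="merges.txt")

# This is the call TextDataset makes internally and that previously failed:
print(tokenizer.num_special_tokens_to_add(pair=False))
```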
transformers
6,140
closed
Copyright date and owner not filled out in LICENSE file
In transformers/LICENSE the copyright date and owner is not filled out. https://github.com/huggingface/transformers/blob/master/LICENSE#L179-L190
07-29-2020 17:58:01
07-29-2020 17:58:01
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Thanks @Meadosc for raising this; we're not sure whether the license actually requires the copyright date and owner to be filled in. We only saw this issue now, but it would be good to reopen it (it was auto-closed by the bot) so the repository maintainers can address it if possible!
transformers
6,139
closed
Applying hugging face transformer in sequence labeling problem
Hello, and thanks for your great framework. What I'd like to know is whether I can apply these Hugging Face transformer models to **sequence labeling** problems such as **part-of-speech tagging** and **word segmentation** (I only see an **NER** model in the examples folder). If I can, **how** should I do that? Could I get some help, such as **example scripts**, on applying these transformers to sequence labeling problems?
07-29-2020 17:07:25
07-29-2020 17:07:25
You can just use the NER code for a POS tagging problem; the only difference is the set of target classes.
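A minimal sketch of that suggestion (the tag set and model name are placeholders): treat POS tagging as token classification, exactly like the NER example, with the POS tags as the target classes.

```python
# Sketch only: POS tagging as token classification; swap in your own tag set and data.
from transformers import AutoTokenizer, AutoModelForTokenClassification

pos_tags = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "ADP", "PUNCT"]  # placeholder tag set
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(pos_tags)
)

inputs = tokenizer("Transformers are great", return_tensors="pt")
outputs = model(**inputs)
print(outputs[0].shape)  # (1, sequence_length, len(pos_tags)): per-token tag logits
```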
transformers
6,138
closed
Switch from return_tuple to return_dict
This is the first step in the change of model outputs as described on [the forum](https://discuss.huggingface.co/t/new-model-output-types/195/8). This PR removes the argument `return_tuple` and introduces `return_dict` (that works the other way round) and all models now return tuple by default (100% full backward compatibility) unless you opt-in the new model output types with `return_dict=True`. The model output class is changed to the dict-like one that should work equally well for TensorFlow. I have normally updated all examples in the docs to instantiate the model with `return_dict=True` but more docs will follow in other PRs. For the tests, I have set `return_dict=True` in one of the common tests just to make sure it actually works. Step 2 (in a follow-up PR) will be to use it in all tests. Step 3 is then going to update the TensorFlow models to use this `ModelOutput`.
07-29-2020 17:03:44
07-29-2020 17:03:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=h1) Report > Merging [#6138](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a8ae27617e3c4dafb34bcbbaadf4ceee28583bd&el=desc) will **increase** coverage by `0.99%`. > The diff coverage is `71.96%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6138/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6138 +/- ## ========================================== + Coverage 78.49% 79.48% +0.99% ========================================== Files 146 146 Lines 26335 26441 +106 ========================================== + Hits 20671 21017 +346 + Misses 5664 5424 -240 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.48% <ø> (ø)` | | | [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <ø> (ø)` | | | [src/transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `40.96% <ø> (-0.04%)` | :arrow_down: | | [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <0.00%> (ø)` | | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.54% <8.66%> (+0.06%)` | :arrow_up: | | [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `23.47% <11.11%> (-0.63%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `80.56% <68.08%> (-1.65%)` | :arrow_down: | | ... and [40 more](https://codecov.io/gh/huggingface/transformers/pull/6138/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=footer). 
Last update [8a8ae27...60928b0](https://codecov.io/gh/huggingface/transformers/pull/6138?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@sgugger thanks very much for this PR! `return_dict` seems to work with the `from_pretrained()` method for models, but what if I didn't want to use `from_pretrained()` and simply instantiated the model from scratch as follows: ``` config_class = GPT2Config model_class = GPT2DoubleHeadsModel config = config_class.from_pretrained("gpt2") model = model_class(config) ``` I still want to be able to use `return_dict`. How would I go about doing that? It looks like I could pass `return_dict` explicitly in the `forward()` for the from-scratch case. However, I want the `forward()` call in my code to be consistent across the from-scratch and the `from_pretrained()` settings, in order to decouple the model instantiation from the actual trainer loop. How should this be handled? Would the solution be something like this: ``` config_class = GPT2Config model_class = GPT2DoubleHeadsModel config = config_class.from_pretrained("gpt2", use_return_dict=True) model = model_class(config) ``` I tried this solution but it didn't work, it gave me the following error: ``` >>> from transformers import GPT2Config >>> config = GPT2Config.from_pretrained("gpt2", use_return_dict=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 312, in from_pretrained return cls.from_dict(config_dict, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 406, in from_dict setattr(config, key, value) AttributeError: can't set attribute ```<|||||>The right line is: ``` config = config_class.from_pretrained("gpt2", return_dict=True) ``` `use_return_dict` is an inner attribute that combines `return_dict` and `torchscript` (since torchscript is incompatible with `return_dict=True`)
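For readers landing here, a hedged sketch of the from-scratch case discussed in this thread (assuming the config classes accept the same `return_dict` keyword this PR introduces):

```python
# Sketch only: opt into dict-style outputs via the config, whether the weights are
# pretrained or randomly initialized, so forward() behaves identically in both cases.
from transformers import GPT2Config, GPT2DoubleHeadsModel

config = GPT2Config.from_pretrained("gpt2", return_dict=True)
model = GPT2DoubleHeadsModel(config)            # architecture from config, random weights

scratch_config = GPT2Config(return_dict=True)   # fully from scratch, same keyword assumed
scratch_model = GPT2DoubleHeadsModel(scratch_config)
```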
transformers
6,137
closed
StopIteration error when using HuggingFace Transformer models
Hello, I am trying to use the RobertaForMultipleChoice model, and when I try to compute the mc_loss, the following StopIteration error is generated: ```python >>> mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0] Traceback (most recent call last): File "STAT946_final_project_code_v4.py", line 625, in <module> success_rate_list_diag_normal = main_function_diag_normal('/home/ec2-user/test.txt', 'test_ans_num.txt', num_iter, log_interval) File "STAT946_final_project_code_v4.py", line 415, in main_function_diag_normal best_model_RobertaForMultipleChoice_diag_normal = train_loop(model_RobertaForMultipleChoice, tokenizer, optimizer_1, scheduler_1, log_interval, svi_diag_normal, guide_diag_normal, best_model_RobertaForMultipleChoice_diag_normal) File "STAT946_final_project_code_v4.py", line 342, in train_loop optimizer, scheduler, log_interval, svi, guide, epoch) File "STAT946_final_project_code_v4.py", line 237, in train_mc_head mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0] File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__ return super().__call__(*args, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 441, in forward output_hidden_states=output_hidden_states, File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__ return super().__call__(*args, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 732, in forward extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 228, in get_extended_attention_mask extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 159, in dtype first_tuple = next(gen) StopIteration ``` The error seem to be generated from the HuggingFace code below: ```python @property def device(self) -> device: try: return next(self.parameters()).device except StopIteration: # For nn.DataParallel compatibility in PyTorch 1.5 def find_tensor_attributes(module: nn.Module) -> List[Tuple[str, Tensor]]: tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)] return tuples gen = self._named_members(get_members_fn=find_tensor_attributes) first_tuple = next(gen) return first_tuple[1].device ``` What is the cause of this error? and how can I fix it? Thank you
07-29-2020 16:42:42
07-29-2020 16:42:42
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,136
closed
frequent checkpoints have worse performance
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> Hi All, I often notice an issue when training a model and evaluate on dev set. Usually we may evaluate on the dev after each epoch, let's call this as setting A; But we often want to check the system more often on the dev set, so we may evaluate for example 1/5 epoch; let's call this as setting B. What I noticed is that A and B will get totally different performance in the end. Since B checks more often, I supposed that B can get the same or at least very close performance with A then evaluate at 5/5 of the training set, 10/5 of the training set, etc. But they are very different. For example, when I train textual entailment model on RTE dataset, A can give me about 86% accuracy on dev, but B can only give about 80%. What's the issue here? thanks ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
07-29-2020 16:25:57
07-29-2020 16:25:57
Hi, can you post the link to the Stack Overflow question? By the way, I also face this issue when working with an RTE dataset and have raised an issue here: https://github.com/huggingface/transformers/issues/5863. My dev values after each epoch don't match up when the total number of epochs changes. Now it's making me wonder if it's RTE-specific. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,135
closed
How to combine the encoded representations of two transformers
Say I have two transformer models operating on two different domains: what is a good way to combine their features?
07-29-2020 16:20:07
07-29-2020 16:20:07
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
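Since the question above went stale without an answer, here is one hedged sketch of a common approach (not an official recipe; model names are placeholders): run each encoder on its own domain and concatenate the pooled features before a small task head.

```python
# Sketch only: concatenate pooled features from two domain-specific encoders.
import torch
from transformers import AutoModel, AutoTokenizer

model_a = AutoModel.from_pretrained("bert-base-uncased")
model_b = AutoModel.from_pretrained("roberta-base")
tok_a = AutoTokenizer.from_pretrained("bert-base-uncased")
tok_b = AutoTokenizer.from_pretrained("roberta-base")

inputs_a = tok_a("a sentence from domain A", return_tensors="pt")
inputs_b = tok_b("a sentence from domain B", return_tensors="pt")

with torch.no_grad():
    feat_a = model_a(**inputs_a)[0][:, 0]   # first-token vector from encoder A
    feat_b = model_b(**inputs_b)[0][:, 0]   # first-token vector from encoder B

combined = torch.cat([feat_a, feat_b], dim=-1)
head = torch.nn.Linear(combined.size(-1), 2)  # e.g. a 2-class task head
print(head(combined).shape)
```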
transformers
6,134
closed
Fix TF CTRL model naming
This PR fixes an issue with the naming of some layers in the TensorFlow version of CTRL.
07-29-2020 16:11:33
07-29-2020 16:11:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=h1) Report > Merging [#6134](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/641b873c1341f553b40fd82c990b80884b585f0b&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6134/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6134 +/- ## ========================================== - Coverage 78.64% 78.63% -0.01% ========================================== Files 146 146 Lines 26326 26333 +7 ========================================== + Hits 20704 20708 +4 - Misses 5622 5625 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6134/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.84% <100.00%> (+0.05%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6134/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-2.51%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6134/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.44% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6134/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=footer). Last update [641b873...1bc31c0](https://codecov.io/gh/huggingface/transformers/pull/6134?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,133
closed
bart-large-mnli-yahoo-answers model card
07-29-2020 15:24:57
07-29-2020 15:24:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=h1) Report > Merging [#6133](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c002853a68906a5b1c2dd2ebb416770f1fc322b&el=desc) will **increase** coverage by `0.08%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6133/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6133 +/- ## ========================================== + Coverage 77.77% 77.86% +0.08% ========================================== Files 146 146 Lines 26326 26326 ========================================== + Hits 20476 20499 +23 + Misses 5850 5827 -23 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.95% <0.00%> (-5.27%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=footer). Last update [6c00285...5c85f49](https://codecov.io/gh/huggingface/transformers/pull/6133?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Feel free to merge whenever ready
transformers
6,132
closed
MBartTokenizerTrimmed to support truncated embeddings
Motivation: The embedding table for MBART is huge, but only ~40K of the entries are used/fine-tuned for most WMT tasks. See https://github.com/pytorch/fairseq/issues/2120 - needs vocab.json (fairseq Dictionary) - needs to call `encode_as_pieces` with a restricted vocabulary. I will take this.
07-29-2020 14:39:44
07-29-2020 14:39:44
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
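Since the issue went stale, here is a hedged sketch of the trimming idea itself (generic PyTorch, not the proposed MBartTokenizerTrimmed API): keep only the embedding rows whose token ids occur in the task data and remap ids accordingly.

```python
# Sketch only: shrink an embedding table to a restricted vocabulary.
import torch

def trim_embeddings(embedding: torch.nn.Embedding, kept_ids):
    """Return a smaller embedding plus the old-id -> new-id mapping."""
    weight = embedding.weight.data[list(kept_ids)]          # (len(kept_ids), hidden)
    trimmed = torch.nn.Embedding(weight.size(0), weight.size(1))
    trimmed.weight.data.copy_(weight)
    old_to_new = {old: new for new, old in enumerate(kept_ids)}
    return trimmed, old_to_new

# Toy table standing in for MBART's ~250k-row embedding matrix.
full = torch.nn.Embedding(10, 4)
trimmed, mapping = trim_embeddings(full, kept_ids=[0, 3, 7])
print(trimmed.weight.shape, mapping)
```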
transformers
6,131
closed
Enable ONNX/ONNXRuntime optimizations through converter script
Introduce `--optimize` CLI argument and `optimize()` method to allow ONNXRuntime to operates all the possible optimizations on the raw ONNX IR. Added documentation for this parameter in the ONNX/ONNXRuntime section on the doc.
07-29-2020 14:07:02
07-29-2020 14:07:02
cc @tianleiwu @yufenglee 💪 <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=h1) Report > Merging [#6131](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c002853a68906a5b1c2dd2ebb416770f1fc322b&el=desc) will **increase** coverage by `0.71%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6131/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6131 +/- ## ========================================== + Coverage 77.77% 78.49% +0.71% ========================================== Files 146 146 Lines 26326 26326 ========================================== + Hits 20476 20664 +188 + Misses 5850 5662 -188 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.69% <0.00%> (-6.52%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.09% <0.00%> (-4.88%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=footer). 
Last update [6c00285...7cb55ae](https://codecov.io/gh/huggingface/transformers/pull/6131?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,130
closed
Use google style to document properties
It's cleaner this way and avoids redundancy.
07-29-2020 14:01:22
07-29-2020 14:01:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=h1) Report > Merging [#6130](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c002853a68906a5b1c2dd2ebb416770f1fc322b&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6130/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6130 +/- ## ======================================= Coverage 77.77% 77.78% ======================================= Files 146 146 Lines 26326 26328 +2 ======================================= + Hits 20476 20478 +2 Misses 5850 5850 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <ø> (ø)` | | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `98.62% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `70.32% <0.00%> (-26.66%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=footer). Last update [6c00285...0c01513](https://codecov.io/gh/huggingface/transformers/pull/6130?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,129
closed
Add new pre-trained models BERTweet and PhoBERT
I'd like to add pre-trained [BERTweet](https://github.com/VinAIResearch/BERTweet/) and [PhoBERT](https://github.com/VinAIResearch/PhoBERT/) models to the `transformers` library. Users now can use these models directly from `transformers`. E.g: bertweettokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base") bertweetmodel = BertweetModel.from_pretrained("vinai/bertweet-base") phoberttokenizer = PhobertTokenizer.from_pretrained("vinai/phobert-large") phobertmodel = PhobertModel.from_pretrained("vinai/phobert-large") [BERTweet: A pre-trained language model for English Tweets](https://github.com/VinAIResearch/BERTweet/) [PhoBERT: Pre-trained language models for Vietnamese](https://github.com/VinAIResearch/PhoBERT/)
07-29-2020 12:59:40
07-29-2020 12:59:40
> I'd like to add pre-trained [BERTweet](https://github.com/VinAIResearch/BERTweet/) and [PhoBERT](https://github.com/VinAIResearch/PhoBERT/) models to the `transformers` library. > > Users now can use these models directly from `transformers`. E.g: > > ``` > bertweettokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base") > bertweetmodel = BertweetModel.from_pretrained("vinai/bertweet-base") > > phoberttokenizer = PhobertTokenizer.from_pretrained("vinai/phobert-large") > phobertmodel = PhobertModel.from_pretrained("vinai/phobert-large") > ``` > > [BERTweet: A pre-trained language model for English Tweets](https://github.com/VinAIResearch/BERTweet/) > [PhoBERT: Pre-trained language models for Vietnamese](https://github.com/VinAIResearch/PhoBERT/) Whether I can get any support from huggingface w.r.t. this pull request @julien-c ? Thanks.<|||||>Hello @datquocnguyen ! As you've said, BERTweet and PhoBERT reimplement the RoBERTa model without adding any special behavior. I don't think it's necessary to reimplement them then, is it? Uploading them on the hub should be enough to load them into RoBERTa architectures, right?<|||||>Hi @LysandreJik They use different tokenizers (i.e. fastBPE), so we cannot load their tokenizers using RoBERTa. Please see a loading example using RoBERTa: https://github.com/VinAIResearch/BERTweet#transformers An issue related to this is at: #5965 <|||||>I hope both BERTweet and PhoBERT could be incorporated into `transformers` in a similar manner to as their counterparts (e.g. CamemBERT and FlauBERT). @LysandreJik Please let me know what I can do for this. Thanks.<|||||>Yes, I understand, that makes sense. There shouldn't be any issue in incorporating them into `transformers`.<|||||>I've taken a quick look at it, and it looks very cool! Something that we can maybe do better, is regarding the tokenizers: - They're currently untested, but they're the main contribution of this PR so they definitely should be tested. - If possible, we would like not to add an additional dependency (in this case FastBPE). It would be great to leverage the already existing library `huggingface/tokenizers` - On that front, given it's a BPE tokenizer, it should be easy enough to leverage the OpenAI GPT (not GPT-2) tokenizer, which seems very similar. It might even be possible to load the vocab/merge files directly in `OpenAIGPTTokenizer`. Let me know what you think!<|||||>Haven't tried it directly, but as seen with @n1t0, since you're not doing any fancy pre-processing it might be as simple as the following: ```py class PhobertTokenizerFast(PreTrainedTokenizerFast): vocab_files_names = VOCAB_FILES_NAMES pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES model_input_names = ["attention_mask"] def __init__(self, vocab_file, merges_file, unk_token="<unk>", **kwargs): kwargs.setdefault("unk_token", unk_token) super().__init__( CharBPETokenizer(vocab_file=vocab_file, merges_file=merges_file, unk_token=unk_token, lowercase=False, bert_normalizer=False, split_on_whitespace_only=True), **kwargs, ) ```<|||||>Thanks very much @LysandreJik I will revise the code following your comments and inform you as soon as I complete it. <|||||>@datquocnguyen Yeah, these models are cool. Lovin' it. I think we can try to figure out how to convert `fastBPE` formats to our compatible format before adding it directly to our dependency (I believe `XLM` uses `fastBPE`). so would you hold on a little when we try to figure it out? 
We have to be cautious when adding dependencies! Thanks! cc @LysandreJik <|||||>Yes. Thanks @JetRunner <|||||>some tokenizer function (decode, convert_ids_to_tokens) hasn't implemented for PhoBertTokenizer right?<|||||>@datquocnguyen Thank you for this pull request. I tried the Bertweet model and met a problem that the tokenizer encoded special symbols like "\<pad\>" not as a whole token. Instead, it would split the string into characters like "< p a d >". I fixed the problem by modifying the code at `` as below: ```python --- a/BERTweet/transformers/tokenization_bertweet.py +++ b/BERTweet/transformers/tokenization_bertweet.py @@ -242,9 +242,14 @@ class BertweetTokenizer(PreTrainedTokenizer): text = self.normalizeTweet(text) return self.bpe.apply([text])[0].split() - def convert_tokens_to_ids(self, tokens): - """ Converts a list of str tokens into a list of ids using the vocab.""" - return self.vocab.encode_line(" ".join(tokens), append_eos=False, add_if_not_exist=False).long().tolist() + def _convert_token_to_id(self, token): + #""" Converts a list of str tokens into a list of ids using the vocab.""" + #return self.vocab.encode_line(" ".join(tokens), append_eos=False, add_if_not_exist=False).long().tolist() + return self.vocab.encode_line(token, append_eos=False, add_if_not_exist=False).long().tolist()[0] + + @property + def vocab_size(self) -> int: + return len(self.vocab) ``` From my understanding, to encode a sentence, the order of the interfaces called in this case are `PreTrainedTokenizerBase::encode` ->`PreTrainedTokenizer::_encode_plus` ->`PreTrainedTokenizer::convert_tokens_to_ids` ->`PreTrainedTokenizer::_convert_token_to_id_with_added_voc` ->`BertweetTokenizer::_convert_token_to_id` for non-special tokens or `PreTrainedTokenizer::added_tokens_encoder` for special tokens. So in the class `BertweetTokenizer`, it should implement the interface `_convert_token_to_id` rather than `convert_tokens_to_ids`.<|||||>I will have a look soon. Thanks @Miopas.<|||||>**I have just tried "BertweetTokenizer" and got this error:** "ImportError: cannot import name 'BertweetTokenizer' from 'transformers' (/home/apps/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)" **Is there any solution to it?** **I have also tried:** tokenizer2 = BertTokenizer.from_pretrained("vinai/bertweet-base") trained = tokenizer2.encode("oops!! pelosi & dems admit numbers submitted to cbo are false! someurl #tcot #tlot #sgp #hcr #p2") and got: trained = [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None] **Is there any solution to it?** thks!<|||||>Hi @LysandreJik @JetRunner It's for your information both PhoBERT and BERTweet are now can be used in the Auto mode and without an external dependency fastBPE. Please help review this pull request (7/8 successful checks). Thanks a lot. @justinphan3110 @Miopas @SergioBarretoJr Please update the repository. Both models should work now. Thanks.<|||||>[run_tests_torch_and_tf.output.txt](https://github.com/huggingface/transformers/files/5174353/run_tests_torch_and_tf.output.txt) Hi @LysandreJik @JetRunner @julien-c @sshleifer I am wondering whether I can get a support from huggingface to incorporate BERTweet and PhoBERT into the `transformers` master branch ? 
There is only a failed test of `FAILED tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline - ...` for `run_tests_torch_and_tf` which is not related to BERTweet and PhoBERT, thus out of my control. So my pull request could not pass this test (please see details in the attachment file). Please could you help review my pull request? Thank you very much. <|||||>Dear @LysandreJik, Please can you kindly help to add the PhoBERT model as I really want to use it with your great `transformers` tool in a Vietnamese text challenge? <|||||>@datquocnguyen Thanks for your contribution. We discussed internally and given that the modeling part of both BERTweet and PhoBERT is out-of-the-box RoBERTa, we would like to avoid duplicating model files, and instead support them by leveraging https://github.com/huggingface/transformers/pull/6995 i.e. we would only need to add files for the tokenizers (and associated tests) Potentially we could also help to make those new tokenizers more general/configurable. What do you think? <|||||>@julien-c That sounds a nice idea. Please inform me when the new configuration type is integrated into the master branch. I will then adapt our tokenizers & config files for it. Thanks!<|||||>Hi @datquocnguyen, the PR @julien-c linked is now merged! This should greatly simplify your PR, in that you only need to contribute your tokenizers as well as their tests. Let us know if you can make the change!<|||||>Hi @LysandreJik thanks for your information. Yes, I will make the change soon (it should be done early next week).<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=h1) Report > Merging [#6129](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b0cbcdb05b39e6c81db049d2b4d7dfc5d823210d?el=desc) will **decrease** coverage by `0.24%`. > The diff coverage is `71.14%`. 
[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6129/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6129 +/- ## ========================================== - Coverage 80.32% 80.08% -0.25% ========================================== Files 168 170 +2 Lines 32285 32642 +357 ========================================== + Hits 25932 26140 +208 - Misses 6353 6502 +149 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_bertweet.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydHdlZXQucHk=) | `63.18% <63.18%> (ø)` | | | [src/transformers/tokenization\_phobert.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGhvYmVydC5weQ==) | `83.45% <83.45%> (ø)` | | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.34% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.06% <100.00%> (+0.26%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: | | ... and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6129/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=footer). Last update [b0cbcdb...257b9f1](https://codecov.io/gh/huggingface/transformers/pull/6129?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@datquocnguyen can you also upload your model files on https://huggingface.co/vinai/bertweet-base I still get this error: > ⚠️ Model name 'vinai/bertweet-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'vinai/bertweet-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. <|||||>@datquocnguyen I looked a the PR and looking forward to this merge. I have a few suggestions: 1. I find the Phobert and Bertweet models to be quite similar. This makes the tokenizers also similar so we should not need a seperate tokenizer for both. Given that both these tokenizers just load fastBPE tokenizer data format, we can simply call them fastBPETokenizer. 2. Looking at this other code which also uses fastBPE <sup>[1]</sup> can't we just follow it to convert the fastBPE tokenizer files to the huggingface format. - You can easily convert your `bpe.codes` into `merges.txt` file and then use the Roberta tokenizer. - The format is the same and you only need to drop the 3rd column in your BPE.codes and add a top line for comment. - In your code you are not even using the last column values. - Your `merges.txt` can have the following as the first line `#version: 1` (look at merges.txt file of Roberta <sup>[2]</sup>) [1]: https://github.com/huggingface/transformers/blob/b23d3a5ad4aa08decd10671f85be5950767dd052/model_cards/allegro/herbert-klej-cased-v1/README.md [2]: https://huggingface.co/roberta-base#list-files<|||||>Hi @napsternxg The model had been already uploaded to https://huggingface.co/vinai/bertweet-base. For now, you would have to install `transformers` from our development branch (as it has not merged to the master branch of `transformers` yet). Did you try the following commands? - Python version >= 3.6 - [PyTorch](http://pytorch.org/) version >= 1.4.0 - Install `transformers` from our development branch: - `git clone https://github.com/datquocnguyen/transformers.git` - `cd transformers` - `pip install --upgrade .` - Install `emoji`: `pip3 install emoji` Thanks for your suggestions. BertweetTokenizer is specifically designed to work on Tweet data, incorporating a TwitterTokenizer while PhoBERT does not. Note that both our `vocab.txt` and `bpe.codes` are also used in loading our models in `fairseq`. So I would prefer to keep them intact rather than converting them into another format. <|||||>Btw, I should mention that BERTweet is accepted as an EMNLP-2020 demo paper while PhoBERT gets a slot in the Findings of EMNLP-2020 volume. Please help review this pull request so that others might benefit from using them directly from the master branch of `transformers`. Thanks. @LysandreJik @JetRunner @julien-c All checks have passed and you only need to merge files for the tokenizers and associated tests.<|||||>Thanks that makes sense. @datquocnguyen I was trying to use it from the models website. My suggestion on the bpe.codes file was not to remove it but to generate the merges.txt file from it, which will make it compatible with the huggingface tokenizer. <|||||>@napsternxg Please remove your "transformers" cache folder from `~/.cache/torch` and reinstall `transformers` from our development branch. 
I am sure that `bertweet` would work smoothly: ```python import torch from transformers import AutoModel, AutoTokenizer bertweet = AutoModel.from_pretrained("vinai/bertweet-base") tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base") # INPUT TWEET IS ALREADY NORMALIZED! line = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:" input_ids = torch.tensor([tokenizer.encode(line)]) with torch.no_grad(): features = bertweet(input_ids) # Models outputs are now tuples ```<|||||>@datquocnguyen great work and I am looking forward to seeing the PR gets merged so that I can use the models directly from the huggingface transformers.<|||||>Will merge today unless @julien-c, @JetRunner have comments.<|||||>LGTM, do not hesitate to make the tokenizers as generic/configurable as possible, but this can be in a subsequent PR<|||||>Any news on it? when Phobert available on HuggingFace? <|||||>It's been available since September: ```py from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base") model = AutoModelForMaskedLM.from_pretrained("vinai/phobert-base") ``` You can see the model card [here](https://huggingface.co/vinai/phobert-base).<|||||>> It's been available since September: > > ```python > from transformers import AutoTokenizer, AutoModelForMaskedLM > > tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base") > > model = AutoModelForMaskedLM.from_pretrained("vinai/phobert-base") > ``` > > You can see the model card [here](https://huggingface.co/vinai/phobert-base). But i don't see here https://huggingface.co/transformers/pretrained_models.html How can i integrate with Rasa NLU sir? Thank you<|||||>PhoBERT is based off of the RoBERTa implementation, so you can load it in a `RobertaForMaskedLM` model. The tokenizer is custom, so you should load it through the `PhobertTokenizer`. I have never used Rasa NLU, so I can't help you much here. Your best option would be to open a thread on our [forum](https://discuss.huggingface.co) with an example of how you do things for other models, so as not to flood this PR. You can ping me on the thread (@lysandre).
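As an aside on the bpe.codes-to-merges.txt suggestion earlier in this thread, a hedged sketch of the conversion (file names and the exact header line are assumptions; RoBERTa's published merges.txt starts with `#version: 0.2`):

```python
# Sketch only: drop the third (count) column of fastBPE's bpe.codes and add a header,
# producing a merges.txt that GPT-2/RoBERTa-style BPE tokenizers can read.
with open("bpe.codes", encoding="utf-8") as src, open("merges.txt", "w", encoding="utf-8") as dst:
    dst.write("#version: 0.2\n")
    for line in src:
        parts = line.strip().split()
        if len(parts) >= 2:
            dst.write(f"{parts[0]} {parts[1]}\n")   # keep only the merge pair
```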
transformers
6,128
closed
add deepset/xlm-roberta-large-squad2 model card
07-29-2020 12:18:53
07-29-2020 12:18:53
transformers
6,127
closed
Initializing XLMRobertaTokenizer using pretrained tokenizer expects serialized vocab
Hi, I am training an XLMRoberta model from scratch on Hindi. I am using a sentencepiece tokenizer trained exclusively on monolingual data following the steps mentioned in the [tokenizers repository](https://github.com/huggingface/tokenizers/tree/704cf3fdd2f607ead58a561b892b510b49c301db/bindings/python#using-the-provided-tokenizers). This results in the creation of `vocab.json` and `merges.txt`. However when I try to initialize the tokenizer using `XLMRobertaTokenizer.from_pretrained` I get an error saying ```assumed 'models/sentencepiece' was a path, a model identifier, or url to a directory containing vocabulary files named ['sentencepiece.bpe.model'] but couldn't find such vocabulary files at this path or url. ``` I am assuming this is a serialized file based on [huggingface.co model](https://s3.amazonaws.com/models.huggingface.co/bert/xlm-roberta-base-sentencepiece.bpe.model) but don't know how to serialize my vocab.json file. I have already tried using `pickle` and `numpy` Versions used: transformers: 2.9.1 tokenizers: 0.7.0
07-29-2020 11:45:28
07-29-2020 11:45:28
Hi! The XLM-R tokenizer only accepts SentencePiece files, which cannot be created yet with the `tokenizers` library (soon!). You should use the official SentencePiece library for that.
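A hedged sketch of that route (corpus path, vocab size, and model type are placeholders): train a SentencePiece model with the official `sentencepiece` package, which produces the `.model` file that `XLMRobertaTokenizer` looks for.

```python
# Sketch only: train a SentencePiece model; the resulting sentencepiece.bpe.model file
# is the serialized vocabulary that XLMRobertaTokenizer expects to load.
import sentencepiece as spm

spm.SentencePieceTrainer.Train(
    "--input=hindi_corpus.txt --model_prefix=sentencepiece.bpe "
    "--vocab_size=32000 --model_type=unigram --character_coverage=0.9995"
)
# Writes sentencepiece.bpe.model and sentencepiece.bpe.vocab in the working directory.
```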
transformers
6,126
closed
Add decoding inputs to generate
# 🚀 Feature request Add decoding inputs to generate ## Motivation When generating with encoder-decoder, one may want to insert context for the decoder. I'm currently working on summarization given that I know some parts of the gt. But other ideas can come to mind. ## Your contribution ``` @torch.no_grad() def generate( self, input_ids: Optional[torch.LongTensor] = None, max_length: Optional[int] = None, min_length: Optional[int] = None, do_sample: Optional[bool] = None, early_stopping: Optional[bool] = None, num_beams: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, repetition_penalty: Optional[float] = None, bad_words_ids: Optional[Iterable[int]] = None, bos_token_id: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, length_penalty: Optional[float] = None, no_repeat_ngram_size: Optional[int] = None, num_return_sequences: Optional[int] = None, attention_mask: Optional[torch.LongTensor] = None, decoder_start_token_id: Optional[int] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, **model_specific_kwargs ) -> torch.LongTensor: ``` r""" Generates sequences for models with a LM head. The method currently supports greedy decoding, beam-search decoding, sampling with temperature, sampling with top-k or nucleus sampling. Adapted in part from Facebook's XLM beam search code_. .. _Facebook's XLM beam search code: https://github.com/facebookresearch/XLM/blob/9e6f6814d17be4fe5b15f2e6c43eb2b2d76daeb4/src/model/transformer.py#L529 Parameters: input_ids: (`optional`) `torch.LongTensor` of shape `(batch_size, sequence_length)` The sequence used as a prompt for the generation. If `None` the method initializes it as an empty `torch.LongTensor` of shape `(1,)`. max_length: (`optional`) int The max length of the sequence to be generated. Between `min_length` and infinity. Default to 20. min_length: (`optional`) int The min length of the sequence to be generated. Between 0 and infinity. Default to 0. do_sample: (`optional`) bool If set to `False` greedy decoding is used. Otherwise sampling is used. Defaults to `False` as defined in `configuration_utils.PretrainedConfig`. early_stopping: (`optional`) bool if set to `True` beam search is stopped when at least `num_beams` sentences finished per batch. Defaults to `False` as defined in `configuration_utils.PretrainedConfig`. num_beams: (`optional`) int Number of beams for beam search. Must be between 1 and infinity. 1 means no beam search. Default to 1. temperature: (`optional`) float The value used to module the next token probabilities. Must be strictly positive. Default to 1.0. top_k: (`optional`) int The number of highest probability vocabulary tokens to keep for top-k-filtering. Between 1 and infinity. Default to 50. top_p: (`optional`) float The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling. Must be between 0 and 1. Default to 1. repetition_penalty: (`optional`) float The parameter for repetition penalty. Between 1.0 and infinity. 1.0 means no penalty. Default to 1.0. pad_token_id: (`optional`) int Padding token. Default to specicic model pad_token_id or None if it does not exist. bos_token_id: (`optional`) int BOS token. Defaults to `bos_token_id` as defined in the models config. eos_token_id: (`optional`) int EOS token. Defaults to `eos_token_id` as defined in the models config. 
length_penalty: (`optional`) float Exponential penalty to the length. Default to 1. no_repeat_ngram_size: (`optional`) int If set to int > 0, all ngrams of size `no_repeat_ngram_size` can only occur once. bad_words_ids: (`optional`) list of lists of int `bad_words_ids` contains tokens that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, use `tokenizer.encode(bad_word, add_prefix_space=True)`. num_return_sequences: (`optional`) int The number of independently computed returned sequences for each element in the batch. Default to 1. attention_mask (`optional`) obj: `torch.LongTensor` of same shape as `input_ids` Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: ``1`` for tokens that are NOT MASKED, ``0`` for MASKED tokens. Defaults to `None`. `What are attention masks? <../glossary.html#attention-mask>`__ decoder_start_token_id=None: (`optional`) int If an encoder-decoder model starts decoding with a different token than BOS. Defaults to `None` and is changed to `BOS` later. use_cache: (`optional`) bool If `use_cache` is True, past key values are used to speed up decoding if applicable to model. Defaults to `True`. model_specific_kwargs: (`optional`) dict Additional model specific kwargs will be forwarded to the `forward` function of the model. Return: output: `torch.LongTensor` of shape `(batch_size * num_return_sequences, sequence_length)` sequence_length is either equal to max_length or shorter if all batches finished early due to the `eos_token_id` Examples:: tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache. outputs = model.generate(max_length=40) # do greedy decoding print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True))) tokenizer = AutoTokenizer.from_pretrained('openai-gpt') # Initialize tokenizer model = AutoModelWithLMHead.from_pretrained('openai-gpt') # Download model and configuration from S3 and cache. input_context = 'The dog' input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3, temperature=1.5) # generate 3 independent sequences using beam search decoding (5 beams) with sampling from initial context 'The dog' for i in range(3): # 3 output sequences were generated print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True))) tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache. input_context = 'The dog' input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context outputs = model.generate(input_ids=input_ids, max_length=40, temperature=0.7, num_return_sequences=3) # 3 generate sequences using by sampling for i in range(3): # 3 output sequences were generated print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True))) tokenizer = AutoTokenizer.from_pretrained('ctrl') # Initialize tokenizer model = AutoModelWithLMHead.from_pretrained('ctrl') # Download model and configuration from S3 and cache. 
input_context = 'Legal My neighbor is' # "Legal" is one of the control codes for ctrl input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context outputs = model.generate(input_ids=input_ids, max_length=50, temperature=0.7, repetition_penalty=1.2) # generate sequences print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True))) tokenizer = AutoTokenizer.from_pretrained('gpt2') # Initialize tokenizer model = AutoModelWithLMHead.from_pretrained('gpt2') # Download model and configuration from S3 and cache. input_context = 'My cute dog' # "Legal" is one of the control codes for ctrl bad_words_ids = [tokenizer.encode(bad_word, add_prefix_space=True) for bad_word in ['idiot', 'stupid', 'shut up']] input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context outputs = model.generate(input_ids=input_ids, max_length=100, do_sample=True, bad_words_ids=bad_words_ids) # generate sequences without allowing bad_words to be generated """ # We cannot generate if the model does not have a LM head if self.get_output_embeddings() is None: raise AttributeError( "You tried to generate sequences with a model that does not have a LM Head." "Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )" ) max_length = max_length if max_length is not None else self.config.max_length min_length = min_length if min_length is not None else self.config.min_length do_sample = do_sample if do_sample is not None else self.config.do_sample early_stopping = early_stopping if early_stopping is not None else self.config.early_stopping use_cache = use_cache if use_cache is not None else self.config.use_cache num_beams = num_beams if num_beams is not None else self.config.num_beams temperature = temperature if temperature is not None else self.config.temperature top_k = top_k if top_k is not None else self.config.top_k top_p = top_p if top_p is not None else self.config.top_p repetition_penalty = repetition_penalty if repetition_penalty is not None else self.config.repetition_penalty bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id length_penalty = length_penalty if length_penalty is not None else self.config.length_penalty no_repeat_ngram_size = ( no_repeat_ngram_size if no_repeat_ngram_size is not None else self.config.no_repeat_ngram_size ) bad_words_ids = bad_words_ids if bad_words_ids is not None else self.config.bad_words_ids num_return_sequences = ( num_return_sequences if num_return_sequences is not None else self.config.num_return_sequences ) decoder_start_token_id = ( decoder_start_token_id if decoder_start_token_id is not None else self.config.decoder_start_token_id ) if input_ids is not None: batch_size = input_ids.shape[0] # overriden by the input batch_size else: batch_size = 1 assert isinstance(max_length, int) and max_length > 0, "`max_length` should be a strictly positive integer." assert isinstance(min_length, int) and min_length >= 0, "`min_length` should be a positive integer." assert isinstance(do_sample, bool), "`do_sample` should be a boolean." assert isinstance(early_stopping, bool), "`early_stopping` should be a boolean." 
assert isinstance(use_cache, bool), "`use_cache` should be a boolean." assert isinstance(num_beams, int) and num_beams > 0, "`num_beams` should be a strictly positive integer." assert temperature > 0, "`temperature` should be strictly positive." assert isinstance(top_k, int) and top_k >= 0, "`top_k` should be a positive integer." assert 0 <= top_p <= 1, "`top_p` should be between 0 and 1." assert repetition_penalty >= 1.0, "`repetition_penalty` should be >= 1." assert input_ids is not None or ( isinstance(bos_token_id, int) and bos_token_id >= 0 ), "If input_ids is not defined, `bos_token_id` should be a positive integer." assert pad_token_id is None or ( isinstance(pad_token_id, int) and (pad_token_id >= 0) ), "`pad_token_id` should be a positive integer." assert (eos_token_id is None) or ( isinstance(eos_token_id, int) and (eos_token_id >= 0) ), "`eos_token_id` should be a positive integer." assert length_penalty > 0, "`length_penalty` should be strictly positive." assert ( isinstance(no_repeat_ngram_size, int) and no_repeat_ngram_size >= 0 ), "`no_repeat_ngram_size` should be a positive integer." assert ( isinstance(num_return_sequences, int) and num_return_sequences > 0 ), "`num_return_sequences` should be a strictly positive integer." assert ( bad_words_ids is None or isinstance(bad_words_ids, list) and isinstance(bad_words_ids[0], list) ), "`bad_words_ids` is either `None` or a list of lists of tokens that should not be generated" if input_ids is None: assert isinstance(bos_token_id, int) and bos_token_id >= 0, ( "you should either supply a context to complete as `input_ids` input " "or a `bos_token_id` (integer >= 0) as a first token to start the generation." ) input_ids = torch.full( (batch_size, 1), bos_token_id, dtype=torch.long, device=next(self.parameters()).device, ) else: assert input_ids.dim() == 2, "Input prompt should be of shape (batch_size, sequence length)." # not allow to duplicate outputs when greedy decoding if do_sample is False: if num_beams == 1: # no_beam_search greedy generation conditions assert ( num_return_sequences == 1 ), "Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1" else: # beam_search greedy generation conditions assert ( num_beams >= num_return_sequences ), "Greedy beam search decoding cannot return more sequences than it has beams. Please set num_beams >= num_return_sequences" # create attention mask if necessary # TODO (PVP): this should later be handled by the forward fn() in each model in the future see PR 3140 if (attention_mask is None) and (pad_token_id is not None) and (pad_token_id in input_ids): attention_mask = input_ids.ne(pad_token_id).long() elif attention_mask is None: attention_mask = input_ids.new_ones(input_ids.shape) # set pad_token_id to eos_token_id if not set. 
Important that this is done after # attention_mask is created if pad_token_id is None and eos_token_id is not None: logger.warning( "Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id) ) pad_token_id = eos_token_id # current position and vocab size if hasattr(self.config, "vocab_size"): vocab_size = self.config.vocab_size elif ( self.config.is_encoder_decoder and hasattr(self.config, "decoder") and hasattr(self.config.decoder, "vocab_size") ): vocab_size = self.config.decoder.vocab_size # set effective batch size and effective batch multiplier according to do_sample if do_sample: effective_batch_size = batch_size * num_return_sequences effective_batch_mult = num_return_sequences else: effective_batch_size = batch_size effective_batch_mult = 1 if self.config.is_encoder_decoder: if decoder_start_token_id is None: decoder_start_token_id = bos_token_id assert ( decoder_start_token_id is not None ), "decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation" assert hasattr(self, "get_encoder"), "{} should have a 'get_encoder' function defined".format(self) assert callable(self.get_encoder), "{} should be a method".format(self.get_encoder) # get encoder and store encoder outputs encoder = self.get_encoder() encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask) # Expand input ids if num_beams > 1 or num_return_sequences > 1 if self.config.is_encoder_decoder: if decoder_input_ids is not None: input_ids = decoder_input_ids else: # create empty decoder_input_ids input_ids = torch.full( (effective_batch_size * num_beams, 1), decoder_start_token_id, dtype=torch.long, device=next(self.parameters()).device, ) cur_len = 1 assert ( batch_size == encoder_outputs[0].shape[0] ), f"expected encoder_outputs[0] to have 1st dimension bs={batch_size}, got {encoder_outputs[0].shape[0]} " # expand batch_idx to assign correct encoder output for expanded input_ids (due to num_beams > 1 and num_return_sequences > 1) expanded_batch_idxs = ( torch.arange(batch_size) .view(-1, 1) .repeat(1, num_beams * effective_batch_mult) .view(-1) .to(input_ids.device) ) # expand encoder_outputs encoder_outputs = (encoder_outputs[0].index_select(0, expanded_batch_idxs), *encoder_outputs[1:]) else: encoder_outputs = None cur_len = input_ids.shape[-1] assert ( cur_len < max_length ), f"The context has {cur_len} number of tokens, but `max_length` is only {max_length}. 
Please make sure that `max_length` is bigger than the number of tokens, by setting either `generate(max_length=...,...)` or `config.max_length = ...`" if num_return_sequences > 1 or num_beams > 1: input_ids_len = input_ids.shape[-1] input_ids = input_ids.unsqueeze(1).expand(batch_size, effective_batch_mult * num_beams, input_ids_len) attention_mask = attention_mask.unsqueeze(1).expand( batch_size, effective_batch_mult * num_beams, attention_mask.shape[-1] ) input_ids = input_ids.contiguous().view( effective_batch_size * num_beams, input_ids_len ) # shape: (batch_size * num_return_sequences * num_beams, cur_len) attention_mask = attention_mask.contiguous().view( effective_batch_size * num_beams, attention_mask.shape[-1] ) # shape: (batch_size * num_return_sequences * num_beams, cur_len) if num_beams > 1: output = self._generate_beam_search( input_ids, cur_len=cur_len, max_length=max_length, min_length=min_length, do_sample=do_sample, early_stopping=early_stopping, temperature=temperature, top_k=top_k, top_p=top_p, repetition_penalty=repetition_penalty, no_repeat_ngram_size=no_repeat_ngram_size, bad_words_ids=bad_words_ids, pad_token_id=pad_token_id, eos_token_id=eos_token_id, batch_size=effective_batch_size, num_return_sequences=num_return_sequences, length_penalty=length_penalty, num_beams=num_beams, vocab_size=vocab_size, encoder_outputs=encoder_outputs, attention_mask=attention_mask, use_cache=use_cache, decoder_attention_mask=decoder_attention_mask, model_specific_kwargs=model_specific_kwargs, ) else: output = self._generate_no_beam_search( input_ids, cur_len=cur_len, max_length=max_length, min_length=min_length, do_sample=do_sample, temperature=temperature, top_k=top_k, top_p=top_p, repetition_penalty=repetition_penalty, no_repeat_ngram_size=no_repeat_ngram_size, bad_words_ids=bad_words_ids, pad_token_id=pad_token_id, eos_token_id=eos_token_id, batch_size=effective_batch_size, encoder_outputs=encoder_outputs, attention_mask=attention_mask, use_cache=use_cache, decoder_attention_mask=decoder_attention_mask, model_specific_kwargs=model_specific_kwargs, ) return output def _generate_no_beam_search( self, input_ids, cur_len, max_length, min_length, do_sample, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, pad_token_id, eos_token_id, batch_size, encoder_outputs, attention_mask, use_cache, decoder_attention_mask, model_specific_kwargs, ): """ Generate sequences for each example without beam search (num_beams == 1). All returned sequence are generated independantly. 
""" # length of generated sentences / unfinished sentences unfinished_sents = input_ids.new(batch_size).fill_(1) sent_lengths = input_ids.new(batch_size).fill_(max_length) past = (encoder_outputs, None) if encoder_outputs is not None else None while cur_len < max_length: model_inputs = self.prepare_inputs_for_generation( input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_specific_kwargs ) model_inputs['decoder_attention_mask'] = decoder_attention_mask outputs = self(**model_inputs) next_token_logits = outputs[0][:, -1, :] scores = self.postprocess_next_token_scores( scores=next_token_logits, input_ids=input_ids, no_repeat_ngram_size=no_repeat_ngram_size, bad_words_ids=bad_words_ids, cur_len=cur_len, min_length=min_length, max_length=max_length, eos_token_id=eos_token_id, repetition_penalty=repetition_penalty, batch_size=batch_size, num_beams=1, ) # if model has past, then set the past variable to speed up decoding if self._use_cache(outputs, use_cache): past = outputs[1] if do_sample: # Temperature (higher temperature => more likely to sample low probability tokens) if temperature != 1.0: scores = scores / temperature # Top-p/top-k filtering next_token_logscores = top_k_top_p_filtering(scores, top_k=top_k, top_p=top_p) # Sample probs = F.softmax(next_token_logscores, dim=-1) next_token = torch.multinomial(probs, num_samples=1).squeeze(1) else: # Greedy decoding next_token = torch.argmax(next_token_logits, dim=-1) # update generations and finished sentences if eos_token_id is not None: # pad finished sentences if eos_token_id exist tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents) else: tokens_to_add = next_token # add token and increase length by one input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1) cur_len = cur_len + 1 if eos_token_id is not None: eos_in_sents = tokens_to_add == eos_token_id # if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool() sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len) # unfinished_sents is set to zero if eos in sentence unfinished_sents.mul_((~eos_in_sents).long()) # stop when there is a </s> in each sentence, or if we exceed the maximul length if unfinished_sents.max() == 0: break # extend attention_mask for new generated input if only decoder if self.config.is_encoder_decoder is False: attention_mask = torch.cat( [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1 ) return input_ids def _generate_beam_search( self, input_ids, cur_len, max_length, min_length, do_sample, early_stopping, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, pad_token_id, eos_token_id, batch_size, num_return_sequences, length_penalty, num_beams, vocab_size, encoder_outputs, attention_mask, use_cache, decoder_attention_mask, model_specific_kwargs, ): """ Generate sequences for each example with beam search. 
""" # generated hypotheses generated_hyps = [ BeamHypotheses(num_beams, max_length, length_penalty, early_stopping=early_stopping) for _ in range(batch_size) ] # scores for each sentence in the beam beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device) # for greedy decoding it is made sure that only tokens of the first beam are considered to avoid sampling the exact same tokens three times if do_sample is False: beam_scores[:, 1:] = -1e9 beam_scores = beam_scores.view(-1) # shape (batch_size * num_beams,) # cache compute states past = (encoder_outputs, None) if encoder_outputs is not None else None # done sentences done = [False for _ in range(batch_size)] while cur_len < max_length: model_inputs = self.prepare_inputs_for_generation( input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_specific_kwargs ) model_inputs['decoder_attention_mask'] = decoder_attention_mask outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size) next_token_logits = outputs[0][:, -1, :] # (batch_size * num_beams, vocab_size) # if model has past, then set the past variable to speed up decoding if self._use_cache(outputs, use_cache): past = outputs[1] if self.config.is_encoder_decoder and do_sample is False: # TODO (PVP) still a bit hacky here - there might be a better solution next_token_logits = self.adjust_logits_during_generation( next_token_logits, cur_len=cur_len, max_length=max_length ) scores = F.log_softmax(next_token_logits, dim=-1) # (batch_size * num_beams, vocab_size) scores = self.postprocess_next_token_scores( scores=scores, input_ids=input_ids, no_repeat_ngram_size=no_repeat_ngram_size, bad_words_ids=bad_words_ids, cur_len=cur_len, min_length=min_length, max_length=max_length, eos_token_id=eos_token_id, repetition_penalty=repetition_penalty, batch_size=batch_size, num_beams=num_beams, ) assert scores.shape == (batch_size * num_beams, vocab_size), "Shapes of scores: {} != {}".format( scores.shape, (batch_size * num_beams, vocab_size) ) if do_sample: _scores = scores + beam_scores[:, None].expand_as(scores) # (batch_size * num_beams, vocab_size) # Temperature if temperature != 1.0: _scores = _scores / temperature # Top-p/top-k filtering _scores = top_k_top_p_filtering( _scores, top_k=top_k, top_p=top_p, min_tokens_to_keep=2 ) # (batch_size * num_beams, vocab_size) # re-organize to group the beam together to sample from all beam_idxs _scores = _scores.contiguous().view( batch_size, num_beams * vocab_size ) # (batch_size, num_beams * vocab_size) # Sample 2 next tokens for each beam (so we have some spare tokens and match output of greedy beam search) probs = F.softmax(_scores, dim=-1) next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) # (batch_size, num_beams * 2) # Compute next scores next_scores = torch.gather(_scores, -1, next_tokens) # (batch_size, num_beams * 2) # sort the sampled vector to make sure that the first num_beams samples are the best next_scores, next_scores_indices = torch.sort(next_scores, descending=True, dim=1) next_tokens = torch.gather(next_tokens, -1, next_scores_indices) # (batch_size, num_beams * 2) else: next_scores = scores + beam_scores[:, None].expand_as(scores) # (batch_size * num_beams, vocab_size) # re-organize to group the beam together (we are keeping top hypothesis accross beams) next_scores = next_scores.view( batch_size, num_beams * vocab_size ) # (batch_size, num_beams * vocab_size) next_scores, next_tokens = torch.topk(next_scores, 2 * num_beams, dim=1, 
largest=True, sorted=True) assert next_scores.size() == next_tokens.size() == (batch_size, 2 * num_beams) # next batch beam content next_batch_beam = [] # for each sentence for batch_idx in range(batch_size): # if we are done with this sentence, add a pad token if done[batch_idx]: assert ( len(generated_hyps[batch_idx]) >= num_beams ), "Batch can only be done if at least {} beams have been generated".format(num_beams) assert ( eos_token_id is not None and pad_token_id is not None ), "generated beams >= num_beams -> eos_token_id and pad_token have to be defined" next_batch_beam.extend([(0, pad_token_id, 0)] * num_beams) # pad the batch continue # next sentence beam content, this will get added to next_batch_beam next_sent_beam = [] # next tokens for this sentence for beam_token_rank, (beam_token_id, beam_token_score) in enumerate( zip(next_tokens[batch_idx], next_scores[batch_idx]) ): # get beam and token IDs beam_id = beam_token_id // vocab_size token_id = beam_token_id % vocab_size effective_beam_id = batch_idx * num_beams + beam_id # add to generated hypotheses if end of sentence if (eos_token_id is not None) and (token_id.item() == eos_token_id): # if beam_token does not belong to top num_beams tokens, it should not be added is_beam_token_worse_than_top_num_beams = beam_token_rank >= num_beams if is_beam_token_worse_than_top_num_beams: continue generated_hyps[batch_idx].add( input_ids[effective_beam_id].clone(), beam_token_score.item(), ) else: # add next predicted token since it is not eos_token next_sent_beam.append((beam_token_score, token_id, effective_beam_id)) # once the beam for next step is full, don't add more tokens to it. if len(next_sent_beam) == num_beams: break # Check if we are done so that we can save a pad step if all(done) done[batch_idx] = done[batch_idx] or generated_hyps[batch_idx].is_done( next_scores[batch_idx].max().item(), cur_len ) # update next beam content assert len(next_sent_beam) == num_beams, "Beam should always be full" next_batch_beam.extend(next_sent_beam) assert len(next_batch_beam) == num_beams * (batch_idx + 1), "We should have added num_beams each step" # stop when we are done with each sentence if all(done): break # sanity check / prepare next batch assert len(next_batch_beam) == batch_size * num_beams beam_scores = beam_scores.new([x[0] for x in next_batch_beam]) beam_tokens = input_ids.new([x[1] for x in next_batch_beam]) beam_idx = input_ids.new([x[2] for x in next_batch_beam]) # re-order batch and update current length input_ids = input_ids[beam_idx, :] input_ids = torch.cat([input_ids, beam_tokens.unsqueeze(1)], dim=-1) cur_len = cur_len + 1 # re-order internal states if past is not None: past = self._reorder_cache(past, beam_idx) # extend attention_mask for new generated input if only decoder if self.config.is_encoder_decoder is False: attention_mask = torch.cat( [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1 ) # finalize all open beam hypotheses and add to generated hypotheses for batch_idx in range(batch_size): if done[batch_idx]: continue # test that beam scores match previously calculated scores if not eos and batch_idx not done if eos_token_id is not None and all( (token_id % vocab_size).item() != eos_token_id for token_id in next_tokens[batch_idx] ): assert torch.all( next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx] ), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format( next_scores[:, :num_beams][batch_idx], 
beam_scores.view(batch_size, num_beams)[batch_idx], ) # need to add best num_beams hypotheses to generated hyps for beam_id in range(num_beams): effective_beam_id = batch_idx * num_beams + beam_id final_score = beam_scores[effective_beam_id].item() final_tokens = input_ids[effective_beam_id] generated_hyps[batch_idx].add(final_tokens, final_score) # depending on whether greedy generation is wanted or not define different output_batch_size and output_num_return_sequences_per_batch output_batch_size = batch_size if do_sample else batch_size * num_return_sequences output_num_return_sequences_per_batch = 1 if do_sample else num_return_sequences # select the best hypotheses sent_lengths = input_ids.new(output_batch_size) best = [] # retrieve best hypotheses for i, hypotheses in enumerate(generated_hyps): sorted_hyps = sorted(hypotheses.beams, key=lambda x: x[0]) for j in range(output_num_return_sequences_per_batch): effective_batch_idx = output_num_return_sequences_per_batch * i + j best_hyp = sorted_hyps.pop()[1] sent_lengths[effective_batch_idx] = len(best_hyp) best.append(best_hyp) # shorter batches are padded if sent_lengths.min().item() != sent_lengths.max().item(): assert pad_token_id is not None, "`Pad_token_id` has to be defined" sent_max_len = min(sent_lengths.max().item() + 1, max_length) decoded = input_ids.new(output_batch_size, sent_max_len).fill_(pad_token_id) # fill with hypothesis and eos_token_id if necessary for i, hypo in enumerate(best): decoded[i, : sent_lengths[i]] = hypo if sent_lengths[i] < max_length: decoded[i, sent_lengths[i]] = eos_token_id else: # none of the hypotheses have an eos_token assert (len(hypo) == max_length for hypo in best) decoded = torch.stack(best).type(torch.long).to(next(self.parameters()).device) return decoded `
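For illustration, a minimal usage sketch of what this feature would enable. This is hedged: `decoder_input_ids` is not an accepted `generate()` argument in the released library at the time of writing, so the call below assumes the modified signature proposed above; the checkpoint name and the known summary prefix are just examples.

```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = "Long source document goes here ..."
known_prefix = "The committee announced that"  # part of the ground-truth summary we already know

input_ids = tokenizer([article], return_tensors="pt", truncation=True, max_length=1024)["input_ids"]
# seed the decoder with the known tokens (no special tokens; generate() would prepend the start token)
decoder_input_ids = tokenizer([known_prefix], return_tensors="pt", add_special_tokens=False)["input_ids"]

summary_ids = model.generate(
    input_ids=input_ids,
    decoder_input_ids=decoder_input_ids,  # proposed argument from this feature request
    num_beams=4,
    max_length=60,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```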
07-29-2020 10:50:18
07-29-2020 10:50:18
transformers
6,125
closed
Problem getting hidden_states using TFBertModel
## Environment info
- `transformers` version: 3.0.2
- Platform: Pycharm
- Python version: 3.6
- PyTorch version (GPU?): None
- Tensorflow version (GPU?): 2.2.0 GPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
@LysandreJik @jplu

## Information
Model I am using: Bert. I want the outputs of the **TFBertModel** class to include the **hidden_states**. Unfortunately, I have tried the two methods described in the transformers documentation and neither achieves this goal.

The first method: **output_hidden_states=True** is passed to call(). Code as follows:
```
import tensorflow as tf
import transformers

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        conf = transformers.BertConfig.from_json_file("model/chinese_L-12_H-768_A-12/config.json")
        self.bertmodel = transformers.TFBertModel.from_pretrained("bert-base-chinese")

    @tf.function
    def call(self, inputs, training=None, mask=None):
        out_bert = self.bertmodel(inputs, output_hidden_states=True)
        return out_bert

if __name__ == "__main__":
    tokenizer = transformers.BertTokenizer("model/chinese_L-12_H-768_A-12/vocab.txt")
    text_2 = tokenizer.batch_encode_plus(["你买啊,买了就是成都人", "你来啊,来了就是深圳人"], max_length=20, pad_to_max_length=True)
    print(text_2)
    model = Model()
    out = model([tf.convert_to_tensor(text_2["input_ids"]), tf.convert_to_tensor(text_2['attention_mask'])])
    print("out", out)
```
The console output contains only two tensors: **last_hidden_state** and **pooler_output**.

The second method: set **config.output_hidden_states=True**. Code as follows:

config.json:
```
{
  "attention_probs_dropout_prob": 0.1,
  "directionality": "bidi",
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pooler_fc_size": 768,
  "pooler_num_attention_heads": 12,
  "pooler_num_fc_layers": 3,
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "type_vocab_size": 2,
  "vocab_size": 21128,
  "output_hidden_states": true
}
```
I set **"output_hidden_states": true** on the last line.
Code:
```
import tensorflow as tf
import transformers

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        conf = transformers.BertConfig.from_json_file("model/chinese_L-12_H-768_A-12/config.json")
        self.bertmodel = transformers.TFBertModel.from_pretrained("bert-base-chinese", config=conf)

    @tf.function
    def call(self, inputs, training=None, mask=None):
        out_bert = self.bertmodel(inputs)
        return out_bert

if __name__ == "__main__":
    tokenizer = transformers.BertTokenizer("model/chinese_L-12_H-768_A-12/vocab.txt")
    text_2 = tokenizer.batch_encode_plus(["你买啊,买了就是成都人", "你来啊,来了就是深圳人"], max_length=20, pad_to_max_length=True)
    print(text_2)
    model = Model()
    out = model([tf.convert_to_tensor(text_2["input_ids"]), tf.convert_to_tensor(text_2['attention_mask'])])
    print("out", out)
```
The console output is the same as with the first method. However, if I remove **@tf.function** from call(), the output is what I expect.

## Expected behavior
The outputs of TFBertModel should include the hidden states even when call() is decorated with @tf.function.
07-29-2020 10:48:15
07-29-2020 10:48:15
Hello! It should be solved here: https://github.com/huggingface/transformers/pull/5468. Nevertheless, you have to avoid using boolean tensors for `output_hidden_states` and `output_attentions`, otherwise it won't work.<|||||>Hello! Could you give me more details about how to avoid using boolean tensors, or how to achieve this goal in version 3.0.2? How can I get your fixed code, and how long until we see the fix in a new release? Thanks for your answers.<|||||>Fix has been merged in master!<|||||>Should be fixed on `master`<|||||>@jianrui1995 Hi, could you tell me how you solved the discrepancy in the behaviour with or without @tf.function? I am facing the same issue, tf version: 2.3.0 and transformers 3.0.2
```
class WrappedModel(tf.Module):
    def __init__(self):
        super(WrappedModel, self).__init__()
        self.model = TFDistilBertModel.from_pretrained('distilbert-base-uncased', output_hidden_states=True)

    # @tf.function
    def __call__(self, x):
        return self.model(x)
```<|||||>I didn't solve this problem in version 3.0.2, but I found the buggy code at **line 765 of modeling_tf_utils.py**. Code as follows:
```
def cast_bool_to_primitive(bool_variable, default_tensor_to_true=False):
    """Function arguments can be inserted as boolean tensor
        and bool variables to cope with keras serialization
        we need to cast `output_attentions` to correct bool
        if it is a tensor

    Args:
        default_tensor_to_true: bool, if tensor should default to True
        in case tensor has no numpy attribute
    """
    # if bool variable is tensor and has numpy value
    if tf.is_tensor(bool_variable):
        if hasattr(bool_variable, "numpy"):
            return bool(bool_variable.numpy())
        elif default_tensor_to_true:
            return True

    # else variable is bool
    return bool_variable
```
The code which calls this method is at **line 407 in modeling_tf_bert.py**:
```
if cast_bool_to_primitive(output_hidden_states) is True:
    all_hidden_states = all_hidden_states + (hidden_states,)
```
According to the TensorFlow 2 documentation, if you add **tf.function**, TF switches the model from eager to graph mode via AutoGraph. The flag then becomes a bool tensor rather than the Python True, so the `is True` check fails. According to jplu's answer, this problem has been solved in master, so you could use the master version.<|||||>@jianrui1995 Thanks for your response, yes I was able to do it with the updated code in master. Was more curious about the version of transformers that has the support for this, since maintainability is an issue.<|||||>This part is still a work in progress; what there is in master is just a tiny workaround and doesn't work for several cases. We will push an update when we have found a proper solution.
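For reference, a minimal sketch of retrieving the hidden states under `@tf.function`. This assumes the boolean-handling fix from PR #5468 is present in the installed version (e.g. a source install of master at the time), and that `output_hidden_states=True` is set at load time so no tensor-valued flag is passed inside the traced function:

```python
import tensorflow as tf
import transformers

bert = transformers.TFBertModel.from_pretrained("bert-base-chinese", output_hidden_states=True)
tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-chinese")

@tf.function
def encode(input_ids, attention_mask):
    outputs = bert([input_ids, attention_mask])
    # with output_hidden_states=True the last element of the output tuple is a
    # tuple of hidden states: one tensor per layer plus the embedding output
    return outputs[-1]

enc = tokenizer.batch_encode_plus(
    ["你买啊,买了就是成都人"], max_length=20, pad_to_max_length=True, return_tensors="tf"
)
hidden_states = encode(enc["input_ids"], enc["attention_mask"])
print(len(hidden_states))  # 13 for a 12-layer model
```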
transformers
6,124
closed
convert_pytorch_checkpoint_to_tf2.py AttributeError: cls.seq_relationship.weight not found in PyTorch model
Hi all. I'm trying to convert a pre-trained BERT model from PyTorch into TF2 format and facing a problem. Model being converted: DeepPavlov/rubert-base-cased. I downloaded all files into a local folder. File convert_pytorch_checkpoint_to_tf2.py from master branch - I downloaded it separately. The command I'm running: `python convert_pytorch_checkpoint_to_tf2.py --tf_dump_path ./rubert-base-cased_tf2/ --model_type bert --pytorch_checkpoint_path ./rubert-base-cased-pt/pytorch_model.bin --config_file ./rubert-base-cased-pt/config.json` And the stacktrace: ``` Traceback (most recent call last): File "convert_pytorch_checkpoint_to_tf2.py", line 364, in <module> convert_all_pt_checkpoints_to_tf( File "convert_pytorch_checkpoint_to_tf2.py", line 298, in convert_all_pt_checkpoints_to_tf convert_pt_checkpoint_to_tf( File "convert_pytorch_checkpoint_to_tf2.py", line 209, in convert_pt_checkpoint_to_tf tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path) File "/Users/denisstenyushkin/.virtualenvs/tf2_pytorch/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py", line 92, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model( File "/Users/denisstenyushkin/.virtualenvs/tf2_pytorch/lib/python3.8/site-packages/transformers/modeling_tf_pytorch_utils.py", line 166, in load_pytorch_weights_in_tf2_model raise AttributeError("{} not found in PyTorch model".format(name)) AttributeError: cls.seq_relationship.weight not found in PyTorch model ``` Versions: - `transformers` version: 3.0.2 - Platform: macOS-10.15.5-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
07-29-2020 09:45:23
07-29-2020 09:45:23
Hey @DenisStenyushkin,

It seems like you have the PyTorch model trained with our library, so you can simply do:

```python
from transformers import TFBertModel

model = TFBertModel.from_pretrained("./rubert-base-cased-pt", from_pt=True)
model.save_pretrained("./rubert-base-cased")  # this adds a TF model file (tf_model.h5) to your directory
```

Let me know if this does not solve your problem!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,123
closed
Create README.md
07-29-2020 09:13:25
07-29-2020 09:13:25
Thanks!
transformers
6,122
closed
[T5Tokenizer] add prepare_seq2seq_batch method
This PR adds `prepare_seq2seq_batch` method to `T5Tokenizer` as per the proposal in #6080 @sshleifer
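For context, a rough usage sketch of the new method. This is hedged: the argument names follow the proposal in #6080, and the exact keys of the returned batch may differ slightly from the merged version.

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["translate English to German: The house is wonderful."],
    tgt_texts=["Das Haus ist wunderbar."],
    max_length=64,
    return_tensors="pt",
)
print(batch.keys())  # encoder inputs (input_ids, attention_mask) plus the tokenized targets
```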
07-29-2020 09:10:12
07-29-2020 09:10:12
@sshleifer , @sgugger I have made changes regarding the suggestions. Thanks !<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=h1) Report > Merging [#6122](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92f8ce2ed65f23f91795ce6eafb8cce1e226cd38&el=desc) will **increase** coverage by `0.08%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6122/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6122 +/- ## ========================================== + Coverage 78.51% 78.59% +0.08% ========================================== Files 146 146 Lines 26326 26347 +21 ========================================== + Hits 20669 20708 +39 + Misses 5657 5639 -18 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `96.73% <100.00%> (+0.96%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.09% <0.00%> (-4.88%)` | :arrow_down: | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+12.87%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=footer). Last update [92f8ce2...a84bb5b](https://codecov.io/gh/huggingface/transformers/pull/6122?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@sshleifer , @patrickvonplaten , all green :)
transformers
6,121
closed
XLNet PLM Readme
Add information on XLNet and its PLM objective in the language-modeling README.
07-29-2020 09:00:49
07-29-2020 09:00:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=h1) Report > Merging [#6121](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92f8ce2ed65f23f91795ce6eafb8cce1e226cd38&el=desc) will **decrease** coverage by `0.73%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6121/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6121 +/- ## ========================================== - Coverage 78.51% 77.77% -0.74% ========================================== Files 146 146 Lines 26326 26326 ========================================== - Hits 20669 20476 -193 - Misses 5657 5850 +193 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+12.87%)` | :arrow_up: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=footer). Last update [92f8ce2...f465894](https://codecov.io/gh/huggingface/transformers/pull/6121?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,120
closed
Don't see how to use correct padding with QA pipeline
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details In some cases, when using a **QuestionAnsweringPipeline**, I get an error similar to the following: *** ValueError: expected sequence of length 384 at dim 1 (got 260) I've traced the problem to line 1496 of pipelines.py, and the change to [this commit](https://github.com/huggingface/transformers/commit/896300177bf9f35feac4698370212a80a5ab6138). The troubled line is: fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()} Basically, when trying to convert this **v** to a tensor an error is thrown because not every array in **v** has the same length - because the padding strategy has been changed (if I comment out the padding strategy in the call to **squad_convert_examples_to_features** then the default value `"max_length"` takes effect and there is no problem). I guess this was done as some sort of optimization, but I'm not really sure how to use it. Every other argument to **squad_convert_examples_to_features** will use a *kwarg*, but this one does not. Maybe it should use a *kwarg* like everything else, so if you need the padding (or don't want to have to deal with it) you can set the **padding_strategy** as you like? Or am I missing something? ### Minimal code to reproduce: from transformers import QuestionAnsweringPipeline, AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained('twmkn9/distilbert-base-uncased-squad2') tokenizer = AutoTokenizer.from_pretrained('twmkn9/distilbert-base-uncased-squad2') # I've omitted the context for brevity. You can, for example, take the **plot** section from [the matrix](https://en.wikipedia.org/wiki/The_Matrix) context = """...""" pipeline({"question":"what was Neo's job?", "context": context}) # error as described above <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
07-29-2020 08:49:52
07-29-2020 08:49:52
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,119
closed
🐛 Empty TypeError on BartTokenizerFast.decode(tensor)
## Environment info `transformers` `3.0.2` ### Who can help Summarization: @sshleifer ## To reproduce ```python import torch from transformers import BartTokenizerFast t = BartTokenizerFast.from_pretrained('facebook/bart-large') x = torch.tensor([0, 34, 45, 23, 54, 65, 765, 2]) t.decode(x) ``` will throw an empty `TypeError` : ``` File "/home/me/.venv/summarization/lib/python3.6/site-packages/tokenizers/implementations/base_tokenizer.py", line 267, in decode return self._tokenizer.decode(ids, skip_special_tokens=skip_special_tokens) TypeError ``` To reproduce : [Colab Notebook](https://colab.research.google.com/drive/1bnP8TvmRrHrMD-7H2MOQQSE8QrhCb7SC?usp=sharing) ## Expected behavior No error thrown, like with regular `Tokenizer` : ```python import torch from transformers import BartTokenizer t = BartTokenizer.from_pretrained('facebook/bart-large') x = torch.tensor([0, 34, 45, 23, 54, 65, 765, 2]) t.decode(x) ``` > `<s> has not at who one short</s>`
07-29-2020 08:18:43
07-29-2020 08:18:43
I noticed this is due to `x` being a tensor. It works fine with a list : ```python import torch from transformers import BartTokenizerFast t = BartTokenizerFast.from_pretrained('facebook/bart-large') x = [0, 34, 45, 23, 54, 65, 765, 2] t.decode(x) ``` > `<s> has not at who one short</s>` --- So the current work-around is to first convert to a list : ```python import torch from transformers import BartTokenizerFast t = BartTokenizerFast.from_pretrained('facebook/bart-large') x = torch.tensor([0, 34, 45, 23, 54, 65, 765, 2]) t.decode(x.tolist()) ```<|||||>Tokenizer bug. @mfuntowicz is this expected behavior?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,118
closed
Is `guid` allowed to be None in `InputExample`?
## Environment
- `transformers` version: 3.0.2

Omitted the rest because they most likely don't affect this issue.

## To reproduce
```py
from transformers.data.processors.utils import SingleSentenceClassificationProcessor

processor = SingleSentenceClassificationProcessor(labels=["lbl1", "lbl2"])
processor.add_examples(texts_or_text_and_labels=["example1", "example2"])  # There's a default ids=None
print(processor[0])
```
prints
```
InputExample(guid=None, text_a='example1', text_b=None, label=None)
```
If `guid` is allowed to be None, that should be reflected in the type annotation (and documentation) of `InputExample`. If not, then `ids` should not be allowed to be `None`.

### Who can help
@sgugger because it's a documentation issue, @thomwolf because of `git blame`.
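A sketch of what the suggested annotation could look like (the field names and order are those of the existing dataclass; making `guid` `Optional` is the only change, and it cannot take a default because `text_a` after it has none):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputExample:
    guid: Optional[str]          # optional per this issue, but no default allowed here
    text_a: str
    text_b: Optional[str] = None
    label: Optional[str] = None
```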
07-29-2020 06:19:40
07-29-2020 06:19:40
The guid is indeed optional, we can add this to the type annotation. We can't add the default `= None` however because it's before `text_a` in the dataclass, which is not optional.<|||||>Thanks! I'll probably shape up a bunch of type annotations into a PR sometime soon, so I'll make `guid` Optional (but without a default) in that PR if no one gets to it before me.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,117
closed
Using control codes for finetuning
Hi, I have a use case of style-conditioned generation, where I ask the LM to generate a sentence based on the control code I provide. CTRL is well suited to that task. Can you tell me how to use control codes for fine-tuning as well as inference? Fine-tuning should work like any causal LM such as GPT-2, but I specifically want to know how the style/control-code conditioning is handled. What should the data format look like, and what else do I need to take care of?
07-29-2020 03:44:35
07-29-2020 03:44:35
@julien-c <|||||>It'd be best to re-read the paper and original implem, but I think you just prepend a control code to each of your samples. Cc'ing @keskarnitish for information. PS/ for general questions, please use https://discuss.huggingface.co!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
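A minimal sketch of that "prepend a control code" approach (the exact data format is assumed here, not confirmed by the CTRL authors; "Legal" is one of the control codes the CTRL docs themselves use):

```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")

# fine-tuning data: prepend the control code to every training sample
control_code = "Legal"
train_texts = ["My neighbor is suing me over the fence.", "The contract was terminated early."]
train_samples = [f"{control_code} {text}" for text in train_texts]

# inference: condition generation on the same control code
input_ids = tokenizer.encode(f"{control_code} My neighbor is", return_tensors="pt")
output = model.generate(input_ids, max_length=50, repetition_penalty=1.2)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```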
transformers
6,116
closed
No button for creating new post at the forum.
![image](https://user-images.githubusercontent.com/4702353/88753699-76b01080-d18f-11ea-80c3-905a006ad31a.png) discuss.huggingface.co
07-29-2020 03:35:07
07-29-2020 03:35:07
![image](https://user-images.githubusercontent.com/4702353/88774445-0b2d6980-d1b6-11ea-88ba-e993190b434c.png) <|||||>I approved your post there @guotong1988
transformers
6,115
closed
Usage of Pytorch Native AMP in place of apex (Pytorch 1.6) in Trainer
# 🚀 Feature request
It would be nice to remove the Apex dependency for `fp16` training and use the [native PyTorch AMP methods](https://github.com/pytorch/pytorch/releases) in the `Trainer` class. PyTorch recommends that Apex users switch to its native implementation, and even [Apex itself suggests it](https://github.com/NVIDIA/apex/issues/818). Moreover, it would eliminate the need for users to build apex themselves.

## Your contribution
I am happy to submit a PR if you think it would be a good addition. Please let me know.
07-29-2020 03:08:06
07-29-2020 03:08:06
I had a query: the PyTorch examples show the loss being calculated as:
```
with autocast():
    output = model(input)
    loss = loss_fn(output, target)
scaler.scale(loss).backward()
```
But in all `SequenceClassification` and other models, the loss is calculated in the forward pass. We can use the `@autocast` decorator on the forward pass as the docs suggest, but this would introduce a lot of changes for one feature. Maybe there's a workaround. Does computing the loss inside the `autocast` scope affect the loss itself when `backward` is called on it?<|||||>Hi there, Note that we won't pin the version of PyTorch to 1.6 minimum, so the use of native mixed precision will have to be controlled by a test on the PyTorch version (basically use native mixed precision when the version allows it and use apex otherwise). Otherwise, I don't think the loss being computed inside the model should be a problem; the line would probably be
```
with autocast():
    outputs = model(**inputs)
    loss = outputs[0]
```
inside Trainer, but I haven't run tests yet. You're welcome to try to work on a PR with this, otherwise it is on my TODO for when I have time (hopefully in the next few weeks).<|||||>Hi Sylvain, I've opened up a PR. I know that pinning the version won't be done, so I've addressed this the same way we handle `scheduler.get_lr()`.
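For illustration, a rough sketch of the version-gated approach discussed above. This is not the actual `Trainer` code: `model`, `optimizer` and `dataloader` are assumed to exist, and the fallback branch assumes apex has been set up (`from apex import amp` and `amp.initialize`).

```python
import torch

use_native_amp = hasattr(torch.cuda, "amp") and hasattr(torch.cuda.amp, "autocast")
scaler = torch.cuda.amp.GradScaler() if use_native_amp else None

for inputs in dataloader:
    optimizer.zero_grad()
    if use_native_amp:
        with torch.cuda.amp.autocast():
            loss = model(**inputs)[0]   # the model returns the loss as the first output
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    else:  # apex fallback for torch < 1.6
        loss = model(**inputs)[0]
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()
        optimizer.step()
```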
transformers
6,114
closed
'Namespace' object has no attribute 'enc_only'
# ❓ Questions & Help when I running the distillation.py then File "E:/transformers-master/examples/seq2seq/distillation.py", line 370, in create_module elif args.enc_only: AttributeError: 'Namespace' object has no attribute 'enc_only' how can I deal with this problem?? thx a lot. <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
07-29-2020 02:55:00
07-29-2020 02:55:00
If you do not want to use encoder only, I think it is fine to just comment that elif clause out<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
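One low-touch workaround, if editing the elif branch is undesirable, is to default the missing attribute before it is read (the attribute name is taken from the traceback; where exactly to place this depends on your copy of `distillation.py`):

```python
# after the argparse namespace `args` has been built, before create_module(args) is called
if not hasattr(args, "enc_only"):
    args.enc_only = False
```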
transformers
6,113
closed
🌟 BigBird
# 🌟 New model addition

## Model description
Paper: https://arxiv.org/pdf/2007.14062.pdf

Abstract:
> Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.

## Open source status
* [ ] the model implementation is available: *No*
* [ ] the model weights are available: *No*
* [ ] who are the authors: *?*
07-29-2020 01:45:45
07-29-2020 01:45:45
When will be getting this model?<|||||>Until the weights and code are not published I think we won't focus too much on adding the model<|||||>I am planning to start a small tight group of individuals who will work on implementing research papers for proper business use cases. Please let me know if anyone is interested for the same. **Project 1 :** BigBert for Genomics Research<|||||>> I am planning to start a small tight group of individuals who will work on implementing research papers for proper business use cases. > Please let me know if anyone is interested for the same. > **Project 1 :** BigBert for Genomics Research I'll be up for this project<|||||>I'll be up for this project too. I got a slightly different use case idea, tho. :) <|||||>@sathvikask0 I am super interesting about the **BigBird for Genomics Research**. Are you planning to release the fixed-length embedding part as well?<|||||>I'm also doing some research on using Google BigBird for genomics research. There's a competition going on right now and we can definitely leverage BigBird for genomics sequencing. <|||||>@sathvikask0 @nikhilbyte @seduerr91 What if we could meet together and talk about the BigBert implementation for Genomics Research?<|||||>Sure do you want to set up a google meet?<|||||>I'm in.<|||||>Hello @nikhilbyte @seduerr91 @ptynecki are we still doing this, I want to be a part of it!<|||||>> Hello @nikhilbyte @seduerr91 @ptynecki are we still doing this, I want to be a part of it! I'm up for this. Let me know how to connect with you.<|||||>@patrickvonplaten actually you can read on the paper (appendix E, section E.4) that for summarization, "For the large size model, we lift weight from the state-of-the-art Pegasus model [107], which is pretrained using an objective designed for summarization task". Do you think it would be possible to include the new architecture, using the weights already available of `google/pegasus-large`?<|||||>Is there an official code base by now? <|||||>As soon as weights and codebase is out, we'll integrate! But it does not make much sense IMO to do it before that<|||||>> I am planning to start a small tight group of individuals who will work on implementing research papers for proper business use cases. > Please let me know if anyone is interested for the same. > **Project 1 :** BigBert for Genomics Research I would like to join the effort as well<|||||>It seems BigBird official [code](https://github.com/google-research/bigbird) and [pretrained models](https://console.cloud.google.com/storage/browser/bigbird-transformer) are finally out (well partially). The code seems to be written for TPUs mainly so not sure how easy to port to huggingface. Also I see a keras based BigBird implementation as part of [Tensorflow official models](https://github.com/tensorflow/models/tree/master/official/nlp/projects), which might be easier to port. So let's start working on it!<|||||>will try to allocate some time next week to start porting the model :-) <|||||>Can you please add me to this group, I would also like to work on this project.<|||||>@patrickvonplaten, do you know when it will be ready? 🐦 <|||||>Any update?<|||||>Has there been any progress on this? :)<|||||>@patrickvonplaten I see #10183 is passing all its checks, is it close to being able to merge? 
Looking forward to using it with my project!<|||||>Hi, it will be merged by next week.<|||||>Will this model be available before this weekend?<|||||>@DarthAnakin BigBird is available as of this morning on the `master` branch and will be in the next release<|||||>@LysandreJik Thanks!<|||||>@LysandreJik very excited to see this complete. When will the next release happen?<|||||>We expect to do it early next week!<|||||>Any plans to add a Fast Tokenizer for this model? I would be happy to help integrate it. @patrickvonplaten <|||||>@tanmaylaud we would welcome an effort to add a fast tokenizer for this model!<|||||>Thanks a lot! Is any example script available at the moment? I'm particularly looking for summarization. <|||||>Also looking for summarization support. It seems to need a Pegasus decoder to work. I see a few such BigBird->Pegasus models at https://huggingface.co/vasudevgupta, following from the discussion at https://github.com/huggingface/transformers/pull/10991<|||||>@vasudevgupta7 is working very hard to merge it soon :-) I think it should be ready in ~2 weeks at the latest<|||||>Hi all, BigBird-Pegasus is available on the master branch of 🤗Transformers now. Give it a try ...
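For anyone finding this later, a minimal usage sketch (the checkpoint name below is an assumption; substitute whichever BigBird-Pegasus summarization checkpoint is actually published on the hub):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/bigbird-pegasus-large-arxiv"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

long_document = "Replace this with the article or paper you want to summarize."
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=4096)

# BigBird's sparse attention is what makes the long max_length above feasible
summary_ids = model.generate(**inputs, num_beams=4, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```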
transformers
6,112
closed
Is there any way that I can use the HuggingFace Transformers as Pyro models?
Hello, `Pyro` is a Python module that allows users to convert a given (frequentist) neural network into a Bayesian neural network. I can convert a HuggingFace Transformer into a Pyro model like below: ```python import torch from torch import distributions from transformers import RobertaTokenizer, RobertaForMultipleChoice import pyro import pyro.infer import pyro.optim import pyro.distributions as dist import pyro.nn.module as module from torch import nn from pyro.infer import SVI # get the pre-trained HuggingFace RobertaForMultipleChoice and resize the token embeddings after adding the special token model_RobertaForMultipleChoice = RobertaForMultipleChoice.from_pretrained('roberta-large', output_hidden_states = True) module.to_pyro_module_(model_RobertaForMultipleChoice) # Now we can attempt to be fully Bayesian: for m in model_RobertaForMultipleChoice.modules(): for name, value in list(m.named_parameters(recurse=False)): setattr(m, name, module.PyroSample(prior=dist.Normal(0, 1) .expand(value.shape) .to_event(value.dim()))) # define parameters for training guide_delta = guides.AutoDelta(model_RobertaForMultipleChoice) ``` But when I try to compute the mc_loss from this Bayesian Transformer, Python generates an error: ```python mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0] Traceback (most recent call last): File "STAT946_final_project_code_v4.py", line 625, in <module> success_rate_list_diag_normal = main_function_diag_normal('/home/ec2-user/test.txt', 'test_ans_num.txt', num_iter, log_interval) File "STAT946_final_project_code_v4.py", line 415, in main_function_diag_normal best_model_RobertaForMultipleChoice_diag_normal = train_loop(model_RobertaForMultipleChoice, tokenizer, optimizer_1, scheduler_1, log_interval, svi_diag_normal, guide_diag_normal, best_model_RobertaForMultipleChoice_diag_normal) File "STAT946_final_project_code_v4.py", line 342, in train_loop optimizer, scheduler, log_interval, svi, guide, epoch) File "STAT946_final_project_code_v4.py", line 237, in train_mc_head mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0] File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__ return super().__call__(*args, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 441, in forward output_hidden_states=output_hidden_states, File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__ return super().__call__(*args, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 732, in forward extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 228, in get_extended_attention_mask extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 159, in dtype first_tuple = next(gen) StopIteration ``` Is there any way that I can compute the 
mc_loss in the regular way after converting a HuggingFace Transformer into a Bayesian Transformer? Thank you.
07-29-2020 00:48:12
07-29-2020 00:48:12
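One plausible workaround (a sketch, not a verified fix): the `StopIteration` is raised by the `dtype` property in `modeling_utils.py`, which looks for the model's first parameter/tensor; once every parameter has been lifted into a `PyroSample`, it finds none. Lifting only a subset of the parameters (e.g. the multiple-choice head, which lives under `classifier`) and leaving the encoder's weights as ordinary tensors avoids the crash:

```python
import pyro.distributions as dist
import pyro.nn.module as module

module.to_pyro_module_(model_RobertaForMultipleChoice)

# Only lift the head; the RoBERTa encoder keeps plain nn.Parameters, so the
# dtype lookup inside the encoder still finds a tensor to inspect.
for mod_name, m in model_RobertaForMultipleChoice.named_modules():
    if not mod_name.startswith("classifier"):
        continue
    for name, value in list(m.named_parameters(recurse=False)):
        setattr(
            m,
            name,
            module.PyroSample(
                prior=dist.Normal(0, 1).expand(value.shape).to_event(value.dim())
            ),
        )
```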
transformers
6,111
closed
Use FutureWarning to deprecate
As discussed, `DeprecationWarning` -> `FutureWarning`
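In practice the change amounts to swapping the warning category passed to `warnings.warn`; roughly this pattern (the helper name is made up for illustration):

```python
import warnings

def deprecated_helper(value):
    # FutureWarning is shown to end users by default, whereas Python suppresses
    # DeprecationWarning outside of __main__ unless warning filters are changed.
    warnings.warn(
        "`deprecated_helper` is deprecated and will be removed in a future version.",
        FutureWarning,
    )
    return value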
07-28-2020 21:49:32
07-28-2020 21:49:32
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=h1) Report > Merging [#6111](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b1c8b76907ad605c7b25bb12580cb46d70207b7a&el=desc) will **increase** coverage by `0.65%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6111/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6111 +/- ## ========================================== + Coverage 77.21% 77.86% +0.65% ========================================== Files 146 146 Lines 26325 26325 ========================================== + Hits 20327 20499 +172 + Misses 5998 5826 -172 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <ø> (ø)` | | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <ø> (+1.16%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <0.00%> (+0.97%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.95% <0.00%> (+1.00%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.71% <0.00%> (+35.71%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6111/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=footer). 
Last update [b1c8b76...fb6e785](https://codecov.io/gh/huggingface/transformers/pull/6111?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,110
closed
Doc tokenizer
Improve the documentation of tokenizers, following what was done for the models last week, mainly:
- make sure all docstrings of public functions are properly formatted for Sphinx
- make sure all args are properly documented
- add or fix type hints wherever necessary

The methods/classes that are not in the main `__init__` are all on the page `internal/tokenization_utils.html`. I had added `SpecialTokensMixin` to the `__init__` of transformers a while ago to easily document it, but it can be removed from there now if we want. I rewrote a few docstrings here and there, so I'm pinging @n1t0 and @mfuntowicz to make sure I didn't write anything wrong. [Preview](https://65719-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/tokenizer.html) of the tokenization page. [Preview](https://65719-155220641-gh.circle-artifacts.com/0/docs/_build/html/internal/tokenization_utils.html) of the tokenization utils page.
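To illustrate the target format (the method name below is invented, purely for illustration), the docstrings are being brought in line with the Sphinx/reST conventions used elsewhere in the library, roughly like this:

```python
def num_added_tokens(self, pair: bool = False) -> int:
    """
    Returns the number of tokens added when encoding a sequence with special tokens.

    Args:
        pair (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Whether to compute the number of added tokens for a sequence pair
            instead of a single sequence.

    Returns:
        :obj:`int`: The number of special tokens added to sequences.
    """
```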
07-28-2020 21:24:30
07-28-2020 21:24:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=h1) Report > Merging [#6110](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/11792d7826854979bb532b6da09bc3796b09ea6a&el=desc) will **decrease** coverage by `1.54%`. > The diff coverage is `95.71%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6110/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6110 +/- ## ========================================== - Coverage 78.73% 77.19% -1.55% ========================================== Files 146 146 Lines 26314 26353 +39 ========================================== - Hits 20719 20342 -377 - Misses 5595 6011 +416 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.32% <ø> (ø)` | | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `97.39% <71.42%> (-1.71%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <88.88%> (+0.04%)` | :arrow_up: | | [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <100.00%> (ø)` | | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.90% <100.00%> (ø)` | | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <100.00%> (+0.05%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <100.00%> (ø)` | | | ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6110/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=footer). Last update [11792d7...80a44a8](https://codecov.io/gh/huggingface/transformers/pull/6110?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,109
closed
StopIteration error in RobertaForMultipleChoice
Hello, I am trying to execute the line below for my `RobertaForMultipleChoice` model: ```python # retrieve the resulting mc_loss mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0] ``` but this generates the following error: ```python Traceback (most recent call last): File "STAT946_final_project_code_v4.py", line 623, in <module> success_rate_list_diag_normal = main_function_diag_normal('/home/ec2-user/test.txt', 'test_ans_num.txt', num_iter, log_interval) File "STAT946_final_project_code_v4.py", line 414, in main_function_diag_normal best_model_RobertaForMultipleChoice_diag_normal = train_loop(model_RobertaForMultipleChoice, tokenizer, optimizer_1, scheduler_1, log_interval, svi_diag_normal, guide_diag_normal, best_model_RobertaForMultipleChoice_diag_normal) File "STAT946_final_project_code_v4.py", line 341, in train_loop optimizer, scheduler, log_interval, svi, guide, epoch) File "STAT946_final_project_code_v4.py", line 236, in train_mc_head mc_loss = model(input_ids = input_ids, attention_mask = attention_mask, labels = mc_labels)[0] File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__ return super().__call__(*args, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 441, in forward output_hidden_states=output_hidden_states, File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/pyro/nn/module.py", line 413, in __call__ return super().__call__(*args, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 732, in forward extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 228, in get_extended_attention_mask extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 159, in dtype first_tuple = next(gen) StopIteration ``` How can I get around this type of error? Thank you,
07-28-2020 20:29:57
07-28-2020 20:29:57
transformers
6,108
closed
allenai/longformer-large-4096 unavailable
For some reason, I'm unable to download allenai/longformer-large-4096. Everything was working an hour ago, but all of a sudden I get the error included below. It's still listed on https://huggingface.co/models?search=allenai%2Flongformer-large-4096. I'm not sure what's up. Any ideas? ``` Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 242, in get_config_dict raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ptcc/run_glue.py", line 246, in <module> main() File "/ptcc/run_glue.py", line 123, in main cache_dir=model_args.cache_dir, File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py", line 203, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 251, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 'allenai/longformer-large-4096'. Make sure that: - 'allenai/longformer-large-4096' is a correct model identifier listed on 'https://huggingface.co/models' - or 'allenai/longformer-large-4096' is the correct path to a directory containing a config.json file Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 242, in get_config_dict raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/ptcc/run_glue.py", line 246, in <module> main() File "/ptcc/run_glue.py", line 123, in main cache_dir=model_args.cache_dir, File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py", line 203, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 251, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 'allenai/longformer-large-4096'. Make sure that: - 'allenai/longformer-large-4096' is a correct model identifier listed on 'https://huggingface.co/models' - or 'allenai/longformer-large-4096' is the correct path to a directory containing a config.json file ```
07-28-2020 20:05:14
07-28-2020 20:05:14
The issue seems to have resolved itself. So, I'm closing the issue.
transformers
6,107
closed
Where does the Masked Language Model perform masking on the input data?
# ❓ Questions & Help

## Details

I am trying to pre-train a BERT from scratch with my own word set, using only the Masked Language Model. I have trouble finding where exactly the code masks 15% of the words and replaces them with 80% mask, 10% random, and 10% original. I noticed that the "labels" input kind of refers to the places where words are masked. Does that mean that when I preprocess the data, I need to mask the words myself and then indicate the positions of the masks in the "labels" input? If so, is "labels" the only input that would be affected? Are there any other input variables, such as "masked_lm_positions" and "masked_lm_ids" in google-bert, that I need to take care of?
07-28-2020 19:46:47
07-28-2020 19:46:47
Hey @SusanSun8, we are trying to move "non-bug" questions to the forum: https://discuss.huggingface.co/. Could you maybe post your question there again? Here is the code that is responsible for Masked Language Modeling: https://github.com/huggingface/transformers/blob/f6cb0f806efecb64df40c946dacaad0adad33d53/src/transformers/data/data_collator.py#L107.
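In short, you do not need to pre-mask the data yourself: the collator masks on the fly, and `labels` holds the original token ids at the masked positions and -100 everywhere else (so only masked positions contribute to the loss); there is no separate `masked_lm_positions` / `masked_lm_ids` input. A condensed sketch of what the linked `mask_tokens` does (a simplified paraphrase for illustration, not a drop-in replacement):

```python
import torch

def mask_tokens(inputs, tokenizer, mlm_probability=0.15):
    """Condensed version of the masking in data_collator.py.

    `inputs` is a LongTensor of token ids of shape (batch_size, seq_len).
    """
    labels = inputs.clone()

    # Sample ~15% of the non-special positions; everything else gets label -100
    # so it is ignored by the cross-entropy loss.
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = [
        tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True)
        for val in labels.tolist()
    ]
    probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100

    # 80% of the selected positions become the mask token
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # 10% become a random token; the remaining 10% are left unchanged
    indices_random = (
        torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    )
    random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
    inputs[indices_random] = random_words[indices_random]

    return inputs, labels
```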
transformers
6,106
closed
Weird Behavior on XLNetTokenizer after new tokens added
## Environment info
- `transformers` version: 3.0.2
- Platform: ubuntu
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
@mfuntowicz

## Information
I use a pretrained XLNetTokenizer and added a new token to it. After that, the output from ```tokenizer.tokenize``` looks weird. (I am not sure whether the problem comes from ```transformers``` or ```tokenizers```, but I am posting here anyway.)

## To reproduce
```
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
test = 'This is so awesome!!@username!'
out = tokenizer.tokenize(test)

# without new tokens added, @username is broken down as expected
out
>> ['▁This', '▁is', '▁so', '▁awesome', '!!', '@', 'user', 'name', '!']
```
```
from tokenizers import AddedToken

# introduce new tokens (with white-space on right only)
new_tokens = AddedToken(
    '@username',
    single_word = True,
    lstrip = False,
    rstrip = True
)
tokenizer.add_tokens(new_tokens)
out = tokenizer.tokenize(test)

# weird result about the white-space around new tokens
out
>> ['▁This', '▁is', '▁so', '▁awesome', '!!', '@username', '▁', '!']
```
There are two things here that look weird:
1. The new token "@username" has a white-space on the left, so "!!@username" should not be broken down into "!!", "@username" (I think it should be broken down into "!!", "@", "user", "name", "!").
2. I am a bit confused about why a white-space is produced after the "@username" token (i.e. '@username', '▁', '!').

And oddly, when I encode and decode the sentence back, the white-space token after "@username" is not translated to an actual whitespace. (Also note there is a white-space added before "@username", which means the new token is correctly identified to have a white-space on the left):
```
enc = tokenizer.encode(test, add_special_tokens = False)
dec = tokenizer.decode(enc)

# in encoding stage, the 2nd last token is whitespace
enc
>> [122, 27, 102, 8729, 7675, 32000, 17, 136]

# in decoding stage, the white-space disappears
dec
>> This is so awesome!! @username!
```
07-28-2020 19:34:10
07-28-2020 19:34:10
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,105
closed
Recursive error calling generate in forward
## System Info Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Python: 3.7.6 ## Question Here is the training loop: ```python def sd_data_collator(dataset_samples_list): tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) batch = {} batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']]) batch['past'] = None batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']]) batch['position_ids'] = None batch['head_mask'] = None batch['inputs_embeds'] = None batch['labels'] = None batch['use_cache'] = True return batch sd_dataset = SDAbstractsDataset('/path/to/sd_samples_64.csv') training_args = TrainingArguments( output_dir='/path/to/finetuned_gpt2', do_train=True, per_device_train_batch_size=4, learning_rate=1e-3, num_train_epochs=1 ) model = GPT2FinetunedWithNgrams.from_pretrained('gpt2') trainer = Trainer( model=model, args=training_args, train_dataset=sd_dataset, data_collator = sd_data_collator ) trainer.train() ``` Here's the model class and its `forward` method: ```python class GPT2FinetunedWithNgrams(GPT2LMHeadModel): def __init__(self, config): super().__init__(config) self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') self.tokenizer.pad_token = self.tokenizer.eos_token def load_ngrams_model(self, ngrams_model_path): self.ngrams_model = NGrams(ngrams_model_path) def forward( self, input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=True, ): output = self.generate(input_ids=input_ids, max_length=474) decoded_output = self.tokenizer.decode(output[0], skip_special_tokens=True) ``` Here's the whole error. It's really lengthy and I cut out the repetitions: ```python Some weights of GPT2FinetunedWithNgrams were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Epoch: 0%| | 0/1 [00:00<?, ?it/s] Iteration: 0%| | 0/16 [00:00<?, ?it/s]Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence . . . 
File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 480, in generate model_specific_kwargs=model_specific_kwargs, File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 520, in _generate_no_beam_search outputs = self(**model_inputs) File "/path/to/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/path/to/ric-2020/text_gen_w_transformers/finetune_gpt2.py", line 33, in forward File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 350, in generate "Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id) . . . File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1390, in warning self._log(WARNING, msg, args, **kwargs) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1514, in _log self.handle(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1524, in handle self.callHandlers(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1594, in callHandlers lastResort.handle(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 894, in handle self.emit(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1025, in emit msg = self.format(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 869, in format return fmt.format(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 608, in format record.message = record.getMessage() File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 360, in getMessage def getMessage(self): RecursionError: maximum recursion depth exceeded while calling a Python object ``` My guess is the `self.generate()` being called within the model produces the recursion problem. I found this problematic because the `generate` method has some awesome functionality for beam search, greedy search, top-k, etc. 
To overcome this, I added a flag to `generate` called `is_finetuning_current_model`: ```python @torch.no_grad() def generate( self, input_ids: Optional[torch.LongTensor] = None, max_length: Optional[int] = None, min_length: Optional[int] = None, do_sample: Optional[bool] = None, early_stopping: Optional[bool] = None, num_beams: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, repetition_penalty: Optional[float] = None, bad_words_ids: Optional[Iterable[int]] = None, bos_token_id: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, length_penalty: Optional[float] = None, no_repeat_ngram_size: Optional[int] = None, num_return_sequences: Optional[int] = None, attention_mask: Optional[torch.LongTensor] = None, decoder_start_token_id: Optional[int] = None, use_cache: Optional[bool] = None, is_finetuning_current_model: Optional[bool] = None, **model_specific_kwargs ) -> torch.LongTensor: ``` propagated this down to the `num_beams` check: ```python if num_beams > 1: output = self._generate_beam_search( input_ids, cur_len=cur_len, max_length=max_length, min_length=min_length, do_sample=do_sample, early_stopping=early_stopping, temperature=temperature, top_k=top_k, top_p=top_p, repetition_penalty=repetition_penalty, no_repeat_ngram_size=no_repeat_ngram_size, bad_words_ids=bad_words_ids, pad_token_id=pad_token_id, eos_token_id=eos_token_id, batch_size=effective_batch_size, num_return_sequences=num_return_sequences, length_penalty=length_penalty, num_beams=num_beams, vocab_size=vocab_size, encoder_outputs=encoder_outputs, attention_mask=attention_mask, use_cache=use_cache, is_finetuning_current_model=is_finetuning_current_model, model_specific_kwargs=model_specific_kwargs ) else: output = self._generate_no_beam_search( input_ids, cur_len=cur_len, max_length=max_length, min_length=min_length, do_sample=do_sample, temperature=temperature, top_k=top_k, top_p=top_p, repetition_penalty=repetition_penalty, no_repeat_ngram_size=no_repeat_ngram_size, bad_words_ids=bad_words_ids, pad_token_id=pad_token_id, eos_token_id=eos_token_id, batch_size=effective_batch_size, encoder_outputs=encoder_outputs, attention_mask=attention_mask, use_cache=use_cache, is_finetuning_current_model=is_finetuning_current_model, model_specific_kwargs=model_specific_kwargs ) ``` updated `_generate_no_beam_search` and `_generate_beam_search` with the following: ```python if is_finetuning_current_model: outputs = self.generate_text_while_finetuning(**model_inputs) else: outputs = self(**model_inputs) ``` For my model class, I just added the `generate_text_while_finetuning` method and set the `is_finetuning_current_model`: ```python class GPT2FinetunedWithNgrams(GPT2LMHeadModel): def __init__(self, config): super().__init__(config) self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') self.tokenizer.pad_token = self.tokenizer.eos_token def load_ngrams_model(self, ngrams_model_path): self.ngrams_model = NGrams(ngrams_model_path) def generate_text_while_finetuning(self, input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None,): transformer_outputs = self.transformer( input_ids, past=past, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, 
output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) hidden_states = transformer_outputs[0] lm_logits = self.lm_head(hidden_states) outputs = (lm_logits,) + transformer_outputs[1:] return outputs # (loss), lm_logits, presents, (all hidden_states), (attentions) def forward( self, input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=True, ): output = self.generate(input_ids=input_ids, max_length=474, is_finetuning_current_model=True) decoded_output = self.tokenizer.decode(output[0], skip_special_tokens=True) ``` This seems to resolve the recursive error and produces an expected `decoded_output` for me. My usecase is using GPT2 with a different loss function to finetune it on a particular domain corpus. I imagine other people would be doing something similar for GPT2 and other models, so I tested this approach just using `GPT2LMHeadModel` and got the same expected results. My question is do contributors think I should open up a bug report for this?
07-28-2020 19:28:52
07-28-2020 19:28:52
cool workaround! I think you might have a cleaner solution, potentially, if you compose instead of inheriting from `GPT2WithLMHead`. This is not worthy of a bug report (what's the bug?), but it could be an interesting proposal/project for examples/ if it works well on a task with a public dataset. Could I hear more about your task? Are you successfully backpropagating through beam search?<|||||>Hi @sshleifer, thanks for your reply! I wasn't quite sure if it would warrant a bug report or a feature suggestion (or neither). Thanks for clearing that up. The task I am doing is text generation. I have a dataset of scientific abstracts that I want to finetune the pretrained GPT2 model on to generate similar abstracts. However, I wanted to replace the loss with a loss from an N-grams model I have. The procedure looks something like this: - Feed a sample abstract into the pretrained GPT2. - Generate a sequence of specified length based on that sample. - Calculate the loss using the N-grams model I have and use that loss for backpropagation. Basically I am replacing the loss function found in `GPT2LMHeadModel` with my own and utilizing the `generate` method in `GPT2Pretrained` to generate new abstracts. I was doing the generation one token at a time using a naive method, but the `generate` method is so handy for generation that I really wanted to utilize it (and all the hard work the HF team has put in). I have not tried to backpropagate yet. You'll notice most of the arguments that go into `Trainer` are pretty lousy. Right now, I just want to see if it will start training with no errors. I hope to try some more thoughtful training later this week.<|||||>Any further thoughts on this?<|||||>Nope. Excited to see what code modifications are required to get this working!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
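To make the "compose instead of inherit" suggestion concrete, here is a rough sketch (not the thread author's code; the n-grams loss itself is left as a placeholder). Because `generate()` is called on the wrapped model rather than on the wrapper itself, it never re-enters the wrapper's `forward()`:

```python
import torch
from torch import nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class NgramsFinetuner(nn.Module):
    """Wraps GPT-2 instead of subclassing it, so generate() cannot recurse."""

    def __init__(self, model_name="gpt2"):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained(model_name)
        self.tokenizer = GPT2Tokenizer.from_pretrained(model_name)
        self.tokenizer.pad_token = self.tokenizer.eos_token

    def forward(self, input_ids, attention_mask=None, **kwargs):
        # Note: generate() runs under torch.no_grad(), so a loss computed from
        # its output cannot be backpropagated through the decoding itself.
        generated = self.lm.generate(
            input_ids=input_ids, attention_mask=attention_mask, max_length=474
        )
        text = self.tokenizer.decode(generated[0], skip_special_tokens=True)
        # placeholder: plug in the custom n-grams loss on `text` here
        return text
```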
transformers
6,104
closed
Fix zero-shot pipeline single seq output shape
Fixes a zero-shot pipeline bug where the sequence is returned as a list rather than a str when a single sequence is passed as a list.
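Roughly the situation being fixed (a sketch; this downloads the pipeline's default zero-shot model):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier(
    ["Who are you voting for in 2020?"],  # a single sequence, passed as a list
    candidate_labels=["politics", "public health"],
)
# With this fix, the "sequence" field of the result is the plain input string
# rather than a one-element list.
```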
07-28-2020 18:39:03
07-28-2020 18:39:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=h1) Report > Merging [#6104](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/06834bc33255f5fb8fabb72c9ff114764b3c7ce5&el=desc) will **decrease** coverage by `1.54%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6104/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6104 +/- ## ========================================== - Coverage 77.77% 76.23% -1.55% ========================================== Files 146 146 Lines 26325 26325 ========================================== - Hits 20474 20068 -406 - Misses 5851 6257 +406 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `78.50% <ø> (ø)` | | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.82% <0.00%> (-77.59%)` | :arrow_down: | | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `9.90% <0.00%> (-76.24%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `17.22% <0.00%> (-72.24%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.00% <0.00%> (-35.72%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.00% <0.00%> (-25.72%)` | :arrow_down: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6104/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=footer). Last update [06834bc...23060b9](https://codecov.io/gh/huggingface/transformers/pull/6104?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,103
closed
rename prepare_translation_batch -> prepare_seq2seq_batch
cc @patil-suraj Starts work on #6080, which suggests that all seq2seq tokenizers expose a `prepare_seq2seq_batch` method. TODO: - add a common test enforcing API consistency.
07-28-2020 18:24:59
07-28-2020 18:24:59
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=h1) Report > Merging [#6103](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66fa8ceaeaa6fe12f1bd4a5e6b0a924f59f715d9&el=desc) will **decrease** coverage by `0.47%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6103/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6103 +/- ## ========================================== - Coverage 79.90% 79.42% -0.48% ========================================== Files 153 153 Lines 27877 27879 +2 ========================================== - Hits 22276 22144 -132 - Misses 5601 5735 +134 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (ø)` | | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <100.00%> (ø)` | | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.14% <100.00%> (+6.97%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.16% <0.00%> (+1.61%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6103/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=footer). 
Last update [66fa8ce...56b0bf4](https://codecov.io/gh/huggingface/transformers/pull/6103?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Didn't know that `prepare_translation_batch` was already in master. Guess I'm not a huge fan of such a helper function in general and for me, it's a pure convenience function that does not really add functionality to the lib. Think we should lower maintenance costs and reduce the risk of future breaking backward compatibility by not adding such functions to the python tokenizers. But I don't have the best insight into the tokenizers. Maybe @LysandreJik and @thomwolf can have a better opinion here.<|||||>The better argument is not about convenience, but about managing special tokens when they are different on the encoder and decoder side. It's very hard to have finetuning code that supports multiple models if the tokenizers don't handle special tokens/language codes for you. <|||||>same spurious pabee failure as #6421 , merging!
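For reference, the renamed helper is used roughly like this (a sketch; the exact set of returned keys may differ between tokenizers and versions):

```python
from transformers import MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
batch = tok.prepare_seq2seq_batch(
    src_texts=["I like pizza."],
    tgt_texts=["Ich mag Pizza."],
    return_tensors="pt",
)
# batch bundles the encoder inputs (input_ids / attention_mask) together with the
# tokenized targets, with special tokens / language codes handled per model.
print(batch.keys())
```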
transformers
6,102
closed
Fix deebert tests
07-28-2020 17:59:03
07-28-2020 17:59:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=h1) Report > Merging [#6102](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/06834bc33255f5fb8fabb72c9ff114764b3c7ce5&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6102/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6102 +/- ## ======================================= Coverage 77.77% 77.77% ======================================= Files 146 146 Lines 26325 26325 ======================================= + Hits 20474 20475 +1 + Misses 5851 5850 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6102/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=footer). Last update [06834bc...c00d3b1](https://codecov.io/gh/huggingface/transformers/pull/6102?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).