Columns:
repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
6,802
closed
Only access loss tensor every logging_steps
* tensor.item() was being called every step. This must not be done for XLA:TPU tensors, as it is terrible for performance: it forces TPU<>CPU communication at each step. On RoBERTa MLM, for example, this change reduces step time by 30%; the gain should be larger for models/tasks with smaller step times. * The train batch size was not correct when a user uses the `per_gpu_train_batch_size` flag * Average-reduce the loss across eval shards * Log TPU debug metrics before the last epoch break
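A minimal sketch of the deferred-logging pattern described above (not the actual `Trainer` code; the loop, `logging_steps`, and the assumption that the model's forward returns an object with a `.loss` attribute are illustrative): accumulate the loss tensor on device and only call `.item()` on logging steps, so XLA/TPU does not have to sync with the CPU every step.
```python
import torch

def training_loop(model, optimizer, dataloader, logging_steps=50):
    """Accumulate the loss on-device; only materialize it on logging steps."""
    device = next(model.parameters()).device
    tr_loss = torch.tensor(0.0, device=device)
    logged_loss_scalar = 0.0
    for step, batch in enumerate(dataloader, start=1):
        loss = model(**batch).loss  # assumes a recent transformers model output with .loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        tr_loss += loss.detach()          # stays on the XLA/TPU device, no host sync
        if step % logging_steps == 0:
            loss_scalar = tr_loss.item()  # one device->host transfer per logging window
            avg = (loss_scalar - logged_loss_scalar) / logging_steps
            print(f"step {step}: avg loss {avg:.4f}")
            logged_loss_scalar = loss_scalar
```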
08-28-2020 18:47:10
08-28-2020 18:47:10
Style issue will be solved with merge @sgugger <|||||>> Thanks for fixing this! I'd remove the first change in the logs though. Thanks for the review! Done.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=h1) Report > Merging [#6802](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9336086ab5d232cccd9512333518cf4299528882?el=desc) will **decrease** coverage by `0.42%`. > The diff coverage is `89.47%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6802/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6802 +/- ## ========================================== - Coverage 80.32% 79.89% -0.43% ========================================== Files 157 157 Lines 28589 28739 +150 ========================================== - Hits 22963 22960 -3 - Misses 5626 5779 +153 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <ø> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <ø> (+63.80%)` | :arrow_up: | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `82.28% <ø> (ø)` | | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.28% <ø> (-0.05%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.23% <46.66%> (-0.43%)` | :arrow_down: | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `90.69% <89.18%> (-1.14%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <94.59%> (+2.19%)` | :arrow_up: | | [src/transformers/configuration\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3BlZ2FzdXMucHk=) | `100.00% <100.00%> (ø)` | | | [src/transformers/data/datasets/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | | | ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=footer). Last update [9336086...2b981cd](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,801
closed
Model card for primer/BART-Squad2
Adds model card (model is already uploaded).
08-28-2020 16:52:12
08-28-2020 16:52:12
transformers
6,800
closed
t5 model should make decoder_attention_mask
This undoes an aggressive change that was merged in #6654. I should discuss with Patrick first when he returns.
08-28-2020 16:44:45
08-28-2020 16:44:45
transformers
6,799
closed
Marian distill scripts + integration test
- Fix 2 integration tests in examples - Add a new integration test for the marian distillation script - Add 2 marian distillation scripts (but don't document them; will do that once they are slightly better).
08-28-2020 16:42:07
08-28-2020 16:42:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=h1) Report > Merging [#6799](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02d09c8fcc6bda2c345c84cec53289abbe7532ac?el=desc) will **increase** coverage by `0.10%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6799/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6799 +/- ## ========================================== + Coverage 79.01% 79.11% +0.10% ========================================== Files 157 157 Lines 28739 28739 ========================================== + Hits 22707 22736 +29 + Misses 6032 6003 -29 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.00% <0.00%> (-0.67%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.06% <0.00%> (-0.52%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=footer). Last update [02d09c8...d6e38c4](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,798
closed
[s2s] round runtime in run_eval
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
08-28-2020 15:41:42
08-28-2020 15:41:42
transformers
6,797
closed
2 Slow Test Failures That Sam Can Fix
Both related to fp16 1) ``` if torch.cuda.is_available(): testargs += ["--fp16", "--gpus=1"] with patch.object(sys, "argv", testargs): > result = run_pl_glue.main() ``` 2) ``` examples/seq2seq/test_bash_script.py::test_train_mbart_cc25_enro_script ``` needs rerun/ cannot use fp16
08-28-2020 15:34:43
08-28-2020 15:34:43
transformers
6,796
closed
AttributeError: 'MarianTokenizer' object has no attribute 'prepare_translation_batch'
Problem 1: When I load the tokenizer from a local directory and use it according to the MarianMT tutorial, an error occurs: AttributeError: 'MarianTokenizer' object has no attribute 'prepare_translation_batch' Problem 2: When I download models and tokenizers from the web, such as by running 'tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")', there are some errors: RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(818) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
08-28-2020 12:43:29
08-28-2020 12:43:29
Which tutorial? It's called `prepare_seq2seq_batch` now.<|||||>"prepare_seq2seq_batch" works. Thanks a lot. I followed this tutorial: https://huggingface.co/transformers/model_doc/marian.html Where can I find the new user manual for the MarianMT model? Thank you. <|||||>https://huggingface.co/transformers/master/model_doc/marian.html<|||||>Have they changed it again and added a maximum length?<|||||>I am getting: `AttributeError: 'MarianTokenizer' object has no attribute 'prepare_seq2seq_batch'` I changed it to `prepare_translation_batch` and it works.<|||||>It fails again... and changing to `prepare_seq2seq_batch` throws the deprecation warning...
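A minimal MarianMT translation sketch that sidesteps the renamed batching helper entirely by calling the tokenizer directly (this works on recent transformers versions; the checkpoint name and input text are just examples):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-en"  # example checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

texts = ["你好，世界"]  # example source sentences
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
generated = model.generate(**batch)
print([tokenizer.decode(g, skip_special_tokens=True) for g in generated])
```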
transformers
6,795
closed
Maybe global step should be initialized to 0
## Environment info - `transformers` version: 3.0.2 In script: https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py, location: line 143
08-28-2020 12:33:06
08-28-2020 12:33:06
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,794
closed
Unexpected behavior encoding token_type_ids in GPT models
The issues #4922 and #1339 have been closed, but the bug is still present in the `GPT2Model` code, resulting in unexpected behaviour as described in #4922. I am trying to use token_type_ids to annotate the different languages being input to a multilingual model. I am using my own tokenization code, which outputs token_type_ids indicating the language of the tokens. I have run into unexpected behaviour where the embeddings for the first few tokens in my vocabulary are also being used as my token type embeddings. I am using version 3.0.2. The issue was marked as closed since the GPT2 Tokenizer no longer outputs token_type_ids, but the `GPT2Model` class still accepts token_type_ids in its forward method. # Expected Behavior Either: 1. add a separate `nn.Embedding` matrix for token_type_ids [here](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_gpt2.py#L352) (such as [in BERTModel](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_bert.py#L153)), or 2. discourage, or throw a warning, when `token_type_ids` are passed to GPTModel instances.
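A hedged workaround sketch for the behaviour described above: keep token_type_ids out of the GPT-2 forward call and instead add a separate, hypothetical `nn.Embedding` for language types on top of the word embeddings, feeding the sum in via `inputs_embeds`. This is not the library's built-in mechanism, just one way to avoid reusing word-embedding rows as type embeddings.
```python
import torch
import torch.nn as nn
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
num_languages = 2  # assumption: two language ids in this example
type_embeddings = nn.Embedding(num_languages, model.config.n_embd)  # separate, randomly initialized

input_ids = torch.tensor([[31373, 995]])  # example token ids
language_ids = torch.tensor([[0, 1]])     # example per-token language ids

# sum word embeddings and the separate type embeddings, then bypass token_type_ids entirely
inputs_embeds = model.wte(input_ids) + type_embeddings(language_ids)
outputs = model(inputs_embeds=inputs_embeds)  # do NOT pass token_type_ids here
print(outputs[0].shape)  # last hidden state
```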
08-28-2020 12:13:42
08-28-2020 12:13:42
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,793
closed
Update README of my model
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
08-28-2020 11:48:44
08-28-2020 11:48:44
transformers
6,792
closed
F16 support for DistilBert
# 🚀 Feature request The DistilBert model doesn't support fp16 or mixed precision training. The hard-coded uses of float32 in the TensorFlow implementations of BERT and ELECTRA have been fixed by #3320. The same problem in DistilBert needs a fix.
08-28-2020 09:39:10
08-28-2020 09:39:10
Fixed by #6858
transformers
6,791
closed
Enable wandb logging for Ray Tune HPO runs
Currently wandb logging does not work with Ray Tune, as each process tries to call `wandb.log` without initializing wandb first. This PR fixes this by wrapping the objective in Ray Tune's `wandb_mixin`, which makes sure each trial initializes wandb and logs to it.
08-28-2020 09:30:00
08-28-2020 09:30:00
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=h1) Report > Merging [#6791](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/930153e7d2d658267b7630a047a4bfc85b86042d?el=desc) will **increase** coverage by `0.41%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6791/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6791 +/- ## ========================================== + Coverage 79.36% 79.78% +0.41% ========================================== Files 157 157 Lines 28569 28578 +9 ========================================== + Hits 22675 22800 +125 + Misses 5894 5778 -116 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `52.90% <0.00%> (+39.68%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `35.93% <0.00%> (-59.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-8.71%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: | | ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=footer). Last update [930153e...001a17f](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>making sure that @borisdayma sees this PR.<|||||>Thanks, I've not played too much with ray tune yet but there seems to be 2 ways to integrate through Ray Tune libraries as per [the docs](https://docs.wandb.com/library/integrations/ray-tune). However ideally, the setup, logging, etc would be handled directly by `Trainer` existing functions for clarity and concision (and also to support all existing loggers). Handling multiple logging runs should be done within `hyperparameter_search` if possible. Could the setup methods be wrapped in a new function and called during the search, in order to avoid duplicating the same logic. For wandb, forcing a new run just requires `wandb.init(reinit=True)` so it works both in notebooks and scripts. Note: Use this argument **only** while using `hyperparameter_search` as users can currently call manually `wandb.init` before (for example when using pytorch-lightning, sweeps, or keras + huggingface), making the call within the `Trainer` a "noop" (because it does not have `reinit=True`).<|||||>Thanks for your comments. @borisdayma, simply moving the logger setup to `train()` would do the trick in any case, as it is called from the hyperparameter search methods. This should also work for Optuna, not only for Ray Tune. I created a PR for that here: #6850. Is this what you meant?<|||||>Closed in favor of #6850.
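A minimal sketch of the per-trial re-initialization idea discussed above, assuming an Optuna/Ray-Tune-style objective function (the `objective` and `train_and_evaluate` names are hypothetical, not the actual Trainer integration; `wandb.init(reinit=True)` is the call mentioned in the thread):
```python
import wandb

def objective(trial_config):
    # reinit=True forces a fresh wandb run for each trial, even inside a single process or notebook
    run = wandb.init(project="hpo-demo", config=trial_config, reinit=True)
    metric = train_and_evaluate(trial_config)  # hypothetical training/eval function
    wandb.log({"eval_metric": metric})
    run.finish()
    return metric
```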
transformers
6,790
closed
What is the size of the context window in the 'openai-gpt' pre-trained model?
What is the size of the context window in the 'openai-gpt' pre-trained model?
08-28-2020 09:17:02
08-28-2020 09:17:02
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,789
closed
How to fine-tune T5 with some additional special tokens ?
I want to fine-tune T5 on a seq2seq task, but this task uses some special tokens, and T5 performs badly without them. How could I add some additional special tokens when fine-tuning T5 with the script https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_t5.sh?
08-28-2020 08:44:33
08-28-2020 08:44:33
Let's say the tokens you want to add are <some_token_1> and <some_token_2> (including angle brackets): ``` from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("t5-base") tokenizer.add_tokens(['<some_token_1>', '<some_token_2>']) ```<|||||>In addition to modifying the tokenizer, add this line so the model works with the new tokenizer: `model.resize_token_embeddings(len(tokenizer))`
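Putting the two comments above together, a minimal sketch (the checkpoint name and token strings are just examples):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# add the task-specific tokens, then grow the embedding matrix to match the new vocab size
num_added = tokenizer.add_tokens(["<some_token_1>", "<some_token_2>"])
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; new vocab size: {len(tokenizer)}")
```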
transformers
6,788
closed
Update multilingual passage reranking model card
Fix the range of possible scores, add inference.
08-28-2020 06:56:22
08-28-2020 06:56:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=h1) Report > Merging [#6788](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/930153e7d2d658267b7630a047a4bfc85b86042d?el=desc) will **decrease** coverage by `2.98%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6788/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6788 +/- ## ========================================== - Coverage 79.36% 76.38% -2.99% ========================================== Files 157 157 Lines 28569 28569 ========================================== - Hits 22675 21822 -853 - Misses 5894 6747 +853 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: | | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `35.93% <0.00%> (-59.38%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `81.42% <0.00%> (-12.86%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=footer). Last update [930153e...3482a3e](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,787
closed
Dear transformers team, how could I use bert to NER task?
# ❓ Questions & Help ## Details Dear transformers team, I have a train and test corpus with BIO tags, like below: The O patient O was O noted O with O first O depressive O episode O at O aged O 36 O . O How could I use BERT to train on my data, produce a model, and predict the BIO tags of the test data? The repository has many programs, but I have no idea which one I need. I would like to use Google Colab to run the program, which would avoid Python environment problems. Could you offer a tutorial for the BERT NER task? Thank you, best regards.
08-28-2020 04:28:59
08-28-2020 04:28:59
Hi, Have a look at the following script from Huggingface: https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py It shows how you can finetune a pretrained BERT model for NER. Remember to take a look at the utility code as well, since this is the code preparing and creating your features (and tensors in general). https://github.com/huggingface/transformers/blob/master/examples/token-classification/utils_ner.py Regards<|||||> > Hi, > > Have a look at the following script from Huggingface: > > https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py > > It shows how you can finetune a pretrained BERT-model for NER. > Remember to take a look at the utility code as well, since this is the code preparing and creating your features (and tensors in general). > > https://github.com/huggingface/transformers/blob/master/examples/token-classification/utils_ner.py > > Regards Thanks very much for your response. Actually, I am not good at Python programming. Does run_ner.py create a new pretrained model, or could I use it to fine-tune an existing model for my target NER task? What are the functions of these two programs? And what parameters should I set? What are the formats of the train and test data? I also could not find these two programs in the download folder. ![image](https://user-images.githubusercontent.com/16911126/93067378-ba15fe00-f6ad-11ea-99ab-e4fd87742e0b.png) By the way, could you offer a Colab-format tutorial for the NER task, like the link below? https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/ It is complicated and time-consuming to set up a virtual environment on a PC. Thank you, best regards. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
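For readers in the same situation, a minimal token-classification fine-tuning sketch (not the run_ner.py script itself; the label set, example words, and hyperparameters are illustrative, and a recent transformers version with fast tokenizers and model outputs exposing `.loss` is assumed):
```python
import torch
from transformers import BertForTokenClassification, BertTokenizerFast

labels = ["O", "B-ENT", "I-ENT"]  # assumption: your BIO label set
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=len(labels))

words = ["The", "patient", "was", "noted"]   # one pre-tokenized sentence from the BIO file
word_labels = [0, 0, 0, 0]                   # indices into `labels`

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)
# align word-level labels to subword tokens; special tokens get -100 and are ignored by the loss
label_ids = [-100 if i is None else word_labels[i] for i in enc.word_ids()]
out = model(**enc, labels=torch.tensor([label_ids]))
out.loss.backward()  # plug this into an optimizer / training loop
```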
transformers
6,786
closed
T5 batch inference same input data gives different outputs?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Hi, I am using a T5 model from a tf check point. I have set model.eval(), but every time I forward the exactly same input data, the outputs were always different. here is my code: ``` import torch from transformers import * tokenizer = T5Tokenizer.from_pretrained('t5-base') config = T5Config.from_pretrained('t5-base') model = T5ForConditionalGeneration.from_pretrained( "../model/t5-base/model.ckpt-1004000", from_tf=True, config=config) model.eval() def prediction(documents, query): querys = [query] * len(documents) encoded_encoder_inputs = tokenizer(documents, padding=True, truncation=True, return_tensors="pt") encoded_decoder_inputs = tokenizer(querys, padding=True, truncation=True, return_tensors="pt") with torch.no_grad(): outputs = model(input_ids=encoded_encoder_inputs["input_ids"], labels=encoded_decoder_inputs["input_ids"], attention_mask=encoded_encoder_inputs["attention_mask"]) batch_logits = outputs[1] print(batch_logits) documents = ['my dog is cute!', "so am I."] query = "who am I?" prediction(documents, query) ``` here is my twice outputs (run the above code twice): ``` tensor([[[-14.6796, -6.2236, -13.3517, ..., -42.7422, -42.7124, -42.8204], [-18.5999, -4.9112, -10.6610, ..., -40.7506, -40.7313, -40.8373], [-17.3894, -5.3482, -11.4917, ..., -41.2223, -41.1643, -41.3228], [-18.4449, -5.9145, -12.0056, ..., -42.3857, -42.3579, -42.4859]], [[-15.7967, -6.9833, -14.5827, ..., -41.3168, -41.0326, -40.9567], [-18.4241, -5.7193, -12.0748, ..., -40.1744, -40.0635, -39.9045], [-19.5852, -5.1691, -12.7764, ..., -42.2655, -42.0788, -41.9885], [-20.3673, -3.6864, -12.5264, ..., -40.1189, -40.0787, -39.8976]]]) ``` ``` tensor([[[-14.6796, -6.2236, -13.3517, ..., -42.7422, -42.7124, -42.8204], [-18.5848, -4.9116, -10.6607, ..., -40.7443, -40.7251, -40.8300], [-17.3248, -5.3050, -11.4988, ..., -41.1802, -41.1236, -41.2818], [-18.3950, -5.8967, -11.9756, ..., -42.3553, -42.3273, -42.4553]], [[-15.7967, -6.9833, -14.5827, ..., -41.3168, -41.0326, -40.9567], [-18.5133, -5.7466, -12.1301, ..., -40.2199, -40.1078, -39.9497], [-19.3636, -5.1246, -12.7134, ..., -42.1881, -41.9993, -41.9106], [-20.3247, -3.6335, -12.4559, ..., -40.0285, -39.9912, -39.8106]]]) ``` Am I did anything wrong?
08-28-2020 03:51:47
08-28-2020 03:51:47
When I add torch.manual_seed(0) at the beginning the outputs will be the same. does the model from tf pretrained has some randomization when loading the model weights?<|||||>Hmm, there sholud always be the same. Could you add your script to a google colab that allows to run your checkpoint, so that we can debug?<|||||>Hi here is the colab script with my checkpoint uploaded: https://colab.research.google.com/drive/1Yx7zRkzpaGMMraTkIeIGnJJLnlpUKzQH?usp=sharing you can download my checkpoint here: https://drive.google.com/drive/folders/1521pvzvkqvEBUvRqZn7CRO-soCXOUZCu?usp=sharing<|||||>I think some layers in your T5 model have not been trained and are therefore not saved in your model checkpoint. At initialization this layer is then randomely initialized. One thing you can do to verify is to load the model as follows, save it as a PyTorch model and then load the PyTorch model as follows. ```python config = T5Config.from_pretrained('t5-base') model = T5ForConditionalGeneration.from_pretrained( "sample_data/model/model.ckpt-1004000", from_tf=True, config=config) model.save_pretrained("./model") model = T5ForConditionalGeneration.from_pretrained("/.model") ``` if this command throws a warning that some layers are not initialized, then you know what the problem is. If not, I will take a look again :-) <|||||>hi, I tried what you said, no warning shows up. so I guess it is not the case? Do you think it could be that weights from float64 to float32 or the opposite causing this problem? because the difference in outputs is tiny. I don't know, this still cannot explain why outputs are different every time.<|||||>any updates?<|||||>Hey @ArvinZhuang, I just downloaded the weights and tried to reproduce the error using your code snippet. In my case the output is deterministic, as expected. Could you make sure that you are using the newest version of transformers and try again?<|||||>Hi, the outputs still different on my machine.... very strange. I'm using transformer v3.3.0 torch v1.6.0 TensorFlow v2.3.1 ![image](https://user-images.githubusercontent.com/46237844/94499979-607f0900-0241-11eb-8b64-3b91d4ef1d31.png) and btw, I cannot directly load t5 config by using T5Config.from_pretrained('t5-base') now, but the 't5-base' is still in the "https://huggingface.co/models" list. So I copy and past config.json from "https://huggingface.co/t5-base" but the results show this time is very different from the post above, which I think should not be the case because the tf checkpoint and input string are exactly the same as before.... 
updates: can directly load t5 config by T5Config.from_pretrained('t5-base') now, however, the output logits still very different from my first post above.....<|||||>+1 on the issue I'm using transformer ==3.4.0 torch==1.6.0 I run: ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoConfig config = AutoConfig.from_pretrained("Vamsi/T5_Paraphrase_Paws", output_hidden_states=True) tokenizer = AutoTokenizer.from_pretrained("Vamsi/T5_Paraphrase_Paws") model = AutoModelForSeq2SeqLM.from_pretrained("Vamsi/T5_Paraphrase_Paws", config=config).to('cuda') def prediction(documents, query): querys = [query] * len(documents) encoded_decoder_inputs = tokenizer(documents, padding=True, truncation=True, return_tensors="pt").to('cuda') encoded_encoder_inputs = tokenizer(querys, padding=True, truncation=True, return_tensors="pt").to('cuda') with torch.no_grad(): outputs = model(input_ids=encoded_encoder_inputs["input_ids"], labels=encoded_decoder_inputs["input_ids"], attention_mask=encoded_encoder_inputs["attention_mask"]) batch_logits = outputs[1] print(batch_logits) documents = ['a', 'b'] query = "who am I?" prediction(documents, query) ``` and got: ``` tensor([[[-21.6500, -9.8658, -13.6561, ..., -43.1233, -43.0788, -43.0745], [-30.3906, -12.7200, -1.2460, ..., -41.7208, -41.6774, -41.6465], [-15.7073, -5.9496, -5.9364, ..., -36.8553, -36.8221, -36.8052]], tensor([[[-21.6500, -9.8658, -13.6561, ..., -43.1233, -43.0788, -43.0745], [-30.3906, -12.7200, -1.2460, ..., -41.7208, -41.6774, -41.6465], [-20.1459, -5.3198, -4.7644, ..., -37.7978, -37.7850, -37.8202]]], device='cuda:0') ``` Note: rerunning ` prediction(documents, query)` produces same deterministic results, suggesting that the inputs to `labels` do affect the `logits` outcome.<|||||>Hi ednussi, yes, rerunning prediction(documents, query) will give deterministic results. However, my issue is rerunning the above chunk of code twice (reload everything, including model.). the outputs are different. Does this also happen to you? Update: I tried the model provided by ednussi, and the outputs of running the code twice are the same. But my tf model still gives two different results, suggesting that loading model from tf gives different outcomes.<|||||>Hi @ArvinZhuang, Yes, similar to you when I run the model call with same input_ids but different labels I got different outcomes (using pytorch). Realized I missed a part of the print so edited my comment above to match the output. Hoping @patrickvonplaten or someone from the `huggingface` team can take a second look, so we can get to the bottom of this.<|||||>Hey @ednussi, if you look into the code, you can see that T5 uses the `labels` to create the `decoder_input_ids` which necessarily do affect the outputs. You can try using deterministic `input_ids` and `decoder_input_ids` and no labels and see if the output stays deterministic (it should).<|||||>Thanks @patrickvonplaten. Was following your suggestion, and after reading through the documentation and code of how the `decoder_input_ids` is used, it became clear why it affects the `logits` and helped clear my confusion.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@ednussi - wanted to ask for your help if you could provide a brief explanation from your reading? Thanks.
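A sketch of the two stabilizing steps suggested in this thread: convert the TF checkpoint to a PyTorch checkpoint once and always reload from there, and keep the decoder inputs explicit instead of relying on `labels` (the checkpoint path and inputs are illustrative):
```python
import torch
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
config = T5Config.from_pretrained("t5-base")

# one-off conversion: load from the TF checkpoint, then save as a PyTorch model
model = T5ForConditionalGeneration.from_pretrained("path/to/model.ckpt-1004000", from_tf=True, config=config)
model.save_pretrained("./t5_converted")

# afterwards, always reload the converted checkpoint
model = T5ForConditionalGeneration.from_pretrained("./t5_converted")
model.eval()

enc = tokenizer(["my dog is cute!"], return_tensors="pt", padding=True)
dec = tokenizer(["who am I?"], return_tensors="pt", padding=True)
with torch.no_grad():
    # passing decoder_input_ids directly (instead of labels) keeps the decoder inputs explicit
    out = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"],
                decoder_input_ids=dec["input_ids"])
print(out[0].shape)  # lm logits
```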
transformers
6,785
closed
[help] Add multigpu evalulation script for seq2seq
### Goal At the end, the command ``` cd examples/seq2seq export DATA_DIR=test_data # use wmt_en_ro for more data, download instructions in README.md export save_dir=translations python multigpu_eval.py Helsinki-NLP/opus-mt-en-ro $DATA_DIR/test.source $save_dir/test_translations.txt --reference_path $DATA_DIR/test.target --score_path $save_dir/test_bleu.json --fp16 --task translation --bs 16 ``` On wmt_en_ro, this should get ~27.7 (same result as run_eval.py on 1 GPU) and should take less time. On test_data I don't know the 1 GPU result. Side note: the motivation here is actually summarization, where the validation sets are larger and running eval for pegasus on 1 GPU takes an hour. ### Implementation This looks similar: https://github.com/facebookresearch/ParlAI/blob/00efcbebb49524918692638ab580cadeebe70cf8/parlai/scripts/multiprocessing_eval.py#L34 but I'm open to many strategies. Lots of freedom. You will probably need to use one of the datasets in utils + an unshuffled dataloader of some sort. Try to keep the command line as similar to run_eval.py as possible, with the exception of a --gpus flag. Alternatively, use all available gpus and tell the user to use CUDA_VISIBLE_DEVICES=0,1 to restrict. ### Process 1. You can claim the bug here, but don't wait for me to reply to get started. This is hard enough that it is very unlikely that two people will be working on this at once. If I start working on this (unlikely) I will post here and remove the help-wanted label. 2. When you send a PR, tag me and link this issue. 3. It will be difficult to test your solution in CI, but try to make the code work with 1 GPU so we can test it that way in CI. You can make a new test_.py file or add to test_seq2seq_examples.py 4. Report BLEU scores in the PR description. Happy hacking!
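A rough multi-GPU sharding sketch under the constraints above (plain torch.distributed, launched with `python -m torch.distributed.launch --nproc_per_node=N ...`; the chunking scheme, file naming, and post-hoc BLEU scoring are illustrative, not the eventual script):
```python
import argparse

import torch
import torch.distributed as dist
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("model_name")
    parser.add_argument("source_path")
    parser.add_argument("save_path")
    parser.add_argument("--local_rank", type=int, default=0)
    parser.add_argument("--bs", type=int, default=16)
    args = parser.parse_args()

    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(args.local_rank)

    tokenizer = AutoTokenizer.from_pretrained(args.model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(args.model_name).cuda().eval()

    lines = [line.strip() for line in open(args.source_path)]
    shard = lines[args.local_rank :: dist.get_world_size()]  # strided shard per rank

    outputs = []
    for i in range(0, len(shard), args.bs):
        batch = tokenizer(shard[i : i + args.bs], return_tensors="pt",
                          padding=True, truncation=True).to("cuda")
        generated = model.generate(**batch)
        outputs += tokenizer.batch_decode(generated, skip_special_tokens=True)

    # each rank writes its own shard; concatenate, reorder, and score BLEU afterwards
    with open(f"{args.save_path}.rank{args.local_rank}", "w") as f:
        f.write("\n".join(outputs))

if __name__ == "__main__":
    main()
```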
08-28-2020 03:43:48
08-28-2020 03:43:48
transformers
6,784
closed
[style] set the minimal required version for `black`
`make style` with `black < 20.8b1` is a no-go (in case some other package forced a lower version), so make the minimal version explicit to avoid confusion.
08-28-2020 03:29:41
08-28-2020 03:29:41
transformers
6,783
closed
Why does examples/seq2seq/finetune.py only use sortish sampler for train?
Would it make eval faster? Would it make the validation sanity check obnoxiously slow? This should be a ~1 line PR with a longer PR description.
08-28-2020 02:39:32
08-28-2020 02:39:32
@sshleifer I can help out here. What exactly are you looking for?<|||||>I'll take care of it actually, sorry! There is a harder one at #6785 if you're interested.<|||||>Fixed, using sortish sampler for val. Much faster!
transformers
6,782
closed
[style] a new `black` version is out there (20.8)
It now seems to respect `--line-length 119` in the `make style/quality` command, so this is a big reformat with no code change. These are just the changes after running `make style`.
08-28-2020 02:05:26
08-28-2020 02:05:26
Weird, I thought I had a newer version, but it was old for some reason. Updating via `pip install black==20.8b1` resolved this.
transformers
6,781
closed
Typo in modeling_tf_bert
end_pos will be overwritten in some cases: https://github.com/huggingface/transformers/blob/858b7d5873577d7997918439eb7429d424d07dd5/src/transformers/modeling_tf_bert.py#L1422
08-28-2020 01:15:26
08-28-2020 01:15:26
Good catch, thanks a lot! We will fix it ASAP.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,780
closed
How can I get rid of this message every time I run GPT2?
``` Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ```
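For `gpt2` this particular message is usually benign (the listed buffers are recreated at load time and the LM head is tied to the input embeddings), and the simplest way to hide it is to lower the library's logging verbosity before loading. A sketch, assuming a transformers version recent enough to ship the `transformers.logging` verbosity helpers:
```python
from transformers import GPT2LMHeadModel, logging

# lower the verbosity so the "newly initialized" warning is not printed during loading
logging.set_verbosity_error()
model = GPT2LMHeadModel.from_pretrained("gpt2")
```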
08-28-2020 00:55:27
08-28-2020 00:55:27
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,779
closed
Character-level tokenization?
# ❓ Questions & Help In a project I'm using the Hugging Face library for, I would like to use transformers to perform arithmetic (framed as a text-to-text problem). However, the tokenization methods adopted by most existing models in the Hugging Face library (BPE, WordPiece, etc.) are not character-level, and thereby would not allow the network to properly process the input for later arithmetic. Are there any ways to get this sort of analysis to work properly with the existing models in Hugging Face (and their accompanying tokenizers)? Otherwise, what recommendations do you have? For reference, I am currently using the BART model from Hugging Face.
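A common workaround sketch (an assumption on my part, not an official recommendation): pre-split numbers into single characters separated by spaces so that subword tokenizers such as BART's BPE fall back to per-digit pieces.
```python
import re

from transformers import BartTokenizer

def space_out_digits(text: str) -> str:
    """Insert spaces between consecutive digits so each digit becomes its own token."""
    return re.sub(r"\d", lambda m: f" {m.group(0)} ", text).replace("  ", " ").strip()

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
src = space_out_digits("123+456=")
print(src)                      # "1 2 3 + 4 5 6 ="
print(tokenizer.tokenize(src))  # digits are now tokenized one at a time
```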
08-27-2020 23:24:49
08-27-2020 23:24:49
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,778
closed
Bertology-like Analysis for BART, T5?
# ❓ Questions & Help In my current project, I am working on training encoder-decoder models (BART, T5, etc.) and the Transformers library has been absolutely invaluable! After seeing several Bertology analyses (i.e. looking at the information the model's attention mechanism learns to attend to), I would like to know if a similar analysis is possible with the BART and T5 models in the Hugging Face library. Any recommendations are certainly appreciated!
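A small sketch of one way to start such an analysis, assuming a recent transformers version where seq2seq models expose attention tensors via `output_attentions=True` (the output attribute names below follow the library's model outputs; treat them as assumptions on older releases):
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

batch = tokenizer(["The cat sat on the mat."], return_tensors="pt")
with torch.no_grad():
    out = model(**batch, output_attentions=True, return_dict=True)

# tuples with one tensor per layer, each shaped [batch, heads, tgt_len, src_len]
print(len(out.encoder_attentions), out.encoder_attentions[0].shape)
print(len(out.decoder_attentions), out.decoder_attentions[0].shape)
print(len(out.cross_attentions), out.cross_attentions[0].shape)
```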
08-27-2020 23:19:39
08-27-2020 23:19:39
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,777
closed
[transformers-cli] fix logger getter
`transformers-cli` is currently broken: ``` File "src/transformers/commands/transformers_cli.py", line 8, in <module> from transformers.commands.serving import ServeCommand File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/commands/serving.py", line 28, in <module> logger = logging.getLogger("transformers-cli/serving") AttributeError: module 'transformers.utils.logging' has no attribute 'getLogger' ``` This PR fixes it and adds a basic test.
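For context, a minimal sketch of the kind of fix implied by the traceback above: `transformers.utils.logging` exposes `get_logger` rather than the stdlib-style `getLogger` (hedged; the actual patch may differ in detail).
```python
# before (raises AttributeError, as shown in the traceback):
#   from transformers.utils import logging
#   logger = logging.getLogger("transformers-cli/serving")

# after: use the helper that transformers.utils.logging actually provides
from transformers.utils import logging

logger = logging.get_logger("transformers-cli/serving")
logger.info("serving command initialized")
```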
08-27-2020 22:57:20
08-27-2020 22:57:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=h1) Report > Merging [#6777](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **increase** coverage by `1.90%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6777/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6777 +/- ## ========================================== + Coverage 78.47% 80.37% +1.90% ========================================== Files 157 157 Lines 28569 28569 ========================================== + Hits 22420 22963 +543 + Misses 6149 5606 -543 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `55.88% <100.00%> (+55.88%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.01% <0.00%> (-2.29%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | ... and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=footer). Last update [42fddac...6fd85c4](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,776
closed
PL: --adafactor option
CC @moscow25, @patil-suraj I used the "External LR" setup and verified that it saves a significant amount of memory on pegasus finetuning. Happy to add it to Trainer.
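A sketch of the "external LR" Adafactor setup referred to above, using the `Adafactor` optimizer that transformers ships (the stand-in model and argument values are illustrative):
```python
import torch
from transformers import Adafactor

model = torch.nn.Linear(10, 10)  # stand-in for the seq2seq model being fine-tuned

# "external LR" setup: disable relative-step/parameter scaling and pass an explicit learning rate
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```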
08-27-2020 22:37:23
08-27-2020 22:37:23
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=h1) Report > Merging [#6776](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **decrease** coverage by `0.99%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6776/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6776 +/- ## ========================================== - Coverage 78.47% 77.48% -1.00% ========================================== Files 157 157 Lines 28569 28569 ========================================== - Hits 22420 22137 -283 - Misses 6149 6432 +283 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: | | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: | | ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=footer). Last update [42fddac...c769370](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,775
closed
Support fast tokenizer in Question Answering Pipeline
Fixes #6545 The problem is that the behavior of the Python tokenizer and the Rust-based fast tokenizer is very different. The Python tokenizer handles cases where inputs are in different formats (str tokens and int tokens and vice versa), whereas the fast tokenizer is unable to do so. Some edge cases where the fast tokenizer fails because of limited functionality (compared to the Python tokenizer) have been added, and I've included comments in those places.
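For reference, a sketch of exercising the question-answering pipeline with an explicitly constructed fast tokenizer (the model name is just an example; whether the fast path is accepted depends on the transformers version, which is what this PR is about):
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "distilbert-base-cased-distilled-squad"  # example QA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="Who wrote the library?",
         context="The transformers library is written by Hugging Face."))
```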
08-27-2020 22:00:42
08-27-2020 22:00:42
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=h1) Report > Merging [#6775](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **increase** coverage by `1.22%`. > The diff coverage is `13.33%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6775/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6775 +/- ## ========================================== + Coverage 78.47% 79.70% +1.22% ========================================== Files 157 157 Lines 28569 28579 +10 ========================================== + Hits 22420 22779 +359 + Misses 6149 5800 -349 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `27.59% <13.33%> (-0.54%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.07% <0.00%> (+0.12%)` | :arrow_up: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=footer). Last update [42fddac...54cbfb1](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@LysandreJik @mfuntowicz Just checking in to see if this PR is good, or does it need some more improvements? Thanks<|||||>Hi @bdalal, Will have a look at it ASAP, sorry for the delay Morgan<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,774
closed
Task specific params for pegasus-large to allow finetuning with correct generation_parameters
``` 'task_specific_params': {'summ_xsum': {'max_length': 56, 'length_penalty': 0.8}} ```
08-27-2020 20:12:58
08-27-2020 20:12:58
Done! ```python # Config values that vary between checkpoints: for testing and conversion task_specific_params = { # These are task specific params for pegasus-large and normal params for finetuned checkpoints "summarization_xsum": {"length_penalty": 0.8, "max_length": 64, "max_position_embeddings": 512}, "summarization_cnn_dailymail": {"length_penalty": 0.8, "max_length": 128, "max_position_embeddings": 1024}, "summarization_newsroom": {"length_penalty": 0.8, "max_length": 128, "max_position_embeddings": 512}, "summarization_wikihow": {"length_penalty": 0.6, "max_length": 256, "max_position_embeddings": 512}, "summarization_multi_news": {"length_penalty": 0.8, "max_length": 256, "max_position_embeddings": 1024}, "summarization_reddit_tifu": {"length_penalty": 0.6, "max_length": 128, "max_position_embeddings": 512}, "summarization_big_patent": {"length_penalty": 0.7, "max_length": 256, "max_position_embeddings": 1024}, "summarization_arxiv": {"length_penalty": 0.8, "max_length": 256, "max_position_embeddings": 1024}, "summarization_pubmed": {"length_penalty": 0.8, "max_length": 256, "max_position_embeddings": 1024}, "summarization_gigaword": {"length_penalty": 0.6, "max_length": 32, "max_position_embeddings": 128}, "summarization_aeslc": {"length_penalty": 0.6, "max_length": 32, "max_position_embeddings": 512}, "summarization_billsum": {"length_penalty": 0.6, "max_length": 256, "max_position_embeddings": 1024}, # this last entry is useless -- just for consistency "summarization_large": {"length_penalty": 0.8, "max_length": 256, "max_position_embeddings": 1024}, } ```
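Not part of the PR itself, but a small sketch of how these entries can be consumed downstream (hypothetical usage; it assumes the `google/pegasus-large` checkpoint exposes them under `task_specific_params`):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/pegasus-large")
xsum_params = config.task_specific_params["summarization_xsum"]

# overwrite the generic generation defaults with the xsum-specific ones
for key, value in xsum_params.items():
    setattr(config, key, value)

print(config.max_length, config.length_penalty)
```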
transformers
6,773
closed
Error in run_ner.py
## Environment info - `transformers` version: 3.0.2 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.3 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @stefan-it ## Information The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: Rare Entities task: WNUT’17 (English NER) dataset * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: When I try to execute run_ner.py, the following error occurs: 08/27/2020 16:10:10 - INFO - filelock - Lock 2174122787120 acquired on ./data_wnut_17\cached_dev_BertTokenizer_128.lock 08/27/2020 16:10:24 - INFO - utils_ner - Creating features from dataset file at ./data_wnut_17 08/27/2020 16:12:04 - INFO - filelock - Lock 2174122787120 released on ./data_wnut_17\cached_dev_BertTokenizer_128.lock Traceback (most recent call last): File "C:\Users\Monique\Documents\IA\NLP\transformers\examples\token-classification\utils_ner.py", line 247, in __init__ examples = token_classification_task.read_examples_from_file(data_dir, mode) File "C:\Users\Monique\Documents\IA\NLP\transformers\examples\token-classification\tasks.py", line 27, in read_examples_from_file for line in f: File "C:\ProgramData\Anaconda3\lib\codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb4 in position 46: invalid start byte My only local change was the following in preprocess.py: `with open(dataset, "rt", encoding='utf-8') as f_p:` (I added encoding='utf-8' due to a similar error in preprocessing). ## Expected behavior The dataset should have been created without encoding errors.
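A possible workaround for the decode error above (a sketch only; it assumes the offending file is Latin-1 encoded, which byte 0xb4 suggests, and the file names are placeholders):

```python
# Re-encode the raw WNUT file to UTF-8 before running preprocess.py / run_ner.py
src, dst = "data_wnut_17/dev.txt", "data_wnut_17/dev_utf8.txt"

with open(src, "r", encoding="latin-1") as f_in:
    text = f_in.read()

with open(dst, "w", encoding="utf-8") as f_out:
    f_out.write(text)
```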
08-27-2020 19:24:58
08-27-2020 19:24:58
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,772
closed
Unable to install Transformers Master version
@patil-suraj @joeddav Is there some other way to install the master version of transformers? I tried using the URL to install the master version, but it installed v3.0.2. _Originally posted by @AnkitVarshney02 in https://github.com/huggingface/transformers/issues/6752#issuecomment-681716615_
08-27-2020 18:57:29
08-27-2020 18:57:29
Answered in #6752
transformers
6,771
closed
Exported TF Bert model is much slower than that exported from Google's Bert
I benchmarked the Bert model exported from Huggingface TF Bert code and observed that it is much slower than Google's TF2 Bert model(https://github.com/tensorflow/models/tree/master/official/nlp/bert) on TF Serving (GPU device). When batch_size=1, Huggingface Bert model can be more than 200% slower than Google's one. Here are my benchmark results: **latency: (ms/batch) on 1 GPU-V100** Batch Size | google official | HF_TF2 | relative change | -- | -- | -- | -- 1 | 6.7 | 21.3 | 238.10% 2 | 9.4 | 24.2 | 171.91% 4 | 14.4 | 28.1 | 99.29% 8 | 24.0 | 36.9 | 55.70% 16 | 46.6 | 58.6 | 26.57% 64 | 171.5 | 171.4 | 0.35% 128 | 338.5 | 324.5 | -3.71% I used the latest Tensorflow profiling tool (https://www.tensorflow.org/tfx/serving/tensorboard) to compare these two: **Hugging-face Bert-Base FP32 Batch_size = 128** <img width="1518" alt="Screen Shot 2020-06-05 at 8 20 34 PM" src="https://user-images.githubusercontent.com/70337521/91475490-87b47600-e850-11ea-8e19-ac2b5103073d.png"> **Google Bert-Base FP32 Batch_size = 128** <img width="1518" alt="image (1)" src="https://user-images.githubusercontent.com/70337521/91475500-8a16d000-e850-11ea-880c-eab176c0db59.png"> Comparing the event traces of HF's and google's, we see that before each *MatMul* operation on GPU, there are always some CPU processes running (wasn’t able to get more info about these CPU processes because the Profiling tool doesn’t provide meaningful labels or description for those CPU processes but it seems like they are doing some gather/reduce operations) and the GPU is inactive for a while. If you look at the trace of google official, you can find that all GPU ops are compact and there are no idle time for GPU. In parallel, only one CPU process keeps running and doesn’t cause GPU ops to wait. The short idle time will hurt the performance more when batch_size or model_size is small because short time is spent on GPU calculations and idle time is not ignorable anymore. See time comparison grouped by type below: **Hugging-face Bert-Base FP32 Batch_size = 4** <img width="425" alt="image (3)" src="https://user-images.githubusercontent.com/70337521/91475887-175a2480-e851-11ea-8493-a25f0cea4914.png"> **Google Bert-Base FP32 Batch_size = 4** <img width="418" alt="image (4)" src="https://user-images.githubusercontent.com/70337521/91475882-145f3400-e851-11ea-830d-7ed59a04e3bc.png"> Under small batch_size=4, Hugging-face model has 60% GPU idle time while that of google model is only 25%. This also explains the reason why for small batch_size, there is over 200% slow down and for large batch_size=128, the situation gets better. The *MatMul* op is triggered by tf.keras.layers.Dense(), which is widely used in Transformer encoder self-attention, intermediate layer and output layer. In comparison, Google's Bert uses DenseEinsum()([link](https://github.com/tensorflow/models/blob/1ebff962832ec8a599bd67ac8c43b9ae1b05c3fa/official/nlp/transformer/attention_layer.py#L59-L69)) to replace all usage of Dense layer. I personally modified Huggingface's TF Bert code base to use DenseEinsum(). 
And the slow down issue got solved: **latency: (ms/batch) on 1 GPU-V100** Batch Size | google official | HF_TF2 | relative change | After Fixing | relative change -- | -- | -- | -- | -- | -- 1 | 6.7 | 21.3 | 238.10% | 6.60 | 4.76% 2 | 9.4 | 24.2 | 171.91% | 8.90 | 0.00% 4 | 14.4 | 28.1 | 99.29% | 13.40 | -4.96% 8 | 24.0 | 36.9 | 55.70% | 22.10 | -6.75% 16 | 46.6 | 58.6 | 26.57% | 43.20 | -6.70% 64 | 171.5 | 171.4 | 0.35% | 158.80 | -7.03% 128 | 338.5 | 324.5 | -3.71% | 313.10 | -7.09% Profile after fixing: <img width="1517" alt="image (4) copy" src="https://user-images.githubusercontent.com/70337521/91475978-3789e380-e851-11ea-9dae-a5136fac1074.png"> I noticed that multiple issues may be related to this (https://github.com/huggingface/transformers/issues/6264). Do you plan to solve this issue? Thanks. @patrickvonplaten
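To make the proposal concrete, here is a minimal standalone sketch of the einsum-based projection (requires TF >= 2.3; this illustrates the idea only and is not the actual modification to the transformers code):

```python
import tensorflow as tf

hidden_size, num_heads = 768, 12
head_size = hidden_size // num_heads

# One einsum projects [batch, seq, hidden] directly to [batch, seq, heads, head_size],
# replacing the Dense + reshape + transpose pattern that leaves idle gaps in the GPU trace.
query = tf.keras.layers.experimental.EinsumDense(
    "abc,cde->abde",
    output_shape=(None, num_heads, head_size),
    bias_axes="de",
)

hidden_states = tf.random.uniform((2, 128, hidden_size))
print(query(hidden_states).shape)  # (2, 128, 12, 64)
```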
08-27-2020 18:07:07
08-27-2020 18:07:07
This looks super interesting @jlei2 ! I openen a PR #6877 to see if changing to EinusmDense speeds up the runtime...In the PR I cannot see an improvement - could you take a look and maybe comment on how you changed the TF Bert code in transformers to benchmark your changes? :-) <|||||>Hi Patrick, thank you for taking time to investigate this issue! Your PR is not completely correct. Here is my draft [PR](https://github.com/jlei2/transformers/pull/1). DenseEinsum actually is the combination of matrix reshape/transpose and dense layer. So we need to be careful when replacing keras.dense with DenseEinsum, otherwise we will not be able to get the same output as we can get before this change. I tested on my end that there is no speedup after this change **at runtime**, using the huggingface bechmark command tool given by you. But I did see expected speedup after exporting DenseEinsum models. I used tensorflow profiling tool(https://www.tensorflow.org/tfx/serving/tensorboard) to benchmark the inference time of exported model when serving on TF-serving 2.2.0. So actually I don't have my own benchmark code. Device: 1 GPU-V100 Batch Size: 1 Model: Bert-base FP32 <img width="220" alt="Screen Shot 2020-09-02 at 1 52 08 PM" src="https://user-images.githubusercontent.com/70337521/92035460-981e9200-ed23-11ea-8141-a5f8e600559a.png"> <img width="220" alt="Screen Shot 2020-09-02 at 1 52 28 PM" src="https://user-images.githubusercontent.com/70337521/92035489-a2409080-ed23-11ea-9414-dd2bb6e8ea68.png"> So the left picture is from the Huggingface model after applying my PR. The right one is from original Huggingface model using current master. You can see that there is almost 100% speedup. So I suspect this issue only happens on exported model for tf-serving. The code I used to export saved model: ``` import tensorflow as tf import transformers from transformers import BertModel pt_model_path = 'bert-base-uncased' model = BertModel.from_pretrained('bert-base-uncased') model.save_pretrained(pt_model_path) MAX_LEN = 128 model_path = 'saved_model/tmp_model' input_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='attention_mask') token_type_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='token_type_ids') base_model = transformers.TFBertModel.from_pretrained(pt_model_path, output_hidden_states=False, from_pt=True) base_output = base_model.bert([input_ids, attention_mask, token_type_ids]) seq_out, pool_out = base_output[0], base_output[1] base_model.trainable = False model = tf.keras.models.Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[pool_out]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) print(model.summary()) model.save(model_path, include_optimizer=False, save_format="tf") ``` Is this something you will want to solve on your end? though we see that during runtime there is no difference.<|||||>Hey @jlei2, Thanks a lot for checking and your draft PR. I think we will be able to implement your proposition given that we can keep complete backwards compatibility. Do you have an idea by any chance why we can see the speed up only on TF-serving? Also cc @jplu - this looks very interesting! <|||||>To be honest I have no idea about the root difference on TFS side. I can only observe that there are some CPU processes wasting time before the Matmul op. 
My feeling is that the MatMul op triggered by tf.keras.layers.Dense() may not be implemented to be very efficient on TF-serving. Though this issue seems to have more to do with tensorflow keras team or TFS team instead of Huggingface code base imo, it would be helpful to all Huggingface users if you are able to resolve this issue on your end.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Re-activated cc @jplu <|||||>Hey @jlei2! Sorry for this long silence. We are currently working on this performance issue and to be sure to make the proper changes, can you share with us your benchmark script that you used to compute the values in the table? Thanks!<|||||>Hi @jplu , thanks for keeping an eye on this! Unfortunately I can not share the benchmark script I used to get the numbers in the table with you because it's confidential. But I can share how you can obtain the same slow-down observation using Tensorflow Profiler tool (to get the two Performance Summary Screenshots I uploaded above 15.7ms vs 28.8ms). ### Export baseline and modified SavedModel **Baseline transformers repo**: The commit I used is the same as the master branch in my fork (https://github.com/jlei2/transformers) **Modified transformers repo using tf.keras.layers.experimental.EinsumDense**: (https://github.com/jlei2/transformers/tree/dense-einsum-tf2.3) **The code to export the model**: ``` import tensorflow as tf import transformers from transformers import BertModel pt_model_path = 'bert-base-uncased' model = BertModel.from_pretrained('bert-base-uncased') model.save_pretrained(pt_model_path) MAX_LEN = 128 model_path = 'saved_model/tmp_model' input_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='attention_mask') token_type_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='token_type_ids') base_model = transformers.TFBertModel.from_pretrained(pt_model_path, output_hidden_states=False, from_pt=True) base_output = base_model.bert([input_ids, attention_mask, token_type_ids]) seq_out, pool_out = base_output[0], base_output[1] base_model.trainable = False model = tf.keras.models.Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[pool_out]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) print(model.summary()) model.save(model_path, include_optimizer=False, save_format="tf") ``` Then on a machine with 1 GPU-V100 and TF-Serving-gpu== 2.3.0 and Tensorflow==2.3.0: ### Spin up SavedModel: ``` export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH export MODEL_DIR=your_model_dir export MODEL_NAME=your_model_name nohup tensorflow_model_server --rest_api_port=8501 --model_name=$MODEL_NAME --model_base_path=$MODEL_DIR >server.log 2>&1 ``` ### Spin up Tensorboard ``` tensorboard --logdir ~/logs/inference_demo/ --port your_port_number --bind_all ``` Go to the profile page on Tensorboard <img width="447" alt="Screen Shot 2020-12-01 at 8 58 18 PM" src="https://user-images.githubusercontent.com/70337521/100830662-03f2c280-3419-11eb-95bc-f0d70caec6db.png"> After clicking capture, I would send request to TFS to profile the whole process **Code to send request to served model**: ``` import json import requests BATCH_SIZE = 1 MAX_SEQ_LEN = 128 batch = [] MODEL_NAME = your_model_name for _ in range(BATCH_SIZE): 
batch.append({"input_ids":[999] * MAX_SEQ_LEN, "attention_mask":[1] * MAX_SEQ_LEN, "token_type_ids":[0] * MAX_SEQ_LEN}) input_data = {"instances": batch} r = requests.post("http://localhost:8501/v1/models/%s:predict"%(MODEL_NAME), data=json.dumps(input_data)) print(r.text) ``` I would run this scripts for several times first to warm up the model and then start to profile formally. And finally you will see profiling results on Tensorboard UI page just like what I uploaded. Hope this could be helpful to you!<|||||>Actually I write a very simple benchmark script that can show the difference: ``` import json import requests import time BATCH_SIZE = 1 MAX_SEQ_LEN = 128 batch = [] MODEL_NAME = your_model_name for _ in range(BATCH_SIZE): batch.append({"input_ids":[999] * MAX_SEQ_LEN, "attention_mask":[1] * MAX_SEQ_LEN, "token_type_ids":[0] * MAX_SEQ_LEN}) input_data = {"instances": batch} start = time.time() for _ in range(100): r = requests.post("http://localhost:8501/v1/models/%s:predict"%(MODEL_NAME), data=json.dumps(input_data)) end = time.time() print(end-start) ``` Baseline time: ~2.8s My version's time: ~1.5s. So we can easily see ~2x speed up.<|||||>Awesome!! Thanks a lot @jlei2!! This is already a good start to check the differences. I will let you know here once we have done the changes!<|||||>@jlei2 I have open a PR for integrating this change. Unfortunately, as I'm on Windows, the GPU profiling is not yet available in WSL, can you clone my branch and run it on your side with your own benchmark in order to be sure that it looks ok. There is a small update on the code to create the saved model: ``` import tensorflow as tf import transformers MAX_LEN = 128 model_path = 'saved_model/tmp_model' input_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='attention_mask') token_type_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='token_type_ids') base_model = transformers.TFBertModel.from_pretrained("bert-base-uncased") inputs = {"input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids} base_output = base_model.bert(inputs) seq_out, pool_out = base_output[0], base_output[1] base_model.trainable = False model = tf.keras.models.Model(inputs=inputs, outputs=[pool_out]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) print(model.summary()) model.save(model_path, include_optimizer=False, save_format="tf") ``` No need anymore to load from the PT model, thanks to some update we have applied on the loading weights mechanism.<|||||>Hi @jplu , Thanks for opening the PR to integrate this change. I have cloned your branch and run benchmark on my side. The results are as expected! **latency: (ms/batch) on 1 GPU-V100** Batch Size | current master | tf-einsumdense | -- | -- | -- 1 | 20.9 | 6.26 2 | 24.1 | 8.68 4 | 27.6 | 13.1 8 | 36.3 | 21.5 16 | 58.8 | 42.3 32 | 94.7 | 80.4 64 | 170 | 156 128 | 321 | 309 And on GPU profiling, obtained the same observation like I posted in the very first comment: GPU computation is continuous and compact. It is not cut off by CPU process anymore. **current master, batch_size=128** ![image](https://user-images.githubusercontent.com/70337521/102310915-c580fb80-3f20-11eb-8062-b89e8b11dc49.png) **tf-einsumdense, batch_size=128** ![image](https://user-images.githubusercontent.com/70337521/102311089-12fd6880-3f21-11eb-9129-100081eef716.png)
transformers
6,770
closed
Roberta Large not working with SNLI dataset; always gives the random baseline, i.e. 33 percent accuracy.
I have been working with parts of the SNLI dataset. Even after hours and hours of pretraining, the model always gives 33.24 percent accuracy, no matter the amount of data used in pretraining. This issue is fixed if I revert my transformers version to 2.11.0.
08-27-2020 17:56:56
08-27-2020 17:56:56
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,769
closed
Seq2SeqTrainer
This PR adds `Seq2SeqTrainer` to use `Trainer` for summarization, translation and other seq2seq tasks. `Seq2SeqTrainer` includes: - label smoothing loss - sortish sampler - predict from generate to allow calculating generative metrics `finetune_trainer.py` includes: - `Seq2SeqDataCollator` to correctly prepare `labels` and `decoder_input_ids` - `Seq2SeqTrainingArguments` for extra training arguments. - encoder and embedding freezing. - `compute_metrics_fn` for translation(BLEU) and summarization (ROUGE) - evaluation Most of the data and model arguments are kept identical to `finetune.py` file Seq2Seq4Eva!🔥
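For reference, label smoothing of the kind listed above is typically implemented along these lines (an illustrative sketch, not necessarily the exact code in this PR):

```python
import torch

def label_smoothed_nll_loss(lprobs, target, epsilon, padding_idx):
    """lprobs: log-probs [batch, seq, vocab]; target: token ids [batch, seq] padded with padding_idx."""
    nll_loss = -lprobs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    smooth_loss = -lprobs.sum(dim=-1)
    pad_mask = target.eq(padding_idx)
    nll_loss = nll_loss.masked_fill(pad_mask, 0.0)
    smooth_loss = smooth_loss.masked_fill(pad_mask, 0.0)
    eps_i = epsilon / lprobs.size(-1)
    return (1.0 - epsilon) * nll_loss.sum() + eps_i * smooth_loss.sum()

# toy usage
logits = torch.randn(2, 5, 100)
labels = torch.randint(0, 100, (2, 5))
loss = label_smoothed_nll_loss(logits.log_softmax(dim=-1), labels, epsilon=0.1, padding_idx=0)
print(loss)
```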
08-27-2020 17:32:09
08-27-2020 17:32:09
Command for running `finetune_trainer.py` ```bash python finetune_trainer.py \ --model_name_or_path sshleifer/bart-tiny-random \ --data_dir xsum \ --output_dir test \ --overwrite_output_dir \ --n_train 8 \ --n_val 8 \ --max_source_length 512 \ --max_target_length 56 \ --val_max_target_length 56 \ --do_train \ --do_eval \ --num_train_epochs 2 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --evaluate_during_training \ --predict_from_generate \ --logging_steps 2 \ --save_steps 2 \ --eval_steps 2 \ --sortish_sampler \ ```<|||||>Note: Eventually we need to refactor seq2seq/README.md to accommodate this<|||||>This looks great. Very excited to try this out with the `EncoderDecoder` model.<|||||>LGTM pending resolution of padding mystery.<|||||>Great work @patil-suraj !
transformers
6,768
closed
Floating-point operations logging in trainer
First of two PRs to implement #4847: - logging loss vs floating-point operations - using the results for scaling laws analysis. This directly logs floating-point operations in wandb and comet, and creates a `log_history.json` file with training metrics. To do so, it adds methods to `PreTrainedModel` to count parameters with and without embeddings, and the number of floating-point operations. It also has a few Trainer fixes, most importantly averaging the eval loss across processes rather than logging the one in process 0, and a fix for a bug with checkpoint folder creation.
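As a rough illustration of what gets logged, a back-of-the-envelope sketch using the common scaling-laws approximation (not the exact method added to `PreTrainedModel`):

```python
def estimate_train_flos(non_embedding_params: int, batch_size: int, seq_len: int) -> int:
    # forward pass ~ 2 * params * tokens, backward ~ twice the forward -> ~6 * params * tokens
    return 6 * non_embedding_params * batch_size * seq_len

# e.g. a BERT-base-sized model (~86M non-embedding params) on one batch of 8 x 512 tokens
print(f"{estimate_train_flos(86_000_000, 8, 512):.3e}")
```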
08-27-2020 17:06:00
08-27-2020 17:06:00
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=h1) Report > Merging [#6768](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **increase** coverage by `1.17%`. > The diff coverage is `38.57%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6768/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6768 +/- ## ========================================== + Coverage 78.47% 79.65% +1.17% ========================================== Files 157 157 Lines 28569 28625 +56 ========================================== + Hits 22420 22800 +380 + Misses 6149 5825 -324 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `51.85% <31.48%> (-1.81%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.66% <62.50%> (-0.84%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.42% <0.00%> (-4.85%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.56% <0.00%> (+0.17%)` | :arrow_up: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <0.00%> (+4.00%)` | :arrow_up: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `88.13% <0.00%> (+68.28%)` | :arrow_up: | | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `86.00% <0.00%> (+76.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=footer). Last update [42fddac...4becfac](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I think even with domain-agnostic models we'd like to keep the configuration, no? I'm not sure the trainer would behave correctly without a configuration, so if we want to remove the dependency towards configurations, we might as well do it all at once, right? Would the goal be to have the trainer accept all `nn.Module`s?<|||||>Like agreed upon internally, we will move to Trainer accepting models instantiating a base abstractclass/conforming to some protocol. I think the config will be in the required field but have to work a bit more on this to be sure. In any case, this is work for a subsequent PR :-)
transformers
6,767
closed
Add NLP install to self-scheduled CI
Same change as @sgugger already made for `self-push.yml`
08-27-2020 15:07:26
08-27-2020 15:07:26
transformers
6,766
closed
Distributed training doesn’t work
Even if CUDA detects all the GPUs of the machine, there is no distributed training: `RuntimeError: CUDA out of memory. Tried to allocate 3.82 GiB (GPU 0; 11.17 GiB total capacity; 7.59 GiB already allocated; 594.31 MiB free; 10.28 GiB reserved in total by PyTorch) (malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:289)` The training is launched only on the first GPU. I’m using the language modeling code. I tried to set local_rank to 0 and to use the Torch environment variables MASTER_ADDR, MASTER_PORT, WORLD_SIZE and RANK, but without success. Is there another way to do distributed training with transformers?
08-27-2020 11:57:30
08-27-2020 11:57:30
Hi! What are you using to launch your distributed training? What script are you using? Could you show the command you used, and could you paste your environment information as required by the template? Thank you.<|||||>## Environment info - `transformers` version: 2.11.0 - Platform: Databricks - Python version: 3.6.9 - PyTorch version: 1.3.1 - Tensorflow version: 2.0 - Using GPU in script: yes - Using distributed or parallel set-up in script: trying to ## Information I am using the model xlm-roberta-base. The task I am working on is further training on my own dataset. The problem arises when using the script examples/language_modeling/run_language_modeling.py with the following command: ``` python transformers/examples/language-modeling/run_language_modeling.py --model_type xlm-roberta --model_name_or_path xlm-roberta-base --train_data_file data/processed/piaf.txt --output_dir ./output --learning_rate 0.1 --per_gpu_train_batch_size 2 --local_rank 0 --num_train_epochs 1 --do_train --mlm ```<|||||>Okay, could you try using the `torch.distributed.launch` utility to launch a distributed training? Your command, given that you have 8 GPUs, would be: ``` python -m torch.distributed.launch \ --nproc_per_node 8 transformers/examples/language-modeling/run_language_modeling.py --model_type xlm-roberta --model_name_or_path xlm-roberta-base --train_data_file data/processed/piaf.txt --output_dir ./output --learning_rate 0.1 --per_gpu_train_batch_size 2 --local_rank 0 --num_train_epochs 1 --do_train --mlm ``` Feel free to modify this part: `--nproc_per_node 8` according to the number of GPUs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,765
closed
Adds Adafactor to the docs and slightly fixes the formatting
08-27-2020 09:16:34
08-27-2020 09:16:34
transformers
6,764
closed
Add ProtBert model card
This is a model card for our ProtBert: https://huggingface.co/Rostlab/prot_bert
08-27-2020 08:05:00
08-27-2020 08:05:00
Any idea why there is one failed test case ? I have checked the code on the notebook and it runs without any issue on Colab.<|||||>This seems completely unrelated, just relaunched the failed test.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=h1) Report > Merging [#6764](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bd7be9a4268221d2a0000c7e8033aaeb365c03b?el=desc) will **decrease** coverage by `0.04%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6764/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6764 +/- ## ========================================== - Coverage 79.74% 79.70% -0.05% ========================================== Files 157 157 Lines 28479 28479 ========================================== - Hits 22712 22698 -14 - Misses 5767 5781 +14 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.45% <0.00%> (-0.40%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.16% <0.00%> (+32.50%)` | :arrow_up: | | ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=footer). Last update [4bd7be9...c7eb12c](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @agemagician, your model https://huggingface.co/Rostlab/prot_bert is not recognized by our Hosted inference API, probably because it doesn't have a `architectures` field in its config.json: <img width="565" alt="Screenshot 2020-10-21 at 21 52 57" src="https://user-images.githubusercontent.com/326577/96775413-849fb700-13b5-11eb-8075-11d7792548f6.png"> Do you mind if we update the config.json on the model hub? Alternatively, do you prefer doing it yourself? Thanks!<|||||>Hi @julien-c , No problem. I already have seen you updated it. However, it doesn't work properly because we don't lower case tokens: https://huggingface.co/Rostlab/prot_bert?text=A+T+G+%5BMASK%5D+C I have uploaded the "tokenizer_config.json" file which should fix it, but we have to wait until the model is no longer in memory and reload it. I have also did the same for our better bert model "prot_bert_bfd", and it is working fine: https://huggingface.co/Rostlab/prot_bert_bfd?text=A+T+G+%5BMASK%5D+C Thanks for your help.<|||||>Now properly loaded, and looks great! https://huggingface.co/Rostlab/prot_bert?text=D+L+I+P+T+S+S+K+V+V+%5BMASK%5D+D+T+S+L+Q+V+K+K+A+F+F+A+L+V+T <img width="693" alt="Screenshot 2020-10-21 at 22 56 44" src="https://user-images.githubusercontent.com/326577/96786965-ff210480-13be-11eb-9133-375bb20e60a1.png"> Thanks a lot @agemagician
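For reference, the fill-mask usage being tested above looks roughly like this when run locally (a sketch; `do_lower_case=False` reflects the tokenizer fix discussed in this thread):

```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert")

unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(unmasker("A T G [MASK] C"))
```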
transformers
6,763
closed
CUDA out of memory with fp16 training
# ❓ Questions & Help I have error when run train with fp16 <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I want to train Albert model on Russian. I use run_language_modeling.py with some changes in dataset creating. If i run training with fp32 (without --fp flag) and batch_size=2, I will ok. But if i run training with --fp16 flag, i will obtain cuda out of memory error. I tried to decdeare batch size to 1, however, I obtained same error.
08-27-2020 07:04:33
08-27-2020 07:04:33
Same issue when running distilbart<|||||>I face the same issue with PEGASUS<|||||>I've run into this with several seq2seq models (Pegasus, BART, T5). Ironically, running with fp16 causes increased GPU memory usage.<|||||>I solved my issue by downgrading Pytorch to 1.5.1 and installing Nvidia/Apex. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,762
closed
Checklist for Model Hub
# 🚀 Feature request I want to propose a checklist for the model hub. The model hub is increasingly being used by `transformers` users. It is imperative that users know exactly what model they are downloading and importing, for transparency and research-related reasons. It should be mandatory for users to upload a `README.md` when uploading a model to the model hub. Additionally, it will reduce the need to send a separate PR for model cards. The checklist should contain the following elements: - Details of the training and validation set used. Specify the split if not using the default. - A table with results on the validation set - Model name (can be inferred from `config.json`) - Objective used for training - Hyperparameter configurations if the default ones are not being used - Link to implementation code (if available, and if it differs from the default examples) - How many epochs the model was trained for - Associated paper (if published) These are just the general ones. Happy to hear about other suggestions. This is just a suggestion which would help many users. Thanks for this feature.
08-27-2020 03:12:21
08-27-2020 03:12:21
Yes! This is something we'll build, step by step. I'll post more about our roadmap for this in the coming weeks.<|||||>> Yes! This is something we'll build, step by step. I'll post more about our roadmap for this in the coming weeks. Maybe you can create a web-based interface with input fields/drop downs that can generate the model card automatically. Just a thought. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>posting so that the bot doesn't close this issue.<|||||>@prajjwal1 Have you checked out our new model storage/model versioning features on huggingface.co? You can also now edit your model card directly from the website (or from `git`) and we'll make this workflow more prominent (vs. adding model cards to `transformers`) in the coming days/weeks. Feedback welcome, here or on the Forum.<|||||>No, I haven't. I am aware of it. I will check out soon. Loving the direction where HF is headed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
6,761
closed
s2s distillation uses AutoModelForSeq2SeqLM
Hard-coding BART caused the subsequent AutoTokenizer call to fail for Marian/mBART distillation.
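A minimal sketch of the pattern this moves to (illustrative; the Marian checkpoint name is just an example):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Resolving the class from the checkpoint's config means the same code path
# works for BART, Marian and mBART teachers/students alike.
model_name = "Helsinki-NLP/opus-mt-en-ro"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```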
08-27-2020 03:11:53
08-27-2020 03:11:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=h1) Report > Merging [#6761](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05e7150a53cc6c1571c0e3acb1b4d692737976d9?el=desc) will **decrease** coverage by `0.24%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6761/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6761 +/- ## ========================================== - Coverage 79.70% 79.45% -0.25% ========================================== Files 157 157 Lines 28479 28479 ========================================== - Hits 22698 22627 -71 - Misses 5781 5852 +71 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-11.37%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=footer). Last update [05e7150...88974cc](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,760
closed
Question: Differentiable Search in generate
Hi, this might be a very dumb question to ask, but is there a way to have differentiable search and token picking in the "generate" method for models like T5 and GPT-2? If not, is there an example of using the raw real-valued model output in a task, or an article giving some intuition on what the real-valued output means? I'd really appreciate your help.
08-27-2020 01:54:33
08-27-2020 01:54:33
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,759
closed
added model card for flexudys t5 model
- Created a model card for a fine-tuned model.
08-26-2020 22:43:59
08-26-2020 22:43:59
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=h1) Report > Merging [#6759](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/434936f34a8b3154a79564c87f4cb50f5d57e050?el=desc) will **increase** coverage by `0.03%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6759/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6759 +/- ## ========================================== + Coverage 79.47% 79.51% +0.03% ========================================== Files 157 157 Lines 28479 28479 ========================================== + Hits 22635 22644 +9 + Misses 5844 5835 -9 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=footer). Last update [434936f...fd0b38d](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>really cool model card, thanks for sharing<|||||>File path was incorrect but I fixed it in d822ab636b6a14ed50f7bca0797c1de42c19de61. Thanks!<|||||>@julien-c thanks
transformers
6,758
closed
Bart can make decoder_input_ids from labels
Mimics a nice T5 feature.
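A quick sketch of what this enables (illustrative usage, assuming the `facebook/bart-base` checkpoint): pass `labels` only and let the model derive `decoder_input_ids` by shifting them right, as T5 already does.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

batch = tokenizer(["A long article to summarize."], return_tensors="pt")
labels = tokenizer(["A short summary."], return_tensors="pt").input_ids

# no decoder_input_ids needed: the model shifts `labels` right internally
outputs = model(input_ids=batch.input_ids, attention_mask=batch.attention_mask, labels=labels)
print(outputs[0])  # the LM loss
```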
08-26-2020 22:14:00
08-26-2020 22:14:00
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=h1) Report > Merging [#6758](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6758/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6758 +/- ## ========================================== - Coverage 78.96% 78.96% -0.01% ========================================== Files 157 157 Lines 28486 28488 +2 ========================================== + Hits 22495 22496 +1 - Misses 5991 5992 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6758/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.57% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6758/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=footer). Last update [a75c64d...06e7cc5](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,757
closed
Seq2Seq tokenization and training fixes
08-26-2020 22:06:48
08-26-2020 22:06:48
transformers
6,756
closed
Fix run_squad.py to work with BART
The BART model should be among the set of models that don't use `token_type_ids`.
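Roughly the guard being extended (a paraphrase of the pattern in `run_squad.py`, not the exact diff):

```python
# Models without segment embeddings must not receive token_type_ids.
NO_TOKEN_TYPE_IDS = {"xlm", "roberta", "distilbert", "camembert", "bart"}

def build_inputs(model_type, input_ids, attention_mask, token_type_ids):
    inputs = {"input_ids": input_ids, "attention_mask": attention_mask}
    if model_type not in NO_TOKEN_TYPE_IDS:
        inputs["token_type_ids"] = token_type_ids
    return inputs

print(build_inputs("bart", [0, 31414, 2], [1, 1, 1], [0, 0, 0]))
```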
08-26-2020 22:03:32
08-26-2020 22:03:32
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=h1) Report > Merging [#6756](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/61b9ed80742f564dc522783a33bf001d6d871a2c?el=desc) will **increase** coverage by `0.16%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6756/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6756 +/- ## ========================================== + Coverage 79.65% 79.82% +0.16% ========================================== Files 157 157 Lines 28479 28479 ========================================== + Hits 22686 22734 +48 + Misses 5793 5745 -48 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+1.42%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+3.00%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+7.18%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=footer). Last update [61b9ed8...7ba95ec](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,755
closed
Model Card for Multilingual Passage Reranking BERT
08-26-2020 21:32:49
08-26-2020 21:32:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=h1) Report > Merging [#6755](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/61b9ed80742f564dc522783a33bf001d6d871a2c?el=desc) will **decrease** coverage by `0.23%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6755/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6755 +/- ## ========================================== - Coverage 79.65% 79.41% -0.24% ========================================== Files 157 157 Lines 28479 28479 ========================================== - Hits 22686 22618 -68 - Misses 5793 5861 +68 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=footer). Last update [61b9ed8...9ff6c2f](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>That is an awesome model card, thanks for sharing. ➡️ **[amberoad/bert-multilingual-passage-reranking-msmarco](https://huggingface.co/amberoad/bert-multilingual-passage-reranking-msmarco)**
transformers
6,754
closed
add __init__.py to utils
Adds `__init__.py` to the `utils` dir introduced in #6434, which caused namespace issues that broke imports in some cases. Mentioned in #6752. To reproduce, pip install from source without the editable `-e` flag. ``` >>> from transformers import pipeline ... ~/.virtualenvs/transformers/lib/python3.7/site-packages/transformers/file_utils.py in <module> ---> 32 from .utils import logging ModuleNotFoundError: No module named 'transformers.utils' ```
08-26-2020 21:08:10
08-26-2020 21:08:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=h1) Report > Merging [#6754](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/99407f9d1ece38d62a257fa8c65c3a2e114164e6?el=desc) will **increase** coverage by `0.42%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6754/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6754 +/- ## ========================================== + Coverage 79.02% 79.45% +0.42% ========================================== Files 157 157 Lines 28479 28479 ========================================== + Hits 22505 22627 +122 + Misses 5974 5852 -122 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6754/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6754/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6754/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6754/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=footer). Last update [99407f9...c6e6400](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This is great! I can confirm this fixes it. It would be good to merge it fast as the current master fails.
transformers
6,753
closed
Removing memory/deleting a model: how to properly do this
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: - Python version: 3.6.7 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): T5-large, T5-3b, bert-base-uncased The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Load a model 2. Try to remove it via `del`, clear GPU memory and cache <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` import torch from transformers import AutoTokenizer, AutoModelWithLMHead model = AutoModelWithLMHead.from_pretrained("t5-large") # same behavior for `bert-base-uncased`, larger T5 models.. model = model.cuda() model = model.train() ## delete model del model torch._C._cuda_emptyCache() ## alternatively # with torch.cuda.device("cuda:0"): # ...: torch.cuda.empty_cache() ## (as per the discussion here: https://discuss.pytorch.org/t/how-to-debug-causes-of-gpu-memory-leaks/6741/3, seeing all the hanging tensors) import gc for obj in gc.get_objects(): try: if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)): print(type(obj), obj.size()) except: pass ``` ## Expected behavior I would expect this to clear the GPU memory, though the tensors still seem to linger (fuller context: In a larger Pytorch-Lightning script, I'm simply trying to re-load the best model after training (and exiting the pl.Trainer) to run a final evaluation; behavior seems the same as in this simple example (ultimately I run out of memory when loading the best model because the model is the absolutely massive T5-3b).). <!-- A clear and concise description of what you would expect to happen. -->
08-26-2020 19:07:25
08-26-2020 19:07:25
I encountered similar problems freeing GPU memory while implementing the benchmark tools. A trick that worked for me was to wrap the function in a subprocess. Maybe you can take a look at this implementation and change your code accordingly so that the model is run in a subprocess: https://github.com/huggingface/transformers/blob/3726754a6c646adcf9cb2135ab7f72dffe074473/src/transformers/benchmark/benchmark_utils.py#L64<|||||>Thanks for getting back! After investigating a bit further, my particular problems seem to be partly related to PyTorch-Lightning (specifically, related to not properly detaching tensors in some of the eval code), but this general bit of advice is good since this seems to be a more general problem that I've seen in other contexts (like you mentioned). I will look more closely at running a subprocess. As a terrible hack (which probably shouldn't be repeated), I found that converting all models/tensors/training params to CPU, then deleting them and running manual garbage collection, fixed my issue. <|||||>> > > I encountered similar problems freeing GPU memory while implementing the benchmark tools. A trick that worked for me was to wrap the function in a subprocess. Maybe you can take a look at this implementation and change your code accordingly so that the model is run in a subprocess: > > https://github.com/huggingface/transformers/blob/3726754a6c646adcf9cb2135ab7f72dffe074473/src/transformers/benchmark/benchmark_utils.py#L64 @patrickvonplaten have you run into the following error using this method? ``` Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ``` Tried setting the context as follows, with no success: ```python import multiprocessing as mp mp.set_start_method('spawn') ```<|||||>Met the same problem, any update?
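For illustration, a minimal sketch (not from this thread; the model name and prompt are stand-ins for the larger checkpoints discussed above) of running inference in a spawned subprocess so that all GPU memory is returned once the child process exits:

```python
import torch
import torch.multiprocessing as mp
from transformers import AutoModelWithLMHead, AutoTokenizer


def run_model(queue, prompt):
    # Everything CUDA-related lives only inside this child process.
    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelWithLMHead.from_pretrained("t5-small").cuda().eval()
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    with torch.no_grad():
        output_ids = model.generate(input_ids)
    # Move results to CPU before handing them back to the parent process.
    queue.put(output_ids.cpu())


if __name__ == "__main__":
    # 'spawn' is required because CUDA cannot be re-initialized in a forked child.
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    p = ctx.Process(target=run_model, args=(queue, "translate English to German: Hello"))
    p.start()
    result = queue.get()
    p.join()  # once the child exits, its GPU memory is released
    print(result.shape)
```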
transformers
6,752
closed
Unable to install Transformers Master version
I want to run the Pegasus model, which requires the master version of transformers, but I am unable to install it. I tried installing with pip, but it always installs version 3.0.2.
08-26-2020 19:06:46
08-26-2020 19:06:46
The problem with the current `master` is that I have the following error `ModuleNotFoundError: No module named 'transformers.utils'` `version 3.0.2` does not include Pegasus. Can anyone suggest to us the latest stable version of master (not release `version 3.0.2`)? So we will be able to run the Pegasus Model.<|||||>`pip install -U git+https://github.com/huggingface/transformers.git`<|||||>I actually think this is a breaking change which @joeddav seems to have fixed in his PR. It is due to a change 6 hours ago which makes the import of the utils [here](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py) fail!<|||||>K, import issue is fixed by #6754 and installs from master as @patil-suraj mentioned should be good now.<|||||>@patil-suraj @joeddav Is there some other way to install master version of transformers? I tried using the URL to install the master version but it installed v3.0.2<|||||>@AnkitVarshney02 1. Since master is not tagged as a release it will still register as `3.02` in your environment when you've installed from master. 2. If you already have transformers installed in your env make sure you're also passing `--upgrade` `pip install --upgrade git+https://github.com/huggingface/transformers.git` <|||||>@joeddav I tried using the URL to install the master version but it again installed v3.0.2. Not sure what am I missing here. Please see the terminal output below: `pip install --upgrade git+https://github.com/huggingface/transformers.git Collecting git+https://github.com/huggingface/transformers.git Cloning https://github.com/huggingface/transformers.git to c:\users\varshney ankit\appdata\local\temp\pip-req-build-iddyb2c1 Running command git clone -q https://github.com/huggingface/transformers.git 'C:\Users\varshney ankit\AppData\Local\Temp\pip-req-build-iddyb2c1' Requirement already satisfied, skipping upgrade: numpy in c:\programdata\anaconda3\lib\site-packages (from transformers==3.0.2) (1.18.5) Collecting tokenizers==0.8.1.rc2 Using cached tokenizers-0.8.1rc2-cp38-cp38-win_amd64.whl (1.9 MB) Requirement already satisfied, skipping upgrade: packaging in c:\users\varshney ankit\appdata\roaming\python\python38\site-packages (from transformers==3.0.2) (20.4) Requirement already satisfied, skipping upgrade: filelock in c:\programdata\anaconda3\lib\site-packages (from transformers==3.0.2) (3.0.12) Requirement already satisfied, skipping upgrade: requests in c:\programdata\anaconda3\lib\site-packages (from transformers==3.0.2) (2.24.0) Requirement already satisfied, skipping upgrade: tqdm>=4.27 in c:\programdata\anaconda3\lib\site-packages (from transformers==3.0.2) (4.47.0) Requirement already satisfied, skipping upgrade: regex!=2019.12.17 in c:\programdata\anaconda3\lib\site-packages (from transformers==3.0.2) (2020.6.8) Collecting sentencepiece!=0.1.92 Using cached sentencepiece-0.1.91-cp38-cp38-win_amd64.whl (1.2 MB) Collecting sacremoses Using cached sacremoses-0.0.43.tar.gz (883 kB) Requirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in c:\users\varshney ankit\appdata\roaming\python\python38\site-packages (from packaging->transformers==3.0.2) (2.4.7) Requirement already satisfied, skipping upgrade: six in c:\users\varshney ankit\appdata\roaming\python\python38\site-packages (from packaging->transformers==3.0.2) (1.15.0) Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in c:\programdata\anaconda3\lib\site-packages (from requests->transformers==3.0.2) (2020.6.20) Requirement already satisfied, skipping 
upgrade: idna<3,>=2.5 in c:\programdata\anaconda3\lib\site-packages (from requests->transformers==3.0.2) (2.10) Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in c:\programdata\anaconda3\lib\site-packages (from requests->transformers==3.0.2) (3.0.4) Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\programdata\anaconda3\lib\site-packages (from requests->transformers==3.0.2) (1.25.9) Requirement already satisfied, skipping upgrade: click in c:\programdata\anaconda3\lib\site-packages (from sacremoses->transformers==3.0.2) (7.1.2) Requirement already satisfied, skipping upgrade: joblib in c:\programdata\anaconda3\lib\site-packages (from sacremoses->transformers==3.0.2) (0.16.0) Building wheels for collected packages: transformers, sacremoses Building wheel for transformers (setup.py) ... done Created wheel for transformers: filename=transformers-3.0.2-py3-none-any.whl size=886632 sha256=fde9ef47b87c3c42f0dc98920877a9cb6a2446395dce5e03eb3a6e3802d73f06 Stored in directory: C:\Users\varshney ankit\AppData\Local\Temp\pip-ephem-wheel-cache-dptow2tc\wheels\05\0a\97\64ae47c27ba95fae2cb5838e7b4b7247a34d4a8ba5f7092de2 Building wheel for sacremoses (setup.py) ... done Created wheel for sacremoses: filename=sacremoses-0.0.43-py3-none-any.whl size=893262 sha256=d9c55c4f55923ebf6ffba1f0a27a9034af0eebfb76a5dc6475c1de1a4e977abd Stored in directory: c:\users\varshney ankit\appdata\local\pip\cache\wheels\7b\78\f4\27d43a65043e1b75dbddaa421b573eddc67e712be4b1c80677 Successfully built transformers sacremoses Installing collected packages: tokenizers, sentencepiece, sacremoses, transformers Successfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.8.1rc2 transformers-3.0.2`<|||||>Master isn't tagged with its own release, so it will actually still show as `3.02` right now even if you've installed from master correctly. Did you try importing Pegasus after the above?<|||||>> Master isn't tagged with its own release, so it will actually still show as `3.02` right now even if you've installed from master correctly. Did you try importing Pegasus after the above? Thanks @joeddav ! It is working!!
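As a quick sanity check (a sketch, not taken from the thread), an install from master can be verified by importing the Pegasus classes, which are absent from the 3.0.2 release even though the reported version string stays the same:

```python
import transformers

# Master is not tagged as a separate release, so this may still print "3.0.2".
print(transformers.__version__)

# These imports only succeed on an install from master (Pegasus is not in 3.0.2).
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

print(PegasusForConditionalGeneration.__name__, PegasusTokenizer.__name__)
```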
transformers
6,751
closed
Add Language Agnostic Bert Sentence Embedding
# 🌟 New model addition ## Model description Google released a new model, LaBSE, a language-agnostic BERT sentence embedding model that supports 109 languages. https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html?utm_source=Deep+Learning+Weekly&utm_campaign=cdb8afedb7-EMAIL_CAMPAIGN_2019_04_24_03_18_COPY_01&utm_medium=email&utm_term=0_384567b42d-cdb8afedb7-72940109 The model is also hosted on TF Hub: https://tfhub.dev/google/LaBSE/1 <!-- Important information --> ## Open source status * [ ] the model implementation is available: (https://arxiv.org/abs/2007.01852) * [ ] the model weights are available: https://tfhub.dev/google/LaBSE/1 * [ ] who are the authors: (mention them, if possible by @gh-username)
08-26-2020 18:52:51
08-26-2020 18:52:51
Any update on this if this will be taken up by the team?<|||||>I think this is already done right? seen that we have this https://github.com/huggingface/transformers/tree/master/model_cards/pvl/labse_bert<|||||>The model uploaded by @pvl mentioned by @aalloul performs the wrong pooling, i.e., embeddings produced by that model are NOT the same as the embeddings from the TFHub version. I uploaded the model with the right pooling here: https://huggingface.co/sentence-transformers/LaBSE It tested it against the TF Hub version and it produces similar embeddings (small epsilon difference due to different padding variations). On down stream tasks, the TFHub and the HF Pytorch version achieve the same performance.<|||||>For LABSE model can we use FAISS to build parallel corpus. If not what is the best best algo to that. LABSE paper was suggesting ANN. Where can I find its implementation for building parallel corpus.<|||||>Have a look here for some examples with ANN, including FAISS https://github.com/UKPLab/sentence-transformers/tree/master/examples/applications Personally I prefer hnswlib over Faiss: https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic_search_quora_hnswlib.py It is nicer and easier to use, better documented and offer certain features that are missing in Faiss. FAISS added later hnswlib as index structure, as it is also faster than the other index types FAISS were offering. Yes, you can use LaBSE with ANN. <|||||>@nreimers Which ANN LaBSE paper was recommending?<|||||>@aj7tesh I think they are not mentioning which ANN is used. As it is a google paper, I could imagine they use SCANN: https://github.com/google-research/google-research/tree/master/scann A good source for comparison is: http://ann-benchmarks.com/index.html Here, HNSWLib performs really well: https://github.com/nmslib/hnswlib<|||||>@nreimers thank you. Surely I will explore above comparisons. Will see which one is helping me more to generate parallel corpus.<|||||>32GB should be more than fine. Issues can be: - too long sequences. Try to use max_length in the tokenizer - Too large batch sizes. Try to use a smaller batch size. <|||||>basically # in my case i have lets say more than 2k sentences in array # its passing the encoded_input step, however its going OOM in model_output. Usually on my machine its works fine for upto 10k sentences when using LASER, however for LABSE its failing after 150 only encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt') with torch.no_grad(): model_output = model(**encoded_input, return_dict=True)<|||||>@nreimers its working fine when I used below method from sentence_transformers import SentenceTransformer not exaclty sure the reason(hope model weights are similar to tf model) I have a questions, does these Labse and Laser kind of multilingual model works on language which is not related to major languages on which these models are trained? I believe for zero shot learning the language should have some similarity to other major languages. <|||||>sentence transformers performs batching of your data. If you pass 10k sentences, it splits it into batches of e.g. 32 and encodes them. So that you don't run out of memory. Embeddings are nearly identical to those of Tensorflow. I tested both models on Tatoeba test set on 100 languages, and Pytorch and TF Version perform equally. Depends on the language. If the language is really similar to another language that was used for training, then it works. 
If it uses a different vocab or is really different, then it doesn't work.<|||||>> Have a look here for some examples with ANN, including FAISS > https://github.com/UKPLab/sentence-transformers/tree/master/examples/applications > > Personally I prefer hnswlib over Faiss: > https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic_search_quora_hnswlib.py > > It is nicer and easier to use, better documented and offer certain features that are missing in Faiss. FAISS added later hnswlib as index structure, as it is also faster than the other index types FAISS were offering. > > Yes, you can use LaBSE with ANN. I was going through this HNSW implementation. At some places it was written that ANN is not perfect and then HNSWs results were compared against util.semantic search, which again was executing cosine similarity for bitext mining. What is the reason of performing this step. I understand this thread is not the right place to ask this question, kindly suggest some other thread or place for such queries<|||||>Approximate Nearest Neighbor only returns approximately the 10 nearest neighbor. It can be that it misses points that are closer. This is expressed as recall. In the examples, it compares ANN against an exact nearest neighbor to see if there might be an issue with the index construction. For large datasets, exact search is too slow, so you have to live with it that ANN does not find perfectly all nearest neighbors.<|||||>so for corpus building where I expect the sentence with highest cosine similarity be the translation pair for corresponding source sentence. I will have to go for exact matches using something like util.semantic_search or scipy spatial distance<|||||>If the corpora are not too large, yes, you can use exact matches. But these methods have quadratic runtime, i.e., when you have 10 Millions of sentences, searching will take a long time. If your corpus is smaller, you can use exact search.<|||||>Got it @nreimers thanks<|||||>> The model uploaded by @pvl mentioned by @aalloul performs the wrong pooling, i.e., embeddings produced by that model are NOT the same as the embeddings from the TFHub version. > > I uploaded the model with the right pooling here: > https://huggingface.co/sentence-transformers/LaBSE > > It tested it against the TF Hub version and it produces similar embeddings (small epsilon difference due to different padding variations). On down stream tasks, the TFHub and the HF Pytorch version achieve the same performance. @nreimers I'm guessing there should be only 1 implementation of LaBSE and people might get confused with which one to use. How should we go about this?<|||||>Coming late to this thread, we also uploaded a pytorch and TF compatible versions of the LaBSE model here - https://huggingface.co/rasa/LaBSE . This will also be available inside Rasa Open Source very soon. I do agree with @aalloul about the confusion this can create. Looking for thoughts from folks on this.<|||||>@dakshvar22 did you run any comparisons with the official model?<|||||>> I'm guessing there should be only 1 implementation of LaBSE and people might get confused with which one to use. How should we go about this? We could imagine building a curation system built on top of (e.g.) a combination of downloads and an explicit marker like a "Star" button, but I don't want to overfit too much to the first few examples – given that this use case is still not super frequent. 
Happy to hear anyone's thoughts on this<|||||>@aalloul I cross-checked the embeddings from the TFhub version and the transformers compatible versions we uploaded and they are almost identical. This was on a corpus of around 50k sentences across 5 different languages. Please feel free to test them out on the Tatoeba dataset on all 100 languages. I might not be able to do that myself right now.<|||||>@aalloul @dakshvar22 I tested https://huggingface.co/sentence-transformers/LaBSE on the Tatoeba dataset on all 100+ languages and the performances were comparable to the TFHub model (+/- 0.1 accuracy for some languages due to different padding and numerical stability in pytorch vs. tensorflow)<|||||>ah nice, thanks @nreimers for letting us know! I'll have a look at it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Late to this thread (and noticed it existed after publishing the model), but I ported the model from TF Hub and uploaded it here: https://huggingface.co/setu4993/LaBSE Additionally, my code to port it, alongside tests that verify the embeddings generated by the source TF Hub model and the ported PyTorch model (uploaded above) are in my repo: https://github.com/setu4993/convert-labse-tf-pt Should be easy to extend it / add other tests and verify the embeddings match, if someone is interested. I haven't run tests on downstream performance, though.
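For reference, a minimal sketch (assuming the `sentence-transformers` package and the `sentence-transformers/LaBSE` checkpoint discussed above) of encoding in batches and scoring translation candidates with exact cosine similarity:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

source = ["How old are you?", "The weather is nice today."]
targets = ["Wie alt bist du?", "Das Wetter ist heute schön.", "Ich mag Pizza."]

# encode() batches internally, which avoids the OOM seen when pushing thousands
# of sentences through the model in a single forward pass.
src_emb = model.encode(source, convert_to_tensor=True)
tgt_emb = model.encode(targets, convert_to_tensor=True)

# Exact cosine-similarity search: fine for small corpora; for millions of
# sentences an ANN index (e.g. hnswlib) would be used instead.
scores = util.pytorch_cos_sim(src_emb, tgt_emb)
for i, j in enumerate(scores.argmax(dim=1)):
    print(source[i], "->", targets[int(j)])
```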
transformers
6,750
closed
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
## System Info Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Tokenizers: 0.8.1rc1 Python: 3.7.6 Pretrained Model: GPT2 Pretrained Tokenizer: GPT2 ## Question I'm getting the following error when I try to evaluate a model while it's training: ```python wandb: Waiting for W&B process to finish, PID 242876 Traceback (most recent call last): File "run_finetune_gpt2.py", line 163, in <module> main() File "run_finetune_gpt2.py", line 150, in main trainer.train() File "/path/to/venvs/my_venv/lib/python3.6/site-packages/transformers/trainer.py", line 537, in train self.evaluate() File "/path/to/venvs/my_venv/lib/python3.6/site-packages/transformers/trainer.py", line 745, in evaluate output = self._prediction_loop(eval_dataloader, description="Evaluation") File "/path/to/venvs/my_venv/lib/python3.6/site-packages/transformers/trainer.py", line 838, in _prediction_loop preds = torch.cat((preds, logits.detach()), dim=0) RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated ``` I printed out the `preds` and `logits` shapes inside `trainer.py`: ```python preds.shape = torch.Size([]) logits.shape = torch.Size([]) ``` I can't recall exactly how my `TrainingArguments` object was set up but I think it was something like this: ```python training_args = TrainingArguments( output_dir=output_dir, do_train=True, evaluate_during_training=True, eval_steps=5, logging_dir=logging_dir, per_device_train_batch_size=32, num_train_epochs=1, save_steps=100) trainer = Trainer( model=model, args=training_args, train_dataset=sd_dataset_train, eval_dataset=sd_dataset_test, data_collator=sd_data_collator) ``` I was using `GPT2LMHeadModel`. I tried to put together a code sample but couldn't reproduce the error. Has anyone run into this before when trying to evaluate while training?
08-26-2020 18:44:42
08-26-2020 18:44:42
## System Info Debian 10 Pytorch: 1.6.0 Transformers: 3.0.2 Python: 3.7.8 Pretrained Model: AlbertPreTrainedModel (albert-base-v2) Pretrained Tokenizer: AlbertTokenizer (albert-base-v2) ## Question I'm getting the same error, but when trying to evaluate the model after training. ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-39-7c7016f6f03e> in <module> ----> 1 res = trainer.evaluate(val_dataset) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in evaluate(self, eval_dataset) 743 eval_dataloader = self.get_eval_dataloader(eval_dataset) 744 --> 745 output = self._prediction_loop(eval_dataloader, description="Evaluation") 746 747 self._log(output.metrics) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _prediction_loop(self, dataloader, description, prediction_loss_only) 834 preds = logits.detach() 835 else: --> 836 preds = torch.cat((preds, logits.detach()), dim=0) 837 if inputs.get("labels") is not None: 838 if label_ids is None: RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated ``` The values of `preds` and logits at this point are: ``` ipdb> preds tensor(0.4661, device='cuda:0') ipdb> logits tensor(0.4578, device='cuda:0') ``` Replacing `torch.cat` with `torch.stack` seemed to do the job, is there a reason for using `torch.cat` here? ``` ipdb> torch.stack((preds, logits.detach()), dim=0) tensor([0.4661, 0.4578], device='cuda:0') ``` These are my training arguments and trainer: ```python training_args = TrainingArguments( output_dir='./results', num_train_epochs=1, per_device_train_batch_size=16, per_device_eval_batch_size=64, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', logging_steps=10, ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=train_dataset, compute_metrics=compute_metrics, ) ```<|||||>Same issue<|||||>I am getting the same issue. CC: @sgugger<|||||>Same issue!<|||||>It's not that we don't want to fix that issue but no one as given us a reproducer and it was linked to a version of transformers that is now quite old. So please do let us know if the error persists on v3.5.1 (after upgrading transformers) and on which script.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Found the same issue on `transformers==3.4.0`. But after upgrading to `transformers==4.2.2` the problem fixed. FYI.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>Facing the same issue with > 4.2, now the issue is File "***/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 48, in torch_pad_and_concatenate if len(tensor1.shape) == 1 or tensor1.shape[1] == tensor2.shape[1]: IndexError: tuple index out of range
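The error itself is easy to reproduce outside the Trainer (a standalone sketch, independent of any model): `torch.cat` needs at least one dimension, so zero-dimensional tensors have to be stacked or unsqueezed first:

```python
import torch

preds = torch.tensor(0.4661)   # zero-dimensional, like a scalar loss/metric
logits = torch.tensor(0.4578)

# torch.cat((preds, logits), dim=0)  # RuntimeError: zero-dimensional tensor ... cannot be concatenated
print(torch.stack((preds, logits), dim=0))                          # tensor([0.4661, 0.4578])
print(torch.cat((preds.unsqueeze(0), logits.unsqueeze(0)), dim=0))  # tensor([0.4661, 0.4578])
```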
transformers
6,749
closed
RuntimeError: grad can be implicitly created only for scalar outputs
## System Info Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Tokenizers: 0.8.1rc1 Python: 3.7.6 Pretrained Model: GPT2 Pretrained Tokenizer: GPT2 ## Question I'm getting the following error and I'm not sure how to resolve it: ```python Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Epoch: 0%| | 0/1 [00:00<?, ?it/s] Iteration: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last): File "/path/to/misc_tests.py", line 78, in <module> trainer.train() File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 637, in _training_step loss.backward() File "/path/to/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 94, in backward grad_tensors = _make_grads(tensors, grad_tensors) File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 35, in _make_grads raise RuntimeError("grad can be implicitly created only for scalar outputs") RuntimeError: grad can be implicitly created only for scalar outputs ``` Here's some sample code: ```python from transformers import Trainer, TrainingArguments, GPT2LMHeadModel, GPT2Tokenizer import torch from torch.utils.data import Dataset class SDAbstractsDataset(Dataset): def __init__(self, set_type_str): if set_type_str == 'train': prompt1 = 'We present an update on the results of the Double Chooz experiment. Double Chooz searches for the neutrino mixing angle, θ13, in the three-neutrino mixing matrix via the disappearance of produced by the dual 4.27 GW/th Chooz B Reactors. Here we discuss updated oscillation fit results using both the rate and the shape of the anti-neutrino energy spectrum. In the most recent oscillation analysis we included data with neutron captures on Gadolinium and Hydrogen along with the reactor off data that we collected. This is an important step in our multi-year program to establish the value of θ13.' prompt2 = 'The paper covers detailed discussion on novel control system developed for adaptive fluid-based shock-absorbers serving for mitigation of unknown impact excitations. In order to provide complete independence of the control system from the loading conditions, the Hybrid Prediction Control (HPC) was elaborated. The proposed method is an extension of previously introduced kinematic feedback control which ensures optimal path finding, tracking and path update in case of high disturbance or sudden change of loading conditions. Implementation of the presented control system allows to obtain self-adaptive fluid-based absorbers providing robust impact mitigation. In contrast to previously developed methods of Adaptive Impact Absorption, the proposed control strategy does not require prior knowledge of impact excitation or its preliminary identification. 
The independence of applied control system from parameters of impact loading results in the capability of automatic path correction in the case of disturbance occurrence and re-adaptation to a number of subsequent impacts. The successful operation of the self-adaptive system is investigated with the use of numerical examples involving double-chamber pneumatic shock-absorber equipped with controllable valve. Efficiency of the HPC is proved by comparison with passive absorber as well as device equipped with adaptive and optimal control modules.' prompt3 = 'This study aimed to produce biosurfactant from Pseudozyma tsukubaensis using cassava wastewater and an inoculum (biomass) for galactooligosaccharides synthesis from lactose as an integrated system. First, the use of cassava wastewater as a low cost culture medium by P. tsukubaensis to produce biomass and biosurfactant was evaluated and optimized. Then, the microbial cells (biomass) obtained from the optimized process were used to produce galactooligosaccharides from lactose. The optimum conditions for biosurfactant and biomass synthesis were found to be 80% (v/v) of cassava wastewater at 30°C and 200rpm for 48h. The highest concentration of biosurfactant, that is, minimum surface tension value and maximum biomass concentration predicted were experimentally confirmed as 26.87mN/m and 10.5g/L, respectively. The biosurfactant obtained showed good thermal (121°C/1h), pH (2–11) and ionic strength (0–25% NaCl) stability. Excellent emulsifier activity was also verified, suggesting a potential application in enhanced oil recovery. Galactooligosaccharides synthesized by the Kluyveromyces genus have been extensively investigated, however, few studies have reported transgalactosylation ability by other yeast genera. The transgalactosylation activity of the yeast biomass at optimized conditions from 40% (w/w) lactose resulted in galactooligosaccharides production of 73.12g/L and a yield of 18.28% (w/w) at pH 8.0 and 30°C in 24h. This research showed the technical feasibility of an integrated process: biosurfactant and GOS production from P. tsukubaensis, which takes advantage of the remarkable metabolism of this microorganism. To the best of our knowledge, this is the first study reporting the potential of P. tsukubaensis to produce two economical biotechnological products of increase interest as an integrated process.' prompt4 = 'Advantages of a fuzzy predictive control algorithm are discussed in the paper. The fuzzy predictive algorithm is a combination of a DMC (Dynamic Matrix Control) algorithm and Takagi–Sugeno fuzzy modeling, thus it inherits advantages of both techniques. The algorithm is numerically effective. It is in fact generalization of the standard DMC algorithm widely used in the industry, thus the existing implementations of the DMC algorithm can be extended using the presented fuzzy approach. A simple and easy to apply method of fuzzy predictive control algorithms synthesis is presented in the paper. It can be easy applied also in the case of Multiple Input Multiple Output (MIMO) control plants. Moreover, information about measured disturbance can be included in the algorithms in an easy way. The advantages of the fuzzy predictive control algorithm are demonstrated in the example control systems of two nonlinear chemical reactors: the first one—with inverse response and the second one—a MIMO plant with time delay.' prompt5 = 'BackgroundBack injury is a common place in our society. 
Up to two-thirds of back injuries have been associated with trunk rotation. However, the torque production ability with a rotated spine and electromyographic activity of trunk muscles in such efforts is poorly understood. Therefore, the objectives of this study are to study torque production capacity of variously rotated and flexed trunk and to measure the EMG of selected trunk muscles in these activities.MethodsNineteen normal young subjects (7 males and 12 females) were recruited. Subjects were stabilized on a posture-stabilizing platform and were instructed to assume a flexed and right rotated posture (20°, 40° and 60° of rotation and 20°, 40° and 60° of flexion) in a random order. The subjects were asked to exert their maximal voluntary contraction in the asymmetric plane of rotation–extension for a period of 5s. The surface EMG of the external and internal obliques, rectus abdominis, latissimus dorsi, erector spinae at the 10th thoracic and 3rd lumbar vertebral levels was recorded bilaterally along with the torque generated.FindingsWhereas the torque generated was significantly affected by both rotation and extension in both genders (P<0.001), the EMG was independent of rotation but affected by flexion in females only (P<0.01). The torques produced by both genders in each of the nine postures was significantly different from each other (P<0.001). The EMG demonstrated a trend of increase with increasing rotation and flexion. The response surfaces of normalized peak EMG of the right external oblique and internal oblique was somewhat similar, indicating a rotator torque and a stabilizing effect. The left latissimus dorsi and right external oblique provided the rotational torque and the right erector spinae provided the extensor effort. Since the rotation–extension was performed in the plane of asymmetry, the effort required the recruitment of muscles involved in left rotation, stability of rotated spine and an extensor effort.InterpretationThe torque production capacity of the human trunk is posture dependent and declines with increasing rotation. However, with increasing rotation and flexion, the magnitude of EMG increases. This implies that with increasing asymmetry, it requires more muscle effort (thus tissue stress) to generate less torque. Increasing asymmetry tends to weaken the system and may enhance chances of injury.' prompt6 = 'Orthogonal frequency division multiplexing (OFDM) is a promising candidate for light emitting diode (LED)-based optical wireless communication (OWC); however, precise channel estimation is required for synchronization and equalization. In this work, we study and discover that the channel response of the white-lightLED-based OWC was smooth and stable. Hence we propose and demonstrate using a specific and adaptive arrangement of grid-type pilot scheme to estimate the LED OWC channel response. Experimental results show that our scheme can achieve better transmission performance and with some transmission capacity enhancement when compared with the method using training-symbol scheme (also called block-type pilot scheme).' prompt7 = 'The catalytic activities of three nano-sized nickel catalysts Ni/Y2O3, Ni/La2O3 and Ni/Al2O3, using nickel oxalate as precursor and by impregnation–decomposition–reduced method, have been investigated for the reactions of steam reforming of ethanol at low temperature. Properties of structure and surface of catalysts were tested by XRD, XPS, XES, SEM and BET area. 
The initial reaction kinetics of ethanol over the catalysts was studied by steady-state reaction and a first-order reaction with respect to ethanol was found. It is found that the catalysts Ni/Y2O3 and Ni/La2O3 exhibit relative high activity for ethanol steam reforming at 250∘C with a conversion of ethanol of 81.9% and 80.7%, and a selectivity of hydrogen of 43.1% and 49.5%, respectively. When temperature reached 320∘C, the conversion of ethanol increased to 93.1% and 99.5% and the selectivity of hydrogen was 53.2% and 48.5%, respectively. The catalyst Ni/Al2O3 exhibits relative lower activity for ethanol steam reforming and hydrogen selectivity. However, the three catalysts all have long-term stability for ethanol steam reforming.' prompt8 = 'Exergetic and exergoeconomic analyses are often used to evaluate the performance of energy systems from the thermodynamic and economic points of view. While a conventional exergetic analysis can be used to recognize the sources of inefficiencies, the so-called advanced exergy-based analysis is convenient for identifying the real potential for thermodynamic improvements and the system component interactions by splitting the exergy destruction and the total operating cost within each component into endogenous/exogenous and unavoidable/avoidable parts. In this study for the first time an advanced exergoeconomic analysis is applied to a gas-engine-driven heat pump (GEHP) drying system used in food drying for evaluating its performance along with each component. The advanced exergoeconomic analysis shows that the unavoidable part of the exergy destruction cost rate within the components of the system is lower than the avoidable part. The most important components based on the total avoidable costs are drying ducts, the condenser and the expansion valve. The inefficiencies within the condenser could particularly be improved by structural improvements of the whole system and the remaining system components. Finally, it can be concluded that the internal design changes play a more essential role in determining the cost of each component.' self.data_list = [prompt1, prompt2, prompt3, prompt4, prompt5, prompt6, prompt7, prompt8] else: prompt1 = 'State estimation (SE) is well-established at the transmission system level of the electricity grid, where it has been in use for the last few decades and is a most vital component of energy management systems employed in the monitoring and control centers of electric transmission systems. However, its use for the monitoring and control of power distribution systems (DSs) has not yet been widely implemented because DSs have been majorly passive with uni-directional power flows. This scenario is now changing with the advent of smart grid, which is changing the nature of electric distribution networks by embracing more dispersed generation, demand responsive loads, and measurements devices with different data rates. Thus, the development of distribution system state estimation (DSSE) tool is inevitable for the implementation of protection, optimization, and control techniques, and various other features envisioned by the smart grid concept. Due to the inherent characteristics of DS different from those of transmission systems, transmission system state estimation (TSSE) is not applicable directly to DSs. This paper is an attempt to present the state-of-the-art on DSSE as an enabler function for smart grid features. 
It broadly reviews the development of DSSE, challenges faced by its development, and various DSSE algorithms. Additionally, it identifies some future research lines for DSSE.' prompt2 = 'A solar-assisted absorption heat transformer (SAAHT) is a useful substitute for the conventional equipment to generate low-pressure steam. A 100kW steam generation system with a SAAHT in Langfang (China) is evaluated in this study. Hourly thermodynamic performance, including system efficiency, exergy efficiency, CO2 emission reduction rate and output heat in typical days in four seasons, is discussed. Results show that ambient temperature has a smaller effect on system performance than solar irradiation. In any one of the typical days in spring, summer and autumn, the system presents higher output heat and CO2 emission reduction rate, more stable system efficiency and exergy efficiency than those in winter. Comparative results from two methods show that ratio method has higher system efficiency with solar irradiation below 600W/m2. A hybrid method combining both the degree method and ratio method is adopted to work with the off-design condition, and results show that performance improvement for system is not so obvious as that in solo absorption heat transformer. Among the four typical days, the most obvious improvement occurs in summer with cumulative output heat increasing from 1318kWh to 1343kWh, and the CO2 emission reduction increasing from 296kg to 301kg.' prompt3 = 'The European Commission is encouraging the Cement, Lime and Magnesium Oxide Manufacturing Industries to reutilize collected particulate matter or wastes in the emission control of SO2 with a 100% removal efficiency. Following this directive, three different by-products from the calcination of natural magnesite were selected in order to evaluate their desulfurization capacity. The saturation time, defined as the time for the total neutralization of SO2 was used to determine consumption values at laboratory scale with 100% removal efficiency. The by-product LG-MgO (∼68% MgO) presented the lowest consumption value, with 2.9kg per m3 of SO2, three times the corresponding to the widely used high grade Ca(OH)2. The liquid-to-gas (L/G) ratio was used for comparison to the industry and taking this into account, the final pH range before achieving saturation was 5.1–6.3. The residual solids obtained at the end of the process were mainly composed of unreacted magnesium and calcium compounds and reaction products CaSO4·2H2O and MgSO4·6H2O which can be used as fertilizers. Therefore, the reutilization of these by-products in a wet flue gas desulfurization process is a feasible and sustainable choice that allows extending their life-cycle.' prompt4 = 'The GP Joule Group is starting the biggest ‘green’ hydrogen mobility project in Germany so far, close to the northern border with Denmark. The eFarm project will create a modular hydrogen infrastructure, covering production and processing through to utilisation in a number of hydrogen-powered vehicles.' 
self.data_list = [prompt1, prompt2, prompt3, prompt4] def __len__(self): return len(self.data_list) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() abstract_text = self.data_list[idx] return abstract_text def sd_data_collator(dataset_samples_list): tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) batch = {} batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']]) batch['past'] = None batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']]) batch['position_ids'] = None batch['head_mask'] = None batch['inputs_embeds'] = None batch['labels'] = None batch['use_cache'] = True return batch output_dir = 'TMP_DIR' logging_dir = 'TMP_DIR' training_args = TrainingArguments( output_dir=output_dir, logging_dir=logging_dir, do_train=True, per_device_train_batch_size=2, num_train_epochs=1, ) model = GPT2LMHeadModel.from_pretrained('gpt2') train_dataset = SDAbstractsDataset('train') trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, data_collator=sd_data_collator ) trainer.train() ```
08-26-2020 18:24:37
08-26-2020 18:24:37
When the model receives inputs that include the labels, it's supposed to produce a tuple of (loss, predictions), where the loss is a scalar. The trainer then uses the loss to calculate the gradients. In this case (or at least in my case when I get a similar error) the trainer appears to be trying to use the predictions, not the loss, to calculate the gradient. This appears to be because the model is not receiving the 'labels' as input and so is only producing a one-element tuple of (predictions). You should be able to fix it by passing a value for "labels" in your collator. See for example transformers.DataCollatorForLanguageModeling.<|||||>For me, I am getting the same error because the model I chose does not return a loss even though I pass labels. It's better to check the documentation of the model you are using to see whether its forward() returns a loss or not. This is a snapshot of what BertModel (the model I chose first) forward() returns, which does not include any loss value. ![image](https://user-images.githubusercontent.com/47693507/96410535-e40c9400-1208-11eb-95aa-df4f58928932.png) And this is a snapshot of what BertLMHeadModel (the model I chose later) forward() returns, which does include a loss value. ![image](https://user-images.githubusercontent.com/47693507/96410933-80cf3180-1209-11eb-8ef1-19effe5ea93a.png) <|||||>@ameasure @MojammelHossain Thank you both for your feedback! Checking the GPT2 documentation showed me an example of what I could set the `labels` value to in my collator.
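A hedged sketch of the collator fix described above (for a GPT-2 language-modeling setup; the function name is illustrative): passing `labels` makes `GPT2LMHeadModel` return a scalar loss as the first element of its output, which is what the Trainer needs for `backward()`:

```python
import torch
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token


def lm_data_collator(texts):
    encoded = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    # GPT2LMHeadModel shifts the labels internally, so labels == input_ids is fine;
    # without labels the model returns only logits and no scalar loss.
    labels = encoded["input_ids"].clone()
    labels[encoded["attention_mask"] == 0] = -100  # ignore padding positions in the loss
    return {
        "input_ids": encoded["input_ids"],
        "attention_mask": encoded["attention_mask"],
        "labels": labels,
    }
```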
transformers
6,748
closed
Pin code quality dependencies: black,isort,flake8
Goal: prevent dependency upgrades from putting transformers CI in style violation without any transformers code changing. Tested: all isort 5.4.x versions and all flake8 3.8.x versions work identically well; changing black at all causes bad results. I pinned to the tested ranges.
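For illustration only (the actual pins live in `setup.py`; the black version below is a placeholder, not necessarily the one used in this PR), the tested ranges could be expressed roughly as:

```python
# Hypothetical extras_require entry in setup.py; the real pins are in the repository.
extras_require = {
    "quality": [
        "black==20.8b1",       # placeholder: black must be pinned to one exact, tested release
        "isort>=5.4.2,<5.5",   # all isort 5.4.x releases behaved identically in testing
        "flake8>=3.8.0,<3.9",  # all flake8 3.8.x releases behaved identically in testing
    ]
}
```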
08-26-2020 18:22:10
08-26-2020 18:22:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=h1) Report > Merging [#6748](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **decrease** coverage by `0.18%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6748/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6748 +/- ## ========================================== - Coverage 78.96% 78.78% -0.19% ========================================== Files 157 157 Lines 28486 28486 ========================================== - Hits 22495 22442 -53 - Misses 5991 6044 +53 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+2.28%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+3.25%)` | :arrow_up: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=footer). Last update [a75c64d...9ca2abb](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Someone else did it!
transformers
6,747
closed
Add checkpointing to Ray Tune HPO
Saves checkpoints in Ray Tune HPO, enabling advanced schedulers like PBT.
08-26-2020 16:11:37
08-26-2020 16:11:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=h1) Report > Merging [#6747](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02d09c8fcc6bda2c345c84cec53289abbe7532ac?el=desc) will **decrease** coverage by `0.82%`. > The diff coverage is `8.33%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6747/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6747 +/- ## ========================================== - Coverage 79.01% 78.18% -0.83% ========================================== Files 157 157 Lines 28739 28782 +43 ========================================== - Hits 22707 22503 -204 - Misses 6032 6279 +247 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `59.57% <0.00%> (-4.87%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.57% <7.14%> (+0.45%)` | :arrow_up: | | [src/transformers/integrations.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9pbnRlZ3JhdGlvbnMucHk=) | `31.11% <9.09%> (-34.61%)` | :arrow_down: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.90% <0.00%> (-0.14%)` | :arrow_down: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=footer). Last update [02d09c8...7488b03](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>```diff diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py index 3470a473..acf1503c 100755 --- a/src/transformers/trainer.py +++ b/src/transformers/trainer.py @@ -544,7 +544,8 @@ class Trainer: if trial.should_prune(): raise optuna.TrialPruned() elif self.hp_search_backend == HPSearchBackend.RAY: - self._tune_save_checkpoint() + if self.global_step % self.args.save_steps == 0: + self._tune_save_checkpoint() tune.report(objective=self.objective, **metrics) def _tune_save_checkpoint(self): @@ -911,6 +912,8 @@ class Trainer: # search. _tb_writer = self.tb_writer self.tb_writer = None + _model = self.model + self.model = None # Setup default `resources_per_trial` and `reporter`. if "resources_per_trial" not in kwargs and self.args.n_gpu > 0: n_jobs = int(kwargs.pop("n_jobs", 1)) ``` This allows us to: 1. Not die when tuning BERT and 2. Not be dominated by saving latency.<|||||>Thanks for your suggestions. I moved the bulk of the hp search code to `integrations`, including the objective, since it depends on the search space. Is this what you had in mind?<|||||>Yes. I think we can split the function in two: one for ray, one for optuna and avoid a lot of tests this way to have some cleaner code (with a small duplication in the _objective function). I can do it in a separate PR if you want.<|||||>That would be great, thanks!<|||||>Merging and will follow up then.
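A minimal sketch of how the checkpointing discussed above gets exercised from the user side, assuming `train_dataset`/`eval_dataset` already exist and the hp-search API from this PR; the metric name `objective` matches what the Ray integration reports via `tune.report`, and the model checkpoint is only an example:

```python
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init():
    # A fresh model is built for every trial.
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="hpo_output", save_steps=500),
    train_dataset=train_dataset,  # assumed to exist
    eval_dataset=eval_dataset,    # assumed to exist
)

best_run = trainer.hyperparameter_search(
    backend="ray",
    n_trials=4,
    scheduler=PopulationBasedTraining(
        metric="objective",
        mode="max",
        hyperparam_mutations={"learning_rate": tune.uniform(1e-5, 5e-5)},
    ),
)
print(best_run.hyperparameters)
```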
transformers
6,746
closed
[s2s] run_eval.py QOL improvements and cleanup
- default to saving metrics to `metrics.json`
- cleanup: delete some dead code
- save seconds_per_sample, runtime, num_samples
08-26-2020 16:07:19
08-26-2020 16:07:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=h1) Report > Merging [#6746](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **decrease** coverage by `0.03%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6746/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6746 +/- ## ========================================== - Coverage 78.96% 78.93% -0.04% ========================================== Files 157 157 Lines 28486 28486 ========================================== - Hits 22495 22485 -10 - Misses 5991 6001 +10 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=footer). Last update [a75c64d...0b9f1cd](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
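A hedged sketch of the bookkeeping described in the PR body, not the actual `run_eval.py` code: time the run, derive `seconds_per_sample`, and dump everything to `metrics.json`. The helper name and the score value are placeholders.

```python
import json
import time

def save_metrics(scores: dict, n_obs: int, runtime: float, path: str = "metrics.json"):
    # Add the timing fields mentioned in the PR body to whatever scores we already have.
    scores.update(
        {
            "n_obs": n_obs,
            "runtime": round(runtime, 4),
            "seconds_per_sample": round(runtime / max(n_obs, 1), 4),
        }
    )
    with open(path, "w") as f:
        json.dump(scores, f, indent=2)

start = time.time()
predictions = ["summary one", "summary two"]  # placeholder for generated outputs
save_metrics({"rouge2": 0.0}, n_obs=len(predictions), runtime=time.time() - start)
```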
transformers
6,745
closed
[s2s] run_eval saves samples/second
It's easy to compute and worth noting, since we are already saving a dictionary.
08-26-2020 15:01:41
08-26-2020 15:01:41
transformers
6,744
closed
[pipelines] Text2TextGenerationPipeline
This PR adds `Text2TextGenerationPipeline`, as discussed in #4411. Fixes #4411 @patrickvonplaten, @LysandreJik cc @enzoampil :))
08-26-2020 12:14:44
08-26-2020 12:14:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=h1) Report > Merging [#6744](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **increase** coverage by `1.13%`. > The diff coverage is `96.15%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6744/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6744 +/- ## ========================================== + Coverage 78.47% 79.60% +1.13% ========================================== Files 157 157 Lines 28569 28595 +26 ========================================== + Hits 22420 22764 +344 + Misses 6149 5831 -318 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.46% <96.15%> (+0.51%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.01% <0.00%> (-2.29%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=footer). Last update [42fddac...201c854](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks @LysandreJik ! It was already in the doc at the bottom, now moved it below `TextGenerationPipeline`<|||||>failing test doesn't look related to this PR
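A short usage sketch for the new pipeline; the checkpoint is only an example, and any seq2seq model (T5, BART, ...) should work:

```python
from transformers import pipeline

text2text = pipeline("text2text-generation", model="t5-small")
print(text2text("translate English to German: The pipeline API is easy to use."))
# -> [{'generated_text': '...'}]
```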
transformers
6,743
closed
BART for Pre-Training
# ❓ Questions & Help How can I run BART pre-training? I have data to pre-training(Masked LM)
08-26-2020 12:08:25
08-26-2020 12:08:25
This should help: https://github.com/huggingface/transformers/issues/5096#issuecomment-645860271<|||||>@sshleifer - think this is the 3rd issue about Bart pre-training -> maybe it would be a good idea to release a small notebook at some point.<|||||>@patil-suraj you took a stab at this at some point? [this](https://github.com/huggingface/transformers/issues/5096#issuecomment-645848176) may have been optimistic :( <|||||>Yes, I was trying to port fairseq dataset here, same for t5, I'll try to focus more on it when I'm done with current PRs, should strat with a notebook as Patrick said, then try to include it in examples/<|||||>@patrickvonplaten Does that mean I can train with Masked-input, input(label) and Decoder-input?<|||||>yes, this should be possible<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@patil-suraj any news on the pretraining script for Bart?<|||||>If anyone wants to train their MBART model then feel free to use this. https://github.com/prajdabre/yanmtt Contributions are welcome!<|||||>@patil-suraj excuse me, is there any news on the pretraining script for Bart? Thanks.<|||||>@thomas-li-sjtu you can try my toolkit if you like. It's based on transformers and allows for Bart/mbart pretraining. https://github.com/prajdabre/yanmtt<|||||>> @thomas-li-sjtu you can try my toolkit if you like. It's based on transformers and allows for Bart/mbart pretraining. https://github.com/prajdabre/yanmtt Hi there, here is my problem. I hope to pretrain a bart model based on my own dataset and fine tune it for another task (not nmt). I noticed that your toolkit designs for nmt so maybe it is not the one I need. Anyway, thanks for your reply!<|||||>@thomas-li-sjtu ok I understand. It's not just designed for NMT (despite its name). I've used it for summarisation and general NLG without problems. Good luck with your search.<|||||>> @thomas-li-sjtu ok I understand. It's not just designed for NMT (despite its name). I've used it for summarisation and general NLG without problems. Good luck with your search. Wow that is awesome. I will try it for my task!<|||||>@thomas-li-sjtu cool. Feel free to raise issues as it helps me add new functionality that may be of use to people. If you want to know how to use it for summarisation (or generic nlg) then look here: https://github.com/AI4Bharat/indic-bart<|||||>Sorry to only come back to this issue now. If anyone is interested in adding this example script in `Transformers`, I would be more than happy to help :) For BART pre-training we need the text-infilling + sentence-permutation data collator which you could find here https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223 With this collator you could then modify and use `run_summarization.py` script here https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization. Let me know if anyone is interested. :) cc @patrickvonplaten <|||||>> Sorry to only come back to this issue now. 
If anyone is interested in adding this example script in `Transformers`, I would be more than happy to help :) > > For BART pre-training we need the text-infilling + sentence-permutation data collator which you could find here https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223 > > With this collator you could then modify and use `run_summarization.py` script here https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization. > > Let me know if anyone is interested. :) cc @patrickvonplaten I think the BART pre-training script is very useful for my work and many others. It is generous of you to add this example script in 'Transfromers' !!!<|||||>> Sorry to only come back to this issue now. If anyone is interested in adding this example script in `Transformers`, I would be more than happy to help :) > > For BART pre-training we need the text-infilling + sentence-permutation data collator which you could find here https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223 > > With this collator you could then modify and use `run_summarization.py` script here https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization. > > Let me know if anyone is interested. :) cc @patrickvonplaten Thanks for your reply and I think your method is absolutely feasible. But when I try it , I faced some errors that I can't fix. And could you please give me some help? Here is my changes to `run_summarization.py`(tag 4.11.0) 1. Import some necessary packages in [https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223](url) 2. Add full codes of `DataCollatorForDenoisingTasks` and also let class `DataCollatorForDenoisingTasks` inherit class `DataCollatorForSeq2Seq` in this way: `class DataCollatorForDenoisingTasks(DataCollatorForSeq2Seq):` 3. Use the new collator: `data_collator = DataCollatorForSeq2Seq(......)` -> `data_collator = DataCollatorForDenoisingTasks(.......)` Run the changed script and I get errors below. 
Traceback (most recent call last): File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3457, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-991cbc10c55c>", line 1, in <module> runfile('/data/whq/tmp/SBartTry/fineBartPretrain.py', args=['--model_name_or_path', 'facebook/bart-base', '--do_train', '--do_eval', '--train_file', '/data/whq/tmp/SBartTry/tryData/clickbait_train.csv', '--validation_file', '/data/whq/tmp/SBartTry/tryData/clickbait_valid.csv', '--source_prefix', '', '--num_train_epochs=3', '--output_dir', '/data/whq/tmp/SBartTry/fineBartPretrain/clickbait', '--overwrite_output_dir', '--per_device_train_batch_size=16', '--per_device_eval_batch_size=16', '--predict_with_generate'], wdir='/data/whq/tmp/SBartTry') File "/home/whq/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/home/whq/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/data/whq/tmp/SBartTry/fineBartPretrain.py", line 823, in <module> main() File "/data/whq/tmp/SBartTry/fineBartPretrain.py", line 745, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/trainer.py", line 1325, in train tr_loss_step = self.training_step(model, inputs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/trainer.py", line 1884, in training_step loss = self.compute_loss(model, inputs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/trainer.py", line 1916, in compute_loss outputs = model(**inputs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise raise exception TypeError: Caught TypeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1336, in forward return_dict=return_dict, File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1200, in forward return_dict=return_dict, File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 769, in forward input_shape = input_ids.size() TypeError: 'int' object is not callable Waiting for your generous reply! @patil-suraj <|||||>@Eurus-W make sure you convert the numpy arrays in the batch returned by `data_collator()` into tensors. `batch["input_ids"] = torch.LongTensor(batch["input_ids"])`, for example.
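A hedged sketch of the fix suggested in the last comment, assuming `denoising_collator` is an instance of the `DataCollatorForDenoisingTasks` class from the linked rotobart file: convert the numpy arrays it returns into tensors before the batch reaches the Trainer.

```python
import numpy as np
import torch

def tensorize(batch):
    # The collator returns numpy arrays; Trainer/BART expect torch tensors.
    return {k: torch.as_tensor(v) if isinstance(v, np.ndarray) else v for k, v in batch.items()}

def data_collator(examples):
    # denoising_collator: assumed instance of the DataCollatorForDenoisingTasks class linked above
    return tensorize(denoising_collator(examples))
```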
transformers
6,742
closed
How to generate sentences in batches, instead of generating sentences one by one
After I finetune GPT-2, I want to use it to generate sentences in batches instead of one by one. So I tried to modify the code of `examples/text-generation/run_generation.py`. The code on line 239 in `run_generation.py` is: `encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")`. Here `prompt_text` is a `str`, but when I change it to a `List[str]`, the call always returns `50256`. Looking at the source code, though, the type of `prompt_text` can be `str`, `List[str]` or `List[int]`. I tested this example separately, and for `token_ids` it always returns 50256. ![image](https://user-images.githubusercontent.com/22442305/91297615-344f1300-e7d1-11ea-99d4-b0e28cd1ab14.png) So must `prompt_text` be a `str`? What modifications should I make to generate sentences in batches using `examples/text-generation/run_generation.py`? Looking forward to your reply!
08-26-2020 11:27:46
08-26-2020 11:27:46
Hey @SuHe36, I'm currently working on adding support for batched generation. At the moment, this is the best answer we can give you: https://github.com/huggingface/transformers/issues/3021#issuecomment-591236688<|||||>Thanks for your reply !<|||||>Hey, @patrickvonplaten is batch generation available for T5conditiongeneration?<|||||>Yes! Please take a look at this test, which does batch=4 generation for summarization using T5: https://github.com/huggingface/transformers/blob/55cb2ee62eb482787cff17585955f7193fe35dfa/tests/test_modeling_t5.py#L559<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
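One common way to batch-generate with GPT-2, shown as a hedged sketch rather than the script's official solution: pad on the left and reuse the EOS token as padding so `generate` can take a whole list of prompts at once.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.padding_side = "left"           # pad on the left so generation continues the real prompt
tokenizer.pad_token = tokenizer.eos_token

model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
model.eval()

prompts = ["The weather today is", "In a shocking finding, scientists"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

with torch.no_grad():
    output_ids = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_length=40,
        do_sample=True,
    )
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```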
transformers
6,741
closed
Fix tf boolean mask in graph mode
08-26-2020 08:49:07
08-26-2020 08:49:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=h1) Report > Merging [#6741](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64c7c2bc158364ff5c53dce2f19698078b2f9d78?el=desc) will **decrease** coverage by `1.03%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6741/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6741 +/- ## ========================================== - Coverage 80.00% 78.96% -1.04% ========================================== Files 156 156 Lines 28426 28426 ========================================== - Hits 22741 22446 -295 - Misses 5685 5980 +295 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <100.00%> (-2.61%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.45% <0.00%> (-5.02%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=footer). 
Last update [64c7c2b...fb02d72](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,740
closed
[Torchscript] Fix docs
Fixes #6714
08-26-2020 07:43:57
08-26-2020 07:43:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=h1) Report > Merging [#6740](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64c7c2bc158364ff5c53dce2f19698078b2f9d78?el=desc) will **decrease** coverage by `0.29%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6740/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6740 +/- ## ========================================== - Coverage 80.00% 79.70% -0.30% ========================================== Files 156 156 Lines 28426 28426 ========================================== - Hits 22741 22656 -85 - Misses 5685 5770 +85 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=footer). Last update [64c7c2b...02d9292](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,739
closed
convert_BertForQuestionAnswering_pytorch_checkpoint_to_tf
Added support for converting a BertForQuestionAnswering PyTorch model to TensorFlow, following the existing script https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_pytorch_checkpoint_to_original_tf.py. I added a new file, convert_BertForQuestionAnswering_pytorch_checkpoint_to_tf.py.
08-26-2020 07:30:10
08-26-2020 07:30:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=h1) Report > Merging [#6739](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64c7c2bc158364ff5c53dce2f19698078b2f9d78?el=desc) will **decrease** coverage by `1.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6739/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6739 +/- ## ========================================== - Coverage 80.00% 78.99% -1.01% ========================================== Files 156 156 Lines 28426 28426 ========================================== - Hits 22741 22455 -286 - Misses 5685 5971 +286 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.76%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.61%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=footer). Last update [64c7c2b...b9f471b](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@LysandreJik hey i have tried to add small script of convert BertForQuestionAnswering pytorch model to tensorflow like below file https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_pytorch_checkpoint_to_original_tf.py please look into it.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
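Not the script added in this PR, but a hedged sketch of the same conversion using existing `transformers` APIs (the paths are placeholders): load the PyTorch checkpoint directly into the TF class and save the TF weights.

```python
from transformers import TFBertForQuestionAnswering

tf_model = TFBertForQuestionAnswering.from_pretrained(
    "path/to/pytorch_qa_checkpoint",  # directory containing pytorch_model.bin + config.json
    from_pt=True,
)
tf_model.save_pretrained("path/to/tf_qa_checkpoint")  # writes tf_model.h5 + config.json
```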
transformers
6,738
closed
How to use the reformer for question answering?
In the code example in the documentation, the tokenizer gets passed just a single string. Doesn't it need a context and a question? Also, how exactly do I get the answer?
08-26-2020 07:00:19
08-26-2020 07:00:19
Hey @Krak91, Thanks for this issue. The PR attached to this issue should fix it. Note that even though the new example code will work for Reformer, it won't yield any good results because there is no pretrained ReformerModel yet.
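The general extractive-QA pattern the documentation example is getting at, shown with a generic SQuAD checkpoint since no pretrained Reformer QA model exists yet; Reformer would follow the same question + context + span-decoding pattern.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "distilbert-base-cased-distilled-squad"  # example QA checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "Who maintains the transformers library?"
context = "The transformers library is maintained by Hugging Face."
inputs = tokenizer(question, context, return_tensors="pt")

start_logits, end_logits = model(**inputs)[:2]
start = torch.argmax(start_logits)
end = torch.argmax(end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```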
transformers
6,737
closed
[s2s README] Add more dataset download instructions
Improves formatting in seq2seq readme and adds download instructions for wmt-en-de and cnn_dm_v2 (cleaned cnn_dm without empty examples).
08-26-2020 01:32:17
08-26-2020 01:32:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=h1) Report > Merging [#6737](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/22933e661fe789874ef58b13d3a9bb2554ba5891?el=desc) will **decrease** coverage by `0.10%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6737/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6737 +/- ## ========================================== - Coverage 80.02% 79.92% -0.11% ========================================== Files 157 157 Lines 28586 28586 ========================================== - Hits 22877 22848 -29 - Misses 5709 5738 +29 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.76%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=footer). Last update [22933e6...b4c1c2f](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,736
closed
Fix run_squad.py to work with BART & add skip_decoder step for BART
* BART should be in the list of models that delete `token_type_ids`
* Don't run through the decoder when only hidden states are needed; this improves efficiency
08-25-2020 23:48:07
08-25-2020 23:48:07
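A hedged sketch of the first bullet as it might look inside the `run_squad.py` training loop; `args`, `batch` and `model` are assumed to be the script's own objects, and the exact model list is illustrative.

```python
inputs = {
    "input_ids": batch[0],
    "attention_mask": batch[1],
    "token_type_ids": batch[2],
}
if args.model_type in {"xlm", "roberta", "distilbert", "camembert", "bart"}:
    # These models have no token type embeddings, so the key must be dropped.
    del inputs["token_type_ids"]
outputs = model(**inputs)
```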
transformers
6,735
closed
[Generate] Facilitate PyTorch generate using `ModelOutputs`
This PR:
- forces the use of `return_dict=True` for generation in PyTorch. This should not lead to any problem because `.generate()` cannot be used to compute gradients.
- simplifies the handling of `encoder_outputs` for Bart, T5 and the Encoder. Previously, there was an ugly hack that forced the second position of the encoder/decoder outputs for T5 and Bart to be a tuple containing both `decoder_past_key_values` and `encoder_outputs`, whereas `encoder_outputs` was a duplicated output. With the new cleaner API, only the `decoder_past_key_values` are returned because the `encoder_outputs` can be accessed differently.
- Fixes #6319
- adds better documentation for the Encoder Decoder model + test + better example.
- most importantly, lays the groundwork for a better GenerationOutput object (@sgugger). Returning a list of all attentions and all hidden states when using `.generate()` was a feature many people asked for. This will now be made possible by forcing `return_dict` in `.generate()` so that "decoder_attentions" and "attentions" can be accessed by keyword.

**Important**: The new handling of `encoder_outputs` introduces a small breaking change in that `output[1]` is now not a mixed tuple of `encoder_outputs` and `decoder_past_key_values`, but only `decoder_past_key_values`, whereas the `encoder_outputs` can be accessed as before.
08-25-2020 22:34:28
08-25-2020 22:34:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=h1) Report > Merging [#6735](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a32d85f0d405be53117b96075eef2875d2185892?el=desc) will **decrease** coverage by `1.02%`. > The diff coverage is `78.33%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6735/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6735 +/- ## ========================================== - Coverage 80.48% 79.46% -1.03% ========================================== Files 157 157 Lines 28794 28822 +28 ========================================== - Hits 23175 22903 -272 - Misses 5619 5919 +300 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.24% <50.00%> (-1.35%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <57.14%> (-7.14%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `89.57% <70.27%> (-1.37%)` | :arrow_down: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.00% <93.33%> (-0.40%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.93% <100.00%> (+0.26%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.82% <100.00%> (+0.14%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <100.00%> (-48.39%)` | :arrow_down: | | [src/transformers/modeling\_outputs.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vdXRwdXRzLnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.01% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <100.00%> (-72.26%)` | :arrow_down: | | ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=footer). Last update [a32d85f...190985c](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> LGTM! Looks like we can now deprecate the `_use_cache` function in the `GenerationMixin`, no? yes!<|||||>**IMPORTANT** This PR does a bigger renaming from "decoder_past_key_values" to "past_key_values" as suggested by @sshleifer. This required changes for `T5`, `TFT5` and `Bart`. For each of the three models it is made sure that `decoder_past_values` can still be used as an input to keep backwards compatibility. Would be great if @LysandreJik (and @sgugger, @sshleifer depending on time difference) can review this quickly one last time.<|||||>@sshleifer - all EncoderDecoder Slow tests pass. There was one bart test that failed because of Broken Internet connection. I ran this single test again separately and it was fine. PR looks good to me now -> merging.
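A minimal sketch of the access pattern after this change, assuming the `return_dict` support and the `past_key_values` naming described above: outputs are read by name instead of by tuple position.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs, use_cache=True, return_dict=True)

cache = outputs.past_key_values                # no longer mixed with the encoder outputs
encoder_states = outputs.encoder_last_hidden_state
```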
transformers
6,734
closed
id2lang in tokenization_xlm.py should be int, and removing hardcoding
In https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L78 we have: ``` "id2lang": {"0": "de", "1": "en"}, "lang2id": {"de": 0, "en": 1}, ``` and then: ``` lang2id (:obj:`Dict[str, int]`, `optional`, defaults to :obj:`None`): Dictionary mapping languages string identifiers to their IDs. id2lang (:obj:`Dict[int, str`, `optional`, defaults to :obj:`None`): ``` So it should be: ``` "id2lang": {0: "de", 1: "en"}, "lang2id": {"de": 0, "en": 1}, ``` All other entries need this change too. The problem hasn't been detected until now since they were used to only count the number of languages it seems. I need to pass src/tgt languages to the tokenizer I'm porting from fairseq, so I was looking at how to do that and `id2lang` seems to fit the purpose. But I actually need to look them up by `int` id, that's how I saw the problem. But I'm also not sure why do we need to hardcode the reversal, when it can be done in 1 line of code? Which would also remove this assertion code: ``` self.lang2id = lang2id self.id2lang = id2lang if lang2id is not None and id2lang is not None: assert len(lang2id) == len(id2lang) ``` Further we don't even need to hardcode the ids. Replace: ``` "id2lang": {0: "de", 1: "en"}, ``` with: ``` "id2lang": ["de", "en"] ``` So all we need is one of the two entries, and now generate the 2 lookup dicts on the fly. And since it's no longer `id2lang` semantically, probably renaming it to just `langs` would be more appropriate. I think I will use this approach regardless of the outcome of this issue. Thanks.
08-25-2020 21:38:05
08-25-2020 21:38:05
Not sure if the suggested rewrite to remove all those numbers is desirable - perhaps it's important to see those numbers, so I left it alone and just fixed the keys of `id2lang` to be int. https://github.com/huggingface/transformers/pull/7034
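A minimal sketch of the one-line reversal suggested in the issue: keep a single `langs` list and derive both lookup tables from it.

```python
langs = ["de", "en"]
id2lang = dict(enumerate(langs))                      # {0: "de", 1: "en"}
lang2id = {lang: i for i, lang in enumerate(langs)}   # {"de": 0, "en": 1}
assert len(id2lang) == len(lang2id)
```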
transformers
6,733
closed
grad checkpoint for T5 model -- and lots of debug not yet removed
@sshleifer here is my grad checkpointing code -- albeit with too much debug info and some FP16 stuff also not deleted.
- all FP16 needs to be off -- else the model does not converge
- with grad checkpointing as hacked now (for every "T5 Stack"), T5-Large trains with batch size 32
- still unable to train T5-3B even with batch size 1
- still working on grad checkpointing for DDP (which we need for multi-GPU)
- need to add grad checkpointing as a config option... and much other cleanup, but it does work
08-25-2020 21:04:24
08-25-2020 21:04:24
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
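A hedged sketch of the general idea rather than the actual patch: wrap each block of the stack in `torch.utils.checkpoint` so activations are recomputed during backward instead of stored. The real `T5Block.forward` takes more arguments and returns a tuple; here each block is assumed to map `(hidden_states, attention_mask)` to a tensor.

```python
from torch.utils.checkpoint import checkpoint

def run_stack(blocks, hidden_states, attention_mask, use_checkpointing=True):
    for block in blocks:
        if use_checkpointing and hidden_states.requires_grad:
            # Recompute this block's activations in the backward pass to save memory.
            hidden_states = checkpoint(block, hidden_states, attention_mask)
        else:
            hidden_states = block(hidden_states, attention_mask)
    return hidden_states
```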
transformers
6,732
closed
Documentation detail in (TF)RobertaForSequenceClassification
# ❓ Questions & Help The documetation for TFRobertaForSequenceClassification lacks a description for the `labels` parameter. That is not the case for RobertaForSequenceClassification. The parameters actually exists for the TF version and the documentation causes a little confusion. Not sure where to post this in the forum. ## Details <!-- Description of your issue --> I see that the docstring for these classes comes from the variable `ROBERTA_INPUTS_DOCSTRING` in the [source](https://huggingface.co/transformers/_modules/transformers/modeling_roberta.html#RobertaForSequenceClassification) and the doc for labels is added after that on the pytorch model, but not on the tf model.
08-25-2020 20:20:04
08-25-2020 20:20:04
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
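A minimal sketch of the parameter the TF docstring is missing, assuming the TF loss support available in recent versions: `labels` can be passed and the loss comes back first.

```python
import tensorflow as tf
from transformers import RobertaTokenizer, TFRobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = TFRobertaForSequenceClassification.from_pretrained("roberta-base")

inputs = tokenizer("This movie was great!", return_tensors="tf")
outputs = model(inputs, labels=tf.constant([1]))
loss, logits = outputs[:2]
```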
transformers
6,731
closed
Model card for kuisailab/albert-xlarge-arabic
08-25-2020 18:48:30
08-25-2020 18:48:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=h1) Report > Merging [#6731](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e11d923bfc61ed640bc7e696549578361126485e?el=desc) will **increase** coverage by `0.06%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6731/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6731 +/- ## ========================================== + Coverage 79.42% 79.48% +0.06% ========================================== Files 156 156 Lines 28411 28411 ========================================== + Hits 22565 22583 +18 + Misses 5846 5828 -18 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=footer). Last update [e11d923...bfd52f8](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,730
closed
Model card for kuisailab/albert-large-arabic
08-25-2020 18:47:06
08-25-2020 18:47:06
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=h1) Report > Merging [#6730](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e11d923bfc61ed640bc7e696549578361126485e?el=desc) will **increase** coverage by `0.21%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6730/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6730 +/- ## ========================================== + Coverage 79.42% 79.63% +0.21% ========================================== Files 156 156 Lines 28411 28411 ========================================== + Hits 22565 22626 +61 + Misses 5846 5785 -61 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.17% <0.00%> (-12.53%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=footer). Last update [e11d923...1f0307a](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,729
closed
Model card for kuisailab/albert-base-arabic
08-25-2020 18:45:10
08-25-2020 18:45:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=h1) Report > Merging [#6729](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e11d923bfc61ed640bc7e696549578361126485e?el=desc) will **decrease** coverage by `0.44%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6729/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6729 +/- ## ========================================== - Coverage 79.42% 78.97% -0.45% ========================================== Files 156 156 Lines 28411 28411 ========================================== - Hits 22565 22438 -127 - Misses 5846 5973 +127 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.70% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.61%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=footer). Last update [e11d923...242d1d0](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,728
closed
Install nlp for github actions test
08-25-2020 18:44:23
08-25-2020 18:44:23
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=h1) Report > Merging [#6728](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e11d923bfc61ed640bc7e696549578361126485e?el=desc) will **decrease** coverage by `0.43%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6728/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6728 +/- ## ========================================== - Coverage 79.42% 78.99% -0.44% ========================================== Files 156 156 Lines 28411 28411 ========================================== - Hits 22565 22442 -123 - Misses 5846 5969 +123 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.76%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.61%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=footer). Last update [e11d923...e3fc486](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,727
closed
added model card for codeswitch-spaeng-sentiment-analysis-lince
Hi, I added a model card for `codeswitch-spaeng-sentiment-analysis-lince` and also updated other model cards. Please review and, if possible, merge. Thanks and regards, Sagor
08-25-2020 18:11:37
08-25-2020 18:11:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=h1) Report > Merging [#6727](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7e6397a7d8e7433aa4c4cafba98e08e5c73f087c?el=desc) will **decrease** coverage by `1.11%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6727/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6727 +/- ## ========================================== - Coverage 80.10% 78.99% -1.12% ========================================== Files 156 156 Lines 28411 28411 ========================================== - Hits 22758 22442 -316 - Misses 5653 5969 +316 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-6.02%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-3.01%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=footer). Last update [7e6397a...8d8b352](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,726
closed
Fix pegasus-xsum integration test
Fixes #6705
08-25-2020 17:45:59
08-25-2020 17:45:59
transformers
6,725
closed
Dataset Lazyloader for transformers trainer
# 🚀 Feature request Currently, training/fine-tuning models with the transformers Trainer loads the whole dataset into memory. This works for small datasets, but with large datasets training will crash. Code: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L128 ## Motivation We have trained a 0.4-billion-parameter BERT model and are currently training a 1.8-billion-parameter BERT model for protein sequences, and we want to train DistilBERT to reduce the model size. Unfortunately, our dataset is huge (about 0.7 terabytes), and since the trainer loads the whole dataset into memory, it crashes. It would be far more memory-efficient to use a lazy loader for the datasets, as fairseq does; a rough sketch of what I mean follows below. @patrickvonplaten I am not sure who is responsible for the data loader for LM.
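For illustration, here is a rough sketch of the kind of lazy, line-by-line dataset I have in mind. This is hypothetical code, not an existing transformers API: the file path, `block_size`, and the index-building pass are assumptions, and it presumes one training example per line and a tokenizer with an `encode` method.

```python
import torch
from torch.utils.data import Dataset


class LazyLineByLineTextDataset(Dataset):
    """Keeps only byte offsets in memory and reads each line from disk on demand."""

    def __init__(self, tokenizer, file_path, block_size=512):
        self.tokenizer = tokenizer
        self.file_path = file_path
        self.block_size = block_size
        # Single indexing pass: remember where each line starts, not the text itself.
        self.offsets = []
        with open(file_path, "rb") as f:
            offset = 0
            for line in f:
                self.offsets.append(offset)
                offset += len(line)

    def __len__(self):
        return len(self.offsets)

    def __getitem__(self, idx):
        # Seek to the stored offset and read a single line lazily.
        with open(self.file_path, "rb") as f:
            f.seek(self.offsets[idx])
            line = f.readline().decode("utf-8").strip()
        ids = self.tokenizer.encode(line, truncation=True, max_length=self.block_size)
        return torch.tensor(ids, dtype=torch.long)
```

Something along these lines could be dropped in where `TextDataset`/`LineByLineTextDataset` are used today, so the Trainer never has to hold the full corpus in memory.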
08-25-2020 17:18:40
08-25-2020 17:18:40
As far as I can see there is currently no dataloader functionality to lazily load data into memory. It should not be very hard to implement such a feature yourself, I think, see https://discuss.pytorch.org/t/loading-huge-data-functionality/346/2 . Also cc @lhoestq @sgugger @thomwolf - Are we planning on providing more support for lazy data loading? <|||||>There is an [open PR #4009](https://github.com/huggingface/transformers/pull/4009) about this that was close to being merged (cc @BramVanroy) and I must confess I kind of dropped the ball on it. Shouldn't be a ton of work to complete it and get it merged. As mentioned in that PR, the other option is to use `huggingface/nlp`, which can load large text datasets lazily out of the box.<|||||>@patrickvonplaten @julien-c Thanks a lot for your reply. Should we close this issue and focus on https://github.com/huggingface/transformers/pull/4009 ?<|||||>The long-term solution is to use nlp for this IMO.<|||||>I have checked nlp and it is slow; maybe I am doing something wrong. I made a simple Python script to check its speed, which loads 1.1 TB of textual data. It has been 8 hours and it is still on the loading step. It does work when the text dataset is small (about 1 GB), but it doesn't scale. It also uses a single thread during the data loading step. ``` train_files = glob.glob("xxx/*.txt",recursive=True) random.shuffle(train_files) print(train_files) dataset = nlp.load_dataset('text', data_files=train_files, name="customDataset", version="1.0.0", cache_dir="xxx/nlp") ```<|||||>You should open this issue on the nlp repo, to make sure it's not forgotten! In particular they are working a lot on performance right now :-)<|||||>done: https://github.com/huggingface/nlp/issues/546 <|||||>I am closing this in favour of: - https://github.com/huggingface/transformers/pull/4009 - https://github.com/huggingface/nlp/issues/546
transformers
6,724
closed
create ProtBert-BFD model card.
This is a model card for our ProtBert-BFD : https://huggingface.co/Rostlab/prot_bert_bfd
08-25-2020 16:52:52
08-25-2020 16:52:52
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=h1) Report > Merging [#6724](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/074340339a6d6aede30c14c94ffe7b59a01786f1?el=desc) will **decrease** coverage by `0.43%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6724/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6724 +/- ## ========================================== - Coverage 79.91% 79.47% -0.44% ========================================== Files 156 156 Lines 28406 28406 ========================================== - Hits 22701 22577 -124 - Misses 5705 5829 +124 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: | | [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.31% <0.00%> (+50.00%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=footer). Last update [0743403...332510c](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for sharing, this looks awesome – did you want to review this @patrickvonplaten?<|||||>Looks great!
transformers
6,723
closed
add xlm-roberta-large-xnli model card
08-25-2020 16:13:28
08-25-2020 16:13:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=h1) Report > Merging [#6723](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a25c9fc8e14f3e8914116e6142af2a9589dc8e63?el=desc) will **decrease** coverage by `0.04%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6723/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6723 +/- ## ========================================== - Coverage 79.00% 78.96% -0.05% ========================================== Files 156 156 Lines 28405 28405 ========================================== - Hits 22442 22429 -13 - Misses 5963 5976 +13 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-3.01%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=footer). Last update [a25c9fc...715d491](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Merging so I can tweet, but lmk if anything is off and I'll update it.<|||||>looks good! Fixed tiny typos in 3242e4d9<|||||>Hi @joeddav! Could you share your hyperparameters for training the model (I assume you used the `run_glue`)? Would you say that the last phase of training helped significantly?
transformers
6,722
closed
Add AdaFactor optimizer from fairseq
Tested for T5 fine-tuning and MLM -- reduced memory consumption compared to Adam. Fixes #1256
08-25-2020 15:55:27
08-25-2020 15:55:27
Hey @sshleifer -- here is belated PR for AdaFactor. Please let me know how to edit this properly, and what tests or examples we should add. Thanks!<|||||>We will integrate into examples/ in a separate PR I think.<|||||>Thanks @sshleifer -- let me try to make those changes. Agree that I should be able to add a single test -- appreciate the link -- and you can add examples in separate PR. If I don't get this figure out soon, yes happy for you to make the changes yourself :-)<|||||>Hey @sshleifer -- think I got a test working finally. We can squash the commits. Still not sure what I need to clean up for the code standards/linter. Please advise, thanks!<|||||>For local style checking, you need: `pip install isort --upgrade` Then `make style` and `make quality` to both suggest you have no errors. They should autofix things or at least give error messages. My workflow is to define ```bash sty () { make style flake8 examples templates tests src utils } ``` and then run `sty` a lot.<|||||>Also squashing happens automatically at merge time, don't worry about that.<|||||>> For local style checking, you need: `pip install isort --upgrade` > Then `make style` and `make quality` to both suggest you have no errors. > They should autofix things or at least give error messages. My workflow is to define > > ```shell > sty () { > make style > flake8 examples templates tests src utils > } > ``` > > and then run `sty` a lot. Hmm. Is there a way for `style` to tell me the location in offending file? Output seems pretty minimal.<|||||>if you also run the flake8 command it should just fix it.<|||||>I think I fixed the formatting, as requested. Took a sec to figure that all out...<|||||>@sshleifer -- any idea what happened with the `black` / code quality changes overnite? I'm very confused. Seems as if the standard changed from yesterday... <|||||>Yes they did, sorry about that. I did some cleanup on this branch. If you are curious about the style change: I tried to future proof it here https://github.com/huggingface/transformers/pull/6748<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=h1) Report > Merging [#6722](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `68.23%`. 
[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6722/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6722 +/- ## ========================================== - Coverage 78.96% 78.94% -0.03% ========================================== Files 157 157 Lines 28486 28571 +85 ========================================== + Hits 22495 22555 +60 - Misses 5991 6016 +25 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `82.28% <68.23%> (-13.27%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (+0.75%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=footer). Last update [a75c64d...8958b9f](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome. Thanks @sshleifer. I'll start working more on the other less mature PRs we discussed. And please ping me if/when you write tests or examples for this. Happy to contribute to that as well if you need.<|||||>I've added Adafactor to the docs and slightly changed the style of the docstrings in https://github.com/huggingface/transformers/pull/6765<|||||>Thanks! I'll add a `--adafactor` option lightning_base and trainer in 2 prs.
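For anyone landing here later, usage is roughly as follows. This is a minimal sketch and the argument defaults are quoted from memory, so double-check them against the docs added in #6765; the tiny model is only there to make the snippet runnable.

```python
import torch
from transformers import Adafactor

model = torch.nn.Linear(10, 2)  # stand-in for any nn.Module

# With relative_step=True, Adafactor uses its own internal schedule,
# so no external learning rate or scheduler is needed.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    scale_parameter=True,
    relative_step=True,
    warmup_init=False,
)

loss = model(torch.randn(4, 10)).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```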
transformers
6,721
closed
Add model card for singbert large
Model card for singbert large, similar to [singbert](https://github.com/huggingface/transformers/tree/master/model_cards/zanelim/singbert) but based on the BERT-large model.
08-25-2020 15:51:11
08-25-2020 15:51:11
@JetRunner same model but bert large version, thank you!!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=h1) Report > Merging [#6721](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d17cce227022594ba84dbb92bafc802fb41434df?el=desc) will **increase** coverage by `0.49%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6721/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6721 +/- ## ========================================== + Coverage 79.00% 79.50% +0.49% ========================================== Files 156 156 Lines 28406 28406 ========================================== + Hits 22443 22583 +140 + Misses 5963 5823 -140 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.95%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.75%)` | :arrow_up: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=footer). Last update [d17cce2...cd3dbfc](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,720
closed
Resuming training from a checkpoint on Windows does not resume at the correct global_step
### Who can help Trainer: @sgugger ## Information I've noticed that resuming training from a checkpoint on Windows fails to start at the correct global step (though learning rate, loss, etc. continue from where the training left off). The same doesn't happen under macOS. ## Solution? I think the issue is in Trainer.train, specifically this line: `self.global_step = int(model_path.split("-")[-1].split("/")[0])` Swapping the hard-coded `"/"` for `os.sep` seems to make it work as expected; a sketch of the idea is shown below.
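A minimal illustration of the proposed change (the checkpoint path below is just an example, not the exact value used inside `Trainer.train`):

```python
import os

# An example checkpoint path built with os.path.join; on Windows the
# separator is "\\" rather than "/".
model_path = os.path.join("output", "checkpoint-500") + os.sep

# Splitting on a hard-coded "/" only strips the trailing separator on POSIX systems:
posix_only = model_path.split("-")[-1].split("/")[0]

# Splitting on os.sep behaves the same on Windows ("\\") and POSIX ("/"):
global_step = int(model_path.split("-")[-1].split(os.sep)[0])
print(global_step)  # 500 on both platforms
```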
08-25-2020 15:46:38
08-25-2020 15:46:38
Seems wrong indeed. Would you mind fixing with a PR?<|||||>I'm embarrassed to say I'm not super familiar with git or open-source collaboration in general, so even though it seems super trivial, I'm worried I'll mess something up. <|||||>I can fix this but it'll be next week as I'm off tonight and tying up a few other loose ends.<|||||>Thank you! I'm in no personal rush – I got my aborted training to resume. I just wanted someone on the project to know about the problem/solution I found.
transformers
6,719
closed
[Albert] Add position ids to allowed uninitialized weights
Fixes #6700
08-25-2020 15:16:09
08-25-2020 15:16:09
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=h1) Report > Merging [#6719](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/625318f52516b413126be1bb1cb6818231d2eca6?el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6719/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6719 +/- ## ========================================== - Coverage 79.49% 79.48% -0.01% ========================================== Files 156 156 Lines 28405 28406 +1 ========================================== - Hits 22581 22579 -2 - Misses 5824 5827 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6719/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `83.53% <100.00%> (+0.03%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6719/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.76%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=footer). Last update [625318f...8f67175](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,718
closed
Create model card for lordtt13/COVID-SciBERT
08-25-2020 12:43:22
08-25-2020 12:43:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=h1) Report > Merging [#6718](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/625318f52516b413126be1bb1cb6818231d2eca6?el=desc) will **decrease** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6718/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6718 +/- ## ========================================== - Coverage 79.49% 79.43% -0.06% ========================================== Files 156 156 Lines 28405 28405 ========================================== - Hits 22581 22564 -17 - Misses 5824 5841 +17 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=footer). Last update [625318f...c44165d](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,717
closed
Fix TF optimizer
Fix #6560. Now the parameter `experimental_aggregate_gradients` should be correctly taken into account.
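For reference, this is the Keras optimizer argument in question. A small standalone sketch (not the trainer's actual code path) of where the flag is passed:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

x = tf.random.normal((8, 4))
y = tf.random.normal((8, 1))

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)

# When gradients have already been aggregated (e.g. accumulated manually across
# steps), this flag tells apply_gradients to skip its own aggregation step.
optimizer.apply_gradients(
    zip(grads, model.trainable_variables),
    experimental_aggregate_gradients=False,
)
```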
08-25-2020 12:29:27
08-25-2020 12:29:27
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=h1) Report > Merging [#6717](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/841f07156948a28f8cc9182bb14c911f9e63b0e7?el=desc) will **decrease** coverage by `0.33%`. > The diff coverage is `50.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6717/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6717 +/- ## ========================================== - Coverage 79.77% 79.44% -0.34% ========================================== Files 156 156 Lines 28392 28392 ========================================== - Hits 22650 22556 -94 - Misses 5742 5836 +94 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.65% <50.00%> (ø)` | | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=footer). Last update [841f071...5281631](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,716
closed
Fix ONNX test_quantize unittest
`quantize` was catching exceptions silently; it now lets the exception be caught by the caller to avoid a dead code path. Also introduces the extras dependency ["onnxruntime"], required to quantize a model, and installs the extra when running slow tests.
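As a usage note: after installing the extra (e.g. `pip install -e ".[onnxruntime]"`), failures now surface to the caller. The snippet below is a sketch -- it assumes the `quantize` helper in `transformers.convert_graph_to_onnx` that takes the path of an already-exported ONNX model, and the file path is a placeholder.

```python
from pathlib import Path

from transformers.convert_graph_to_onnx import quantize

try:
    # Raises instead of silently swallowing errors when onnxruntime is missing
    # or quantization fails.
    quantized_path = quantize(Path("onnx/bert-base-cased.onnx"))
    print(f"Quantized model written to {quantized_path}")
except Exception as exc:
    print(f"Quantization failed: {exc}")
```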
08-25-2020 12:06:44
08-25-2020 12:06:44
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=h1) Report > Merging [#6716](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/abc0202194674ae5e241e547f3af34b4226bdc72?el=desc) will **increase** coverage by `1.03%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6716/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6716 +/- ## ========================================== + Coverage 78.98% 80.01% +1.03% ========================================== Files 156 156 Lines 28398 28398 ========================================== + Hits 22429 22724 +295 + Misses 5969 5674 -295 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+2.60%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+5.01%)` | :arrow_up: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=footer). Last update [abc0202...b0d57be](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Lets see what happens!
transformers
6,715
closed
tensor.nonzero() is deprecated in PyTorch 1.6
It also seems to make the api-inference crash ... Signed-off-by: Morgan Funtowicz <[email protected]>
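For context, the deprecation is about calling `nonzero()` without the `as_tuple` argument. A minimal sketch of the warning-free form (illustrative, not the exact pipelines code):

```python
import torch

mask = torch.tensor([0, 1, 0, 1], dtype=torch.bool)

# Deprecated form that warns on PyTorch 1.6:
# indices = mask.nonzero()

# Explicit replacement keeping the old tensor-shaped output:
indices = torch.nonzero(mask, as_tuple=False)
print(indices)  # tensor([[1], [3]])
```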
08-25-2020 11:34:48
08-25-2020 11:34:48
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=h1) Report > Merging [#6715](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b105f2c6b3986e7170be353b1684861a3f70991b?el=desc) will **increase** coverage by `0.50%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6715/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6715 +/- ## ========================================== + Coverage 79.14% 79.65% +0.50% ========================================== Files 156 156 Lines 28245 28245 ========================================== + Hits 22355 22499 +144 + Misses 5890 5746 -144 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.95%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: | | ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=footer). Last update [b105f2c...cccb59f](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,714
closed
RuntimeError: forward() Expected a value of type ‘Tensor’ for argument ‘input_ids’ but instead found type ‘list’ while loading a torchscript model following the documentation
# ❓ Questions & Help I followed the documentation [here](https://huggingface.co/transformers/torchscript.html) to convert transformers model to torchscript, modified the code for `ALBERT` and then ran them on Google Colab (also my local machine): **To recurrent:** **Step 1** ``` from transformers import AlbertModel, AlbertTokenizer, AlbertConfig import torch enc = AlbertTokenizer.from_pretrained("albert-xxlarge-v2") text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = enc.tokenize(text) masked_index = 8 tokenized_text[masked_index] = '[MASK]' indexed_tokens = enc.convert_tokens_to_ids(tokenized_text) segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1] tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) dummy_input = [tokens_tensor, segments_tensors] config = AlbertConfig(torchscript=True) model = AlbertModel(config) model.eval() model = AlbertModel.from_pretrained("albert-xxlarge-v2", torchscript=True) traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors]) torch.jit.save(traced_model, "albert-xxlarge-v2.pt") ``` I successfully exported the model, but the second code cell threw out a `RuntimeError`: **Step 2** **Code:** ``` loaded_model = torch.jit.load("albert-xxlarge-v2.pt") loaded_model.eval() all_encoder_layers, pooled_output = loaded_model(dummy_input) ``` **Error message:** ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-7-2e61b1def246> in <module>() 2 loaded_model.eval() 3 ----> 4 all_encoder_layers, pooled_output = loaded_model(dummy_input) /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), RuntimeError: forward() Expected a value of type 'Tensor' for argument 'input_ids' but instead found type 'list'. Position: 1 Value: [tensor([[ 2, 72, 23, 2170, 27674, 13, 60, 3, 4, 27674, 23, 21, 10956, 7911, 3]]), tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]])] Declaration: forward(__torch__.transformers.modeling_albert.AlbertModel self, Tensor input_ids, Tensor attention_mask) -> ((Tensor, Tensor)) Cast error details: Unable to cast Python instance to C++ type (compile in debug mode for details) ``` In order to ensure this is't caused by my edit of the code, I also ran all the code there to export `BERT` to TorchScript and **get same error at the same step**. **Software Versions:** ``` Name: torch Version: 1.6.0+cu101 Name: transformers Version: 3.0.2 ``` Is this a problem with the documentation? And how to fix it. Thanks for any advice. ## Details It shouldn’t throw out a runtime error and the model should be able to use normally. Original Link to the Huggingface Forum: [https://discuss.huggingface.co/t/get-runtimeerror-forward-expected-a-value-of-type-tensor-for-argument-input-ids-but-instead-found-type-list-while-loading-torchscript-models-converting-from-transformers/828](https://discuss.huggingface.co/t/get-runtimeerror-forward-expected-a-value-of-type-tensor-for-argument-input-ids-but-instead-found-type-list-while-loading-torchscript-models-converting-from-transformers/828)
08-25-2020 11:10:35
08-25-2020 11:10:35
Hey @xf05888, thanks for submitting the issue! I can reproduce it. I updated the docs here: #6714 . This should solve your problem, I think. The input list of tensors has to be unpacked before it is passed to the traced model; see the sketch below.
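Concretely, for the snippet in the issue this means calling the traced model with the tensors unpacked instead of passing the list itself (same variable names as above; a sketch of the corrected call, assuming the model was traced with two inputs):

```python
import torch

loaded_model = torch.jit.load("albert-xxlarge-v2.pt")
loaded_model.eval()

# Unpack the list so each tensor becomes its own positional argument:
all_encoder_layers, pooled_output = loaded_model(*dummy_input)
# equivalently: loaded_model(tokens_tensor, segments_tensors)
```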
transformers
6,713
closed
Fix the TF Trainer gradient accumulation and the TF NER example
This PR fixes #6479 and also fixes the NER example, which had not been working for a few weeks.
08-25-2020 11:02:17
08-25-2020 11:02:17
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=h1) Report > Merging [#6713](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/841f07156948a28f8cc9182bb14c911f9e63b0e7?el=desc) will **increase** coverage by `0.26%`. > The diff coverage is `20.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6713/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6713 +/- ## ========================================== + Coverage 79.77% 80.04% +0.26% ========================================== Files 156 156 Lines 28392 28393 +1 ========================================== + Hits 22650 22727 +77 + Misses 5742 5666 -76 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.21% <0.00%> (-0.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <100.00%> (-0.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=footer). 
Last update [841f071...b4b51de](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,712
closed
longformer padding logging
# 🚀 Feature request The Longformer model logs the extra padding length that it adds (whenever the padding length is greater than zero). Can you please add a verbosity option to control this behavior? It currently emits a log message on every forward pass; a sketch of what I have in mind is below. The code is in module `transformers/modeling_longformer.py`, function `_pad_to_window_size`, line 555. Thank you so much!
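To make the request concrete, here is a purely hypothetical sketch of the kind of guard I mean -- this is not the actual `_pad_to_window_size` code, and the `verbose` flag is made up:

```python
import logging

logger = logging.getLogger(__name__)


def maybe_log_padding(padding_len: int, verbose: bool = True) -> None:
    # Only emit the message when padding actually happened and the caller asked
    # for it, instead of logging unconditionally on every forward pass.
    if padding_len > 0 and verbose:
        logger.info(
            "Automatically padding input with %d extra tokens to match the attention window",
            padding_len,
        )
```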
08-25-2020 09:23:12
08-25-2020 09:23:12
Hi! This should be manageable with the logger: ```py import transformers import logging logger = logging.getLogger("transformers") logger.setLevel(logging.ERROR) ```<|||||>Thank you! Sounds great. However, I do think this behavior could simply be reported once at the start of training rather than logged at every step.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,711
closed
Pegasus finetuning: OOM
Epoch 0: 91% 5747/6331 [39:52<04:03, 2.40it/s, loss=75.765, v_num=2]/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning) tcmalloc: large alloc 1083260928 bytes == 0x1aece0000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 1354080256 bytes == 0x21e5c000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 1692606464 bytes == 0x7f10651ce000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 2115764224 bytes == 0x7f0fe700e000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 2644705280 bytes == 0x7f0f495de000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 3305881600 bytes == 0x7f0fe700e000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 4132356096 bytes == 0x7f0e530f2000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 5165449216 bytes == 0x7f0f495de000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 ./finetune_pegasus_xsum.sh: line 15: 876 Killed I appreciate any help. Thank you.
08-25-2020 08:19:07
08-25-2020 08:19:07
Hi @laibamehnaz can you also post env info <|||||>have seen this issue with colab, when the RAM usage suddenly increases colab just crashes it , were you using colab ? Just to confirm can you try using less examples, you can control the number of examples using `--n_train` OR My guess is at that point when it crashed it may have received a longer sentence which could have resulted in the whole batch being large. If running on single GPU, you can use `sortish_sampler` which samples the longer batches first, so we can catch these types of errors early, can be enabled using `--sortish_sampler` <|||||>Yes, I am using Colab. Sure, I will check with --sortish_sampler. <|||||>So I tried what you said but still get the same error. Tried with lesser training examples, still getting the same error. Tried with fewer validation examples as well. It seems like this error comes every time the first validation loop is ended, no matter the size. <|||||>I see. Could you post the env info, GPU, RAM etc ? and the specific command that you ran which resulted in this error ? I will try to reproduce it <|||||>GPU: Tesla K80 RAM: 12GB ./finetune_pegasus_xsum.sh \ --data_dir ./data/ \ --output_dir ./output/ \ --train_batch_size=2 \ --eval_batch_size=2 \ --val_check_interval 0.5 \ --num_train_epochs 3 \ --gradient_accumulation_steps 128 \ --model_name_or_path google/pegasus-xsum<|||||>I'd add `--freeze_embeds --sortish_sampler`. LMK how it goes, happy to help!<|||||>I have tried with them as well. Same issue :(<|||||>Hi @sshleifer , Looks like I hadn't tried with --freeze_embeds. Works well now. Thanks a lot. Even though it leads to full completion, I still get something like this : `Epoch 3: 100% 300/300 [09:26<00:00, 1.89s/it, loss=97.408, v_num=6] ./finetune_pegasus_xsum.sh: line 16: 551 Killed ` And, it doesn't generate summaries for the test set even with --do_predict <|||||>@sshleifer similar issue here #6665<|||||> `--do_predict` doesn't work (there is a PL bug), you have to use `run_eval.py` to evaluate. Here is command I ran to evaluate `pegasus-xsum` on xsum/test data: ```bash mkdir gens export DATA_DIR=xsum python run_eval.py google/pegasus-xsum \ $DATA_DIR/test.source gens/peg_xsum_test_generation.txt \ --reference_path $DATA_DIR/test.target \ --score_path gens/peg_xsum_rouge.txt --task summarization \ --device cuda \ --bs 8 ```<|||||>I have seen "killed" at/towards the end of training a few times and just ignore it.<|||||>Alright. Thank you so much :))<|||||>@sshleifer I got core dumped with this, even with --freeze-embeds on 16GB P100 ```bash !bash finetune_pegasus_xsum.sh \ --train_batch_size 2 \ --eval_batch_size 4 \ --model_name_or_path google/pegasus-large \ --n_train 256 \ --n_val 256 \ --output_dir xsum_pegasus_test_4 \ --data_dir xsum \ --gpus 1 \ --sortish_sampler \ --val_check_interval 0.02 ``` ``` File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_io.py", line 273, in save_checkpoint self._atomic_save(checkpoint, filepath) File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_io.py", line 264, in _atomic_save torch.save(checkpoint, tmp_path) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 365, in save return File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 258, in __exit__ self.file_like.write_end_of_file() RuntimeError: [enforce fail at inline_container.cc:262] . 
unexpected pos 869515520 vs 869515408 terminate called after throwing an instance of 'c10::Error' what(): [enforce fail at inline_container.cc:262] . unexpected pos 869515520 vs 869515408 frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::string const&, void const*) + 0x47 (0x7f71b2235fd7 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so) frame #1: <unknown function> + 0x228ff30 (0x7f71eb51af30 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #2: <unknown function> + 0x228c163 (0x7f71eb517163 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #3: caffe2::serialize::PyTorchStreamWriter::writeRecord(std::string const&, void const*, unsigned long, bool) + 0x17b (0x7f71eb51c10b in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #4: caffe2::serialize::PyTorchStreamWriter::writeEndOfFile() + 0xe1 (0x7f71eb51cca1 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #5: caffe2::serialize::PyTorchStreamWriter::~PyTorchStreamWriter() + 0x115 (0x7f71eb51d495 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so) frame #6: <unknown function> + 0x5a35e3 (0x7f71f9e0b5e3 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so) frame #7: <unknown function> + 0x273c00 (0x7f71f9adbc00 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so) frame #8: <unknown function> + 0x274e4e (0x7f71f9adce4e in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so) frame #9: python3() [0x588a98] frame #10: python3() [0x5ad558] frame #11: python3() [0x5ad56e] frame #12: python3() [0x5ad56e] frame #13: python3() [0x5ad56e] frame #14: python3() [0x5ad56e] frame #15: python3() [0x5ad56e] frame #16: python3() [0x5ad56e] frame #17: python3() [0x5ad56e] frame #18: python3() [0x5ad56e] frame #19: python3() [0x5ad56e] frame #20: python3() [0x5ad56e] frame #21: python3() [0x5ad56e] frame #22: python3() [0x5ad56e] frame #23: python3() [0x5ad56e] frame #24: python3() [0x5ad56e] frame #25: python3() [0x5ad56e] frame #26: python3() [0x5ad56e] frame #27: python3() [0x56b636] <omitting python frames> frame #33: __libc_start_main + 0xe7 (0x7f72058bdb97 in /lib/x86_64-linux-gnu/libc.so.6) finetune_pegasus_xsum.sh: line 14: 2967 Aborted (core dumped) python finetune.py --learning_rate=1e-4 --do_train --do_predict --n_val 1000 --val_check_interval 0.25 --max_source_length 512 --max_target_length 56 --freeze_embeds --max_target_length 56 --label_smoothing 0.1 "$@" ``` GPU: P100 (16GB) RAM: 12 GB [colab](https://colab.research.google.com/drive/1d0FfFNGUkRrgqkbUQRQEB0VqMzuVET7Z?usp=sharing)<|||||>Crazy traceback. Is that torch 1.6 @patil-suraj ?<|||||>Yes, 1.6.0+cu101<|||||>Also `--freeze_embeds` is already present in `finetune_pegasus_xsum.sh` [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh#L13), so I'm a bit confused about how adding extra `--freeze_embeds` solved @laibamehnaz 's issue.<|||||>Yes, i was very confused about that too.<|||||>I tried with just 100 training examples, and added an extra --freeze_embeds and it worked. Now I am trying on the entire dataset and checking.<|||||>LMK how it goes, I'm thinking that this is not GPU OOM. 
This issue was previously observed when RAM usage suddenly increased on colab<|||||>Same issue again :/ `Epoch 0: 50% 3415/6831 [49:00<49:00, 1.16it/s, loss=88.546, v_num=7]/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning) tcmalloc: large alloc 1202077696 bytes == 0x181364000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 1502601216 bytes == 0x212e2000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 1878253568 bytes == 0x7f9ef00c2000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 2347819008 bytes == 0x7f9e641b4000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 2934775808 bytes == 0x7f9db52e2000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 3668475904 bytes == 0x7f9e641b4000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 4585594880 bytes == 0x7f9ca3db8000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 tcmalloc: large alloc 5731999744 bytes == 0x7f9db52e2000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 ./finetune_pegasus_xsum.sh: line 16: 453 Killed `<|||||>Again after validation loop ?<|||||>Yes, exactly after the first validation loop.<|||||>Did you notice the RAM usage ? 
Seems related to serialisation or RAM <|||||>No, I didn't. I don't think I can check now, right?<|||||>Yes, you need to check while it's executing. You can check RAM usage in the top right corner of Colab while it runs.<|||||>Sure, lemme run it again and check. Will let you know.<|||||>Same thing again, fails right after the first validation loop. RAM usage at the exact end of the validation loop: 7.86GB/12.72GB, and then the same error as before. <|||||>@laibamehnaz `tcmalloc` is Google's fancy `malloc` alternative and throws this error when it thinks that the requested memory might exceed the available memory. `tcmalloc: large alloc 5731999744 bytes` means it's trying to alloc ~5.73GB, so I think memory usage peaks when saving the checkpoint (which is large for Pegasus, ~5GB), which results in this error. Strangely, I ran this multiple times with 100 XSUM examples on the same K80 and 12 GB RAM instance and didn't see this error. This is the command that I used: ```bash !bash finetune_pegasus_xsum.sh \ --model_name_or_path google/pegasus-xsum \ --data_dir xsum \ --output_dir xsum_pegasus_test_4 \ --train_batch_size 2 \ --eval_batch_size 2 \ --num_train_epochs 1 \ --n_train 100 \ --n_val 100 \ --sortish_sampler \ --gpus 1 \ --val_check_interval 0.25 \ --gradient_accumulation_steps 4 \ ``` [colab](https://colab.research.google.com/drive/1iyhtazbj6yPKPWHuy02vwjDPpH4R5p-G?usp=sharing) Maybe switching to a higher-RAM instance would solve this issue, but let's wait for @sshleifer's answer.<|||||>Right right, I understand. Actually, I am running this on my own dataset, and not on XSUM.<|||||>`%%bash source venv/bin/activate cd transformers cd examples cd seq2seq ./finetune.sh \ --data_dir /content/xsum \ --train_batch_size=1 \ --eval_batch_size=1 \ --output_dir=xsum_results \ --num_train_epochs 1 \ --model_name_or_path facebook/bart-large ` I am getting an error where the fine-tuning gets killed. This was the command I am running for fine-tuning BART; I am using a python3 virtual environment and working on a Google Cloud Platform instance. I think the issue is that torch.cuda.is_available() currently returns False. When I tried running it on the terminal, it showed that I need to install an NVIDIA driver. However, when I try to run sudo /opt/deeplearning/install-driver.sh, I get an error. <|||||>@mc2259 I wish I could help. When I get NVIDIA driver errors on GCP I just get a new instance 🤷. Feel free to start a GCP troubleshooting thread on the forums! I am a big GCP user and would love to exchange tips and tricks. https://discuss.huggingface.co/ <|||||>@patil-suraj your intuition sounds reasonable, I am on a higher ram instance and can't reproduce. Smaller Checkpoint strategies: At some point I was passing `save_weights_only` to the checkpointer [here](https://github.com/huggingface/transformers/blob/b6b2f2270fe6c32852fc1b887afe354b7b79d18c/examples/seq2seq/callbacks.py#L93), but I had subsequent `--resume_from_checkpoint` failure. This was on pl 0.8.1 so I would welcome a PR that added it back and tested that `--resume_from_checkpoint` works on the skinnier checkpoint.<|||||>@sshleifer, looking at the `pl` docs, `save_weights_only` won't save the optimizer state, so `resume_from_checkpoint` will fail. I'll see if there's an alternative. I think `Seq2SeqTrainer` will take care of such issues, since it saves each `state_dict` separately.<|||||>> @patil-suraj your intuition sounds reasonable, I am on a higher ram instance and can't reproduce. 
> > Smaller Checkpoint strategies: > > At some point I was passing `save_weights_only` to the checkpointer [here](https://github.com/huggingface/transformers/blob/b6b2f2270fe6c32852fc1b887afe354b7b79d18c/examples/seq2seq/callbacks.py#L93), but I had subsequent `--resume_from_checkpoint` failure. This was on pl 0.8.1 so I would welcome a PR that added it back and tested that `--resume_from_checkpoint` works on the skinnier checkpoint. Alright, I can try this. For model inference, saving only the weights should be fine. Thank you. <|||||>I trained the model on my entire dataset, but when I generate the summaries, the first word is always missing. A few examples of the summaries generated on the test set: doesn't want to go to Chloe's family reunion. Ivy wants Carter to stay with his family. got a Christmas tree from Sammie's work. Bart and Ingrid want to buy it for Noah's nativity. 's guitar broke down. The sound engineer told him to put it on the blacklist. Trent and Hugh will see each other in Kingston next show. 's birthday is on Saturday at 8. bought a black top from H&M. Marie will give it back to Sophie. and Maxx are going to the birthday party. What could be the mistake here? And how can I fix this? Thank you.<|||||>Hey @laibamehnaz, set `force_bos_token_to_be_generated` in the config file to `False` before you call `.generate`. You can do it using `model.config.force_bos_token_to_be_generated = False`. Also, did switching to a high-RAM instance solve your issue?<|||||>Sure, lemme try this and check. Actually, I don't have the resources to switch to a higher-RAM instance. Tried with 12GB RAM itself, but only saved the weights as pointed out by @sshleifer. <|||||>> Hey @laibamehnaz, > > set `force_bos_token_to_be_generated` in the config file to `False` before you call `.generate`. > > You can do it using > `model.config.force_bos_token_to_be_generated = False` > > Also, did switching to a high-RAM instance solve your issue? Doesn't seem to help with the summaries. They are still being generated without the first word.<|||||>I'm investigating and will update in either direction tonight.<|||||>Thank you so much :))<|||||>My fix works; master will have it when #6654 is merged, which will probably be next week.<|||||> I am running into a similar issue with this command: %%bash source venv/bin/activate cd transformers cd examples cd seq2seq ./finetune.sh \ --data_dir ./xsum \ --train_batch_size=1 \ --eval_batch_size=1 \ --output_dir=xsum_results \ --num_train_epochs 1 \ --model_name_or_path facebook/bart-large I am getting ./finetune.sh: line 14: 16531 Killed python finetune.py --learning_rate=3e-5 --fp16 --gpus 1 --do_train --do_predict --n_val 1000 --val_check_interval 0.1 "$@" <|||||>@patil-suraj I tried running the Pegasus fine-tuning commands on the Colab you linked after connecting to my local runtime and I still get 'killed'. I am not sure why that is happening. I am connected to my local virtual machine runtime and am using a virtual environment. https://colab.research.google.com/drive/1pdHcn2E5CjSGVEEOQo2qCi5Bby5-pOOZ<|||||>Hey @mc2259, when did it get killed? If it got killed at the end of training then it's fine; as @sshleifer said, there's a bug with `--do_predict`.<|||||>@patil-suraj It is getting killed in the beginning. The same thing is happening with my BART fine-tuning. I am uploading screenshots if that is helpful.<|||||>I have it working pretty well **on master** on an NVIDIA-RTX with the following command. 
(It might get killed, still running, but now at ROUGE2 >= 23.6 for xsum, where the goal is ~24.5). Sorry it's gross. Feel free to pretty it up and repost. + Set $BS as big as possible to fit in memory. + Set $GAS = 256/$BS + This takes a **long** time. Has been running for 25h for me and scheduled to last about 30h. 5hr/epoch on an NVIDIA-RTX. I'm working on distilled versions, but don't have a good way to distill pegasus-large. + It might still get `Killed` at the end. + wandb link: https://app.wandb.ai/sshleifer/transformers_fork-examples_seq2seq/runs/3cz2fe87?workspace=user-sshleifer + important changes: new decoder_input_ids on master to solve the truncation problem + `--adafactor` optimizer. Have a great weekend! ### Command ```bash python finetune.py --task summarization --learning_rate=3e-4 --do_train --do_predict \ --val_check_interval 0.25 --n_val 1000 --data_dir xsum --max_source_length 512 --max_target_length=56 \ --freeze_embeds --model_name_or_path google/pegasus-large --tokenizer_name google/pegasus-xsum \ --warmup_steps 500 --dropout 0.1 --attention_dropout 0.1 --label_smoothing 0.1 \ --train_batch_size=$BS --eval_batch_size=8 --gradient_accumulation_steps=$GAS \ --logger_name wandb --sortish_sampler --gpus 1 --output_dir xsum_ft_ls_mask_fix \ --num_train_epochs 6 --adafactor ```
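For reference, a minimal sketch of the `force_bos_token_to_be_generated` workaround mentioned above, assuming a fine-tuned Pegasus checkpoint saved by `finetune.py`; the checkpoint directory below is a placeholder, not something given in the thread:

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_dir = "./output/best_tfmr"  # placeholder path; point this at your own fine-tuned checkpoint
tokenizer = PegasusTokenizer.from_pretrained(model_dir)
model = PegasusForConditionalGeneration.from_pretrained(model_dir)

# The flag suggested in the thread; it only has an effect if the config defines it.
model.config.force_bos_token_to_be_generated = False

batch = tokenizer(
    ["Some dialogue to summarize ..."],
    truncation=True,
    padding="longest",
    return_tensors="pt",
)
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```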
transformers
6,710
closed
[squad] make examples and dataset accessible from SquadDataset object
In order to run evaluation on the SQuAD dataset with squad_evaluate, the user needs access to both the examples loaded into the dataset and the TensorDataset containing values such as unique_id that are used when constructing the list of SquadResult objects. This PR surfaces the examples and dataset to the user so that they can be accessed directly. For an example of why access to those is needed, see how evaluation is currently done in examples/run_squad.py. The SquadDataset object attempts to wrap up some of this functionality, but without access to examples and dataset the evaluation is not possible.
08-25-2020 06:38:27
08-25-2020 06:38:27
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=h1) Report > Merging [#6710](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0344428f7955675847ef95ddcb4980236b6f8721?el=desc) will **decrease** coverage by `1.27%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6710/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6710 +/- ## ========================================== - Coverage 80.06% 78.78% -1.28% ========================================== Files 156 156 Lines 28386 28391 +5 ========================================== - Hits 22726 22367 -359 - Misses 5660 6024 +364 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/datasets/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL3NxdWFkLnB5) | `44.31% <0.00%> (-2.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.34% <0.00%> (-1.63%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.76%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+6.20%)` | :arrow_up: | | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=footer). Last update [0344428...20d0228](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@sgugger @LysandreJik thanks so much for the comments/suggestions! I have updated the code to include support for the legacy cache format. I had a question on one comment, but if there are any other changes needed please let me know.
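To make the motivation concrete, here is a rough sketch (not the exact `run_squad.py` code) of the evaluation flow this change enables. The `eval_dataset.examples` / `eval_dataset.features` attribute names follow the PR's intent and may differ from the merged implementation:

```python
import torch
from torch.utils.data import DataLoader, SequentialSampler
from transformers.data.metrics.squad_metrics import compute_predictions_logits, squad_evaluate
from transformers.data.processors.squad import SquadResult


def evaluate_squad(model, tokenizer, eval_dataset, device="cpu", batch_size=8):
    # Assumes eval_dataset was built with mode="dev" so batches contain no label tensors.
    loader = DataLoader(eval_dataset, sampler=SequentialSampler(eval_dataset), batch_size=batch_size)
    model.to(device).eval()
    all_results, feature_idx = [], 0
    for batch in loader:
        with torch.no_grad():
            outputs = model(**{k: v.to(device) for k, v in batch.items()})
        start_logits, end_logits = outputs[0], outputs[1]
        for i in range(start_logits.size(0)):
            # SequentialSampler keeps batch order aligned with the feature list,
            # so we can recover each feature's unique_id by a running index.
            unique_id = eval_dataset.features[feature_idx].unique_id
            all_results.append(SquadResult(unique_id, start_logits[i].tolist(), end_logits[i].tolist()))
            feature_idx += 1

    predictions = compute_predictions_logits(
        eval_dataset.examples,      # surfaced by this PR
        eval_dataset.features,      # feature-level unique_id lives here
        all_results,
        20, 30, True,               # n_best_size, max_answer_length, do_lower_case
        "predictions.json", "nbest_predictions.json", "null_odds.json",
        False, False, 0.0,          # verbose_logging, version_2_with_negative, null_score_diff_threshold
        tokenizer,
    )
    return squad_evaluate(eval_dataset.examples, predictions)
```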
transformers
6,709
closed
consolidate tf activation functions
e.g.
```
def gelu(x):
    """Gaussian Error Linear Unit.

    This is a smoother version of the RELU.
    Original paper: https://arxiv.org/abs/1606.08415

    Args:
        x: float Tensor to perform activation.

    Returns:
        `x` with the GELU activation applied.
    """
    cdf = 0.5 * (1.0 + tf.tanh((np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3)))))
    return x * cdf
```
See ACT2FN for torch.
08-25-2020 04:42:41
08-25-2020 04:42:41
Done by @jplu! https://github.com/huggingface/transformers/blob/master/src/transformers/activations_tf.py#L50
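For illustration, a simplified sketch of what such a consolidated registry can look like; this is not a verbatim copy of `activations_tf.py`:

```python
import math

import tensorflow as tf


def gelu(x):
    """Tanh approximation of the Gaussian Error Linear Unit."""
    x = tf.convert_to_tensor(x)
    cdf = 0.5 * (1.0 + tf.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * tf.pow(x, 3))))
    return x * cdf


# One shared mapping from config strings to activation callables, mirroring torch's ACT2FN.
ACT2FN = {
    "gelu": gelu,
    "relu": tf.keras.activations.relu,
    "tanh": tf.keras.activations.tanh,
}


def get_tf_activation(activation_string):
    if activation_string not in ACT2FN:
        raise KeyError(
            f"function {activation_string} not found in ACT2FN mapping {list(ACT2FN.keys())}"
        )
    return ACT2FN[activation_string]
```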
transformers
6,708
closed
[bart] rename self-attention -> attention
Makes more sense since the module is also used for cross-attention. Does not change the state dict or break slow tests/backwards compatibility.
08-25-2020 04:29:49
08-25-2020 04:29:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=h1) Report > Merging [#6708](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f58903bb62870342eae52f5a02c9105ec6f9b1e?el=desc) will **increase** coverage by `0.12%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6708/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6708 +/- ## ========================================== + Coverage 80.02% 80.15% +0.12% ========================================== Files 157 157 Lines 28586 28586 ========================================== + Hits 22876 22912 +36 + Misses 5710 5674 -36 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.06% <100.00%> (ø)` | | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (+7.18%)` | :arrow_up: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=footer). Last update [0f58903...4db250c](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
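A tiny self-contained illustration (not the Bart source) of why a class rename like this leaves checkpoints loadable: state dict keys come from attribute names, not class names.

```python
import torch.nn as nn


class Attention(nn.Module):  # previously named SelfAttention
    def __init__(self, embed_dim):
        super().__init__()
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)


class DecoderLayer(nn.Module):
    def __init__(self, embed_dim=8):
        super().__init__()
        self.self_attn = Attention(embed_dim)     # self-attention
        self.encoder_attn = Attention(embed_dim)  # the same class reused for cross-attention


# Keys depend only on the attribute names (self_attn, encoder_attn, q_proj, ...),
# so renaming the class does not affect saved checkpoints.
print(list(DecoderLayer().state_dict().keys())[:4])
# e.g. ['self_attn.q_proj.weight', 'self_attn.q_proj.bias', 'self_attn.k_proj.weight', 'self_attn.k_proj.bias']
```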
transformers
6,707
closed
copy of #6654 for easier pulling
```bash git fetch git checkout batch-parity-cleaner ```
08-25-2020 03:44:39
08-25-2020 03:44:39
transformers
6,706
closed
ci/gh/self-scheduled: add newline to make examples tests run even if src/ tests fail
At the moment, the examples tests don't run if the src/ tests don't pass; this is not the case for self-push, I don't think. I don't know why. Maybe you have an idea @julien-c ? https://github.com/huggingface/transformers/runs/1024137280?check_suite_focus=true ![image](https://user-images.githubusercontent.com/6045025/91119061-5a08da80-e660-11ea-8b48-916a2a54c019.png) Going to merge this tomorrow since it can't hurt, then try harder/Stack Overflow if it doesn't work.
08-25-2020 03:12:04
08-25-2020 03:12:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=h1) Report > Merging [#6706](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ebc9699fad3889a6e882dce1e4c465232d73438?el=desc) will **increase** coverage by `0.64%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6706/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6706 +/- ## ========================================== + Coverage 78.80% 79.44% +0.64% ========================================== Files 156 156 Lines 28386 28386 ========================================== + Hits 22369 22552 +183 + Misses 6017 5834 -183 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=footer). Last update [0ebc969...1887abe](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I'm pretty sure that the tests stop as soon as there's a failure in one of the items. It would seem logical since there could be a relationship between tests, e.g., a test needing the output of the previous test to continue. Still merging because it looks more consistent this way!
transformers
6,705
closed
PegasusXSUMIntegrationTest.test_pegasus_xsum_summary
Failing on the self-scheduled CI run; cause: force_bos.
08-25-2020 03:08:30
08-25-2020 03:08:30
transformers
6,704
closed
[s2s] round bleu, rouge to 4 digits
The current output is ridiculous to copy-paste into the forums/GitHub/Slack: ``` { "rouge1": 43.38281995775625, "rouge2": 20.59345268595595, "rougeL": 30.00905449498762 } ``` Also renames `calculate_bleu_score` -> `calculate_bleu` for consistency with `calculate_rouge`.
08-25-2020 03:04:57
08-25-2020 03:04:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=h1) Report > Merging [#6704](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ebc9699fad3889a6e882dce1e4c465232d73438?el=desc) will **increase** coverage by `1.26%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6704/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6704 +/- ## ========================================== + Coverage 78.80% 80.06% +1.26% ========================================== Files 156 156 Lines 28386 28386 ========================================== + Hits 22369 22727 +358 + Misses 6017 5659 -358 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.24% <0.00%> (-3.53%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=footer). Last update [0ebc969...c0bbc79](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
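A hedged sketch of the change described in this PR; the real helpers live in `examples/seq2seq/utils.py` and may differ in details:

```python
from rouge_score import rouge_scorer, scoring
from sacrebleu import corpus_bleu


def calculate_bleu(output_lns, refs_lns, **kwargs) -> dict:
    """Light wrapper around sacrebleu's corpus_bleu, rounded to 4 digits."""
    return {"bleu": round(corpus_bleu(output_lns, [refs_lns], **kwargs).score, 4)}


def calculate_rouge(output_lns, reference_lns, rouge_keys=("rouge1", "rouge2", "rougeL")) -> dict:
    """Aggregate ROUGE F-measures over paired lines and round to 4 digits."""
    scorer = rouge_scorer.RougeScorer(list(rouge_keys), use_stemmer=True)
    aggregator = scoring.BootstrapAggregator()
    for reference_ln, output_ln in zip(reference_lns, output_lns):
        aggregator.add_scores(scorer.score(reference_ln, output_ln))
    result = aggregator.aggregate()
    return {k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()}
```

With rounding applied inside the helpers, downstream scripts print four-digit scores without any extra formatting.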
transformers
6,703
closed
Adding model cards for 5 models
Adding model cards for: - iarfmoose/bert-base-cased-qa-evaluator - iarfmoose/roberta-base-bulgarian-pos - iarfmoose/roberta-base-bulgarian - iarfmoose/roberta-small-bulgarian-pos - iarfmoose/roberta-small-bulgarian
08-25-2020 01:33:33
08-25-2020 01:33:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=h1) Report > Merging [#6703](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/640550fc7a1e311915ead1bcca6dacea0c503faf?el=desc) will **decrease** coverage by `0.09%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6703/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6703 +/- ## ========================================== - Coverage 77.85% 77.75% -0.10% ========================================== Files 146 146 Lines 26326 26326 ========================================== - Hits 20496 20471 -25 - Misses 5830 5855 +25 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (+5.26%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+29.90%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=footer). Last update [640550f...8293ed6](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).