repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 13,734 | closed | Silence warning in gradient checkpointing when it's False | # What does this PR do?
Currently, when loading a `bert-base-cased` model, one gets the warning
```
UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
```
because its config has the line `"gradient_checkpointing": false`.
This PR issues the warning only when the value passed for `gradient_checkpointing` is `True`. | 09-24-2021 19:02:31 | 09-24-2021 19:02:31 | |
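For illustration, the guard described above amounts to something like the following (a sketch only, not the actual diff; the helper name is made up):
```
import warnings

def maybe_warn_gradient_checkpointing(config_kwargs):
    # Only warn when the deprecated flag is actually set to True, so configs
    # that merely carry `"gradient_checkpointing": false` stay silent.
    if config_kwargs.get("gradient_checkpointing", False):
        warnings.warn(
            "Passing `gradient_checkpointing` to a config initialization is deprecated; "
            "use `model.gradient_checkpointing_enable()` instead.",
            UserWarning,
        )
```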
transformers | 13,733 | closed | [Examples] speech recognition - remove gradient checkpointing | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
`gradient_checkpointing` has been moved to the training args.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-24-2021 16:28:11 | 09-24-2021 16:28:11 | cc @sgugger <|||||>Nice! |
transformers | 13,732 | closed | TypeError in tensorflow/run_summarization.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.0.dev0
- Platform: Linux-5.4.0-1055-azure-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.5.0 (Yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Distributed
### Who can help:
@patrickvonplaten, @patil-suraj, @Rocketknight
Models: facebook/bart
datasets: xsum
* [ ] the official example scripts: (give details below)
run_summarization.py
relative path: examples/tensorflow/summarization
Steps to reproduce the behavior: (Note that --max_train_samples is optional)
python run_summarization.py --model_name_or_path facebook/bart-base --dataset_name xsum --dataset_config "3.0.0" --output_dir /tmp/tst-summarization --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --num_train_epochs 3 --do_train --do_eval --max_train_samples 100
Error message
- INFO - __main__ - Evaluation...
0%| | 0/2833 [00:01<?, ?it/s]
Traceback (most recent call last):
File "run_summarization.py", line 663, in <module>
main()
File "run_summarization.py", line 639, in main
generated_tokens = model.generate(**batch)
File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/pbodigut1/code/Users/pbodigut/transformers/v-4.10/transformers/src/transformers/generation_tf_utils.py", line 736, in generate
output = self._generate_beam_search(
File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/pbodigut1/code/Users/pbodigut/transformers/v-4.10/transformers/src/transformers/generation_tf_utils.py", line 1102, in _generate_beam_search
model_inputs = self.prepare_inputs_for_generation(
TypeError: prepare_inputs_for_generation() got multiple values for argument 'decoder_input_ids'
## Expected behavior
Successfully run the evaluation step.
| 09-24-2021 16:08:58 | 09-24-2021 16:08:58 | Hello,
Facing the same issue while using https://github.com/huggingface/transformers/tree/master/examples/tensorflow/translation for finetuning MarianMT's opus-mt-en-hi model.
## Environment info:
transformers = '4.12.0.dev0'
tensorflow = 2.6.0
Python version: 3.8.5
Using GPU in script?: Yes
## Stack Trace:
Traceback (most recent call last):
File "run_translation.py", line 626, in <module>
main()
File "run_translation.py", line 610, in main
generated_tokens = model.generate( input_ids = input_ids, attention_mask = attention_mask, decoder_input_ids = decoder_input_ids, labels = labels)
File "/home/t-hdiddee/hf/lib/python3.8/site-packages/transformers/generation_tf_utils.py", line 736, in generate
output = self._generate_beam_search(
File "/home/t-hdiddee/hf/lib/python3.8/site-packages/transformers/generation_tf_utils.py", line 1102, in _generate_beam_search
model_inputs = self.prepare_inputs_for_generation(
TypeError: prepare_inputs_for_generation() got multiple values for argument 'decoder_input_ids'
<|||||>@Rocketknight1, wondering if there is any update on this issue.<|||||>@Rocketknight1 - could you take a look? :-)<|||||>I tested this simple fix, and it seems to work. In _run_summarization.py_, I added the line to remove ```decoder_input_ids``` from the batch dictionary (see below). My guess is that duplicate copies of ```decoder_input_ids``` are being passed to `prepare_inputs_for_generation()` function, after invoking ```model.generate(**batch)```.
```
if training_args.do_eval:
    logger.info("Evaluation...")
    for batch, labels in tqdm(
        tf_eval_dataset, total=len(eval_dataset) // training_args.per_device_eval_batch_size
    ):
        # drop decoder_input_ids so that generate() builds them itself
        decoder_input_ids = batch.pop("decoder_input_ids", None)
        generated_tokens = model.generate(**batch)
```
@patrickvonplaten and @Rocketknight1 could you confirm if the change looks good to you and the fix doesn't break the BART model inference logic?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @bpraveenk, firstly, sorry for the very long delay here! You're correct that supplying `decoder_input_ids` seems to break `generate()`. Your solution to `pop()` that key from the dictionary probably fixes the issue, but the underlying problem is that `generate()` simply doesn't know what to do with a `decoder_input_ids` input key. We're currently in the middle of entirely refactoring our TF `generate()` function for performance/bugfixing reasons, so I'll put that in the list of stuff to be investigated there, but I don't have a timeline on it unfortunately.
In the meantime, if you want to submit a PR with that change, please do! However, it will definitely only be a temporary workaround until we figure things out properly.<|||||>Thank you for confirming the fix @Rocketknight1 . Could you point me to the PR submission instructions, I will submit the fix and I will add a comment that it is a temporary fix. <|||||>Hi @bpraveenk, thanks for the offer! We have instructions for contributors [here](https://huggingface.co/docs/transformers/contributing).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,731 | closed | [TMP] Do not merge. | <!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 09-24-2021 12:53:48 | 09-24-2021 12:53:48 | Was just a temp to fix some tests. |
transformers | 13,730 | closed | Add model card creation snippet to example scripts | # What does this PR do?
This PR adds model card creation in case the `push_to_hub` argument is not used.
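For reference, the added snippet looks roughly like this at the end of each script (a sketch; `trainer`, `model_args` and `training_args` come from the script itself, and the exact kwargs vary per example):
```
kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "text-classification"}
if training_args.push_to_hub:
    trainer.push_to_hub(**kwargs)
else:
    trainer.create_model_card(**kwargs)
```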
@patrickvonplaten | 09-24-2021 10:20:57 | 09-24-2021 10:20:57 | Looks good to me - thanks for the PR!<|||||>@sgugger Does this look fine? I went over the scripts in `pytorch` examples.<|||||>That's great! Can you just run `make style` on your branch to take care of the quality issue? |
transformers | 13,729 | closed | [Tests] FNetTokenizer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In the official `test_special_tokens_initialization` the fast tokenizer is loaded from the slow one. This just takes too long for FNetTokenizer. This PR overwrites the test to make it fast and adds a slow one so that 100% coverage is kept.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-24-2021 10:10:13 | 09-24-2021 10:10:13 | |
transformers | 13,728 | closed | Error when loading weights with a different 'projection_dim' in DPRQuestionEncoder | ## Environment info
```
- `transformers` version: 4.10.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help
research_projects/rag: @patrickvonplaten, @lhoestq
## To reproduce
```
from transformers import DPRQuestionEncoder, DPRConfig
model_name = "voidful/dpr-question_encoder-bert-base-multilingual"
config = DPRConfig.from_pretrained(model_name)
config.projection_dim = 512
model = DPRQuestionEncoder.from_pretrained(model_name, config=config)
```
# Error
```
NotImplementedError Traceback (most recent call last)
<ipython-input-16-0256969428e8> in <module>()
11 config.projection_dim = 512
12
---> 13 model = DPRQuestionEncoder.from_pretrained(model_name, config=config)
2 frames
/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py in _init_weights(self, module)
577 Initialize the weights. This method should be overridden by derived class.
578 """
--> 579 raise NotImplementedError(f"Make sure `_init_weigths` is implemented for {self.__class__}")
580
581 def tie_weights(self):
NotImplementedError: Make sure `_init_weigths` is implemented for <class 'transformers.models.dpr.modeling_dpr.DPRQuestionEncoder'>
```
## Expected behavior
Be able to change `config.projection_dim` without errors.
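For context, changing `projection_dim` means the projection layer cannot be loaded from the checkpoint and has to be randomly initialized, which is what calls the missing `_init_weights`. A sketch of the usual BERT-style implementation that other models use (not necessarily the exact fix in the linked PR):
```
import torch.nn as nn

def _init_weights(self, module):
    """Typical weight initialization for BERT-like models."""
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.Embedding):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.padding_idx is not None:
            module.weight.data[module.padding_idx].zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)
```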
| 09-24-2021 09:44:57 | 09-24-2021 09:44:57 | This is indeed an error and should be fixed by the PR linked above |
transformers | 13,727 | closed | Add support for XLM-R XL and XXL models by modeling_xlm_roberta_xl.py | This PR adds support for the newly released XL and XXL models for XLM-R. These models are described in the "Larger-Scale Transformers for Multilingual Masked Language Modeling" paper.
Thank you to @patrickvonplaten and @stefan-it for the review I received on #13210.
Following that review, I added modeling_xlm_roberta_xl.py and, as the conversion script, convert_xlm_roberta_xl_original_pytorch_checkpoint_to_pytorch.py.
I compared fairseq and transformers side by side and verified that both produce the same outputs:
torch.Size([1, 11, 250880]) torch.Size([1, 11, 250880])
max_absolute_diff = 0.000186920166015625
Do both models output the same tensors? 🔥
Saving model to converted_xlmr_xl
Configuration saved in converted_xlmr_xl/config.json
Model weights saved in converted_xlmr_xl/pytorch_model.bin
Since the fairseq RoBERTa to transformers conversion was written a long time ago,
the transformers architecture has drifted quite far from the fairseq code it originally started from,
which made it confusing to write the conversion correctly.
I synced the transformers code to follow the fairseq model structure; a usage sketch of the converted checkpoints is shown below.
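Usage sketch (the repo name `facebook/xlm-roberta-xl` is an assumption about the final official location of the checkpoint):
```
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-roberta-xl")
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-roberta-xl")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, sequence_length, 250880)
```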
- [ ] add test for XLM-R XL and XXL
- [ ] upload model for XLM-R XL and XXL to official repo | 09-24-2021 09:28:44 | 09-24-2021 09:28:44 | Thanks for the PR @Soonhwan-Kwon!
Could you also add a test file and some integration tests? :-)<|||||>@patrickvonplaten I started to work on the test file. It seems the test needs models uploaded to the official repo, but how can I upload model files for xlm-roberta-xl or xlm-roberta-xxl to the official repo? <|||||>Hey @Soonhwan-Kwon,
Thanks a lot for working and this and sorry to reply so late!
Would it be ok to upload the checkpoints for now under your name on the hub and to make the tests pass and then in a last step, will move the checkpoints to the official organization?
Let me know if you need some help fixing the last steps :-)<|||||>> Hey @Soonhwan-Kwon,
>
> Thanks a lot for working and this and sorry to reply so late! Would it be ok to upload the checkpoints for now under your name on the hub and to make the tests pass and then in a last step, will move the checkpoints to the official organization?
>
> Let me know if you need some help fixing the last steps :-)
Thank you for the reply. I'm in the middle of uploading the models, but it takes time for the xxlarge (over 24GB) model.<|||||>@patrickvonplaten I have uploaded all the models, but I have no idea how to finish the last steps because I'm kind of a newbie here. How can I finish them? Thank you in advance.<|||||>@Soonhwan-Kwon, could you maybe also add a configuration file (just copy the xlm-roberta one) and also add a full test suite for the model? :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> @Soonhwan-Kwon, could you maybe also add a configuration file (just copy the xlm-roberta one) and also add a full test suite for the model? :-)
Sorry for the late response. I already added config.json, so is it the tokenizer.json you're talking about? I also added a simple test for the models (tests/test_modeling_xlm_roberta_xl.py), but where can I find the full test suite?<|||||>Hey @Soonhwan-Kwon,
I meant more a new `configuration_xlm_roberta_xl.py` python file that is more or less a copy of `configuration_xlm_robert.py`:-) But I see that the configs are exactly similar so maybe we can leave as is.
@sgugger @LysandreJik - This PR adds the checkpoints of https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/ to `transformers`. The model is essentially a "scaled-up" version of https://huggingface.co/docs/transformers/master/en/model_doc/xlmroberta#overview . Since the "scaled-up" version has a significantly different architecture (layer_norm is used very different amongst others) we decided to make a new `modeling_xlm_roberta_xl.py` model file. Now would it be ok for you to a) **not** have a corresponding `configuration_xlm_roberta_xl.py` and just use the `config_xlm_roberta.py` code or do you prefer b) adding a new `configuration_xlm_roberta_xl.py` file for consistency? I'm a bit indifferent here, but do slighly prefer b). What do you think?
@Soonhwan-Kwon - there are some failing tests which I think can partly be solved by rebasing to current master. Otherwise, if ok for you I'm also happy to dive in the PR and help you finish the last parts of it. Let me know what you prefer :-)<|||||>Hey @Soonhwan-Kwon, thanks a lot for your PR!!
@patrickvonplaten, I prefer `b)`: a lot of the library is built on the assumption that you have one configuration file/object per modeling file/model objects. Since we've authorized auto models to map one configuration to multiple models this isn't as much of an issue as it could have been in the past, but I'm positive we'll find edge cases where it doesn't work as well as we expect it to simply because of the wrong assumption.<|||||>Also, since it falls into our "new architecture test", there should be a new folder regrouping this modeling file and configuration file instead of putting everything in the xlm-roberta folder.<|||||>> Hey @Soonhwan-Kwon,
>
> I meant more a new `configuration_xlm_roberta_xl.py` python file that is more or less a copy of `configuration_xlm_robert.py`:-) But I see that the configs are exactly similar so maybe we can leave as is.
>
> @sgugger @LysandreJik - This PR adds the checkpoints of https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/ to `transformers`. The model is essentially a "scaled-up" version of https://huggingface.co/docs/transformers/master/en/model_doc/xlmroberta#overview . Since the "scaled-up" version has a significantly different architecture (layer_norm is used very different amongst others) we decided to make a new `modeling_xlm_roberta_xl.py` model file. Now would it be ok for you to a) **not** have a corresponding `configuration_xlm_roberta_xl.py` and just use the `config_xlm_roberta.py` code or do you prefer b) adding a new `configuration_xlm_roberta_xl.py` file for consistency? I'm a bit indifferent here, but do slighly prefer b). What do you think?
>
> @Soonhwan-Kwon - there are some failing tests which I think can partly be solved by rebasing to current master. Otherwise, if ok for you I'm also happy to dive in the PR and help you finish the last parts of it. Let me know what you prefer :-)
@patrickvonplaten Sure, I would be glad if you could help with the last parts; feel free to dive into this PR.<|||||>@patrickvonplaten I added you as a collaborator on my repo, in case you need access.<|||||>Thanks @Soonhwan-Kwon - I'll try to tackle this tomorrow :-) <|||||>@Soonhwan-Kwon - I've done some changes and left some final To-Dos in case you would like to tackle them :-)
We should also add this model to the README.md with this paper: https://arxiv.org/pdf/2105.00572.pdf and give it a doc
Feel free to finish those last tasks if you want. Otherwise, I think I'm available to do this by mid- or end of the next week :-)<|||||>Thanks for working on that @Soonhwan-Kwon . I made some minor suggestions and will look at the tokenization part now (to check if there are any differences between XLM-R and XLM-R-XL/XXL :)<|||||>The tokenization part is working as expected. Here are some details:
* Underlying sentence piece models (XLM-R and XLM-R-XL) are identical (checked that via `torch.hub.load`, that downloads the model and stores them under `~/.cache/torch/pytorch_fairseq`). Checksums are the same.
* Tokenizer mapping (fairseq to spm model) is thankfully the same as for XLM-R, here I documented that mapping:
https://github.com/huggingface/transformers/blob/2e9af294940083915ccb2740a7c8d5b154194f15/src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py#L163-L167
* But where does this vocab size "mismatch" come from? XLM-R has a vocab size of 250,002, whereas XLM-R-XL has 250,880.
Fairseq comes with its own dictionary file (`dict.txt`): for XLM-R it has 249,997 entries, and for XLM-R-XL it has 250,875 entries.
The dictionary file for XLM-R-XL is the same as XLM-R, but it contains `madeupword` tokens ranging from `madeupword0` to `madeupword877` at the end.<|||||>@stefan-it - it would also be great if you could do a final review :-)<|||||>Really cool! I'm currently running experiments on token classification with that new model :hugs: <|||||>@patrickvonplaten @sgugger @stefan-it Thank you for the merge, it was a great experience, and I came to respect committers of transformers. And below revert is just miss click, sorry. |
transformers | 13,726 | closed | [WIP] Tensor model parallelism for GPTNeo model. | ## Changes
1. Add `parallelization_utils.py`
2. Add parallel modules like row & column parallel linear, vocab parallel embedding, and cross entropy to `modeling_utils.py`
3. Add `vocab_parallel_embedding` and `make_vocab_size_divisible_by` attribute to `GPTNeoConfig`
4. Add `GPTNeoParallelismPolicy` in `configuration_gpt_neo.py`
5. Now `GPTNeoPreTrainedModel` inherits from the `ParallelizationMixin` class. If a model inherits `ParallelizationMixin`, the methods `from_pretrained_with_parallel()` and `from_config_with_parallel()` become available. This API is motivated by [parallelformers' API](https://github.com/tunib-ai/parallelformers#2-put-the-model-in-the-parallelize-function)
6. Add `mpu` or `process_group` into trainer's DDP wrapper.
---
## Model parallelism API
- Note this API can be used for both training and inference.
- We could split `from_pretrained_with_parallel()` into separate `from_pretrained()` + `parallelize()` calls, but it is designed to create and parallelize the model at the same time, which will be needed for the pipeline parallelism implementation later (we should keep the transformers-friendly and megatron-friendly APIs aligned).
- When extending this API to other models in the future, we also need to decide how to phase out the existing naive `parallelize()` method in GPT2 and T5. So I designed the names not to overlap, so that the naive method can be deprecated slowly in the future.
```python
'''
deepspeed --num_gpus=4 test_model_parallel_inference.py
or
python -m torch.distributed.launch --nproc_per_node=4 test_model_parallel_inference.py
'''
from transformers import GPTNeoForCausalLM, GPT2Tokenizer
# FP32
model = GPTNeoForCausalLM.from_pretrained_with_parallel(
"EleutherAI/gpt-neo-1.3B", tensor_model_parallel_size=4
)
# FP16 (default=False)
model = GPTNeoForCausalLM.from_pretrained_with_parallel(
"EleutherAI/gpt-neo-1.3B", tensor_model_parallel_size=4, fp16=True,
)
# Use Vocab parallel embedding (default=False)
model = GPTNeoForCausalLM.from_pretrained_with_parallel(
"EleutherAI/gpt-neo-1.3B", tensor_model_parallel_size=4, vocab_parallel_embedding=True,
)
# For scratch training
model = GPTNeoForCausalLM.from_config_with_parallel(
YOUR_CONFIG_OBJECT, tensor_model_parallel_size=4
)
```
---
## TODO
- We should add test codes
- FP32 inference test
- [ ] ForSequenceClassification
- [ ] ForCausalLM
- FP16 inference test
- [ ] ForSequenceClassification
- [ ] ForCausalLM
- Vocab parallel embedding inference test
- [ ] ForSequenceClassification
- [ ] ForCausalLM
- FP32 training Test
- [ ] ForSequenceClassification
- [ ] ForCausalLM
- MixedPrecision training test
- [ ] ForSequenceClassification
- [ ] ForCausalLM
- Vocab parallel embedding training test
- [ ] ForSequenceClassification
- [ ] ForCausalLM
- 2D parallelsim (DP + TP) with Trainer API test
- [ ] ForSequenceClassification
- [ ] ForCausalLM
- [ ] We should consider how to provide checkpoint merging tools
---
## Reviewers
@stas00 | 09-24-2021 09:09:06 | 09-24-2021 09:09:06 | Note current PR is about transformers-friendly model parallelism, but some codes are designed for both transformers-friendly and megatron-friendly model parallelism. These will be used later when implementing `ParallelGPT2` and `ParallelBert` described [here](https://github.com/huggingface/transformers/issues/13690). I uploaded them because they are the codes must be used in both parallelization methods (transfo-friendly, mega-friendly).
Note I didn't upload the code needed only for megatron-friendly. All of the codes uploaded now are also necessary for transformers-friendly.<|||||>@stas00 I'm not familiar with the transformers test suite. Could I ask you to write the test code? Usage is very simple. Just call the `from_pretrained_with_parallel()` method and run with `torch.distributed.launch --nproc_per_node=n` or `deepspeed --num_gpus=n`. Both training and inference require testing.
I think it would be good to divide the roles in our collaboration like this.
- @hyunwoongko: implement most of the code, test it internally, and then PR it. I should also describe the API so that you can test the code.
- @stas00 or other HF engineers: design test cases that fit the huggingface test suite.<|||||>The nice thing about this implementation is that we have an MPU. When using DDP in the Trainer, if we put `get_data_parallel_group()` together in the MPU, 2D parallel training will be possible (TP + DP). So, I put it in most places where `mpu` or `process_group` is needed. However, I am not familiar with the internal structure of the `Trainer`, so I would like other huggingface engineers to help me with this. A rough sketch of what I mean is below.
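(Purely a sketch; `mpu` and `local_rank` are assumed to come from the surrounding Trainer code.)
```
import torch

# Wrap the tensor-parallelized model in DDP over the *data-parallel* group only,
# so tensor parallelism and data parallelism can be combined (2D parallelism).
model = torch.nn.parallel.DistributedDataParallel(
    model,
    device_ids=[local_rank],
    output_device=local_rank,
    process_group=mpu.get_data_parallel_group(),  # MPU helper mentioned above
)
```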
+) I think we should prevent the use of non-distributed data parallel. Is that right?<|||||>Training with the vocab parallel embedding has a problem: a vocab parallel cross entropy is necessary for (causal) language model training, so I added a vocab parallel cross entropy loss function.<|||||>I will reopen the PR. Thanks.
transformers | 13,725 | closed | Fixing zero-shot backward compatiblity | # What does this PR do?
Fixes #13697
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 09-24-2021 07:59:30 | 09-24-2021 07:59:30 | |
transformers | 13,724 | closed | Adding `batch_size` support for (almost) all pipelines | # What does this PR do?
When running a pipeline on a dataset with a small model (relative to the GPU), it can be useful to batch
the `forward` pass for performance.
This PR addresses this by adding a `batch_size` argument (a short usage sketch follows).
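For instance (a minimal sketch; the model is the one benchmarked below, and `device=0` assumes a GPU):
```
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=0,
)

sentences = ["This restaurant is awesome"] * 5000  # stand-in for a real dataset

# forward passes are now grouped into batches of 8 inputs
for out in pipe(sentences, batch_size=8):
    print(out)
```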
This PR contains
- [x] Some facilities for batching and unbatching, not handled within each individual pipelines
- [x] Automated testing for ALL small models + pipelines of this functionality
- [x] Disabled for `question-answering` and `zero-shot-classification`. They are trickier because they already use batching with candidate labels and question features. The full solution would involve moving the iterator to the real N [hypothesis, template] pairs and batching there, and having another iterator on top that recreates the current `zero-shot`/`question-answering` results. Should we add that capability, at least for these 2 pipelines, we would have a much better idea of alignment.
- [x] Ran all slow (pipelines) tests without issue
- [x] Refactor the batch/unbatch for better quality code
- [x] More doc, caveats about this argument and use cases, benchmarks and so on.
- [ ] Need to think about TF, which currently has no support (neither streaming nor batching)
The good example (https://gist.github.com/Narsil/4e1c36d7cf8477e5c1d580585860810e):
This code was executed on GTX 970 (and Titan RTX with similar conclusions), model is `distilbert-base-uncased-finetuned-sst-2-english` (250Mo bin file)
The old pipelines GPU method of iteration is excluded because it's an order of magnitude slower in all cases.
```
------------------------------
Streaming no batching
100%|██████████████████████████████████████████████████████████████████████| 5000/5000 [00:26<00:00, 187.52it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:04<00:00, 1205.95it/s]
------------------------------
Streaming batch_size=64
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:02<00:00, 2478.24it/s]
------------------------------
Streaming batch_size=256
100%|█████████████████████████████████████████████████████████████████████| 5000/5000 [00:01<00:00, 2554.43it/s]
(diminishing returns)
```
**This seems promising!**
However, this example has:
- Perfect alignment (all inputs are exactly the same length)
- Small model (lots of GPU RAM left for inputs and intermediary results)
Let's look at another example, which might (or not) be a bit more realistic:
Using varying size inputs (https://gist.github.com/Narsil/de88b2d7c242c29772a61af56a5c8270)
```
------------------------------
Streaming no batching
100%|█████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:30<00:00, 32.51it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:29<00:00, 33.62it/s]
------------------------------
Streaming batch_size=64
100%|█████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:29<00:00, 34.29it/s]
------------------------------
Streaming batch_size=256
0%| | 0/1000 [00:01<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 38, in <module>
for out in tqdm.tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
File "/home/nicolas/src/transformers/.venv/lib/python3.9/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
....
hidden_states = self.intermediate_act_fn(hidden_states)
File "/home/nicolas/src/transformers/.venv/lib/python3.9/site-packages/torch/nn/functional.py", line 1555, in gelu
return torch._C._nn.gelu(input)
RuntimeError: CUDA out of memory. Tried to allocate 472.00 MiB (GPU 0; 3.95 GiB total capacity; 2.13 GiB already allocated; 266.75 MiB free; 2.49 GiB reserved in total by PyTorch)
```
Here we can see that no speedup was achieved, and we actually crashed for the large batch size.
This is entirely due to the inputs not being aligned (varying sequence lengths).
The problem can get even worse when you have large batch sizes and RARE very long sentences (https://gist.github.com/Narsil/357519fd385d864bfec3caf5aa8df575).
```
------------------------------
Streaming no batching
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 183.69it/s]
------------------------------
Streaming batch_size=8
100%|█████████████████████████████████████████████████████████████████████| 1000/1000 [00:03<00:00, 265.74it/s]
------------------------------
Streaming batch_size=64
100%|██████████████████████████████████████████████████████████████████████| 1000/1000 [00:26<00:00, 37.80it/s]
------------------------------
Streaming batch_size=256
0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/nicolas/src/transformers/test.py", line 42, in <module>
for out in tqdm.tqdm(pipe(dataset, batch_size=256), total=len(dataset)):
....
q = q / math.sqrt(dim_per_head) # (bs, n_heads, q_length, dim_per_head)
RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 3.95 GiB total capacity; 1.72 GiB already allocated; 354.88 MiB free; 2.46 GiB reserved in total by PyTorch)
```
Here we are actually 5x SLOWER with batch_size=64 than with the non-batched version. That is because the rare long sentence is so long that it forces the whole batch to be padded to its sequence length, and to use much more memory and processing power (the padding tokens ARE processed by the GPU, they just don't influence the end result).
For users, a rule of thumb is:
- Measure performance on your load, with your hardware. Measure, measure, and keep measuring. Real numbers are the only way to go.
- If you are latency constrained (live product doing inference), don't batch.
- If you are using CPU, don't batch.
- If you are optimizing for throughput (you want to run your model on a bunch of static data) on GPU, then:
- If you have no clue about the size of the sequence_length ("natural" data), by default don't batch; measure, try tentatively to add it, and add OOM checks to recover when it fails (and it will at some point if you don't control the sequence_length).
- If your sequence_length is super regular, then batching is more likely to be VERY interesting; measure and push it until you get OOMs.
- The larger the GPU, the more likely batching is going to be interesting.
- As soon as you enable batching, make sure you can handle OOMs nicely (a minimal recovery sketch follows this list).
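A sketch of what "handling OOMs nicely" can look like (illustrative only; adapt it to your own pipeline and data):
```
import torch

def run_with_fallback(pipe, inputs, batch_size=64):
    # Retry with a smaller batch size whenever CUDA runs out of memory.
    while batch_size >= 1:
        try:
            return pipe(inputs, batch_size=batch_size)
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()
            batch_size //= 2
    raise RuntimeError("Still OOM even with batch_size=1")
```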
There are no good (general) solutions for this problem, and your mileage may vary depending on your use cases. Which is why for now:
- batch_size=1 by default (both for speed and for OOM issues: we can't guess the correct parameters, and at least with batch_size=1 we have the smallest possible chance of going OOM).
- batch_size = 1 is somehow comparable in speed to batched data with irregular data sizes (which is an important use case, like live products where latency also matters).
- Other batch_sizes are opt-in, because it might be valuable for users to use it (for instance when checking some metric on some dataset which has very regular input lengths, but then it's a user responsibility to check for OOM and slowness).
- batch_size > 1 won't work for a `tokenizer`/`feature_processor` that doesn't have a padding mechanism (if one is required).
It would be ideal if `pipelines` could start taking that responsibility on their shoulders and batch dynamically for users, but it's a hard problem right now:
- It's hard to evaluate OOM, and OOM might happen late (so batch_size will always have to be somewhat dynamic during the streaming process).
- It's even harder to evaluate the slowness factor due to padding; `pipelines` would have to count padding tokens and do some kind of batch exclusion mechanism.
- The padding issue could be helped quite a bit by RaggedTensors; however, they also don't play that nicely with GPU capabilities (which need as much aligned/regular data as possible).
Some other links/issues/discussions:
https://github.com/huggingface/transformers/pull/11251
https://discuss.huggingface.co/t/how-to-change-the-batch-size-in-a-pipeline/8738
https://discuss.huggingface.co/t/how-to-make-pipeline-automatically-scale/7432
https://github.com/huggingface/transformers/issues/13141
https://github.com/huggingface/transformers/issues/12195
https://gist.github.com/Narsil/ee5c09875e74fa6f018dc6d014f6c06c
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 09-24-2021 07:33:06 | 09-24-2021 07:33:06 | Release done, merging. |
transformers | 13,723 | closed | PIL and soundfile shouldn't be required to run `transformers-cli env` | PIL and soundfile shouldn't be required to run `transformers-cli env`.
Considering that, can someone fix this issue please? Please let me know if you need more information from me.
_Originally posted by @LysandreJik in https://github.com/huggingface/transformers/pull/13588#pullrequestreview-757529042_
Thank you! | 09-24-2021 00:01:54 | 09-24-2021 00:01:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,722 | closed | [Examples] Add an official audio classification example | This PR adds an example for fine-tuning Wav2Vec2-like models on any labeled audio dataset.
The single-GPU script runs on one V100 and trains on [SUPERB KS](https://huggingface.co/datasets/superb#ks)
The multi-GPU one runs on 4 V100s and trains on [CommonLanguage](https://huggingface.co/datasets/anton-l/common_language) (`datasets` PR: https://github.com/huggingface/datasets/pull/2989) | 09-23-2021 23:46:26 | 09-23-2021 23:46:26 | Added a multi-gpu example and trained the demo models:
* https://huggingface.co/anton-l/wav2vec2-base-keyword-spotting
* https://huggingface.co/anton-l/wav2vec2-base-langid
Ready for a full review now :)<|||||>Very nice addition! |
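For reference, one quick way to try the demo checkpoints above (a sketch; it assumes a `transformers` version that ships the `audio-classification` pipeline task, and `sample.wav` is a placeholder 16 kHz audio file):
```
from transformers import pipeline

classifier = pipeline("audio-classification", model="anton-l/wav2vec2-base-keyword-spotting")
print(classifier("sample.wav", top_k=3))  # top 3 predicted keywords with scores
```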
transformers | 13,721 | closed | latest TPU VM dies on import of TrainingArguments? | Hi! I'm wondering if TPU VMs are supported by transformers (I think I saw a tweet that they are, among other bits of documentation :-) ). I'm having some trouble getting some basic scripts running. The error message is pretty odd --- it implies that somewhere within huggingface, a large malloc is being attempted.
```
jack@t1v-n-f3eee39e-w-0:~/transformers/examples/tensorflow/language-modeling$ python3 run_clm.py --help
tcmalloc: large alloc 113702387720192 bytes == (nil) @ 0x7f6a2b8f1680 0x7f6a2b911ff4 0x7f6a2b408309 0x7f6a2b409fb9 0x7f6a2b40a056 0x7f66f4db3659 0x7f66ea7e9954 0x7f6a2bae5b8a 0x7f6a2bae5c91 0x7f6a2b844915 0x7f6a2baea0bf 0x7f6a2b8448b8 0x7f6a2bae95fa 0x7f6a2b6b934c 0x7f6a2b8448b8 0x7f6a2b844983 0x7f6a2b6b9b59 0x7f6a2b6b93da 0x67299f 0x682dcb 0x684321 0x5c3cb0 0x5f257d 0x56fcb6 0x56822a 0x5f6033 0x56ef97 0x5f5e56 0x56a136 0x5f5e56 0x569f5e
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
https://symbolize.stripped_domain/r/?trace=7f6a2b72718b,7f6a2b72720f&map=
*** SIGABRT received by PID 15450 (TID 15450) on cpu 95 from PID 15450; stack trace: ***
PC: @ 0x7f6a2b72718b (unknown) raise
@ 0x7f67f8afb1e0 976 (unknown)
@ 0x7f6a2b727210 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7f6a2b72718b,7f67f8afb1df,7f6a2b72720f&map=ca1b7ab241ee28147b3d590cadb5dc1b:7f67ebdfc000-7f67f8e2eb20
E0923 20:01:20.325205 15450 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked.
E0923 20:01:20.325261 15450 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start.
E0923 20:01:20.325271 15450 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0923 20:01:20.325278 15450 coredump_hook.cc:447] RAW: Sending fingerprint to remote end.
E0923 20:01:20.325298 15450 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
E0923 20:01:20.325316 15450 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
E0923 20:01:20.325320 15450 coredump_hook.cc:525] RAW: Discarding core.
E0923 20:01:20.329836 15450 process_state.cc:771] RAW: Raising signal 6 with default behavior
```
When I run the tensorflow `tpu-test.py` script from [here](https://cloud.google.com/tpu/docs/tensorflow-quickstart-tpu-vm) all is well.
```
...
PerReplica:{
0: tf.Tensor(2.0, shape=(), dtype=float32),
1: tf.Tensor(2.0, shape=(), dtype=float32),
2: tf.Tensor(2.0, shape=(), dtype=float32),
3: tf.Tensor(2.0, shape=(), dtype=float32),
4: tf.Tensor(2.0, shape=(), dtype=float32),
5: tf.Tensor(2.0, shape=(), dtype=float32),
6: tf.Tensor(2.0, shape=(), dtype=float32),
7: tf.Tensor(2.0, shape=(), dtype=float32)
}
```
Interestingly, if I try other backends like `jax`, it also doesn't work
```
pip install "jax[tpu]>=0.2.16" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
pip install optax
git clone https://github.com/google/flax.git ; pip install --user -e flax
```
then
```
python3 run_clm_flax.py --help
tcmalloc: large alloc 33883982246649856 bytes == (nil) @ 0x7f641c085680 0x7f641c0a5ff4 0x7f641bb9c309 0x7f641bb9c370 0x7f641bb9c406 0x7f60c5dc55ca 0x7f60c5bac6e4 0x7f60bb5e2954 0x7f641c279b8a 0x7f641c279c91 0x7f641bfd8915 0x7f641c27e0bf 0x7f641bfd88b8 0x7f641c27d5fa 0x7f641be4d34c 0x7f641bfd88b8 0x7f641bfd8983 0x7f641be4db59 0x7f641be4d3da 0x67299f 0x682dcb 0x684321 0x5c3cb0 0x5f257d 0x56fcb6 0x56822a 0x5f6033 0x56ef97 0x5f5e56 0x56a136 0x5f5e56
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
https://symbolize.stripped_domain/r/?trace=7f641bebb18b,7f641bebb20f&map=
*** SIGABRT received by PID 21617 (TID 21617) on cpu 20 from PID 21617; stack trace: ***
PC: @ 0x7f641bebb18b (unknown) raise
@ 0x7f619eba5c75 976 (unknown)
@ 0x7f641bebb210 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7f641bebb18b,7f619eba5c74,7f641bebb20f&map=03ff3e3b5ed284dc8d852c5156c3a04c:7f6191131000-7f619eeeaf80
E0923 20:24:28.285293 21617 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked.
E0923 20:24:28.285317 21617 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start.
E0923 20:24:28.285325 21617 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0923 20:24:28.285333 21617 coredump_hook.cc:447] RAW: Sending fingerprint to remote end.
E0923 20:24:28.285344 21617 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
E0923 20:24:28.285352 21617 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
E0923 20:24:28.285359 21617 coredump_hook.cc:525] RAW: Discarding core.
E0923 20:24:28.289724 21617 process_state.cc:772] RAW: Raising signal 6 with default behavior
Aborted (core dumped)
```
Given that this issue is happening during the imports, I did a binary search of all imports within the flax clm script, and found that this import causes the large malloc.
```
from transformers import TrainingArguments
```
and confirmed that that line by itself is sufficient to cause the issue.
_Notably, though, not /all/ of transformers is subject to this malloc._ I have a different script (which I haven't cleaned up/wasn't intending to share) that seems to run on this same vm with tpu+flax/jax without issue ¯\\_(ツ)_/¯
## Environment info
running `transformers-cli env` actually causes the error I'm describing, so I can't give the exact diagnostics. the issue occurs on the latest, vanilla google TPU VM with a v3-8.
```
>>> import transformers
>>> transformers.__version__
'4.11.0.dev0'
>>> import tensorflow as tf
>>> tf.__version__
'2.6.0'
```
### Who can help
@patrickvonplaten @sgugger
## Information
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Boot up a new TPU VM with a v3-8 TPU
2. install transformers
3. `python3 -c "from transformers import TrainingArguments"`
## Expected behavior
Not a giant malloc/core dump on the TPU VM
| 09-23-2021 20:46:17 | 09-23-2021 20:46:17 | The `TrainingArguments` are trying to set the XLA device when imported, so a failure there implies something is going wrong on your setup and that the script can't reach the TPU. Did you follow the steps in the [TPU VM guide](https://cloud.google.com/tpu/docs/pytorch-xla-ug-tpu-vm)? In particular
```
export XRT_TPU_CONFIG="localservice;0;localhost:51011"
```
is necessary for PyTorch.<|||||>Ahh --- thanks for the info! That makes sense. I guess because I wasn't trying to use pytorch (I had only tried jax and tensorflow), I didn't follow that part of the TPU VM guide. Let me see if that works.<|||||>I tried different configurations of `XRT_TPU_CONFIG`, `LD_PRELOAD`, etc. None seemed to support the import.
```
jack@t1v-n-f3eee39e-w-0:~$ echo $XRT_TPU_CONFIG; echo $LD_PRELOAD; cat malloc_import.py ; python3 malloc_import.py ; LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 python3 malloc_import.py
localservice;0;localhost:51011
from transformers import TrainingArguments
free(): invalid pointer
https://symbolize.stripped_domain/r/?trace=7f3a74e8b18b,7f3a74e8b20f,7f3a74edd47b,7f370ee8fad2,7f3a75052b89&map=
*** SIGABRT received by PID 6132 (TID 6132) on cpu 77 from PID 6132; stack trace: ***
PC: @ 0x7f3a74e8b18b (unknown) raise
@ 0x7f37ece091e0 976 (unknown)
@ 0x7f3a74e8b210 (unknown) (unknown)
@ 0x7f3a74edd47c 288 (unknown)
@ 0x7f370ee8fad3 64 _GLOBAL__sub_I_xla_cpu_device.cc
@ 0x7f3a75052b8a (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7f3a74e8b18b,7f37ece091df,7f3a74e8b20f,7f3a74edd47b,7f370ee8fad2,7f3a75052b89&map=ca1b7ab241ee28147b3d590cadb5dc1b:7f37e010a000-7f37ed13cb20
E0923 22:13:24.683057 6132 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked.
E0923 22:13:24.683077 6132 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start.
E0923 22:13:24.683085 6132 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0923 22:13:24.683107 6132 coredump_hook.cc:447] RAW: Sending fingerprint to remote end.
E0923 22:13:24.683115 6132 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
E0923 22:13:24.683127 6132 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
E0923 22:13:24.683131 6132 coredump_hook.cc:525] RAW: Discarding core.
E0923 22:13:24.686808 6132 process_state.cc:771] RAW: Raising signal 6 with default behavior
Aborted (core dumped)
tcmalloc: large alloc 500236124160 bytes == (nil) @ 0x7f12ec8e0680 0x7f12ec900ff4 0x7f12ec3f7309 0x7f12ec3f8fb9 0x7f12ec3f9056 0x7f0f9cf65659 0x7f0f9299b954 0x7f12ecad4b8a 0x7f12ecad4c91 0x7f12ec833915 0x7f12ecad90bf 0x7f12ec8338b8 0x7f12ecad85fa 0x7f12ec6a834c 0x7f12ec8338b8 0x7f12ec833983 0x7f12ec6a8b59 0x7f12ec6a83da 0x67299f 0x682dcb 0x684321 0x5c3cb0 0x5f257d 0x56fcb6 0x56822a 0x5f6033 0x56ef97 0x5f5e56 0x56a136 0x5f5e56 0x569f5e
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
https://symbolize.stripped_domain/r/?trace=7f12ec71618b,7f12ec71620f&map=
*** SIGABRT received by PID 6270 (TID 6270) on cpu 95 from PID 6270; stack trace: ***
PC: @ 0x7f12ec71618b (unknown) raise
@ 0x7f10647b71e0 976 (unknown)
@ 0x7f12ec716210 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7f12ec71618b,7f10647b71df,7f12ec71620f&map=ca1b7ab241ee28147b3d590cadb5dc1b:7f1057ab8000-7f1064aeab20
E0923 22:13:26.997615 6270 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked.
E0923 22:13:26.997671 6270 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start.
E0923 22:13:26.997678 6270 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E0923 22:13:26.997690 6270 coredump_hook.cc:447] RAW: Sending fingerprint to remote end.
E0923 22:13:26.997701 6270 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
E0923 22:13:26.997711 6270 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
E0923 22:13:26.997717 6270 coredump_hook.cc:525] RAW: Discarding core.
E0923 22:13:27.000837 6270 process_state.cc:771] RAW: Raising signal 6 with default behavior
Aborted (core dumped)
```<|||||>(similar to https://github.com/huggingface/transformers/issues/4668 )<|||||>I also tried running the pytorch xla demo script as described in the google tpu vm tutorial. seems to work fine
```
git clone --recursive https://github.com/pytorch/xla.git
python3 xla/test/test_train_mp_imagenet.py --fake_data --model=resnet50 --num_epochs=1
...
| Training Device=xla:0/3 Epoch=1 Step=80 Loss=0.04206 Rate=416.50 GlobalRate=149.87 Time=22:24:32
| Training Device=xla:0/1 Epoch=1 Step=80 Loss=0.04206 Rate=416.24 GlobalRate=150.04 Time=22:24:32
| Training Device=xla:0/7 Epoch=1 Step=80 Loss=0.04206 Rate=416.63 GlobalRate=149.92 Time=22:24:32
| Training Device=xla:0/6 Epoch=1 Step=80 Loss=0.04206 Rate=416.46 GlobalRate=152.41 Time=22:24:32
| Training Device=xla:1/0 Epoch=1 Step=80 Loss=0.04206 Rate=416.20 GlobalRate=136.09 Time=22:24:32
| Training Device=xla:0/5 Epoch=1 Step=80 Loss=0.04206 Rate=416.28 GlobalRate=149.83 Time=22:24:32
| Training Device=xla:0/4 Epoch=1 Step=80 Loss=0.04206 Rate=416.25 GlobalRate=150.01 Time=22:24:32
| Training Device=xla:0/2 Epoch=1 Step=80 Loss=0.04206 Rate=416.04 GlobalRate=149.87 Time=22:24:32
```
and with tmalloc...
```
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 python3 xla/test/test_train_mp_imagenet.py --fake_data --model=resnet50 --num_epochs=1
...
| Training Device=xla:1/0 Epoch=1 Step=20 Loss=6.10956 Rate=55.42 GlobalRate=50.91 Time=22:27:10
| Training Device=xla:0/4 Epoch=1 Step=20 Loss=6.10956 Rate=56.12 GlobalRate=57.35 Time=22:27:10
| Training Device=xla:0/7 Epoch=1 Step=20 Loss=6.10956 Rate=56.13 GlobalRate=57.42 Time=22:27:10
| Training Device=xla:0/6 Epoch=1 Step=20 Loss=6.10956 Rate=56.07 GlobalRate=56.98 Time=22:27:10
| Training Device=xla:0/2 Epoch=1 Step=20 Loss=6.10956 Rate=56.04 GlobalRate=56.72 Time=22:27:10
| Training Device=xla:0/3 Epoch=1 Step=20 Loss=6.10956 Rate=55.82 GlobalRate=54.83 Time=22:27:10
| Training Device=xla:0/5 Epoch=1 Step=20 Loss=6.10956 Rate=55.92 GlobalRate=55.79 Time=22:27:10
| Training Device=xla:0/1 Epoch=1 Step=20 Loss=6.10956 Rate=55.94 GlobalRate=55.95 Time=22:27:10
```<|||||>I just tried this on a new TPU VM and can reproduce it. Surprisingly installing `tensorflow-cpu` (`pip install tensorflow-cpu`) magically solves this issue. Gently pinging @skye , since it looks similar to what you reported here https://github.com/huggingface/transformers/issues/12761#issuecomment-915620910
The same goes for the `run_clm_flax.py` script: if you install `tensorflow-cpu` and run the script with `USE_TORCH=0`, it should run fine. Here's what I did
```bash
pip install "jax[tpu]>=0.2.16" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
pip install tensorflow-cpu
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install flax optax datasets
cd examples/flax/language-modeling
USE_TORCH=0 python3 run_clm_flax.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --block_size 1024 --num_train_epochs 1 --learning_rate 1e-5 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --output_dir ~/tmp/flax-clm --overwrite_output_dir
```
`USE_TORCH=0` is not needed if you are using CPU version of torch or if you set `export XRT_TPU_CONFIG="localservice;0;localhost:51011"`
I usually create different `venv` to avoid such framework clashes, so didn't see this issue before.<|||||>BTW: I was also able to solve this without installing `tensorflow-cpu`.
these give the malloc error:
`python3 -c "from transformers import TrainingArguments"`
`XRT_TPU_CONFIG="localservice;0;localhost:51011" python3 -c "from transformers import TrainingArguments"`
but simply doing
`USE_TORCH=0 python3 -c "from transformers import TrainingArguments"`
works.
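For completeness, a hedged sketch of the same workaround applied from inside a script rather than on the command line, assuming the variable only needs to be set before `transformers` is imported:
```python
# Hedged sketch: set USE_TORCH before transformers is imported so the
# torch/XLA code paths are skipped at import time (assumption based on the
# behavior reported above, not on a guarantee from the library).
import os

os.environ["USE_TORCH"] = "0"

from transformers import TrainingArguments  # should no longer probe the TPU

print(TrainingArguments)
```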
But, weirdly, you can still run pytorch...
here is `torch_test.py`
```python
import torch
import torch_xla.core.xla_model as xm
dev = xm.xla_device()
t1 = torch.randn(3,3,device=dev)
t2 = torch.randn(3,3,device=dev)
print(t1 + t2)
```
`XRT_TPU_CONFIG="localservice;0;localhost:51011" python3 torch_test.py`
returns the expected
```
tensor([[ 0.9678, -1.2321, -1.1490],
[ 3.3503, 1.5816, 0.9326],
[-1.1846, -0.7140, -0.4168]], device='xla:1')
```
This all leads me to believe there may be some small piece of transformers, probably unrelated to the models/optimizers/etc., that doesn't play nice with TPUs on torch<|||||>I am getting a similar error with TPU VM, except it is caused by **merely having transformers installed.** Error message below. Error still occurs when I remove transformers from my code. It disappears (and model trains fine) when I uninstall transformers.
**Environment info**
```
transformers: 4.11.3
pytorch: 1.9.1
pytorch-lightning: 1.4.9
```
**Error message**
```
tcmalloc: large alloc 28254628860207104 bytes == (nil) @ 0x7f3fc05e3680 0x7f3fc0603ff4 0x7f3fc00fa309 0x7f3fc00fbfb9 0x7f3fc00fc056 0x7f3c700a5659 0x7f3c65adb954 0x7f3fc07d7b8a 0x7f3fc07d7c91 0x7f3fc0536915 0x7f3fc07dc0bf 0x7f3fc05368b8 0x7f3fc07db5fa 0x7f3fc03ab34c 0x7f3fc05368b8 0x7f3fc0536983 0x7f3fc03abb59 0x7f3fc03ab3da 0x67299f 0x682dcb 0x684321 0x5c3cb0 0x5f257d 0x56fcb6 0x56822a 0x5f6033 0x56ef97 0x5f5e56 0x56a136 0x5f5e56 0x569f5e
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
https://symbolize.stripped_domain/r/?trace=7f3fc041918b,7f3fc041920f&map=
*** SIGABRT received by PID 15664 (TID 15664) on cpu 95 from PID 15664; stack trace: ***
PC: @ 0x7f3fc041918b (unknown) raise
@ 0x7f3d380271e0 976 (unknown)
@ 0x7f3fc0419210 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7f3fc041918b,7f3d380271df,7f3fc041920f&map=ca1b7ab241ee28147b3d590cadb5dc1b:7f3d2b328000-7f3d3835ab20
E1011 23:49:12.957856 15664 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked.
E1011 23:49:12.957878 15664 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start.
E1011 23:49:12.957886 15664 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
E1011 23:49:12.957894 15664 coredump_hook.cc:447] RAW: Sending fingerprint to remote end.
E1011 23:49:12.957905 15664 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
E1011 23:49:12.957916 15664 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
E1011 23:49:12.957923 15664 coredump_hook.cc:525] RAW: Discarding core.
E1011 23:49:12.960721 15664 process_state.cc:771] RAW: Raising signal 6 with default behavior
Aborted (core dumped)
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Commenting since this issue is still relevant.<|||||>We still haven't been able to reproduce it and don't have any problem with TPU VMs on our side (the CI even runs tests on it).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,720 | closed | Add `BlenderbotTokenizerFast` | # What does this PR do?
This PR add the fast (rust) implementation of `BlenderbotTokenizer`.
Fixes #13634
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR :)
| 09-23-2021 17:20:43 | 09-23-2021 17:20:43 | Let me ping @Narsil if he ever has some leads to fix the failed pipeline tests. :slightly_smiling_face: <|||||>The fix is here: https://github.com/Narsil/transformers/commit/605725f24aa516fa64f42df95d394dc7e7c770d1
I'm not sure it's the best fix, so I will describe the issue a bit more.
Blenderbot implements both `AutoModelForSeq2SeqLM` (the real way to use it) and `AutoModelForCausalLM` (I don't think it's really used in practice, but it's implemented in the lib).
The pipelines test every model that implements a supported architecture, so `BlenderbotForCausalLM` is used as well. BUT the test config for pipelines is taken from the model tester, which (understandably) implements the encoder/decoder config (with `encoder_no_repeat_ngram_size=3`). When the test is then run with a decoder-only `BlenderbotForCausalLM`, it fails.
`test_pipeline_common` can have a very specific override, as this behavior (implementing both) should be very marginal within the lib and also very consistent (CausalLM = decoder only, and `encoder_no_repeat_ngram` doesn't make any sense for decoder-only).<|||||>Something might have been changed since last time, since the embeddings are now an issue.
Suggesting a diff to override only the config used for pipelines tests on blenderbot.
```
index 33d506492..9e04ec89d 100644
--- a/tests/test_modeling_blenderbot.py
+++ b/tests/test_modeling_blenderbot.py
@@ -137,6 +137,11 @@ class BlenderbotModelTester:
pad_token_id=self.pad_token_id,
)
+ def get_pipeline_config(self):
+ config = self.get_config()
+ config.max_position_embeddings = 100
+ return config
+
def prepare_config_and_inputs_for_common(self):
config, inputs_dict = self.prepare_config_and_inputs()
return config, inputs_dict
```
(By default `max_position_embeddings` is 20, which is not enough for some pipeline tests.)<|||||>@Narsil Thank you very much for your help! :) It looks like everything works now.<|||||>Hey, I opened PRs to add `tokenizer.json` to the repos:
* [facebook/blenderbot-400M-distill](https://huggingface.co/facebook/blenderbot-400M-distill/discussions/3)
* [facebook/blenderbot-3B](https://huggingface.co/facebook/blenderbot-3B/discussions/3)
* [facebook/blenderbot-1B-distill](https://huggingface.co/facebook/blenderbot-1B-distill/discussions/3)
While the conversions work, `tokenizer.json` files are useful for us because they allow loading directly using the tokenizers Rust bindings, so if those can be merged it would be appreciated :) |
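To illustrate why the `tokenizer.json` files help, here is a small hedged sketch; the repo id and the local file path are placeholders:
```python
# Hedged sketch: with tokenizer.json available, the fast tokenizer can be
# loaded through transformers, or the file can be loaded directly with the
# Rust-backed `tokenizers` bindings.
from tokenizers import Tokenizer
from transformers import BlenderbotTokenizerFast

fast_tok = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-400M-distill")
print(fast_tok("Hello there!").input_ids)

raw_tok = Tokenizer.from_file("tokenizer.json")  # placeholder local path
print(raw_tok.encode("Hello there!").tokens)
```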
transformers | 13,719 | closed | How can i Use transformers to classify a obfuscated text (multinomial classification) |
## Model description
I have training data which contains many obfuscated sentences with their associated labels. How can I use Transformers to classify them?
<!-- Important information -->
One sample row is given here. Please help.
`twypmviwskuhpmuluhlrvimvvilekruluhqgskmvenqvuhletwululenlpuhtwamuluhijohultwsauhtwiwskskmvleuhtwamuluhsktwqvqvtwkrlruhkrpmsauhtwmkenlpkguhijraxeiwtwqvsaezuhucleeneztwleuhpmuluhlrvimvpmlruhqvendfuhiguhulenamdfuhulqvkrbruhpktwqvlekrpmypuhxepmuhqgtwqvlekrpmypuhxeyvkguhqgqvtwsatwuhqvulmvuhlrvimvvitwgzpmuhulkrpmamulmvdfuhqgskmvenqvuhskvienuhqgsaiwulvitwmvulengzezmvuhskentwamuhqvulmvuhucpmpmamqvuhtwqvkrpmezlepmmcuhtwamgu`
Note: as you can see, the single document contains no spaces, no words, etc. This one is labeled as 1, and there are many other lines like it, each labeled from 1 to 10.
Please advice. | 09-23-2021 17:14:13 | 09-23-2021 17:14:13 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
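As a hedged illustration of one possible direction for the question above (all names and sizes are my own assumptions): treat each character as a token and train a small sequence-classification model from scratch.
```python
# Hedged sketch, not an official recipe: character-level ids fed to a tiny,
# randomly initialized BERT classifier (all sizes are assumptions).
import torch
from transformers import BertConfig, BertForSequenceClassification

corpus = ["twypmviwskuhpmulu", "qgskmvenqvuhletwul"]  # your obfuscated lines
labels = torch.tensor([0, 3])                         # labels 1..10 mapped to 0..9

vocab = {ch: i + 1 for i, ch in enumerate(sorted({c for s in corpus for c in s}))}

def encode(text, max_len=64):
    ids = [vocab[c] for c in text][:max_len]
    return ids + [0] * (max_len - len(ids))           # 0 is the padding id

input_ids = torch.tensor([encode(s) for s in corpus])

config = BertConfig(
    vocab_size=len(vocab) + 1, hidden_size=128, num_hidden_layers=2,
    num_attention_heads=4, intermediate_size=256, max_position_embeddings=64,
    num_labels=10, pad_token_id=0,
)
model = BertForSequenceClassification(config)
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()  # an optimizer step would follow in a real training loop
print(float(loss))
```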
transformers | 13,718 | closed | Fix SpeechEncoderDecoderModel | # What does this PR do?
The current `SpeechEncoderDecoderModel` doesn't pass the `labels` to the decoder, hence no loss can be calculated. Not sure why it was not included.
This PR fixes this.
Fixes #13716
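For context, a hedged sketch of the usage this change is meant to enable; the checkpoint id, the random waveform, and the German target sentence are only placeholders:
```python
# Hedged sketch of the intended usage once `labels` are forwarded to the decoder.
import torch
from transformers import SpeechEncoderDecoderModel, Speech2Text2Processor

processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

waveform = torch.randn(16_000)  # stand-in for 1 second of real 16 kHz audio
inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
labels = processor.tokenizer("Hallo, wie geht es dir?", return_tensors="pt").input_ids

outputs = model(input_values=inputs.input_values, labels=labels)
print(outputs.loss)  # a loss tensor, instead of no `loss` key at all
```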
| 09-23-2021 16:10:02 | 09-23-2021 16:10:02 | Hey Niels,
yeah I let this out on purpose because I have to think a bit about the design of the loss function to make it work nicely with the trainer...it's not as straight-forward. Would it be ok to wait with this 1,2 weeks ? :-) <|||||>Hi there! Would the `Seq2SeqTrainer` also need to provide for an `input_values` key [over here](https://github.com/huggingface/transformers/blob/11c69b80452fae4b13c6d8bc22bdc19f3a752199/src/transformers/trainer_seq2seq.py#L168) for it to be compatible when using speech models?<|||||>Hey,
I will take care of training SpeechEncoderDecoder models next week :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Fixed by #14139 |
transformers | 13,717 | closed | Handle `UnicodeDecodeError` when loading config file | # What does this PR do?
This PR handles the `UnicodeDecodeError` mentioned in #13674, providing a better error message.
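As an illustration of the general pattern (function name and wording below are my own, not the actual diff):
```python
# Hedged sketch: catch the decoding failure when reading a config file and
# re-raise it with a message pointing at the likely cause.
def read_config_text(config_path: str) -> str:
    try:
        with open(config_path, "r", encoding="utf-8") as reader:
            return reader.read()
    except UnicodeDecodeError as error:
        raise EnvironmentError(
            f"Unable to read '{config_path}' as UTF-8. The file may be corrupted, "
            "or it may not be a text config file at all."
        ) from error
```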
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @sgugger | 09-23-2021 16:06:38 | 09-23-2021 16:06:38 | |
transformers | 13,716 | closed | SpeechEncoderDecoderModel does not return a loss regardless of labels | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.0.dev0
- Platform: Ubuntu
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.1+cu102 (GPU)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj
Examples:
Model I am using (Bert, XLNet ...):
`SpeechEncoderDecoderModel` from `facebook/s2t-wav2vec2-large-en-de`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Tested on the examples in the docs.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I used my own 5-second audio file for this example. Nothing fancy about it.
## To reproduce
The text encoder decoder example in the docs [here](https://huggingface.co/transformers/master/model_doc/encoderdecoder.html#encoderdecodermodel):
```python
from transformers import EncoderDecoderModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)
loss, logits = outputs.loss, outputs.logits
```
Produces a loss of `11.2099` as intended when provided with labels.
However, the speech encoder decoder models provide no loss regardless of the labels provided. From a slightly adapted version of the example [here](https://huggingface.co/transformers/master/model_doc/speechencoderdecoder.html#speechencoderdecodermodel):
```python
from transformers import SpeechEncoderDecoderModel, Speech2Text2Processor
processor = Speech2Text2Processor.from_pretrained('facebook/s2t-wav2vec2-large-en-de')
model = SpeechEncoderDecoderModel.from_pretrained('facebook/s2t-wav2vec2-large-en-de')
features, _ = librosa.load("my_audio.wav", sr=16_000)
input_values = processor(features, sampling_rate=16_000, return_tensors="pt").input_values
decoder_input_ids = torch.tensor([[model.config.decoder.decoder_start_token_id]])
outputs = model(input_values=input_values, decoder_input_ids=decoder_input_ids, labels="PUT_LITERALLY_ANYTHING_HERE")
```
```python
outputs.keys()
```
Returns `odict_keys(['logits', 'past_key_values', 'encoder_last_hidden_state'])`.
## Expected behavior
The above example should throw an error (the label param is just a raw string). If given a valid set of labels then the `loss` key and values should be in the model output dict (according to the `forward` docs).
| 09-23-2021 13:44:50 | 09-23-2021 13:44:50 | Hmm not sure why `labels` are accepted in the forward pass, but not send to the decoder. Will open a PR to fix this. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,715 | closed | Unable to load weights .from_pretrained() for XLMRoberta Model | ### Defined a class that inherits from RobertaPreTrainedModel
This class is defined to get `from_pretrained()`:
```python
class XLMRobertaPreTrainedModel(RobertaPreTrainedModel):
    """
    An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
    models.
    """

    config_class = XLMRobertaConfig
    base_model_prefix = "xlm-roberta-base"
```
This class is defined to concatenate the image and text embeddings:
```python
class XLMRobertaWithImageConcatenationMultiOutputClassifier(XLMRobertaPreTrainedModel):
    def __init__(self, config_bert, num_labels=11, num_another_labels=4):
        print('indise')
        super(XLMRobertaWithImageConcatenationMultiOutputClassifier, self).__init__(config_bert)
        configuration = XLMRobertaConfig(config_bert)
        self.num_labels = num_labels
        self.num_another_labels = num_another_labels
        self.bert = XLMRobertaModel(config_bert)
        self.backbone = models.resnet50(pretrained=True)
        num_features = self.backbone.fc.in_features
        self.img_hidden_size = 512
        self.backbone.fc = torch.nn.Linear(num_features, self.img_hidden_size)
        self.dropout = torch.nn.Dropout(configuration.hidden_dropout_prob)
        self.pre_classify_hidden = 512
        self.pre_classify_fc = torch.nn.ReLU(torch.nn.Linear(configuration.hidden_size + self.img_hidden_size, self.pre_classify_hidden))
        self.classifier = torch.nn.Linear(self.pre_classify_hidden, num_labels)
        self.another_classifier = torch.nn.Linear(self.pre_classify_hidden, num_another_labels)
        self.apply(self._init_weights)
```
Trying to call `XLMRobertaWithImageConcatenationMultiOutputClassifier.from_pretrained`:
```python
model = XLMRobertaWithImageConcatenationMultiOutputClassifier.from_pretrained('xlm-roberta-base')
```
The above code gives me a warning that some weights are not loaded; it lists all the weight layers. I'm not sure where I am wrong. Any help would be very helpful. | 09-23-2021 13:30:30 | 09-23-2021 13:30:30 | Can you provide the error traceback!?<|||||>No error, but the output below clearly suggests that the weights are not loaded:
Some weights of the model checkpoint at xlm-roberta-base were not used when initializing XLMRobertaWithImageConcatenationMultiOutputClassifier: ['roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.embeddings.LayerNorm.weight', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.10.output.dense.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 
'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.pooler.dense.weight', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 'lm_head.dense.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.2.attention.self.key.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'lm_head.decoder.weight', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.7.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.embeddings.word_embeddings.weight', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 
'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'lm_head.bias', 'roberta.encoder.layer.3.intermediate.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.pooler.dense.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.11.attention.self.value.bias', 'lm_head.dense.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 
'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.embeddings.position_embeddings.weight']
- This IS expected if you are initializing XLMRobertaWithImageConcatenationMultiOutputClassifier from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing XLMRobertaWithImageConcatenationMultiOutputClassifier from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of XLMRobertaWithImageConcatenationMultiOutputClassifier were not initialized from the model checkpoint at xlm-roberta-base and are newly initialized: ['backbone.layer2.0.downsample.1.weight', 'backbone.layer2.2.bn1.running_mean', 'backbone.layer4.1.bn3.num_batches_tracked', 'bert.encoder.layer.11.intermediate.dense.bias', 'backbone.layer2.3.bn3.weight', 'backbone.layer3.5.conv1.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.10.attention.self.key.bias', 'backbone.layer2.2.bn1.bias', 'backbone.layer2.1.conv2.weight', 'backbone.layer2.1.bn2.num_batches_tracked', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'backbone.layer3.5.bn1.num_batches_tracked', 'backbone.layer2.2.bn2.num_batches_tracked', 'backbone.bn1.bias', 'bert.pooler.dense.bias', 'backbone.layer3.0.bn1.weight', 'another_classifier.bias', 'bert.encoder.layer.4.attention.self.key.bias', 'backbone.layer3.2.bn1.bias', 'bert.encoder.layer.7.output.LayerNorm.weight', 'backbone.layer4.0.bn3.num_batches_tracked', 'backbone.layer1.1.bn1.running_var', 'backbone.layer3.0.bn3.num_batches_tracked', 'backbone.layer4.0.conv1.weight', 'backbone.layer4.0.bn3.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.self.value.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.intermediate.dense.weight', 'backbone.layer2.3.bn2.num_batches_tracked', 'bert.encoder.layer.8.attention.self.query.weight', 'backbone.layer1.1.bn3.running_mean', 'bert.encoder.layer.5.output.LayerNorm.weight', 'backbone.layer3.5.bn1.weight', 'backbone.layer2.1.conv1.weight', 'backbone.layer4.1.bn1.num_batches_tracked', 'bert.encoder.layer.7.attention.self.key.bias', 'backbone.layer3.3.bn1.bias', 'backbone.layer2.3.bn3.running_var', 'bert.encoder.layer.0.output.dense.weight', 'backbone.layer2.2.conv2.weight', 'backbone.layer3.4.bn3.running_mean', 'backbone.layer4.0.downsample.0.weight', 'bert.encoder.layer.9.output.LayerNorm.weight', 'backbone.layer3.0.bn1.bias', 'backbone.layer1.0.conv3.weight', 'backbone.layer1.1.bn2.bias', 'backbone.layer2.0.downsample.1.num_batches_tracked', 'backbone.layer2.2.bn3.num_batches_tracked', 'backbone.layer3.2.bn3.weight', 'backbone.layer1.0.bn2.running_mean', 'backbone.layer1.0.bn3.running_var', 'bert.encoder.layer.6.output.LayerNorm.bias', 'backbone.layer3.3.conv3.weight', 'backbone.layer4.1.bn2.weight', 'classifier.weight', 'classifier.bias', 'bert.encoder.layer.6.attention.self.key.weight', 'backbone.layer2.3.bn3.num_batches_tracked', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.11.output.dense.bias', 'backbone.layer1.2.bn1.running_var', 'backbone.layer2.3.bn1.running_var', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.3.attention.self.key.weight', 'backbone.layer3.0.conv3.weight', 'backbone.layer4.1.bn2.running_mean', 'backbone.layer3.1.conv2.weight', 'backbone.layer2.0.bn3.running_mean', 'backbone.layer3.0.downsample.1.running_var', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.query.bias', 'backbone.layer4.1.bn2.running_var', 'backbone.layer4.2.bn1.running_var', 'bert.encoder.layer.10.output.dense.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.0.attention.self.query.bias', 'backbone.layer4.2.bn3.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'backbone.layer2.2.bn3.weight', 
'backbone.layer2.3.bn3.bias', 'bert.encoder.layer.9.attention.output.dense.bias', 'backbone.layer3.0.bn3.running_mean', 'bert.encoder.layer.9.intermediate.dense.weight', 'backbone.layer1.0.bn1.num_batches_tracked', 'backbone.layer1.0.downsample.1.bias', 'backbone.layer1.1.bn2.running_var', 'backbone.layer3.5.bn3.num_batches_tracked', 'bert.encoder.layer.1.attention.output.dense.bias', 'backbone.layer3.0.bn1.num_batches_tracked', 'backbone.layer3.1.bn3.running_mean', 'backbone.layer3.4.bn1.weight', 'backbone.layer1.0.bn1.weight', 'backbone.bn1.running_var', 'backbone.layer4.1.bn1.weight', 'bert.embeddings.position_embeddings.weight', 'backbone.layer1.0.bn2.weight', 'backbone.layer3.1.bn1.num_batches_tracked', 'backbone.layer2.1.bn2.running_mean', 'backbone.bn1.num_batches_tracked', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.1.intermediate.dense.weight', 'backbone.layer3.2.conv1.weight', 'backbone.layer4.1.conv2.weight', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.8.attention.self.value.weight', 'backbone.layer3.0.conv1.weight', 'backbone.layer3.2.bn2.running_mean', 'bert.encoder.layer.1.output.LayerNorm.weight', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.2.attention.self.key.weight', 'backbone.layer1.2.bn2.running_mean', 'backbone.layer1.1.bn3.weight', 'backbone.layer1.1.bn2.weight', 'backbone.layer2.1.bn1.running_mean', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.5.intermediate.dense.bias', 'backbone.layer3.3.conv2.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'backbone.layer2.3.bn1.bias', 'backbone.layer2.0.bn1.num_batches_tracked', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'backbone.layer1.2.conv1.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'backbone.layer3.3.bn2.running_mean', 'bert.encoder.layer.3.attention.self.query.weight', 'backbone.layer3.2.bn2.bias', 'bert.encoder.layer.10.attention.self.value.weight', 'backbone.layer3.4.bn1.running_mean', 'bert.encoder.layer.10.attention.self.value.bias', 'backbone.layer3.5.bn2.running_var', 'bert.encoder.layer.5.attention.output.dense.weight', 'backbone.layer2.1.conv3.weight', 'bert.encoder.layer.11.attention.self.key.weight', 'backbone.layer1.0.bn2.bias', 'bert.encoder.layer.8.attention.output.dense.weight', 'backbone.layer2.0.bn2.running_mean', 'backbone.layer1.2.bn2.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'backbone.layer2.0.downsample.1.bias', 'backbone.layer3.2.bn2.num_batches_tracked', 'backbone.layer3.5.bn3.bias', 'backbone.layer2.0.bn1.bias', 'backbone.layer1.2.bn2.bias', 'bert.encoder.layer.0.output.dense.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'backbone.layer3.0.bn2.running_var', 'backbone.layer1.0.bn3.running_mean', 'backbone.layer4.0.downsample.1.running_var', 'bert.encoder.layer.11.attention.self.value.bias', 'backbone.layer3.5.bn1.bias', 'bert.encoder.layer.2.output.dense.bias', 'backbone.layer4.0.downsample.1.weight', 'backbone.layer3.0.bn3.bias', 'bert.encoder.layer.7.attention.self.value.bias', 'backbone.layer4.0.bn1.running_mean', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.8.intermediate.dense.weight', 'backbone.layer2.0.bn3.bias', 'backbone.layer2.3.bn2.bias', 'bert.encoder.layer.9.attention.self.value.bias', 'backbone.layer4.2.bn2.num_batches_tracked', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.embeddings.LayerNorm.bias', 
'backbone.layer4.2.bn2.running_mean', 'bert.encoder.layer.2.output.dense.weight', 'backbone.layer1.0.downsample.1.running_var', 'backbone.layer1.0.downsample.1.num_batches_tracked', 'backbone.layer3.3.bn2.weight', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.5.output.dense.bias', 'bert.encoder.layer.10.attention.output.dense.bias', 'backbone.layer3.3.bn1.running_mean', 'backbone.layer1.0.bn1.running_mean', 'backbone.layer3.1.bn1.bias', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.5.intermediate.dense.weight', 'backbone.layer4.0.downsample.1.num_batches_tracked', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.self.key.bias', 'backbone.layer1.2.bn1.running_mean', 'backbone.layer3.5.bn2.bias', 'bert.encoder.layer.10.attention.self.query.weight', 'backbone.layer2.1.bn3.num_batches_tracked', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'backbone.layer2.0.bn2.bias', 'backbone.layer3.0.bn2.num_batches_tracked', 'backbone.layer1.0.downsample.1.weight', 'bert.encoder.layer.1.attention.self.key.weight', 'pre_classify_fc.inplace.weight', 'backbone.layer2.0.bn2.running_var', 'bert.encoder.layer.1.output.dense.bias', 'backbone.layer2.2.bn2.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'backbone.layer2.0.bn1.running_mean', 'backbone.layer1.0.bn2.num_batches_tracked', 'backbone.layer3.2.bn1.running_mean', 'backbone.layer2.1.bn3.bias', 'backbone.layer2.1.bn3.weight', 'backbone.layer3.3.conv1.weight', 'bert.embeddings.position_ids', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.6.output.dense.weight', 'backbone.layer3.5.bn1.running_mean', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'backbone.layer3.3.bn2.num_batches_tracked', 'backbone.layer2.3.bn1.weight', 'backbone.layer4.2.bn1.num_batches_tracked', 'bert.encoder.layer.11.attention.self.query.weight', 'backbone.layer1.0.bn2.running_var', 'bert.encoder.layer.2.output.LayerNorm.bias', 'backbone.layer3.4.bn3.running_var', 'backbone.layer3.4.conv2.weight', 'bert.encoder.layer.4.attention.self.value.bias', 'pre_classify_fc.inplace.bias', 'bert.encoder.layer.9.output.dense.bias', 'bert.embeddings.word_embeddings.weight', 'backbone.layer3.1.bn2.weight', 'bert.encoder.layer.4.output.LayerNorm.weight', 'backbone.layer4.1.conv1.weight', 'backbone.layer4.0.bn2.num_batches_tracked', 'bert.encoder.layer.3.attention.self.value.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.self.query.weight', 'backbone.layer4.1.conv3.weight', 'backbone.layer2.2.bn1.running_var', 'backbone.layer3.1.conv1.weight', 'bert.encoder.layer.8.attention.self.query.bias', 'backbone.layer3.0.bn2.bias', 'bert.pooler.dense.weight', 'bert.encoder.layer.6.attention.self.value.bias', 'backbone.layer1.1.bn3.running_var', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'backbone.layer4.1.bn3.weight', 'bert.encoder.layer.3.attention.self.query.bias', 'backbone.layer3.3.bn1.num_batches_tracked', 'backbone.layer1.1.bn1.num_batches_tracked', 'backbone.layer3.1.bn1.weight', 'backbone.layer2.0.bn1.weight', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.11.output.dense.weight', 'backbone.layer3.2.bn2.running_var', 'bert.encoder.layer.7.output.dense.weight', 
'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.output.dense.bias', 'backbone.layer2.0.bn2.weight', 'backbone.layer3.2.bn3.num_batches_tracked', 'backbone.layer2.3.bn2.running_mean', 'backbone.layer1.0.downsample.0.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'backbone.layer3.3.bn3.running_mean', 'bert.encoder.layer.2.attention.self.value.weight', 'backbone.layer3.5.bn3.running_var', 'bert.encoder.layer.8.output.LayerNorm.weight', 'backbone.layer1.1.bn3.bias', 'backbone.layer1.2.bn3.weight', 'backbone.layer3.2.bn3.running_var', 'backbone.layer3.5.bn3.weight', 'backbone.layer3.0.bn1.running_mean', 'bert.embeddings.token_type_embeddings.weight', 'backbone.layer3.3.bn2.bias', 'bert.encoder.layer.3.intermediate.dense.weight', 'backbone.layer2.2.bn3.running_var', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'backbone.layer3.1.bn3.bias', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.6.attention.self.query.bias', 'backbone.layer1.1.conv2.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'backbone.layer1.1.bn2.num_batches_tracked', 'backbone.layer3.0.bn2.weight', 'backbone.layer3.5.bn3.running_mean', 'bert.encoder.layer.1.output.dense.weight', 'backbone.layer3.3.bn3.weight', 'backbone.layer3.1.bn2.running_var', 'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.8.attention.self.value.bias', 'bert.encoder.layer.5.attention.self.value.bias', 'backbone.layer3.3.bn1.running_var', 'backbone.layer3.4.bn3.weight', 'backbone.layer1.2.bn3.running_var', 'bert.encoder.layer.9.attention.self.query.bias', 'backbone.layer1.1.bn1.weight', 'backbone.layer3.4.bn2.bias', 'backbone.layer2.0.bn1.running_var', 'backbone.layer1.1.bn1.bias', 'bert.encoder.layer.11.output.LayerNorm.bias', 'bert.embeddings.LayerNorm.weight', 'backbone.layer1.0.downsample.1.running_mean', 'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'backbone.layer2.2.bn3.bias', 'backbone.layer4.2.bn1.weight', 'backbone.layer2.2.bn2.bias', 'backbone.layer2.2.bn1.num_batches_tracked', 'bert.encoder.layer.9.attention.self.query.weight', 'backbone.layer3.2.bn3.bias', 'bert.encoder.layer.7.attention.self.query.weight', 'backbone.layer3.1.bn1.running_var', 'backbone.layer1.1.bn3.num_batches_tracked', 'backbone.layer1.1.conv3.weight', 'bert.encoder.layer.8.attention.self.key.bias', 'bert.encoder.layer.6.intermediate.dense.weight', 'backbone.layer4.1.bn3.running_mean', 'backbone.layer2.2.bn1.weight', 'backbone.layer1.0.bn1.bias', 'backbone.layer3.0.downsample.1.num_batches_tracked', 'backbone.layer1.0.conv2.weight', 'backbone.layer4.2.bn2.running_var', 'backbone.layer4.2.bn3.num_batches_tracked', 'backbone.layer1.1.bn2.running_mean', 'bert.encoder.layer.6.output.dense.bias', 'backbone.layer2.0.bn3.running_var', 'backbone.layer3.1.bn1.running_mean', 'bert.encoder.layer.5.attention.self.value.weight', 'backbone.layer3.4.bn1.num_batches_tracked', 'backbone.layer4.2.bn1.bias', 'bert.encoder.layer.7.attention.self.query.bias', 'backbone.conv1.weight', 'backbone.fc.weight', 'backbone.layer4.1.bn1.bias', 'backbone.layer4.2.bn2.bias', 'backbone.layer4.0.bn1.weight', 'bert.encoder.layer.3.attention.self.value.weight', 'backbone.layer1.1.bn1.running_mean', 'bert.encoder.layer.6.output.LayerNorm.weight', 
'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.self.query.bias', 'backbone.layer4.0.bn2.running_var', 'backbone.layer3.4.conv3.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.4.attention.self.query.weight', 'backbone.layer1.2.bn2.num_batches_tracked', 'backbone.layer3.4.bn2.running_mean', 'backbone.bn1.weight', 'bert.encoder.layer.11.intermediate.dense.weight', 'backbone.layer2.0.conv1.weight', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.9.attention.self.key.bias', 'backbone.layer2.3.bn1.num_batches_tracked', 'backbone.layer2.2.bn2.running_var', 'bert.encoder.layer.11.attention.output.dense.bias', 'backbone.layer3.5.bn2.weight', 'backbone.layer3.4.conv1.weight', 'bert.encoder.layer.7.output.LayerNorm.bias', 'backbone.fc.bias', 'backbone.layer3.4.bn3.num_batches_tracked', 'backbone.layer2.0.downsample.1.running_mean', 'bert.encoder.layer.6.attention.self.value.weight', 'backbone.layer3.0.downsample.1.bias', 'backbone.layer2.1.bn3.running_mean', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'backbone.layer3.2.conv2.weight', 'backbone.layer4.2.conv3.weight', 'backbone.layer1.1.conv1.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'backbone.layer3.2.bn3.running_mean', 'backbone.layer3.0.downsample.0.weight', 'backbone.layer4.0.bn2.bias', 'backbone.layer4.1.bn3.bias', 'backbone.layer1.2.conv3.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'backbone.layer3.5.bn1.running_var', 'bert.encoder.layer.7.attention.output.dense.bias', 'backbone.layer4.1.bn2.bias', 'bert.encoder.layer.10.attention.self.key.weight', 'backbone.layer1.2.bn1.weight', 'backbone.layer3.5.conv3.weight', 'backbone.layer4.1.bn2.num_batches_tracked', 'bert.encoder.layer.2.intermediate.dense.bias', 'backbone.layer2.0.conv3.weight', 'backbone.layer1.0.bn3.num_batches_tracked', 'backbone.layer2.2.conv1.weight', 'bert.encoder.layer.10.output.dense.bias', 'backbone.bn1.running_mean', 'backbone.layer2.2.bn2.running_mean', 'backbone.layer4.2.bn3.running_mean', 'bert.encoder.layer.5.attention.self.key.weight', 'backbone.layer4.0.bn1.running_var', 'bert.encoder.layer.5.output.dense.weight', 'backbone.layer4.0.bn3.running_var', 'backbone.layer4.2.bn3.running_var', 'backbone.layer3.4.bn2.running_var', 'backbone.layer2.1.bn2.running_var', 'backbone.layer2.2.conv3.weight', 'backbone.layer4.0.conv2.weight', 'backbone.layer4.2.bn1.running_mean', 'bert.encoder.layer.0.attention.self.query.weight', 'backbone.layer2.1.bn1.weight', 'backbone.layer2.0.downsample.1.running_var', 'backbone.layer1.0.bn1.running_var', 'backbone.layer4.2.conv1.weight', 'backbone.layer3.2.bn2.weight', 'backbone.layer3.3.bn3.bias', 'backbone.layer1.2.bn2.running_var', 'bert.encoder.layer.5.attention.self.query.bias', 'backbone.layer4.2.conv2.weight', 'backbone.layer3.1.conv3.weight', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.8.attention.self.key.weight', 'backbone.layer3.2.conv3.weight', 'backbone.layer2.1.bn1.bias', 'bert.encoder.layer.4.output.LayerNorm.bias', 'backbone.layer3.2.bn1.running_var', 'bert.encoder.layer.4.output.dense.weight', 'backbone.layer2.1.bn2.weight', 'backbone.layer4.0.conv3.weight', 'backbone.layer1.2.bn3.bias', 'backbone.layer2.1.bn1.num_batches_tracked', 'backbone.layer3.2.bn1.weight', 'backbone.layer3.5.bn2.num_batches_tracked', 'backbone.layer4.0.downsample.1.running_mean', 'bert.encoder.layer.8.attention.output.dense.bias', 
'backbone.layer3.0.bn2.running_mean', 'backbone.layer3.3.bn1.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.1.attention.self.value.bias', 'backbone.layer3.2.bn1.num_batches_tracked', 'backbone.layer2.3.conv2.weight', 'backbone.layer2.3.bn2.running_var', 'backbone.layer2.2.bn3.running_mean', 'backbone.layer1.2.bn1.bias', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.11.output.LayerNorm.weight', 'backbone.layer4.0.downsample.1.bias', 'bert.encoder.layer.10.attention.self.query.bias', 'backbone.layer3.1.bn2.bias', 'backbone.layer4.0.bn2.weight', 'backbone.layer2.3.bn1.running_mean', 'bert.encoder.layer.5.attention.self.key.bias', 'bert.encoder.layer.5.output.LayerNorm.bias', 'backbone.layer3.3.bn3.running_var', 'bert.encoder.layer.0.attention.self.key.weight', 'backbone.layer3.1.bn2.num_batches_tracked', 'bert.encoder.layer.6.attention.self.key.bias', 'backbone.layer2.0.bn2.num_batches_tracked', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.attention.self.value.bias', 'backbone.layer2.3.conv1.weight', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.2.attention.self.key.bias', 'backbone.layer1.2.bn3.num_batches_tracked', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.7.output.dense.bias', 'backbone.layer3.0.downsample.1.weight', 'backbone.layer3.0.downsample.1.running_mean', 'backbone.layer3.4.bn1.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'backbone.layer1.0.bn3.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'backbone.layer3.0.bn3.running_var', 'backbone.layer1.2.conv2.weight', 'backbone.layer3.1.bn3.weight', 'backbone.layer2.0.downsample.0.weight', 'backbone.layer3.1.bn3.num_batches_tracked', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'backbone.layer3.4.bn3.bias', 'backbone.layer3.0.conv2.weight', 'backbone.layer1.2.bn1.num_batches_tracked', 'backbone.layer3.3.bn2.running_var', 'backbone.layer2.0.bn3.weight', 'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.output.dense.weight', 'backbone.layer4.0.bn1.num_batches_tracked', 'bert.encoder.layer.8.intermediate.dense.bias', 'backbone.layer4.1.bn1.running_var', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'backbone.layer4.1.bn3.running_var', 'backbone.layer3.4.bn2.weight', 'backbone.layer2.0.conv2.weight', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'backbone.layer4.1.bn1.running_mean', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.self.value.weight', 'backbone.layer4.2.bn2.weight', 'backbone.layer4.0.bn2.running_mean', 'backbone.layer2.3.conv3.weight', 'bert.encoder.layer.0.intermediate.dense.bias', 'backbone.layer2.1.bn1.running_var', 'backbone.layer2.3.bn2.weight', 'backbone.layer3.5.conv2.weight', 'backbone.layer3.1.bn3.running_var', 'bert.encoder.layer.2.intermediate.dense.weight', 'backbone.layer1.0.bn3.weight', 'bert.encoder.layer.4.attention.output.dense.weight', 'backbone.layer3.4.bn2.num_batches_tracked', 'backbone.layer3.0.bn1.running_var', 'backbone.layer3.1.bn2.running_mean', 'bert.encoder.layer.1.intermediate.dense.bias', 
'backbone.layer4.0.bn3.weight', 'another_classifier.weight', 'backbone.layer2.1.bn3.running_var', 'backbone.layer2.3.bn3.running_mean', 'backbone.layer4.2.bn3.weight', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.4.output.dense.bias', 'backbone.layer2.1.bn2.bias', 'backbone.layer4.0.bn3.running_mean', 'backbone.layer1.2.bn3.running_mean', 'backbone.layer3.3.bn3.num_batches_tracked', 'backbone.layer2.0.bn3.num_batches_tracked', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'backbone.layer1.0.conv1.weight', 'backbone.layer3.0.bn3.weight', 'backbone.layer3.4.bn1.running_var', 'bert.encoder.layer.8.output.dense.bias', 'backbone.layer4.0.bn1.bias', 'backbone.layer3.5.bn2.running_mean']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.<|||||>The name scopes you define in your model should be the same as the pretrained model you're trying to load, so that these weights can be correctly loaded into variables.
For example, according to the warning you pasted, `xlm-roberta-base` uses the name `roberta` as its base model, while in your model it is `bert`, so `from_pretrained` method cannot correspond the names between them.<|||||>@qqaatw Yes, you are right that bert encoder layers are being utilized in `xlm-roberta-base.` That's what i am unable to understand why it is initializing with bert base when `XLMRoberta` is base model in class `XLMRobertaWithImageConcatenationMultiOutputClassifier`<|||||>The model behind `xlm-roberta-base` checkpoint is `RobertaForMaskedLM`, which uses the name `roberta` as its base model, you can regard `roberta` as an alias of `bert` in Roberta-related models.
Here is its constructor:
```
def __init__(self, config):
super().__init__(config)
if config.is_decoder:
logger.warning(
"If you want to use `RobertaForMaskedLM` make sure `config.is_decoder=False` for "
"bi-directional self-attention."
)
self.roberta = RobertaModel(config, add_pooling_layer=False)
self.lm_head = RobertaLMHead(config)
# The LM head weights require special treatment only when they are tied with the word embeddings
self.update_keys_to_ignore(config, ["lm_head.decoder.weight"])
self.init_weights()
```
And therefore, to load the weights from `xlm-roberta-base`, the names of your custom model's variables should be exactly the same as those of Roberta, i.e. the name `self.bert` should be changed to `self.roberta`, and so should the other variables.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,714 | closed | Align attention_mask dtype to the attention_scores for BERT to speedup mixed precision inference and training | # Why we need to align the attention_mask dtype to the attention_scores
For the encoder layers in the BERT model, the attention_mask has to be added to the attention_scores before the softmax layer so that padded positions in the batch are masked out.
The attention_mask is always FP32 in this repository. That works fine when everything runs in FP32, but under mixed precision the attention_scores are in a low-precision dtype such as BF16, and in PyTorch adding an FP32 mask to them promotes the output of the add back to FP32. The following softmax and dropout therefore also run in FP32 and an extra `to` operation is introduced, which clearly reduces mixed-precision performance when the attention_mask stays in FP32.
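A minimal standalone sketch of the dtype promotion described above (tensor shapes are illustrative; this is not the modeling code itself):
```python
import torch

# bf16 attention scores and an fp32 additive mask, as in mixed-precision BERT
attention_scores = torch.randn(1, 12, 16, 16).to(torch.bfloat16)
attention_mask = torch.zeros(1, 1, 1, 16, dtype=torch.float32)

# the fp32 mask promotes the sum (and everything downstream) back to fp32
print((attention_scores + attention_mask).dtype)  # torch.float32

# casting the mask once keeps the computation in bf16
aligned_mask = attention_mask.to(attention_scores.dtype)
print((attention_scores + aligned_mask).dtype)  # torch.bfloat16
```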
# Optimization
Convert the attention_mask to the same dtype as the attention_scores in the first encoder layer. With this optimization the data type is converted only once, and the following softmax and dropout also use the low-precision dtype, which reduces the memory footprint and improves performance. | 09-23-2021 13:16:21 | 09-23-2021 13:16:21 | Thank you for the PR!
Note that `attention_mask` is actually cast to whatever the given `dtype` is. For BERT-like models the mask is prepared using the `get_extended_attention_mask` method, which checks and casts the mask to the correct `dtype`, cf
https://github.com/huggingface/transformers/blob/62832c962f85b5a554ebf8b930d13b76b9028a8d/src/transformers/modeling_utils.py#L232
So the attention operation should also be in mixed precision.<|||||>But for modeling_bert.py the attention_mask is generated with
`extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)`
the dtype of extended_attention_mask is the same as attention_mask's. Since attention_mask is an input of the model, it is FP32 in general. For mixed precision in PyTorch we use autocast, so the user doesn't need to convert the input dtype explicitly; the attention mask will therefore always be FP32 here. In fact, the dtype of the mask should be the same as the attention_scores. <|||||>>But for modeling_bert.py the attention_mask is generated with
extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)
the dtype of extended_attention_mask is the same as attention_mask's
if you go through `get_extended_attention_mask`, you'll find that the mask is cast to `self.dtype`, which calls `get_parameter_dtype`
https://github.com/huggingface/transformers/blob/62832c962f85b5a554ebf8b930d13b76b9028a8d/src/transformers/modeling_utils.py#L130
which returns the `dtype` of parameter, so the `extended_attention_mask` will have the same `dtype` as that of model parameters. So if params are in `fp16` then `extended_attention_mask` will also be in `fp16`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,713 | closed | Documentation Mistake: no_multi_processing for Benchmarks | The documentation on setting the argument [no_multi_processing during benchmarking](https://huggingface.co/transformers/v4.0.1/benchmarks.html#benchmark-best-practices) is unclear: _The option **no_multi_processing should only be set to True** for testing and debugging. To ensure accurate memory measurement it is recommended to run each memory benchmark in a separate process by making sure **no_multi_processing is set to True**._
From looking at the [docstrings in benchmark_args_utils.py](https://github.com/huggingface/transformers/blob/master/src/transformers/benchmark/benchmark_args_utils.py#L86), no_multi_processing should only be set to False for testing and debugging.
| 09-23-2021 11:00:34 | 09-23-2021 11:00:34 | Pinging @patrickvonplaten as the author of https://github.com/huggingface/transformers/pull/5360<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,712 | closed | Add FSNER example in research_projects | # What does this PR do?
- This PR adds example code for FSNER (few-shot named entity recognition) using huggingface's `transformers` library.
- Only prediction/inference code is provided, training code will be provided very soon.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/pull/13155
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge @LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-23-2021 09:12:37 | 09-23-2021 09:12:37 | Hey @LysandreJik,
Just pinging to have a look at this PR. The branch is the same as before; I pushed only one more commit, where I add 2 dependencies in setup.py so this can be used as a standalone package. |
transformers | 13,711 | closed | problem with using past_key_values in T5ForConditionalGeneration | Hello! Here's my problem, I'm trying to add some modifications to past_key_values(like adding some tunable parameters) during training. To do this, I must pass both the _past_key_values_ and _labels_ parameters during training. I have implemented this in the BARTForConditionalGeneration model, but when I'm trying to do this in T5ForConditionalGeneration, I find the following limitations.
In T5ForConditionalGeneration, when training, we're not able to use the parameter _past_key_values_ together with _labels_.
And even if we use decoder_input_ids rather than labels, only the last target token is used, if the _past_key_values_ parameter is provided.
https://github.com/huggingface/transformers/blob/62832c962f85b5a554ebf8b930d13b76b9028a8d/src/transformers/models/t5/modeling_t5.py#L1596-L1603
However, this limitation doesn't exist in other models like BART (which I think has a similar structure with T5).
https://github.com/huggingface/transformers/blob/62832c962f85b5a554ebf8b930d13b76b9028a8d/src/transformers/models/bart/modeling_bart.py#L1293-L1298
As far as I can see, the limitation in T5ForConditionalGeneration is not necessary. Even if we pass both the _past_key_values_ and the _labels_, there will be no errors, since the inner modules like T5Block and T5Attention can handle the past_key_values properly (I'm trying to delete this limitation in T5 in my own fork, but I've not experimented yet, so I'm not 100% sure). Or does this limitation serve some other purpose in T5?
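For concreteness, a minimal sketch of the call pattern I would like to be able to use (the inputs are placeholders; in my real use case the past_key_values are built from tunable prefix parameters):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer("translate English to German: I love you.", return_tensors="pt")
labels = tokenizer("Ich liebe dich.", return_tensors="pt").input_ids

# obtain some past key/value states from a generation-style forward pass
step = model(**enc, decoder_input_ids=labels[:, :1], use_cache=True)
past = step.past_key_values

# the combination this issue asks to allow during training: the full labels
# together with (modified) past_key_values; the current T5 code blocks or
# truncates this, while the analogous BART call is accepted
out = model(**enc, labels=labels, past_key_values=past)
```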
| 09-23-2021 08:58:27 | 09-23-2021 08:58:27 | cc @patrickvonplaten <|||||>Hey @yssjtu,
I agree with you! Would you like to open a PR to fix this? :-) |
transformers | 13,710 | closed | Fix LayoutLM ONNX test error | # What does this PR do?
Fixes the batch_size and seq_len computation during ONNX export in configuration_layoutlm.py.
PRs: #13702, #13562
Issue: #13300
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@michaelbenayoun @LysandreJik @NielsRogge @mfuntowicz | 09-23-2021 08:07:05 | 09-23-2021 08:07:05 | Hi,
Can you also add LayoutLM to the list of supported models? https://huggingface.co/transformers/serialization.html
Thanks!<|||||>Seems good to me!
@NielsRogge I think [he has already done it](https://github.com/huggingface/transformers/pull/13562/files#diff-f4bad33d844c5d91b09dd0af27395ade640e907920b357daf125dd19458bdd60R82), or you're talking about something different?<|||||>Yeah I'm talking about the documentation (.rst file): https://github.com/huggingface/transformers/blob/master/docs/source/serialization.rst
It also seems like other models (like mBART) are supported to be exported to ONNX, but they are not mentioned in the docs. I asked @sgugger if we can create an automagically updated table for this.<|||||>@NielsRogge I have added LayoutLM to the list of supported models. :) |
transformers | 13,709 | closed | Unable to call .from_pretrained() for Roberta Model | ### Defined a class that inherits from RobertaPretrainedModel
This class is defined to concat image and text embedding
```python
class XLMRobertaWithImageConcatenationMultiOutputClassifier(RobertaPreTrainedModel):
    def __init__(self, config_bert, num_labels):
        super(XLMRobertaWithImageConcatenationMultiOutputClassifier, self).__init__(config_bert)
        self.num_labels = num_labels
        self.num_another_labels = num_another_labels
        self.bert = RobertaModel(config_bert)
        self.backbone = models.resnet50(pretrained=False)
        num_features = self.backbone.fc.in_features
        self.img_hidden_size = 512
        self.pre_classify_hidden = 512
        self.pre_classify_fc = torch.nn.Linear(config_bert.hidden_size + self.img_hidden_size, self.pre_classify_hidden)
        self.classifier = torch.nn.Linear(self.pre_classify_hidden, num_labels)
        self.apply(self._init_weights)
```
### Trying to call XLMRobertaWithImageConcatenationMultiOutputClassifier from pretrained
```python
model = XLMRobertaWithImageConcatenationMultiOutputClassifier.from_pretrained( 'roberta-base', num_labels = NUM_CLASSES)
```
### Error
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-d0dbf641e2da> in <module>
---> 17 model = XLMRobertaWithImageConcatenationMultiOutputClassifier.from_pretrained('roberta-base', num_labels=NUM_CLASSES)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1323 else:
1324 with no_init_weights(_enable=_fast_init):
-> 1325 model = cls(config, *model_args, **model_kwargs)
1326
1327 if from_pt:
TypeError: __init__() missing 1 required positional argument: 'num_labels'
```
Is there something I am missing? Despite giving num_labels it still asks for that parameter? Any idea or help would be highly appreciated.
| 09-23-2021 07:45:01 | 09-23-2021 07:45:01 | Maybe you can try specifying `num_labels` as a positional argument instead of a keyword argument, like the following:
```
model = XLMRobertaWithImageConcatenationMultiOutputClassifier.from_pretrained( 'roberta-base', NUM_CLASSES)
```
<|||||>Hi!
@qqaatw is right, `model_args` to `__init__` should be passed as positional arguments instead of as `kwargs`; the `kwargs` are passed to the `config` init.
Another way to do this would be to add `num_labels` to configuration and then access that as `config.num_labels` in the model. Then you could pass it as `kwargs` to `from_pretarained`.<|||||>@qqaatw you are absolutely correct about not giving num_labels as keyword argument! I finally implemented like @patil-suraj suggested to fetch num_labels and another_num_labels from a configuration. |
transformers | 13,708 | closed | Unable to execute RobertaModel from pretrained | ### Defined a class that inherits from RobertaPretrainedModel and concats image embedding
```python
class XLMRobertaWithImageConcatenationMultiOutputClassifier(RobertaPreTrainedModel):
    def __init__(self, config_bert, num_labels):
        super(XLMRobertaWithImageConcatenationMultiOutputClassifier, self).__init__(config_bert)
        self.num_labels = num_labels
        self.num_another_labels = num_another_labels
        self.bert = RobertaModel(config_bert)
        self.backbone = models.resnet50(pretrained=False)
        num_features = self.backbone.fc.in_features
        self.img_hidden_size = 512
        self.pre_classify_hidden = 512
        self.pre_classify_fc = torch.nn.Linear(config_bert.hidden_size + self.img_hidden_size, self.pre_classify_hidden)
        self.classifier = torch.nn.Linear(self.pre_classify_hidden, num_labels)
        self.apply(self._init_weights)
```
### Trying to call XLMRobertaWithImageConcatenationMultiOutputClassifier from pretrained
```python
model = XLMRobertaWithImageConcatenationMultiOutputClassifier.from_pretrained( 'roberta-base', num_labels = NUM_CLASSES)
```
### Error
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-d0dbf641e2da> in <module>
---> 17 model = XLMRobertaWithImageConcatenationMultiOutputClassifier.from_pretrained('roberta-base',, num_labels=NUM_CLASSES)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1323 else:
1324 with no_init_weights(_enable=_fast_init):
-> 1325 model = cls(config, *model_args, **model_kwargs)
1326
1327 if from_pt:
TypeError: __init__() missing 1 required positional argument: 'num_labels'
```
Is there something I am missing? Despite giving num_labels it still asks for that parameter? Any idea or help would be highly appreciated.
| 09-23-2021 07:13:00 | 09-23-2021 07:13:00 | |
transformers | 13,707 | closed | RunTimeError when using prefix_allowed_tokens_fn and top-k/top-p sampling in model.generate | Hi, I'm using T5-large, torch 1.9 and transformers 4.8.2 to generate sentences in the predefined corpus. I tried to use both `prefix_allowed_tokens_fn` and top-k sampling, `do_sample=True; top_k=50`. When I use the two separately, I don't get any problem, but when I try to use the two together, I get a runtimeerror, `RunTimeError: probability tensor contains either 'inf', 'nan' or element<0`. Do you have a guess on why this happens?
The simple code I used for generation is
```python
import pickle
from typing import Dict, List

from transformers import T5ForConditionalGeneration

with open("corpus.pkl", "rb") as f:
    trie = dict(pickle.load(f))

def get(input_ids: List[int]):
    return _get_from_trie(input_ids, trie)

def _get_from_trie(input_ids: List[int], trie_dict: Dict):
    if len(input_ids) == 0:
        output = list(trie_dict.keys())
        return output
    elif input_ids[0] in trie_dict:
        return _get_from_trie(input_ids[1:], trie_dict[input_ids[0]])
    else:
        return []

model = T5ForConditionalGeneration.from_pretrained('t5-large').eval()
outputs = model.generate(
    **input_args,  # `input_args` holds the tokenized inputs (not shown in the original snippet)
    num_beams=5,
    prefix_allowed_tokens_fn=lambda batch_id, sent: get(sent.tolist()),
    do_sample=True,
    top_k=50
)
```
| 09-23-2021 06:00:57 | 09-23-2021 06:00:57 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,706 | closed | how to finetune huggingface MarianMT transformer model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
Python3.8
Linux RHEL 6.9
- `transformers` version: 4.10.3
- Platform: Linux RHEL
- Python version: 3.8
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?):
- Using GPU in script?: No (for small dataset)
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
MarianMT
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): MarianMT
The problem arises when using:
* [ ] the official example scripts: (give details below) - i dont see script for translation
* [ ] my own modified scripts: (give details below) - i am using simple transformer library & with customized code too
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I am very new to transformers, so I would appreciate your help with this.
1. Can you please help me with a code snippet to fine-tune a MarianMT model for id (Indonesian) to en (English) translation?
(or)
2. I will share the code snippet I am using. With this code I am able to train the model (it has not created the vocab.json, source.spm & target.spm files), but while loading the model it expects those vocab files (vocab.json, source.spm & target.spm). Please help me create these vocab files, and also let me know whether the approach I am following is correct.
(trained model with 2 variations using Trainer API & simple transformers library)
#### Sample code i am using
****************** Using Trainer API *************************
```python
from transformers import MarianTokenizer, MarianMTModel
from datetime import datetime
print("start time:", datetime.now())
path = "Lang Translation/opus-mt-id-en"
tokenizer = MarianTokenizer.from_pretrained(path)
model = MarianMTModel.from_pretrained(path)
from transformers import LineByLineTextDataset
train_dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="Lang Translation/dataset/train.json",
    block_size=128,
)
```
*** Sample text from train.json
{"translation": {"id": "Perahu Pinisi yang kokoh dapat dikatakan sebagai salah satu kerajinan kayu yang terkuat di Indonesia. Dibangun untuk bertahan di angin yang berkecamuk dan lautan yang berbadai, kapal yang abadi ini tidak hanya dapat bermanuver saat mengarungi perairan yang bergelombang, layar yang terbentang dapat menangkap dorongan angin \u2013 sebagai elemen yang penting dalam memberikan perlindungan.", "en": "The majestic Phinisi Yacht is said to be the strongest wooden craft in Indonesia. Built to survive raging winds and stormy seas, this timeless vessel is not only good at maneuvering in choppy waters, its sails can be angled to catch the winds \u2014 thereby harnessing the powerful elements to propel itself to safety."}}
```python
test_dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="Lang Translation/dataset/test.json",
    block_size=128,
)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
    output_dir="results",
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_gpu_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
    prediction_loss_only=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()
trainer.save_model("results")
```
********************** Using Simple Transformer Library **************************
```python
import logging
import pandas as pd
from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs

logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
```
*******Sample text from train.csv
bahasa_text,english_text
"Perahu Pinisi yang kokoh dapat dikatakan sebagai salah satu kerajinan kayu yang terkuat di Indonesia. Dibangun untuk bertahan di angin yang berkecamuk dan lautan yang berbadai, kapal yang abadi ini tidak hanya dapat bermanuver saat mengarungi perairan yang bergelombang, layar yang terbentang dapat menangkap dorongan angin – sebagai elemen yang penting dalam memberikan perlindungan.","The majestic Phinisi Yacht is said to be the strongest wooden craft in Indonesia. Built to survive raging winds and stormy seas, this timeless vessel is not only good at maneuvering in choppy waters, its sails can be angled to catch the winds — thereby harnessing the powerful elements to propel itself to safety."
```python
train_df = pd.read_csv("Lang Translation/train.csv", names=['input_text', 'target_text']).astype(str)
eval_df = pd.read_csv("Lang Translation/test.csv", names=['input_text', 'target_text']).astype(str)
train_df["prefix"] = ""
eval_df["prefix"] = ""
model_args = Seq2SeqArgs()
model_args.max_seq_length = 96
model_args.train_batch_size = 2
model_args.eval_batch_size = 1
model_args.num_train_epochs = 1
model_args.evaluate_during_training = True
model_args.evaluate_during_training_steps = 30000
model_args.use_multiprocessing = False
model_args.fp16 = False
model_args.save_steps = -1
model_args.save_eval_checkpoints = False
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
model_args.preprocess_inputs = False
model_args.num_return_sequences = 1
model_args.wandb_project = "Marian Bahasa-English Translation"
model = Seq2SeqModel(
    encoder_decoder_type="marian",
    encoder_decoder_name="Lang Translation/opus-mt-id-en",
    use_cuda=False,
    args=model_args)
model.train_model(train_df, eval_data=eval_df)
results = model.eval_model(eval_df)
```
***************************** Sample Input & Output **************************
I am a beginner with transformers. Please help me with this: either provide the fine-tuning process for MarianMT, or validate my code and provide a solution for creating the vocabulary files.
| 09-23-2021 04:58:16 | 09-23-2021 04:58:16 | Hi there!
Please use the [forum](https://discuss.huggingface.co) to ask such general questions, we use issues for bugs and feature requests. And the `run_translation.py` script [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) can be used to train/fine-tune Marian.<|||||>HI @patil-suraj ,
Thank you so much for quick response :).
as you suggested when i using run_translation.py script , am getting below error, can you please advice me

python run_translation.py \
--model_name_or_path '/Lang Translation/opus-mt-id-en' \
--do_train \
--do_eval \
--source_lang id \
--target_lang en \
--dataset_name '' \
--train_file '/Lang Translation/dataset/train.json' \
--test_file '/Lang Translation/dataset/test.json' \
--dataset_config_name id-en \
--output_dir '/outputs/marianmt/' \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
*** Sample text from train.json (source : bahasa, target: english)
{"translation": {"id": "Perahu Pinisi yang kokoh dapat dikatakan sebagai salah satu kerajinan kayu yang terkuat di Indonesia. Dibangun untuk bertahan di angin yang berkecamuk dan lautan yang berbadai, kapal yang abadi ini tidak hanya dapat bermanuver saat mengarungi perairan yang bergelombang, layar yang terbentang dapat menangkap dorongan angin \u2013 sebagai elemen yang penting dalam memberikan perlindungan.", "en": "The majestic Phinisi Yacht is said to be the strongest wooden craft in Indonesia. Built to survive raging winds and stormy seas, this timeless vessel is not only good at maneuvering in choppy waters, its sails can be angled to catch the winds \u2014 thereby harnessing the powerful elements to propel itself to safety."}}<|||||>It would be nice if you could post the stack-trace as text :)
Also, when you are passing your own train and test file, you don't need to pass `--dataset_name` and `--dataset_config_name`, that could be the issue.<|||||>Hi Thank you so much for your quick replies :)
as you suggested i have removed --dataset_name & --dataset_config_name parameters.. now i got below error. please help me
Traceback (most recent call last):
File "run_translation.py", line 551, in <module>
main()
File "run_translation.py", line 295, in main
raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/load.py", line 1084, in load_dataset
builder_instance = load_dataset_builder(
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/load.py", line 948, in load_dataset_builder
data_files = _resolve_data_files_locally_or_by_urls(".", data_files)
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/load.py", line 269, in _resolve_data_files_locally_or_by_urls
return {
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/load.py", line 270, in <dictcomp>
k: _resolve_data_files_locally_or_by_urls(base_path, v, allowed_extensions=allowed_extensions)
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/load.py", line 266, in _resolve_data_files_locally_or_by_urls
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to resolve any data file that matches '/Lang Translation/dataset/train.json' at /app/aafa/lang_translation_poc<|||||>Hi Just tried this way now... for --train_file & --test_file--> given direct file name instead of path with filename. it has given some other error. For your reference i am copying total stack trace. please help
python run_translation.py \
> --model_name_or_path '/Lang Translation/opus-mt-id-en' \
> --do_train \
> --do_eval \
> --source_lang id \
> --target_lang en \
> --train_file 'train.json' \
> --test_file 'test.json' \
> --output_dir '/outputs/marianmt/' \
> --per_device_train_batch_size=4 \
> --per_device_eval_batch_size=4 \
> --overwrite_output_dir \
> --predict_with_generate
/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
09/23/2021 15:29:28 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False
09/23/2021 15:29:28 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
_n_gpu=0,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_steps=None,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
generation_max_length=None,
generation_num_beams=None,
gradient_accumulation_steps=1,
greater_is_better=None,
group_by_length=False,
hub_model_id=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=None,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=/outputs/marianmt/runs/Sep23_15-29-28_x01taafaapp1a.vsi.uat.dbs.com,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
output_dir=/outputs/marianmt/,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=4,
per_device_train_batch_size=4,
predict_with_generate=True,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=None,
remove_unused_columns=True,
report_to=['tensorboard', 'wandb'],
resume_from_checkpoint=None,
run_name=/outputs/marianmt/,
save_on_each_node=False,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
sortish_sampler=False,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
09/23/2021 15:29:38 - INFO - datasets.utils.file_utils - HEAD request to https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/json/json.py timed out, retrying... [1.0]
09/23/2021 15:29:49 - WARNING - datasets.builder - Using custom data configuration default-3762545e4e6cb49f
09/23/2021 15:29:49 - INFO - datasets.builder - Generating dataset json (/home/devaafabg2/.cache/huggingface/datasets/json/default-3762545e4e6cb49f/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50)
Downloading and preparing dataset json/default to /home/devaafabg2/.cache/huggingface/datasets/json/default-3762545e4e6cb49f/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 7025.63it/s]
09/23/2021 15:29:49 - INFO - datasets.utils.download_manager - Downloading took 0.0 min
09/23/2021 15:29:49 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 513.94it/s]
09/23/2021 15:29:49 - INFO - datasets.utils.info_utils - Unable to verify checksums.
09/23/2021 15:29:49 - INFO - datasets.builder - Generating split train
Traceback (most recent call last):
File "run_translation.py", line 551, in <module>
main()
File "run_translation.py", line 295, in main
raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/builder.py", line 1188, in _prepare_split
writer.write_table(table)
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/datasets/arrow_writer.py", line 426, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1596, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 592, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 329, in pyarrow.lib.asarray
File "pyarrow/table.pxi", line 277, in pyarrow.lib.ChunkedArray.cast
File "/app/aafa/lang_translation_poc/env/lib64/python3.8/site-packages/pyarrow/compute.py", line 297, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 527, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 337, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<id: string, en: string> to struct using function cast_struct<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,705 | closed | update run_translation.py | Lacks a parameter named "generation_max_length". This one parameter was added.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-23-2021 03:50:28 | 09-23-2021 03:50:28 | Thanks for the PR! `generation_max_length` is already included in `Seq2SeqTrainingArguments`.
https://github.com/huggingface/transformers/blob/62832c962f85b5a554ebf8b930d13b76b9028a8d/src/transformers/training_args_seq2seq.py#L50-L53<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,704 | closed | Error while running GPT-J 6B with revision="float16" | Hi, I am getting `RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'` while running the following snippet of code on the latest master. Suspect that it is because of the `revision="float16"`.
## Code
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \
"previously unexplored valley, in the Andes Mountains. Even more surprising to the " \
"researchers was the fact that the unicorns spoke perfect English."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100,)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
The error happens in `model.generate`. Stacktrace:
```
/usr/local/lib/python3.7/dist-packages/transformers/models/gptj/modeling_gptj.py in forward(self, hidden_states, layer_past, attention_mask, head_mask, use_cache, output_attentions)
272 ):
273 residual = hidden_states
--> 274 hidden_states = self.ln_1(hidden_states)
275 attn_outputs = self.attn(
276 hidden_states,
```
### Who can help
Models:
- gptj: @patrickvonplaten, @EricHallahan, @StellaAthena
## Information
Model I am using (Bert, XLNet ...): GPT-J 6B
## To reproduce
Steps to reproduce the behavior:
Run the provided code snippet
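For comparison, a minimal sketch of the GPU variant I would expect to work (assuming a CUDA GPU with float16 support is available; keeping the half-precision model on CPU is what triggers the missing half-precision LayerNorm kernel):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

input_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids.to("cuda")
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100)
print(tokenizer.batch_decode(gen_tokens)[0])
```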
 | 09-23-2021 03:50:27 | 09-23-2021 03:50:27 | You need hardware that supports half-precision to run the model in half-precision. If you have a GPU with float16 support, load the model to the GPU. If you only have a CPU without float16 support, you must specify `torch_dtype=torch.float32` instead.<|||||>I got the same error even though I use a p3.2xlarge on AWS, which has half-precision support.
any idea what to do?
|
transformers | 13,703 | closed | Replace torch.set_grad_enabled by torch.no_grad | Replace `torch.set_grad_enabled` by `torch.no_grad` in the ONNX converter | 09-22-2021 23:20:13 | 09-22-2021 23:20:13 | |
transformers | 13,702 | closed | Skip ONNX LayoutLM test | Skips test until https://github.com/huggingface/transformers/pull/13562#issuecomment-925395462 is resolved | 09-22-2021 23:19:46 | 09-22-2021 23:19:46 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,701 | closed | Fix typo in torchscript tests | null | 09-22-2021 23:02:49 | 09-22-2021 23:02:49 | |
transformers | 13,700 | closed | Patch training arguments issue | Patches an issue with the tokens in the training arguments | 09-22-2021 19:20:54 | 09-22-2021 19:20:54 | |
transformers | 13,699 | closed | Patch training arguments issue | Fix an issue with the hub token. (v4.10 patch) | 09-22-2021 19:16:01 | 09-22-2021 19:16:01 | LGTM! Modifying a dict as you iterate over it with `.keys()` is usually bad I think, since `keys()` returns a non-constant view that will change if the dictionary changes. In the case where you only update values and never add or remove any keys I believe it's okay, though. |
transformers | 13,698 | closed | Fine-tuning GPT-J 6B with 16Gb of VRAM | Hi
I am trying to fine-tune GPT-J 6B with 16Gb of VRAM. Do you have any ideas/pointers on what can be done to lower the amount of VRAM needed for fine-tuning?
@StellaAthena @EricHallahan
| 09-22-2021 18:17:44 | 09-22-2021 18:17:44 | I've tried DeepSpeed with Zero-3 and Zero-Offload, and the model still doesn't fit. <|||||>Try checking out the instructions [here](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/howto_finetune.md)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,697 | closed | Pipeline “zero-shot-classification” gives “TypeError: __call__() takes 2 positional arguments but 3 were given.” | - `transformers` version: 4.11.0.dev0
- Platform: Linux-4.9.253-rt168-tegra-aarch64-with-debian-buster-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: (True)
- Using distributed or parallel set-up in script?: (False)
@LysandreJik
## Information
Model I am using (‘facebook/bart-large-mnli’):
The problem arises when using:
the official example scripts: (basically)
```python
from transformers import pipeline
classify=pipeline('zero-shot-classification')
text=("Give me a weather report")
tags=["request_weather", "catch_fire"]
classify(text, tags)
```
The tasks I am working on is:
my own task or dataset:
It’s a bunch of ugly hacks on each other’s shoulders, in a trench coat, masquerading as a Python script.
## To reproduce
Steps to reproduce the behavior:
1. Run script
Result:
```bash
Traceback (most recent call last):
File "test.py", line 8, in <module>
classify(text, tags)
TypeError: __call__() takes 2 positional arguments but 3 were given
```
## Expected behavior
Literally anything but this. I am very confused. Please help, very much appreciated, getting gray hairs from this, thanks!
| 09-22-2021 18:15:38 | 09-22-2021 18:15:38 | Hello, it seems that there is an issue indeed, the argument is not recognized unless it is a keyword argument (cc @Narsil)
You can do the following in order to have your code work:
```py
classify(text, candidate_labels=tags)
```
which will output
```
{'sequence': 'Give me a weather report', 'labels': ['request_weather', 'catch_fire'], 'scores': [0.9743501543998718, 0.02564983256161213]}
```<|||||>@LysandreJik Ahh, ok so that’s what was happening. I’ll be curious to know when you find out what caused it. Thanks for the fix!<|||||>Hi @Jcwscience ,
There was a bit of rework of the pipelines to enable new features (GPU streaming most importantly), and this specific call option wasn't tested, so we forgot to account for it.
FYI, in return we enabled this:
```python
classify=pipeline('zero-shot-classification', candidate_labels=["request_weather", "catch_fire"])
classify("text")
classify("something")
```<|||||>@Narsil
Makes sense. Happens to my code about 3 times a day. I was honestly thrilled that I wasn't just doing something stupid! Thanks!<|||||>We will make a PR to fix that too.<|||||>PR is up for review. |
transformers | 13,696 | closed | [docs/gpt-j] add a note about tokenizer | # What does this PR do?
This PR adds a note about why the `gpt-j` tokenizer has `vocab_size` of 50400 instead of 50257 as the original tokenizer. | 09-22-2021 15:46:17 | 09-22-2021 15:46:17 | cc @StellaAthena <|||||>Looks good to me |
transformers | 13,695 | closed | Wav2vec2 with different tokenizer ? | Hello,
thank you for making wav2vec2 compatible with the transformers package.
I have a question regarding the tokenizer part. By default, the tokenizer does prediction at the character level. I wanted to know if it is possible to use a tokenizer similar to the ones used in NLP, such as the RobertaTokenizer (or another one based on SentencePiece/WordPiece)?
I tried to use the roberta tokenizer for example but got an error with the processor :
> ValueError: `tokenizer` has to be of type <class 'type'>, but is <class 'transformers.models.roberta.tokenization_roberta.RobertaTokenizer'>
Is it possible to use a different tokenizer than char level with this wav2vec implementation ? | 09-22-2021 15:00:57 | 09-22-2021 15:00:57 | [This](https://huggingface.co/transformers/master/model_doc/speechencoderdecoder.html) could be of interest to you. Notice the examples given, you can use the [Speech2Text2Processor](https://huggingface.co/transformers/master/model_doc/speech_to_text_2.html#speech2text2processor) to wrap the feature extractor and the tokenizer you want to use.<|||||>Hey @Shiro-LK,
When using `Wav2Vec2ForCTC` then it is necessary to use the `Wav2Vec2Tokenizer` which operates on the char level. In order to use a Roberta-like tokenizer, one should probably use the `SpeechEncoderDecoder` framework with `Wav2Vec2` being the encoder and roberta the decoder.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,694 | closed | Limit number of checkpoint on `examples/pytorch/summarization/run_summarization.py` same as `save_total_limit` on `Trainer` | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Is it possible to limit the number of checkpoints created in `examples/pytorch/summarization/run_summarization.py`? I finetuned a model and it keeps creating a new checkpoint every 5000 steps. I know this is already implemented in the transformers `Trainer` as `save_total_limit`.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
I finetuned my first model using Pegasus and the `run_summarization.py` script provided in the docs. I noticed that colab was always running out of memory so I had to manually delete checkpoints after every 1500 steps. This would make finetuning Pegasus models using `run_summarization.py` less tedious.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I could implement the `save_total_limit` param in `run_summarization.py` if this is not yet implemented. Just need to look on how to do it.
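For reference, a minimal sketch of the existing `Trainer` option this refers to (values are illustrative); since the script builds its training arguments on `TrainingArguments`, the same setting should also be reachable as a `--save_total_limit` command-line flag:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./pegasus-finetuned",
    save_steps=5000,
    save_total_limit=2,  # keep only the two most recent checkpoints
)
```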
| 09-22-2021 08:51:07 | 09-22-2021 08:51:07 | Nevermind I think I figured it out. Looks like `run_summarization.py` inherits everything from `Trainer` so adding `--save_total_limit 2` to the passed args limits the number of checkpoint savefiles. |
transformers | 13,693 | closed | [Wav2Vec2FeatureExtractor] Fix `extractor.pad()` dtype backwards compatibility | Resolves #13689
This fixes an issue introduced by #13650 with speech feature extractors' tensors being returned as torch.float64 when `.pad()` is called directly:
```python
from transformers import Wav2Vec2FeatureExtractor
import numpy as np
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
rand_input = np.ones((100,), dtype=np.float64)
out = extractor.pad([{"input_values": rand_input}], return_tensors="pt")
print(out.dtype) # <- this should be `torch.float32`
```
This is due to how pytorch converts float numpy arrays (new padding logic) vs python lists (old padding logic):
* uses torch.float32 for python lists by default: `torch.tensor([1.2, 2.3]).dtype # torch.float32`
* `np.array([1.2, 2.3]).dtype # np.float64`
* uses source dtype for numpy arrays: `torch.tensor(np.array([1.2, 2.3])).dtype # torch.float64` | 09-22-2021 08:05:39 | 09-22-2021 08:05:39 | |
transformers | 13,692 | closed | Assertions to exceptions | # What does this PR do?
This PR addresses the issue [#12789](url)
I have modified the transformers/src/transformers/tokenization_utils.py file to raise a `TypeError` instead of using assertions.
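As a generic illustration of the pattern (not the exact lines changed in tokenization_utils.py), an assertion used for input validation is replaced with an explicit exception, which is more descriptive and also survives running Python with `-O`:
```python
def check_tokens(tokens):
    # Before: assert isinstance(tokens, (list, tuple)), "tokens must be a list or tuple"
    # After: raise an explicit, descriptive exception instead
    if not isinstance(tokens, (list, tuple)):
        raise TypeError(f"tokens must be a list or tuple, got {type(tokens)}")
```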
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-22-2021 07:19:20 | 09-22-2021 07:19:20 | Note that you still need to run `make style` on your branch to fix the code quality check.<|||||>> Thank you for fixing this!
>
> Could you run `make style` and `make quality` from the root of the repo? That should fix the failing test.
Thank you so much, will do! <|||||>> Note that you still need to run `make style` on your branch to fix the code quality check.
Yes, this is my first time contributing to a PR, so I'm sorry if I'm being a little slow. I have run the 'make style', hoping it passes this time. And thanks for approving the changes :)<|||||>No problem, we all have to learn and start somewhere :-)
The last test failure is unrelated to this PR (flaky test) so merging. Thanks again for your contribution! |
transformers | 13,691 | closed | Raise exceptions instead of using assertions for control flow #12789 | # What does this PR do?
Replaces assertions with exceptions for this file - transformers/src/transformers/tokenization_utils.py
In accordance with this issue - https://github.com/huggingface/transformers/issues/12789
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-22-2021 06:39:21 | 09-22-2021 06:39:21 | |
transformers | 13,690 | open | [RFC] adding Tensor and Pipeline Parallelism to transformers | Following up on this proposal https://github.com/huggingface/transformers/issues/12772 I just had a discussion with @hyunwoongko (with great help from @JakeTae who patiently translated for us), and we tried to discuss a strategy of how to best integrate Tensor Parallelism (TP) and Pipeline Parallelism (PP) into `transformers`, making it easy for reviewers and the contributors. Note that
[parallelformers](https://github.com/tunib-ai/parallelformers) currently implements only TP.
So here is a great example of how the TP can be added, as @hyunwoongko already implemented it in his fork for `GPTNeo`
https://github.com/tunib-ai/transformers/commit/5bf8655be624b3aeda799b80fddd220213491b04 (he didn't use `GPT2` since it already has the naive PP implemented). So you can see exactly what we want to merge. It's a very thin layer to the model and most of the functionality is in the helper parallel utils. The end of the change is multiple tests/examples that need to be converted to our test framework.
Now, while adding TP is relatively easy, adding PP is very complex in the current state of HF models because they include many features that interfere with implementing PP - due to the requirements:
1. for the model to be `nn.Sequential`, and
2. inputs/outputs to be simple tensors with the first dimension of batch size.
So to implement PP we will most likely have to fork each model, strip the features that are unnecessary for scalability, and only then be able to implement PP.
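To make requirement 2 concrete, a PP-friendly block boils down to something like this (a minimal hypothetical sketch, not actual transformers code), whereas our real `forward()` methods take many optional arguments and return `ModelOutput` objects with nested tuples:

```python
import torch
from torch import nn

class PipelineFriendlyBlock(nn.Module):
    """Hypothetical transformer block: plain batch-first tensor in, tensor out,
    so blocks can be chained with nn.Sequential and split across pipeline stages."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.ff = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, hidden) -> (batch, seq_len, hidden)
        return torch.relu(self.ff(hidden_states))

# a PP framework can then cut this into stages
model = nn.Sequential(*[PipelineFriendlyBlock(768) for _ in range(12)])
```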
So my thinking is that perhaps we do it this way from the get-go? Instead of integrating TP into the normal model - say `GPTNeo` - we fork it to say `GPTNeo3D` right away and do all the work, including TP and PP, on that new model. Once everybody is happy we can rinse and repeat for other models.
I added 3D to `GPTNeo` to make `GPTNeo3D` - 3D = DP/TP/PP. I'm not exactly sure about, or attached to, this particular name; it's just something to start with.
Also once TP is implemented in say `GTPNeo3D` we can start replicating it to other models. Because [parallelformers](https://github.com/tunib-ai/parallelformers) has them all covered already. PP will be much harder and we can do this in parallel.
I wanted to check in with the team to see if this approach resonates better, rather than modifying the existing models.
Thank you!
Also see [this blog post explaining parallelforms](https://tunib.notion.site/TECH-2021-07-26-Parallelformers-Journey-to-deploying-big-models_TUNiB-32b19a599c38497abaad2a98727f6dc8).
-----------------------
Additionally see the main pytorch Parallelism discussion at https://github.com/pytorch/rfcs/pull/32
@LysandreJik, @sgugger, @patrickvonplaten
| 09-22-2021 01:23:51 | 09-22-2021 01:23:51 | @stas00 - I like this a lot! And as we're been dragging our feet with implementing some of the Megatron 3D parallelism into `mistral` - I think it might be a great way for us to collaborate; we can just start with the base GPT-2 model perhaps?
I think my (and the Mistral team's) addition the next few weeks will be trying to do some benchmarking of Megatron and existing gains with various subsets of parallelism (at a very fundamental level - profiling which kernels are being called, etc.) and maybe creating a set of unit tests to verify correctness?
Separately - might be worth keeping logs of how to "3D-ify" new models, and ways we might make that procedure even easier moving forward.
Let me know if this makes sense!<|||||>@stas00 @siddk If we are creating a new class, we do not need to modify the existing `parallelize()` method, so we do not need to work with GPTNeo. I think GPT2 would be better.<|||||>Thanks for the feedback, Sidd.
The reason @hyunwoongko thought of starting with GPTNeo was because GPT2 already has the naive PP `parallelize()`. But the problem is that it's not just in the model, it's also in the Trainer. So we probably need to choose some other action name for that function altogether. At least for a time being so that we could move forward.
Note that the intention is to do simple things first and not do too many things at once. So I think starting with GPTNeo on a clean slate is a better idea. Once it's happy it'd be trivial to replicate that to GPT2. And it's already done as you can see from the link in OP.
Here is my vision of _3Difying_ `transformers`:
step 1. implement TP in one model
step 2a. start replicating TP to other models
step 2b. start working on PP in one model
step 3a. start replicating PP to other models.
note how step 2 can be done in parallel by different people.
So I can see that Mistral's team efforts would be parallel work and not sequential. So for example:
step 3b. implement Mistral's GPT2 improvements to GPT2
step 4a. start replicating it to other models.
If were were to start with GPT2 we would interfere with your work, Sidd, so I think it's actually best if we pick 2 different starting models.
But let's stay focused in this discussion on TP+PP, otherwise it'd be too easy to get side-tracked. We already spent too much time talking - let's see some code going into `transformers`! :)
wrt trainers, it'll be a natural part of the work - I'm not worried too much about it. I don't know much about accelerate yet, but HF Trainer should be relatively easy.
<|||||>This makes a lot of sense to me - thanks @stas00 and @hyunwoongko for the clarifications! The steps above form a pretty good concrete plan - but if you both are already planning on tackling it, maybe it makes sense for us to tackle some of the other Megatron-LM improvements first, like the custom loss scaling/kernels/etc. (in mistral, so we can break things 😅)? And as y'all build the "main API" for 3D parallelism, we can just drop that in, and train larger models!
The PR with the mistral's first set GPT-2 improvements is waiting on approval right now - once that's in we can move a bit faster as well.<|||||>That sounds like a perfect plan to me, Sidd.<|||||>@stas00 I think the following method is not good for megatron-friendly method.
```
step 1. implement megatron-friendly TP in one model
step 2a. start replicating megatron-friendly TP to other models
step 2b. start working on megatron-friendly PP in one model
step 3a. start replicating megatron-friendly PP to other models.
```
Ultimately, implementing PP requires rewriting all modeling code. (including `GPT2Attention`, `GPT2MLP`, `GPT2Model`, ...) I wasn't familiar with PP until not long ago. but recently, I became very familiar with PP and found out that we had to rewrite all the code. (`generation_utils.py` used for inference should also be changed.) Therefore, it is recommended that megatron-friendly TP and PP be implemented together. (I think it's inefficient to implement megatron-friendly TP alone.)
<|||||>The transformers-friendly method (=parallelformers) has the advantage of being able to extend the model quickly because it does not need to rewrite the modeling code (it uses the existing transformers code), but it is not compatible with PP. So we have to remove all the transformers-friendly TP when implementing PP. Which strategy we take is a matter of choice. We can quickly expand them in a transformers friendly way, and then change them one by one to be megatron friendly like
```
step 1. implement transformers-friendly TP in one model
step 2a. start replicating transformers-friendly TP to other models
step 2b. start working on megatron-friendly TP + PP in one model
step 3a. start replicating megatron-friendly TP + PP to other models.
```
Or there is a way to not implement transformers-friendly methods because they will be removed anyway. But, since there are thousands of lines of code to write for megatron-friendly and tens of lines of code for transformers-friendly, the megatron-friendly approach will scale very slowly.
```
step 1. start working on megatron-friendly TP + PP in one model
step 2. start replicating megatron-friendly TP + PP to other models.
```
One thing to note is that the transformers-friendly TP implementation is completely eliminated when implementing the megatron-friendly TP. A megatron-friendly TP is implemented differently from a transformers-friendly TP. <|||||>Adding a GPTNeo3D to experiment seems like a good idea to me. At the end of the day, that modeling file can leave in the same folder as `modeling_gptneo.py`.
Note that while you experiment, you can leverage #13467 to share models on the Hub that have no implementation in Transformers and still work with the auto-model API.<|||||>> Adding a GPTNeo3D to experiment seems like a good idea to me. At the end of the day, that modeling file can leave in the same folder as `modeling_gptneo.py`.
Great!
> Note that while you experiment, you can leverage #13467 to share models on the Hub that have no implementation in Transformers and still work with the auto-model API.
The 3D GPTNeo model's weights are the same as a normal GPTNeo model's - i.e. it can be used w/ or w/o PP/TP, so I'm not sure why we need a special API?
And I guess we won't be able to use `AutoModel`, because the `config.model_type` will say 'gpt_neo', but we will want to load it with `GPTNeo3D*` classes.
<|||||>@hyunwoongko, you're bringing up excellent points.
I suppose the main question is how much of a benefit we can give to users by having just TP. My thinking is that if it's easy to add TP to all models and since you have already done this, let's do it.
I'm concerned that adding PP will be a very slow process because as you said it requires massive rewrites to the model's code, and meanwhile those models that are waiting their turn won't be very scalable (except with Deepspeed ZeRO).
Besides we can delegate the TP adding to the rest of the models to others (other developers and even community) since it's mostly just replaying the code you have already written. But it still requires work, at least in adding tests and documentation, and then PRs.
The only concern with adding the transformers-friendly way is that the external API remains the same when we add PP.
How does that sound?<|||||>@stas00 But anyway, I don't prefer PP. As you know, PP is memory inefficient because it is not compatible with ZeRO 2, 3. In fact, we also decided not to use PP when developing language models. So adding just TP would be helpful for many people. So let's go with the following strategy. but, as you said, the API for both methods should remain the same.
```
step 1. implement transformers-friendly TP in one model
step 2a. start replicating transformers-friendly TP to other models
step 2b. start working on megatron-friendly TP + PP in one model
step 3a. start replicating megatron-friendly TP + PP to other models.
```
But transformers-friendly TPs have no reason to rewrite their modeling code. What should we do?
<|||||>That's great, @hyunwoongko!
And once we complete `GPTNeo3D` with TP we can decide whether to fold it back to the normal `GPTNeo` model or keep it separate. I'm saying that if at the end we will do PP only for a few select models (which is too a real possibiilty), then there is absolutely no need to fork 60 models and create a lot more maintenance work for `transformers`, if they will have just TP+DP.<|||||>@stas00
In my opinion, transformers-friendly TP have no reason to write their own modeling code like GPTNeo3D.
1. So the transformers-friendly TP will just use the existing model
2. And let's make a new modeling class such as GPT2For3D when we develop the megatron-friendly TP + PP (GPT2, Bert, T5, etc, It will probably be some models, not all.) <|||||>I'm thinking of an API like this.
```python
from transformers import GPTNeoModel
model = GPTNeoModel.from_pretrained("elutherai/gpt-neo-1.3B", tensor_model_parallel_size=4)
or
model = GPTNeoModel.from_pretrained("elutherai/gpt-neo-1.3B", tp=4)
```
I implemented megatron friendly model internally like
```python
@classmethod
def from_yaml(
cls,
cfg_path: str,
tensor_model_parallel_size: int = 1,
pipeline_model_parallel_size: int = 1,
tp: int = None,
pp: int = None,
):
"""
Create model from yaml config file
Args:
cfg_path: path of configurations
tensor_model_parallel_size: tensor model parallel world size
pipeline_model_parallel_size: pipeline model parallel world size
tp (int): equivalent with `tensor_model_parallel_size`
pp (int): equivalent with `pipeline_model_parallel_size`
"""
if tp is not None:
assert tensor_model_parallel_size == 1, (
"you can't use param `tensor_model_parallel_size` and `tp` at the same time. "
"they are equivalent. so please use one of them."
)
tensor_model_parallel_size = tp
if pp is not None:
assert pipeline_model_parallel_size == 1, (
"you can't use param `pipeline_model_parallel_size` and `pp` at the same time. "
"they are equivalent. so please use one of them."
)
pipeline_model_parallel_size = pp
```<|||||>I totally agree, that this is a much better way to proceed.
@sgugger, is it ok if we change the initial proposal and add TP to the normal model classes? As we continued discussing this and based on my experience with trying to add PP to transformers it'll be a huge amount of work to do it for all models, and so it's very likely many models will never get it. And since TP requires no changes to the models then there is no reason to make it difficult on users and maintainers to fork the model for that feature to work.
And we believe just having TP+DP will already be a great boon to the scalability of the models (if Deepspeed ZeRO doesn't already address this for whatever reason).
For PP new classes will be needed 100%.
Thank you.<|||||>As long as the changes are minimal, no objection from my side. I agree it makes much more sense to get that out if it's faster and deliver the PP later on.<|||||>the problem is the 'parallelize()' method, the API for layerwise naive parallelism in GPT2 and T5. Do you agree to remove this method? The megatron-friendly TP + PP cannot handle it that way. This is because in the case of PP, parallelization occurs at the time of model creation. That's why I let `from_pretrained` takes the tp and pp sizes as input.<|||||>> I'm thinking of an API like this.
>
> ```python
> from transformers import GPTNeoModel
>
> model = GPTNeoModel.from_pretrained("elutherai/gpt-neo-1.3B", tensor_model_parallel_size=4)
>
> or
>
> model = GPTNeoModel.from_pretrained("elutherai/gpt-neo-1.3B", tp=4)
> ```
I think `transformers` tends to go with more spelled out args, but not too too long, so perhaps `tensor_parallel_size=4`
> the problem is the 'parallelize()' method, the naive parallelism (layer-wise) implementation. Do you agree to remove this method? The megatron-friendly TP + PP cannot handle it that way. This is because in the case of PP, parallelization occurs at the time of model creation. That's why I let from_pretrained take the tp and pp sizes as input.
The naive PP is experimental:
https://github.com/huggingface/transformers/blob/50c746eeb71f7b8f95a264b09249c9555cdd2e17/src/transformers/models/gpt2/modeling_gpt2.py#L527-L529
but we shouldn't remove it until we replace it with real PP, because users actively use the naive PP at the moment.
That's why we proposed to work on NeoGPT first so that it's easier to take time and not need to have that older code interfere.
<|||||>@stas00
> I think `transformers` tends to go with more spelled out args, but not too too long, so perhaps `tensor_parallel_size=4`
So I made it support both variables (long name and short name). not good?
> but we shouldn't remove it until we replace it with real PP, because users actively use the naive PP at the moment. That's why we proposed to work on NeoGPT first so that it's easier to take time and not need to have that older code interfere.
I totally agree with you. Let's start from GPTNeo.
---
The second thing to discuss is the embedding layer. When I implemented parallelformers, I didn't actually parallelize the embedding layer. In this case, the embedding layer is copied to all GPUs. Therefore, it is memory inefficient. But in fact we can apply `VocabParallelEmbedding` and `VocabParallelCrossEntropy`. (However, we should not use the original CrossEntropy in this case) we also need to decide whether or not to add `VocabParallelEmbedding` to the transforemrs-friendly TP.
I didn't tell you guys, but I actually experimented little by little. I already figured out that I can do `VocabParallelEmbedding` internally with transformers-friendly TPs.<|||||>> @stas00
>
> > I think `transformers` tends to go with more spelled out args, but not too too long, so perhaps `tensor_parallel_size=4`
>
> So I made it support both variables (long name and short name). not good?
At the moment I don't recall `transformers` using shortcut aliases for arg names, so probably just having `tensor_parallel_size` is fine. (no need to repeat "`model_`" as the shorter name I proposed is not ambiguous)
> The second thing to discuss is the embedding layer. When I implemented parallelformers, I didn't actually parallelize the embedding layer. In this case, the embedding layer is copied to all GPUs. Therefore, it is memory inefficient. But in fact we can apply `VocabParallelEmbedding` and `VocabParallelCrossEntropy`. (However, we should not use the original CrossEntropy in this case) we also need to decide whether or not to add `VocabParallelEmbedding` to the transformers-friendly TP.
Was CrossEntropy the reason for not doing it in the first place in parallelformers? I guess the integration will allow to overcome this then if I understood your comment correctly.
But otherwise by all means let's make TP as efficient as possible.
<|||||>1. I like the name `tensor_parallel_size` more, but I named it `tensor_model_parallel_size` because I wanted to follow the Megatron-LM nomenclature. In fact, if we input the mpu to DeepSpeed, methods such as `mpu.XXX_model_parallel_rank()` are called inside it. Therefore, it is better to unify the names.
2. Since `parallelformers` is inference only toolkit, there was no reason to worry about CrossEntropy. The reason I didn't do it at the time was because it was a bit complicated. (But it's not difficult.)
How about implementing it with options first?
`from_pretrained(tensor_model_parallel_size=4, embedding_parallelism=True)`<|||||>> 1. I like the name `tensor_parallel_size` more, but I named it `tensor_model_parallel_size` because I wanted to follow the Megatron-LM nomenclature. In fact, if we input the mpu to DeepSpeed, methods such as `mpu.XXX_model_parallel_rank()` are called inside it.
Ah, ok, we can use `tensor_model_parallel_size` then to make things easier to cross-code. May be then add a note at why this particular name has been chosen.
> 2. Since `parallelformers` are inference only in the first place, there was no reason to worry about CrossEntropy. The reason I didn't do it at the time was because it was a bit complicated. (But it's not difficult.)
Ah, right, I forgot that `parallelformers` was intended for inference only in the first place. Yes, so what you proposed is a good idea.<|||||>> How about implementing it with options first?
>
> `from_pretrained(tensor_model_parallel_size=4, embedding_parallelism=True)`
Is there a technical reason for not always doing the latter?<|||||>Because of `VocabParallelCrossEntropy`. the user should be able to use a loss function other than CrossEntropy by using the Transformers model. (RMS, Center Loss, Large-margin softmax, ...) With VocabParallelEmbedding, the Loss function should handle this appropriately. You can check this https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/cross_entropy.py
So I thought the default value of embedding_parallelism as false and turning it on when the user wants to.
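For context, this is roughly what a Megatron-style `VocabParallelEmbedding` does (heavily simplified sketch; gradient handling via an autograd function and the matching vocab-parallel cross entropy are omitted):

```python
import torch
import torch.distributed as dist
from torch import nn

class SimplifiedVocabParallelEmbedding(nn.Module):
    """Each rank stores a contiguous slice of the vocabulary rows; tokens outside
    the local slice are masked and the partial results are summed across ranks."""

    def __init__(self, vocab_size, hidden_size, rank, world_size, group=None):
        super().__init__()
        per_rank = vocab_size // world_size
        self.start, self.end = rank * per_rank, (rank + 1) * per_rank
        self.weight = nn.Parameter(torch.randn(per_rank, hidden_size) * 0.02)
        self.group = group

    def forward(self, input_ids):
        mask = (input_ids < self.start) | (input_ids >= self.end)
        local_ids = (input_ids - self.start).masked_fill(mask, 0)
        out = nn.functional.embedding(local_ids, self.weight)
        out = out.masked_fill(mask.unsqueeze(-1), 0.0)
        # the real implementation wraps this in an autograd-aware reduce
        dist.all_reduce(out, group=self.group)
        return out
```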
<|||||>Thank you for the explanation, Hyunwooongko.
Then yes we need that arg. Should the default be `False` then, so the priority is for the user code to work out of the box and we document `embedding_parallelism=True` as an optimization?
Further, `embedding` is ambiguous since we have different types, should we say explicitly `word_embed_parallelism`?<|||||>Oh I was wrong. Only tying the output embeddings have problems with the loss function. I checked and it doesn't matter since neither gpt2 nor gpt neo are tying output embeddings.
In most cases, we don't need to worry about the loss function. Therefore, I will implement embedding parallelism works everytime so this option is unnecessary. and users do not need to worry about it. If I find a model that tying input and output embeddings without an lm head later, I will think about it then. <|||||>But maybe Meg-DS and GPT NeoX use embedsing tying. So this option will be needed in the future.<|||||>If I'm not mistaken many models have input and output embeddings tied.<|||||>Hi all, I helped implement pipeline parallelism in Megatron (and was also one of the lead authors on the PipeDream project). Happy to answer any questions.
I had a question too: what is the current plan for the new PP-friendly model classes? What is going into these, and how will they be different from the vanilla model classes?
Thanks!<|||||>> Hi all, I helped implement pipeline parallelism in Megatron (and was also one of the lead authors on the PipeDream project). Happy to answer any questions.
Thank you for joining in and offering to support this endeavour, Deepak!
Have you looked at the recent version of the PP in the core pytorch? I tried to suggest to make the API much more flexible over the last spring - at least wrt inputs and outputs, which is very limited in most current PP implementations. so the new API is much more encouraging. You can even pass non-tensor inputs/outputs.
They have some other interesting tech in there. For example stashes to which you can push / pop at different stages, which could for example help pass around complex structures.
> I had a question too: what is the current plan for the new PP-friendly model classes? What is going into these, and how will they be different from the vanilla model classes?
The main obstacles to making HF models PP-friendly are:
1. variety of complex inputs: like a tuple of a tuple of tensors, inputs that aren't tensors, inputs that are tensors but aren't of first dimension of batch size, etc. Most of the variables are very optional and are there to support research.
2. models weren't written with `nn.Sequential` in mind and are very difficult to make into such. for example some models have conditionals on running the encoder or decoder stage or not, which is tricky for `nn.Sequential`
So based on my earlier attempts to implement PP in `transformers` we have to fork the existing models, strip down all the unnecessary features, convert to `nn.Sequential` while adjusting the inputs/outputs to work with it.
As I suggested above pytorch core's new PP API should make this work easier as it's more flexible. But of course, there are other options, more on that later.
The other approach is to start from scratch and build the model with PP in mind from the ground up using Megatron and Deepspeed as a reference, and then to try to adjust the outcome to re-use as much as possible from the current `transformers` model arsenal. The goal is to avoid maintaining 2 separate code bases. I think building from scratch would be preferable to @hyunwoongko - but then we have only a few models w/ PP to borrow from (gpt2, bert - not even t5). Remember in `transformers` we have some 50+ models. Some are slightly different, others are quite different from each other.
So these are the 2 ways I have in my mind.
I'm very open to hearing any other propositions. The key need is the ability to replicate the solution to several dozen of models.
And of course, the other essential question is which PP API to use:
1. write our own / borrow from Megatron
2. Deepspeed
3. Pytorch core (will probably require pt-1.9 or even pt-1.10 for some of the recent features, but it should be no problem)
4. I think FairScale has an API as well, but they have been upstreaming it into the pytorch core, so it's probably the best to rely on the latter.
Beside the ease of use, we also want to make sure that the API allows for the most efficient incarnation of the PP tech - since there are quite a few of them. The goal is to minimize the idling bubble.
BTW, for those who perhaps are new to the topic I wrote this doc: https://huggingface.co/transformers/parallelism.html So that you can quickly understand what we are talking about.<|||||>@sgugger @stas00
We need to name the new class. Do you prefer `GPT2Parallel` or `ParallelGPT2` or `GPT23D`? Or any other good names? I think `GPT23D` is very weird. It looks like 23-dimensional parallelism and names like `GPT2Parallel` are also weird. `GPT2ParallelForSequenceClassification` or `GPT2ParallelWithLMHead` looks like Parallel is for SequenceClassification or Parallel is with LMHead. But putting `Parallel` in the prefix makes everything good. (e.g. `ParallelGPT2ForSequenceClassification` or `ParallelGPT2WithLMHead`) If later extended to TF or Flax, naming such as `ParallelTFGPT2` is also possible.<|||||>The initially proposed 3D addition is awkward when it becomes `GPT23DForSequenceClassification`
`ParallelGPT2ForSequenceClassification` rings nicely to me.
Another alternative is to use a postfix: `GPT2ForSequenceClassificationParallel`<|||||>> The initially proposed 3D addition is awkward when it becomes GPT23DForSequenceClassification
Well... It looks like GPT twenty three lol
Well, the name doesn't really matter. But we have to decide. would you like to vote? What's the best way?<|||||>Let's wait for @sgugger to follow up.
We can vote for the appendix variations, but the main structure (should it be prefix/postfix/infix) is up to Sylvain as he is overseeing the big structure. <|||||>I like the `Parallel` prefix for those new models.<|||||>> Have you looked at the recent version of the PP in the core pytorch? I tried to suggest to make the API much more flexible over the last spring - at least wrt inputs and outputs, which is very limited in most current PP implementations. so the new API is much more encouraging. You can even pass non-tensor inputs/outputs.
I have not, but I will try to soon. I agree that flexibility in terms of the number of input and output tensors (as well as types) is good.
> The main obstacles to making HF models PP-friendly are:
This makes sense to me. I agree that having a sanitized model with far fewer optional arguments is nice and easier to support.
I think one way to go is to have the `Parallel*` classes just inherit (or borrow implementations) from the corresponding existing classes (but with the `forward()` method not supporting all the optional arguments). This will ensure that the model implementation really only lives in a single place, but the necessary if guards and other code is elsewhere in the `Parallel*` class implementation. The nice thing about these transformer models is that they are pretty repetitive, so the amount of PP-specific code in the guts of the model implementation is not a lot (e.g., don't run through the embedding layer if the current stage is not the first one, etc.).
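As a tiny illustration of the kind of guard being described (hypothetical code, not an actual transformers or Megatron class):

```python
from torch import nn

class HypotheticalPipelineStage(nn.Module):
    """One pipeline stage: only the first stage owns the embedding and only the
    last stage owns the LM head; every stage runs its own slice of the layers."""

    def __init__(self, layers, is_first_stage, is_last_stage, embedding=None, lm_head=None):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.is_first_stage, self.is_last_stage = is_first_stage, is_last_stage
        self.embedding, self.lm_head = embedding, lm_head

    def forward(self, x):
        if self.is_first_stage:      # x is input_ids here
            x = self.embedding(x)
        for layer in self.layers:    # x is hidden_states everywhere else
            x = layer(x)
        if self.is_last_stage:
            x = self.lm_head(x)
        return x
```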
For models like T5 with an encoder and decoder, it is important to think through how tensors should be passed through the different stages (e.g., the `encoder_hidden_state` is an input to every decoder stage).
Down the road, you might want to also support interleaved schedules (that trade off a smaller pipeline bubble size for more communication). This will require different execution schedules, so it is also perhaps worthwhile thinking through how to specify these in an easy way.<|||||>HF Transformers doesn't use inheritance in models to help the readers of the model to understand it better, so we will either copy-and-strip or we will build ground-up and then copy what we can to match the original class.
> For models like T5 with an encoder and decoder, it is important to think through how tensors should be passed through the different stages (e.g., the encoder_hidden_state is an input to every decoder stage).
How do you propose to deal with conditional runs of encoder (T5)? I see Megatron-LM added T5 but last I checked it didn't support PP.
> Down the road, you might want to also support interleaved schedules (that trade off a smaller pipeline bubble size for more communication). This will require different execution schedules, so it is also perhaps worthwhile thinking through how to specify these in an easy way.
I thought that was exactly the question of choosing the right PP framework/API, since some of those support interleaved schedule and others don't. It's best to delegate such things to the external API I'd think.
I realize that Megatron-LM's approach is different since it doesn't use any API but builds its own.<|||||>I made first draft PR for this: https://github.com/huggingface/transformers/pull/13726<|||||>> HF Transformers doesn't use inheritance in models to help the readers of the model to understand it better.
Fair enough. I guess there will be some code duplication then.
> How do you propose to deal with conditional runs of encoder (T5)? I see Megatron-LM added T5 but last I checked it didn't support PP.
What do you mean by "conditional run of the encoder"? I have looked at pipeline parallelism with T5, but we always ran inputs through the encoder (and then as I mentioned above, passed the `encoder_hidden_state` through the pipeline stages with decoder layers).
> It's best to delegate such things to the external API I'd think.
I agree that this would be ideal. But unfortunately, the pipeline-parallelism schedule used is pretty problem- and hardware-dependent, so it might make sense to expose a couple of different options to users (especially since `transformers` supports so many different models with different computation characteristics). Even better would be a way to allow a user to specify their own new schedule if they wanted, but this can almost definitely be tabled to later.
Additionally, I don't think any pipeline parallelism API out there is robust and has enough features to be truly useful. For example, Torch's PP support seems pretty nice, but only currently supports an all-forward, all-backward schedule that has high memory footprint; 1F1B is strictly better than this (Section 2.2.1 in https://arxiv.org/pdf/2104.04473.pdf). The all-forward, all-backward schedule won't work super well for really large models.<|||||>> > How do you propose to deal with conditional runs of encoder (T5)? I see Megatron-LM added T5 but last I checked it didn't support PP.
>
> What do you mean by "conditional run of the encoder"? I have looked at pipeline parallelism with T5, but we always ran inputs through the encoder (and then as I mentioned above, passed the `encoder_hidden_state` through the pipeline stages with decoder layers).
It runs the encoder only on the first pass, and not afterwards. You can see the conditional here:
https://github.com/huggingface/transformers/blob/469b80d4e7f9d0ca9411d77845600839e5edf113/src/transformers/models/t5/modeling_t5.py#L1367-L1376
So my question is how to build `nn.Sequential` when half of it is conditional.
> > It's best to delegate such things to the external API I'd think.
>
> I agree that this would be ideal. But unfortunately, the pipeline-parallelism schedule used is pretty problem- and hardware-dependent, so it might make sense to expose a couple of different options to users (especially since `transformers` supports so many different models with different computation characteristics). Even better would be a way to allow a user to specify their own new schedule if they wanted, but this can almost definitely be tabled to later.
Do you mean that once the models are converted to a straightforward `nn.Sequential` then it could be fed to a variety of PP APIs w/o altering the model itself?
How would that work? e.g. Deepspeed's PP uses all kinds of special APIs for Tied layers (e.g. `TiedSpec` if I remember the name correctly) and other features that they model has to call explicitly. So it's far from being a generic plug-n-play.
Perhaps pytorch's PP API is slightly more so.
> Additionally, I don't think any pipeline parallelism API out there is robust and has enough features to be truly useful. For example, Torch's PP support seems pretty nice, but only currently supports an all-forward, all-backward schedule that has high memory footprint; 1F1B is strictly better than this (Section 2.2.1 in https://arxiv.org/pdf/2104.04473.pdf). The all-forward, all-backward schedule won't work super well for really large models.
Doesn't Deepspeed PP do 1F1B as well?
You're making an excellent point about torch's PP not supporting progressive PP protocols. Definitely need to inquire about them supporting interleaved PP. I wonder if faiscale has been working on that.
@hyunwoongko, what's your take - use an API, and which one you resonate the most with, or develop our own, or borrow an internal implementation from Megatron-LM?
Perhaps we should prepare a table of pros and cons for the different approaches. But somehow I feel you already have something you feel is the best in mind.
<|||||>> You're making an excellent point about torch's PP not supporting progressive PP protocols. Definitely need to inquire about them supporting interleaved PP. I wonder if faiscale has been working on that.
@pritamdamania, if I may ask - do you have plans to support the interleaved PP protocol in pytorch?
We are having a discussion on which PP framework we should use in `transformers`. As you know I favour pytorch core because you made it much more user friendly than most other PP frameworks I have seen, but as Deepak says above it may have trouble with huge models because it uses all-forward, all-backward schedule. And we happen to work a lot with huge models as of recent (currently using Megatron-Deepspeed for that). Or perhaps my notion is outdated and the interleaved PP is in works already in pytorch?
Thank you!
<|||||>> @hyunwoongko, what's your take - use an API, and which one you resonate the most with, or develop our own, or borrow an internal implementation from Megatron-LM?
my plan is DeepSpeed pipeline module. because we should consider ZeRO. Note that my implementation is a variant of Megatron-DeepSpeed.
plus) DeepSpeed PP is based on 1F1B https://github.com/microsoft/DeepSpeed/issues/1110<|||||>So Deepspeed PP with ZeRO-1, correct?
From what I understand while ZeRO-2/3 could technically work, they won't give any performance improvements over ZeRO-1.
But the key feature is that we could easily turn off PP and enable Z2/3 + offload for inference, which is why we use Megatron-Deepspeed for the BigScience and not just Megatron-LM. That is we use DS PP + Z1.
<|||||>Currently, DeepSpeed PP and ZeRO 2 and 3 are incompatible. If users don't want to use PP, TP + ZeRO DP is enough. However, the reason we want to provide `ParallelGPT2` with DeepSpeed PP is because there are many other things. (PP, Kernel fusion, Sparse attention, Activation checkpoint offloading ...). I think DeepSpeed is the best API to provide all these features and I think it is recommended to unify it with one toolkit as much as possible.<|||||>> It runs the encoder only on the first pass, and not afterwards. You can see the conditional here.
I believe this is only on the first pass _during inference_. You would run it every time for training.
> Do you mean that once the models are converted to a straightforward nn.Sequential then it could be fed to a variety of PP APIs w/o altering the model itself? How would that work?
The only real things we need to know are: a) What computation should run on each "virtual stage" (and how these stages are mapped to actual GPUs)? b) How should tensors be routed between stages? Using this, you should in theory be able to implement any pipeline schedule that takes care of forward and backward passes.
Things like weight tying can be thought of as postprocessing steps at the end of these forward and backward steps (and before the optimizer step runs). For example, in Megatron, we perform an all-reduce of the gradients of all copies of the embedding layers after completing the forward and backward passes in a given batch (and the logic here is the same regardless of the schedule used for forward and backward passes).<|||||>I think names such as GPT2Model3D is better. ParallelGPT2 is also nice, but it seems like an ambiguous name in that the existing GPT2 is capable of tensor model parallelism.<|||||>Additionally, on slack we have now started discussing a totally new direction for PP and that's where we don't touch the model and get it parallelized automatically.
For example see SageMaker's PP overview https://aws.amazon.com/blogs/aws/amazon-sagemaker-simplifies-training-deep-learning-models-with-billions-of-parameters/ Note that it now doesn't even mention `nn.Sequential ` (it used to back in the spring) - it now totally automates the process. Some months back it preferred nn.Sequential but didn't require it - now it doesn't even mention it. Unfortunately, there is no disclosure on how they do it. I think we should be able to do something similar.
The closest approach to doing it that is publicly disclosed is FlexFlow https://huggingface.co/transformers/parallelism.html#flexflow
https://github.com/flexflow/FlexFlow
<|||||>> @pritamdamania, if I may ask - do you have plans to support the interleaved PP protocol in pytorch?
@stas00 I'm assuming you are referring to Interleaved Schedule mentioned here: https://developer.nvidia.com/blog/scaling-language-model-training-to-a-trillion-parameters-using-megatron/. As you maybe aware there is a lot of research out there in terms of improving pipeline parallel performance on various axes (memory, speed) and each of them have their tradeoffs. It is not feasible to include all those algorithms in Pytorch. Although there are a few options here:
1) If there is an algorithm that is clearly superior than the rest and if there is enough demand for it from the community, we can implement that in Pytorch.
2) We could evaluate making pipeline parallelism in PyTorch extensible so that the community can quickly try out different algorithms on top of a core pipelining framework without having to reimplement the whole algorithm from scratch themselves.
> Or perhaps my notion is outdated and the interleaved PP is in works already in pytorch?
Currently we are not working on interleaved PP, but as I mentioned above we have a couple of options in terms of how we can enable this.
@deepakn94 Would love to get your thoughts on this regarding the two options I mentioned above since a lot of your research work is in this area :) My initial feeling is that there probably isn't one algorithm that would be the best for all use cases and even if that is true today, new research a few months later might make that algorithm obsolete. Do you feel it is valuable to have an extensible pipelining framework in PyTorch where researchers like yourself can quickly try out different algorithms/schedules? :)<|||||>Thank you for your follow up, @pritamdamania87!
Yes, Megatron, Deepspeed and Sagemaker all support the interleaved schedule.
> * If there is an algorithm that is clearly superior than the rest and if there is enough demand for it from the community, we can implement that in Pytorch.
I'd defer to @deepakn94 as he has much more experience with the various schedules.
Perhaps @ShadenSmith has some insights to share as well, as he has built the PP framework in Deepspeed.
(Perhaps we need a PP-creators/users thread where we can share what works the best and how to make the different implementations interchangeable - i.e. creating a standard API).
> * We could evaluate making pipeline parallelism in PyTorch extensible so that the community can quickly try out different algorithms on top of a core pipelining framework without having to reimplement the whole algorithm from scratch themselves.
That is definitely the best approach it seems.
I think for the plethora of HF Transformers users and uses - being able to choose the best schedule would be a great boon to the whole community.<|||||>@pritamdamania87, have you by chance contemplated an approach where any model could be made to support PP w/ only minor mods or none using automatic splitting based on the graph? For context please see 3 comments up: https://github.com/huggingface/transformers/issues/13690#issuecomment-934756008<|||||>> @pritamdamania87, have you by chance contemplated an approach where any model could be made to support PP w/ only minor mods or none using automatic splitting based on the graph? For context please see 3 comments up: #13690 (comment)
Yes, we have thought about an automated partitioning solution where you don't need to rewrite your model as `nn.Sequential` and you just need pass in an `nn.Module` and the pipelining framework takes care of the rest. One potential idea was to extract the graph using `torch.fx`, inspect the graph and appropriately partition it across devices. cc @wanchaol Was wondering if the design we brainstormed is in a state that we can share it publicly in an RFC?<|||||>> One potential idea was to extract the graph using `torch.fx`, inspect the graph and appropriately partition it across devices.
Yes, this is precisely what we hope could be done in pytorch! Yes, please!
Looking forward to hearing about your research, Pritam.
And I also would like to invite @jiazhihao of [FlexFlow](https://github.com/flexflow/FlexFlow) to comment on this subject matter, since FF has been actively pursuing this direction, among various other scalability dimensions his project has been exploring. I wonder if the efforts could be joined as there is more than one group that could be already having something working. FF's paper is here: https://arxiv.org/abs/1807.05358
On the HF side we have made the required preparations to make our models `torch.fx` ready wrt to symbolic tracing. And @thomasw21 is now actively trying to figure out how to get FF working with HF Transformers. <|||||>@stas00 Thanks for all the feedback! I had a general question reading the discussion on this issue. It looks like we are talking about using TP, PP and also ZeRO. I was curious to know more about how do you plan to leverage all of these parallelism strategies. Are you planning to use one or a combination of them for a particular model and a different strategy for a different model? If so, how are we making this decision? Is it based on some sort of experimental results?
As an example, why not just use something like ZeRO-3 for all of your models? Are there certain use cases you have where ZeRO-3 doesn't fit well and something like TP and PP is preferred? <|||||>> @deepakn94 Would love to get your thoughts on this regarding the two options I mentioned above since a lot of your research work is in this area :) My initial feeling is that there probably isn't one algorithm that would be the best for all use cases and even if that is true today, new research a few months later might make that algorithm obsolete. Do you feel it is valuable to have an extensible pipelining framework in PyTorch where researchers like yourself can quickly try out different algorithms/schedules? :)
I think it's better to provide an easy way for users to specify their own schedule. Of course you could have some initial set of defaults (and perhaps the community can add to these if there's enough interest). I don't think PyTorch should be in the business of trying to keep up with state-of-the-art here.
I am happy to provide input and help brainstorm on what the best way of specifying such schedules is (and perhaps even help write code if useful)!<|||||>> As an example, why not just use something like ZeRO-3 for all of your models? Are there certain use cases you have where ZeRO-3 doesn't fit well and something like TP and PP is preferred?
I don't think you should think of ZeRO-3 as a panacea. In particular, I think it's pretty communication heavy, so might not make sense for certain models and hardware deployments (some of the communication time can be hidden with clever overlapping, but not all of it). In such cases, PP + TP might be a better option.<|||||>> @stas00 Thanks for all the feedback! I had a general question reading the discussion on this issue. It looks like we are talking about using TP, PP and also ZeRO. I was curious to know more about how do you plan to leverage all of these parallelism strategies. Are you planning to use one or a combination of them for a particular model and a different strategy for a different model? If so, how are we making this decision? Is it based on some sort of experimental results?
>
> As an example, why not just use something like ZeRO-3 for all of your models? Are there certain use cases you have where ZeRO-3 doesn't fit well and something like TP and PP is preferred?
Yes, of course. So there is a big group of us involved with the BigScience project https://github.com/bigscience-workshop/bigscience
Having found a working ZeRO-3 I was relieved we didn't have to make HF Transformers' models work with PP.
Until we did benchmarks before starting the first BigScience training and discovered that the HPC we were planning to use (JeanZay) had slow internode interconnects, which were further impacted by a shared network. And while on a fast interconnect ZeRO-3 performs on par with TP+PP, it was lagging quite behind in TFLOPS - if you're curious I have the summaries of all experiments here: https://github.com/bigscience-workshop/bigscience/blob/master/experiments/gpt2.md, so of course we ended up choosing Megatron-LM as our framework for its TP+PP which scored much better, and later added ZeRO-1 to it and Deespeed's PP, which has become https://github.com/microsoft/Megatron-DeepSpeed. This allowed us more flexibility, e.g. use of CPU offload for inference/finetuning and most importantly added an additional group of very experienced minds to our mindpool.
The trouble is now we had to figure out a new framework and all the new development had to be done for both Megatron and HF Transformers, we had to figure out how to convert checkpoints, and we are still sorting out some differences between gpt2 implementations as they aren't the same and a model trained with Megatron doesn't currently give the same performance with HF Transformers (I'm in the process of figuring out those differences). In other words lots of wasted time trying to develop a lot more than could have been.
That's why we want to make sure HF Transformers has its own TP ([almost there](https://github.com/huggingface/transformers/pull/13726)) and PP (this discussion), so that when we do research and apply it we don't need to do it twice and so that the models that we train perform identically under HF Transformers for inference and finetuning.
So long story short, it'd be much easier for us to move forward if we had similar capacity to Megatron-LM.
(and just to clarify in case it wasn't clear, Megatron-LM's team has been amazingly supportive of the BigScience project and they have an awesome framework, and the same goes for Deepspeed and their amazing support. We just need to have in-house tools to do the same.)
Please let me know if I have addressed your question to why we still want TP+PP.
<|||||>> Yes, we have thought about an automated partitioning solution where you don't need to rewrite your model as nn.Sequential and you just need pass in an nn.Module and the pipelining framework takes care of the rest.
Another approach that AWS Sagemaker's model parallelism employs (https://aws.amazon.com/blogs/aws/amazon-sagemaker-simplifies-training-deep-learning-models-with-billions-of-parameters/) is to trace the model on the level of submodules so the structure represents the call tree (i.e. like `model.modules()` but ordered at each level and annotated with input sizes). Then it swaps out modules for the remotely executed versions (for the pipeline).
You can poke around the source code by downloading the `smdistributed_modelparallel` wheel here: https://github.com/aws/deep-learning-containers/blob/5ecc625683887edd54c28431ac3cf6b95f30e84d/pytorch/training/docker/1.9/py3/cu111/Dockerfile.gpu#L51 . The relevant code is in `modelparallel/torch/{patches/tracing.py,module_partition.py}`.
The upside of this approach is that it doesn't require any model code changes. The downside is that the granularity is fixed on how model code was written, but in practice people write models in pretty granular form anyway.<|||||>@stas00 @deepakn94 Thanks for all of your feedback, this is very valuable for the PyTorch team!
<|||||>> > @pritamdamania87, have you by chance contemplated an approach where any model could be made to support PP w/ only minor mods or none using automatic splitting based on the graph? For context please see 3 comments up: #13690 (comment)
>
> Yes, we have thought about an automated partitioning solution where you don't need to rewrite your model as `nn.Sequential` and you just need pass in an `nn.Module` and the pipelining framework takes care of the rest. One potential idea was to extract the graph using `torch.fx`, inspect the graph and appropriately partition it across devices. cc @wanchaol Was wondering if the design we brainstormed is in a state that we can share it publicly in an RFC?
Let me refresh the RFC to address the latest comments in the coming days, and will share it to the public soon. <|||||>> > Yes, we have thought about an automated partitioning solution where you don't need to rewrite your model as nn.Sequential and you just need pass in an nn.Module and the pipelining framework takes care of the rest.
>
> Another approach that AWS Sagemaker's model parallelism employs (https://aws.amazon.com/blogs/aws/amazon-sagemaker-simplifies-training-deep-learning-models-with-billions-of-parameters/) is to trace the model on the level of submodules so the structure represents the call tree (i.e. like `model.modules()` but ordered at each level and annotated with input sizes). Then it swaps out modules for the remotely executed versions (for the pipeline).
@dzhulgakov I think in theory we can run `torch.fx` in order to swap out the to swap "patterns" for new blocks. I'm slightly unclear on how one can define the pipeline flow using only the graph and input/output sizes? After reading the blogpost, it seems one still needs to run dummy tensors to profile the job.
> Amazon SageMaker runs an initial profiling job on your behalf in order to analyze the compute and memory requirements of your model. This information is then fed to a partitioning algorithm which decides how to split the model and how to map model partitions to GPUs, while minimizing communication.<|||||>Hi @pritamdamania87 @wanchaol , just wanted to check in and see if the RFC for pipeline parallelism has been made public? Thanks!<|||||>@deepakn94 Thanks for checking in, @jamesr66a was looking into this recently. @jamesr66a Would be great to share the RFC for pipeline parallelism with Deepak once we have something that we can share publicly. His feedback would be very valuable to ensure what we build here is valuable to the research community.<|||||>Hi folks, I think there are actually two RFC's in flight right now:
* @wanchaol has an RFC for pipeline parallelism front-end to better automate partitioning of models using techniques like `torch.fx` graph extraction
* I have an RFC for adding support for cross-host support to the `torch.distributed` pipeline parallel APIs, fleshing out the execution engine (e.g. adding support for 1F1B or programmable schedules), and further trying to drive coherence across the different types of parallelism (e.g. PP, TP, DP)
I can work on polishing my RFC and publishing it by, say, end of next week. Does that sound reasonable?<|||||>Also note the release of a new paper: [Varuna: Scalable, Low-cost Training of Massive Deep Learning Models](https://arxiv.org/abs/2111.04007) and its code base https://github.com/microsoft/varuna.
The relevant keys to this discussion:
1. is that it supposedly turns any model into PP on the fly - no changes required by the user. edit: I may have misunderstood the paper, I see now it requires manual inserting of cutpoints in the code: https://github.com/microsoft/varuna#cutpoint-demarcation), so this is not on the fly. I thought it said it can do it automatically)
2. it sports a superior PP scheduling algorithm which allows it to thrive in slow network environment
<|||||>@jamesr66a That sounds great to me!
@wanchaol Would love to see your RFC as well!<|||||>@deepakn94, shared elsewhere a new relevant to our discussion paper:
**Amazon SageMaker Model Parallelism: A General and Flexible Framework for Large Model Training**
Can Karakus, Rahul Huilgol, Fei Wu, Anirudh Subramanian, Cade Daniel, Derya Cavdar, Teng Xu, Haohan Chen, Arash Rahnama, Luis Quintela
https://arxiv.org/abs/2111.05972<|||||>Hi folks, as promised I have just PRed the RFCs mentioned above: https://github.com/pytorch/rfcs/pull/32
* RFC-0020 Pipeline Parallelism Strategic Plan
* RFC-0021 Pipeline Parallelism Technical Approach Proposal
* RFC-0022 Model Partitioning in Pipeline Parallelism Proposal (@wanchaol's RFC)
Please note that these RFCs only represent our current thinking given the information we have, and we encourage sharing feedback on the contents. We are open to revising our plans, particularly with respect to technical approach. We are considering between:
* Quickly delivering an API with a more constrained interface (i.e. `nn.Sequential`) to start design and implementation iteration quickly
* Deferring delivery of an API to explore techniques that will generalize the front-end of the pipeline parallelism API for better UX and applicability.
Your feedback on this^ question and on the RFCs in general will be greatly appreciated!<|||||>@jasmesr66a Hi jasmesr66a.
We made totally new pipeline parallel engine. We will share the news soon.
Thanks. <|||||>@hyunwoongko any update on that?<|||||>@jamesr66a We'll release code 12/23. It might be faster ;) <|||||>https://github.com/tunib-ai/oslo
We opened our codebase (framework) ! <|||||>@hyunwoongko 🔥. Let me know if you need contributions to help get other models supported for this.<|||||>@KMFODA any way that you can help us is welcome. would you like to contribute to extend the models?
<|||||>yes! I have a personal preference for adding t5-11b (or 3b) to the library.<|||||>I think it's good idea. Let me know if you need my help. I'm happy to help.<|||||>Thank you! Yes any help would be much appreciated. It would be good to have a guideline for adding new models that way anyone from the community can add more models later as well.
Would copying the code for one of the GPT models and tweaking it for T5 be the best way to start? If so are there any modules in the GPT models that are not transferrable to T5?<|||||>Hi there, we are developing an auto 3D parallelism distributed software for transformers models.
We plan to integrate this software into `transfomers.trainer` class and make it use as easily as the original trainer.
We use `torch.fx` and `transformers.utils.fx` for graph extraction, automatically partition the extracted graph and use pipeline runtime engine from Deepspeed for pipeline parallelism.
For tensor parallelism we use a config mapping to support `megatron.mpu.layers` in transfomers models automatically.
A prototype version is finished now. Theoretically, any fx traceable model could be run in 3D parallelism. More models are under testing and we are making this software more functional intensively. We will open source it very soon.
Any advice will be appreciate~<|||||>Hey @lucasleesw - this sounds awesome! A team of us here at HuggingFace (@stas00 @thomasw21) have been slowly starting to understand the best path forward to 3D Parallelism (right now, we're looking at the Oslo framework linked above).
Do you have code you can point us to for your implementation, and after we've had time to go through it, maybe we can set up a Slack channel/have a call to talk about the possibility of integration?<|||||>Hi @siddk,we are busily polishing our code and completing functions.
The code will be available within a month.<|||||>Hello @lucasleesw. We are also very interested in parallelization based on torch.fx.
I hope to conduct collaborations and seminars when the codebase is opened.<|||||>from https://github.com/microsoft/DeepSpeed/pull/1512#discussion_r791312559
@stas00
Context for others: I recently made a new flexible tensor parallelization engine with torch.fx. This is a discussion about it.
> I'm missing the full context. Do you suggest to have a policy record for each model like in the example you have shown here:
The following is the abstract class
```python
# oslo/parallelism/mpu.py
class LayerInfo:
    @staticmethod
    def base():
        """
        Returns:
            the base transformer block of model

        Examples:
            >>> return BertLayer
        """
        raise NotImplementedError

    @staticmethod
    def attention():
        """
        Returns:
            the last element of attention modules

        Examples:
            >>> return BertAttention
        """
        raise NotImplementedError

    @staticmethod
    def mlp():
        """
        Returns:
            the last element of mlp modules

        Examples:
            >>> return BertOutput
        """
        raise NotImplementedError

    @staticmethod
    def reducing_required():
        """
        Returns:
            arguments that are required reducing

        Examples:
            >>> return ["all_head_size", "num_attention_heads"]
        """
        raise NotImplementedError
```
and you can use like this. (I'll predefine almost all layer info classes)
BERT:
```python
from transformers.models.bert.modeling_bert import BertAttention, BertOutput, BertLayer
from oslo.parallelism.mpu import LayerInfo
class BertLayerInfo(LayerInfo):
    @staticmethod
    def base():
        return BertLayer

    @staticmethod
    def attention():
        return BertAttention

    @staticmethod
    def mlp():
        return BertOutput

    @staticmethod
    def reducing_required():
        return ["all_head_size", "num_attention_heads"]
```
GPT2:
```python
from transformers.models.gpt2.modeling_gpt2 import GPT2Block, GPT2Attention, GPT2MLP
from oslo.parallelism.mpu import LayerInfo
class GPT2LayerInfo(LayerInfo):
    @staticmethod
    def base():
        return GPT2Block

    @staticmethod
    def attention():
        return GPT2Attention

    @staticmethod
    def mlp():
        return GPT2MLP

    @staticmethod
    def reducing_required():
        return ["embed_dim", "split_size", "num_heads"]
```
The reason I created these classes is that I can't modify the code inside transformers, because oslo is an external library now. But it would be easier if we could define them like `config_class` in the XXXPreTrainedModel class, like:
```python
class BertPreTrainedModel(PreTrainedModel):
    config_class = BertConfig  # already exist.
    last_attention_class = BertAttention
    last_mlp_class = BertOutput
    base_class = BertLayer
    reducing_required = ["all_head_size", "num_attention_heads"]
```
Then, we don't need info class anymore because I can access them like `model.last_attention_class`.
To parallelize without parameter names, I at least needed to know what the attention class and mlp class were.
@lucasleesw could you let me know If you have some better ideas?
And you can use engine like this (distributed launcher is needed)
```python
from oslo import MPU, TensorParallelEngine
model = BertForXXX.from_pretrained(...)
mpu = MPU(tensor_parallel_size=4, pipeline_parallel_size=1)
engine = TensorParallelEngine(model, BertLayerInfo, mpu)
engine.parallelize()
# rest code is same. training, inference, generation, ...
```
and this engine also supports training.
To deparallelize the model, you can do this:
```python
engine = TensorDeparallelEngine(model, BertLayerInfo)
engine.deparallelize()
# all parameters are deparallelized and are moved to cpu.
# likewise, training, inference, generation can be performed using usual code
```
If you have multiple base transformer block classes, like Bart, you can do this.
```python
class BartEncoderLayerInfo(LayerInfo):
    @staticmethod
    def base():
        return BartEncoderLayer
    # ... omitted

class BartDecoderLayerInfo(LayerInfo):
    @staticmethod
    def base():
        return BartDecoderLayer
    # ... omitted

model = BartForXXX.from_pretrained(...)
mpu = MPU(tensor_parallel_size=4, pipeline_parallel_size=1)
encoder_engine = TensorParallelEngine(model, BartEncoderLayerInfo, mpu)
encoder_engine.parallelize()
decoder_engine = TensorParallelEngine(model, BartDecoderLayerInfo, mpu)
decoder_engine.parallelize()
```
I'll also make a pipeline parallelization engine during the week. 3D parallel training/inference/generation/deployment will be possible with only this level of info class.
I'm going to unify the two parallelization engines and make it look like this:
```python
from oslo import ParallelEngine, DeparallelEngine
# 2d parallelization
p_engine = ParallelEngine(model, tensor_parallel_size=4, pipeline_parallel_size=2, infos=the_list_of_info_classes)
p_engine.parallelize()
# 2d deparallelization
d_engine = DeparallelEngine(model, tensor_parallel_size=4, pipeline_parallel_size=2, infos=the_list_of_info_classes)
d_engine.deparallelize()
# if you want to parallelize data dimension, you can use DDP module in torch or ``deepspeed.initialize`` in deepspeed
# the rest code is same.
# training/generation/inference/deployment can be performed without training code change
```
Since the current version code has not been added in OSLO yet, I will write some example code after deployment so that you can easily apply it. To make the integration easier, I will write some simple monkey-examples for anyone to use and put it in the `tests` folder of OSLO. Does it make sense?
To add a little more, currently, [some elements like "all_head_size", "num_attention_heads" are being managed inside deepspeed engine now](https://github.com/microsoft/DeepSpeed/pull/1512/files#diff-cf74152f0933aa49e65fc12cb0017a40f5b6e2880bd7c01e0d2b26693b6c88e7R400). but I think it's not good structure. so I pushed them towards info class. because they are model specific parts. This is because unexpected bugs may occur if this is managed at the engine level. For example, in T5, `inner_dim` is a variable that must be reduced, but it may not be the case in other models If another model uses a variable named `inner_dim`.
@lucasleesw Likewise, if you know of a way to automatically detect these variables, please let me know ;)
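For readers following along, here is a rough sketch of what "reducing" such variables means in practice. This is an illustration only (an assumption about the mechanism, not OSLO's actual code):
```python
def reduce_attributes(module, reducing_required, tp_size):
    # after the weights of a block are split across `tp_size` ranks,
    # the bookkeeping attributes listed in `reducing_required` have to
    # shrink by the same factor
    for attr in reducing_required:  # e.g. ["all_head_size", "num_attention_heads"]
        if hasattr(module, attr):
            setattr(module, attr, getattr(module, attr) // tp_size)
```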
But as I said before, I want it to be mounted as a transformers internal solution instead of `import oslo`. what do you think? I also want to move engine class into the transformers, so ultimately I want the following api.
```python
model.from_pretrained(..., tensor_parallel_size=4, pipeline_parallel_size=2)
```
I think it is good to have these APIs internally from a long-term perspective, and if you consider tensorflow and jax, it is not good to depend too much on the outside. As soon as the torch version integration is finished, I will create a tensorflow version, deploy it to oslo, test it, and move it to transformers.<|||||>I think your proposal for the models to provide additional meta data is great, especially if multiple frameworks could use that information.
So in your example:
```
class BertPreTrainedModel(PreTrainedModel):
    config_class = BertConfig  # already exist.
    last_attention_class = BertAttention
    last_mlp_class = BertOutput
    base_class = BertLayer
    reducing_required = ["all_head_size", "num_attention_heads"]
```
we just need to think about good naming for each of these and then propose a PR to start adding those to the existing classes.
I'm not saying your naming is not good, just saying that others may have a different opinion.
@RezaYazdaniAminabadi, the above map can be used in Deepspeed-Inference as well, right? Are they any other bits that would help your side?
I wonder if this subject-matter is best split off into a separate issue here and discussed there:
[RFC] adding additional meta information for each model
with your example above and defining the target "consumers" - so in this case it's primarily frameworks that want to tap into TP, correct?
Or is some of it going to be used for PP as well?
The specifics matter for making a well thought out proposal to the maintainers of HF Transformers - to why each config entry is added and who is it going to serve.
And we can start with proposing this to insert into 3-5 popular models and then down the road expand to more if it works out well.
<|||||>> I think your proposal for the models to provide additional meta data is great, especially if multiple frameworks could use that information.
Cool! Jake and I will make a PR about this.
> [RFC] adding additional meta information for each model
Also good.
> with your example above and defining the target "consumers" - so in this case it's primarily frameworks that want to tap into TP, correct? Or is some of it going to be used for PP as well?
Only for TP. My new PP engine does not require any meta information and modeling code changes.
> And we can start with proposing this to insert into 3-5 popular models and then down the road expand to more if it works out well.
I understand what you are saying. I'll make a new RFC, and let's discuss on there.<|||||>> @RezaYazdaniAminabadi, the above map can be used in Deepspeed-Inference as well, right? Are they any other bits that would help your side?
I think not. DS-inference and my engine have different mechanisms. <|||||>> > @RezaYazdaniAminabadi, the above map can be used in Deepspeed-Inference as well, right? Are they any other bits that would help your side?
>
> I think not. DS-inference and my engine have different mechanisms.
Right, so then let's think of a few consumers and declare additional metadata for those (if the maintainers will agree of course). That's also another reason why I'm proposing to start small with just a few models, in case it isn't accepted.
So looking at https://github.com/huggingface/transformers/blob/d169b8dadf67ac179c286257349fba319132873a/src/transformers/deepspeed.py#L35-L50 it looks like Reza's version gets all the info automatically for Bert, but needs hints for other models.
And so you, Kevin, won't have any need for these fields in your engine?
https://github.com/huggingface/transformers/blob/d169b8dadf67ac179c286257349fba319132873a/src/transformers/deepspeed.py#L42-L49
I'm asking that since if we are going to add meta-data to the models, so we will then want to fold the Deepspeed-Inference metadata into each model as well.<|||||>the other approach is to of course maintain this table separately, as I linked to in the comment above, so for your example, it could just be:
```
# src/transformers/oslo.py
oslo_pp_map = dict(
    bert=dict(
        last_attention_class=BertAttention,
        last_mlp_class=BertOutput,
        base_class=BertLayer,
        reducing_required=["all_head_size", "num_attention_heads"],
    ),
    t5=dict(
        last_attention_class=T5Attention,
        last_mlp_class=T5Output,
        base_class=T5Layer,
        reducing_required=["foo", "bar"],
    ),
    # ... more models
)
```
which would be much easier to maintain, should you want to add/remove/rename some of these tomorrow.
the detriment is that they are not following the locality rule, so should the model change, this will break. This is of course very unlikely to happen with stable models, and would be more of the case for recently added ones.
**Perhaps I may recommend to follow this path first, as it'd make things much more flexible for your work**. If you get to the new metadata past the maintainers it'd be very difficult to change this down the road due to backward compatibility requirement.
If you decide to change oslo and make a matching PR to transformers and all that is modified is `src/transformers/oslo.py` that doesn't impact anything else it'll be much much easier for you and all involved. This is based on my experience of maintaining `src/transformers/deepspeed.py` which requires barely any review from maintainers as it's self-contained and doesn't impact the rest of the project.
<|||||>Hello @stas00.
> it looks like Reza's version gets all the info automatically for Bert, but needs hints for other models. And so you, Kevin, won't have any need for these fields in your engine?
Yes, I was automating the parameter name searching via torch.fx. but for this, I need to know the class name of attention and mlp. I thought defining class is easier than defining parameter names for users and integration point of view. However, for manageability of transformers, I will change the code to use the same tp map with deepspeed. It's not very difficult.
> the other approach is to of course maintain this table separately, as I linked to in the comment above, so for your example, it could just be:
I will try that method (configuring with dict object, not the class) and I'm going to use a map in much the same format as ds-inference.
> If you decide to change oslo and make a matching PR to transformers and all that is modified is src/transformers/oslo.py that doesn't impact anything else it'll be much much easier for you and all involved. This is based on my experience of maintaining src/transformers/deepspeed.py which requires barely any review from maintainers as it's self-contained and doesn't impact the rest of the project.
Yes I don't think it affects anything else. If we use oslo in `from_pretrained` as I suggested, we can use it by simply calling `engine.parallelize()` there. that's all. and it will be used briefly in the trainer class like deepspeed or sagemaker.
<|||||>@stas00 how about this?
- `src/transformers/utils/model_parallel_utils.py`
  - this file already exists; let's insert the tp_map into this file because it's no longer used only by deepspeed.
- `src/transformers/oslo.py`
  - independent oslo code.
- `src/transformers/deepspeed.py`
  - independent ds code.
Currently transformers has `integrations.py`. It is also desirable to manage them here rather than separate them. There are various integration code such as `ray`.<|||||>@siddk @jaketae Let's set the role like this. I will only upload the new engine part to OSLO and I'll open a PR for this to the transformers, so please add the part that makes this available in `trainer`, `from_pretrained` and `save_pretrained (it's for deparallelization)`. In other words, I am in charge of the parallel engine part of oslo, and you guys are in charge of the transformers integration part. I was going to complete the PP engine and write a PR, but it would be better to put it on the transformers side by step rather than that. How about this?<|||||>to @RezaYazdaniAminabadi
1. I've rewritten the code to use the same format map as yours. How about unifying our map in this format? The code on your side probably doesn't need to change much.
2. And let's keep this map on the transformers side and let me and my colleagues (Sidd and Jake, if they agree) quickly expand models. This is perhaps the best form of extending the ds-inference model we discussed in the beginning.
3. It would also be better for the scalability of ds-inference to manage the reducing_required variables here as well. Otherwise you will have to modify the inference-engine code whenever a supported model is added, which would be very cumbersome.
4. Since `linear_all_reduce` is not the official name, let's use the official name `row_parallel_linear` from the Megatron-LM paper.
Do these make sense?
---
to @stas00
`BertForSequenceClassification`, `BertForMaskedLM` and all the `BertForXXX..` classes are subclasses of `BertPreTrainedModel`, so it will be useful to detect them using `isinstance(model, cls)`; it would be better than managing all the keys as strings (a possible lookup helper is sketched after the mapping below).
And I decided to manage OSLO as an external library and let transformers call it by `import oslo`. That seems like a more desirable structure. If something goes wrong, I'll fix the OSLO part. If all OSLO's code is managed on the transformers side, there is no one to manage it.
```python
# src/transformers/utils/model_parallel_utils.py
from ..models.gpt2.modeling_gpt2 import GPT2Block, GPT2PreTrainedModel
from ..models.bert.modeling_bert import BertLayer, BertPreTrainedModel
from ..models.bart.modeling_bart import BartEncoderLayer, BartDecoderLayer, BartPretrainedModel

TENSOR_PARALLEL_MAPPING = {
    BertPreTrainedModel: {
        BertLayer: {
            "row_parallel_linear": ["output.dense"],
            "reducing_required": ["all_head_size", "num_attention_heads"],
        },
    },
    GPT2PreTrainedModel: {
        GPT2Block: {
            "row_parallel_linear": ["c_proj"],
            "reducing_required": ["embed_dim", "num_heads", "split_size"],
        },
    },
    BartPretrainedModel: {
        BartEncoderLayer: {
            "row_parallel_linear": ["out_proj", "fc2"],
            "reducing_required": ["embed_dim", "num_heads"],
        },
        BartDecoderLayer: {
            "row_parallel_linear": ["out_proj", "fc2"],
            "reducing_required": ["embed_dim", "num_heads"],
        },
    },
}
```
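A possible lookup helper for the `isinstance`-based detection mentioned above could look like the following. This is only a sketch of the idea, not part of the actual PR:
```python
def get_tensor_parallel_mapping(model):
    # walk the mapping and match the model against its pretrained base class
    for pretrained_class, layer_mapping in TENSOR_PARALLEL_MAPPING.items():
        if isinstance(model, pretrained_class):
            return layer_mapping
    raise ValueError(f"{model.__class__.__name__} has no tensor parallel mapping yet.")
```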
---
to @jaketae @siddk
In addition, the similar maps for almost all models are already defined in [parallelformers](https://github.com/tunib-ai/parallelformers/tree/main/parallelformers/policies). So we can be able to increase the number of models very quickly. It would be nice if Sidd, Jake and I were responsible for extending the model. How about you guys?
<|||||>@hyunwoongko I think the additional meta data is great! Our code for tensor parallelism relies on a similar feature dict to yours. Our dict looks like:
```
'bert': {
    'col_para_list': ['query', 'key', 'value', 'intermediate.dense'],
    'row_para_list': ['output.dense'],
    'mp_attr_list': ['num_attention_heads', 'all_head_size']
},
```
Obviously the feature dict is not good from a long-term perspective.
Additional metadata is a great idea, and I am wondering about your opinion on modules apart from attention and mlp; how about other modules such as linear and convolution modules? <|||||>Hello @lucasleesw. Thanks for your answer!
I did something like this:
1. Trace the model using torch.fx to make a graph.
2. Find the last forwarded linear or conv layer in the Attention module and row-parallelize it.
3. Find the last forwarded linear or conv layer in the MLP module and row-parallelize it.
4. Column-parallelize all linear and conv layers in all the module lists.
5. Parallelize the embedding layer and do special processing for the tied head module.
The reason this works is there is the premise that the output of the column-parallel linear layer is input to row-parallel linear layer. It worked well for almost all transformers models. Also, almost all attention modules and MLP modules of the transformers do not perform linear again after a row-parallelization required linear or conv layer. Of course there may be exceptions in the future. <|||||>Currently, there are no implementations that use the tracing method like me. So I'm going to return to the previous version for ease of management of the transformers.
In addition, my implementation is rather risky, so it's a good idea to safely create a dict map. Prior to this version of the implementation, Reza and I succeeded in omitting the column parallel dict map with the following method (a rough sketch follows below):
1. get the names of the row parallel linear layers from the user
2. row-parallelize the corresponding layers
3. finally, column-parallelize all the remaining layers of the module list.
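A minimal sketch of that idea, under simplifying assumptions (Megatron-style splitting of the linears inside one transformer block; this is illustrative only, not the real engine code):
```python
import torch.nn as nn

def slice_block(block: nn.Module, row_parallel_names, tp_rank: int, tp_size: int):
    """Illustrative only: shard the linear layers of one transformer block."""
    for name, module in block.named_modules():
        if not isinstance(module, nn.Linear):
            continue
        if any(name.endswith(row_name) for row_name in row_parallel_names):
            # row-parallel: shard the input dimension; outputs are all-reduced afterwards
            module.weight.data = module.weight.data.chunk(tp_size, dim=1)[tp_rank].contiguous()
        else:
            # column-parallel: shard the output dimension (and the bias with it)
            module.weight.data = module.weight.data.chunk(tp_size, dim=0)[tp_rank].contiguous()
            if module.bias is not None:
                module.bias.data = module.bias.data.chunk(tp_size, dim=0)[tp_rank].contiguous()
```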
<|||||>In fact, I think that the method of omitting column parallel linear is also somewhat risky. Therefore, @lucasleesw's method, which defines both column parallel linear and row parallel linear names, is the safest. @stas00 Be aware that the more omissions there are, the less reliable the engine is. I think @lucasleesw's dictionary is the safest and best.<|||||>Hi @hyunwoongko. Thank you for your reply. I got your idea about linear or conv1d layer in the Attention module or MLP module and I think it is great.
And indeed this would be kind of risky. For example, how about conv2d layers in `src/transformers/models/vit/modeling_vit.py`?
In our implementation, we built some APIs to help users build the dict map to handle this exception, but we found it is hard to use for now.
I think it would be helpful if the metadata class had some API to let users extend the tensor parallelism methods.
<|||||>@lucasleesw I totally agree with you. Defining both columns and rows is probably the safest and most extensible way.<|||||>@stas00 I initially used the method of defining both column and row parallel parameters, but since the process of defining them is quite difficult, I experimented with many ways to create a simple tensor parallelization map. But the simplification got the more possibility that can makes exceptions. So, like @lucasleesw's method, it would be best to use all three pieces of information: column, row, and mp_param.
We all parallelize in a similar way, and so does sagemaker too. Therefore, it would be convenient if we unify and manage this inside the transformers.<|||||>@lucasleesw Omitting column parallel linear and tracining method won't cause any problems in vit. Because embedding is not in the module list. https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit/modeling_vit.py#L146
They parallelize only the layers inside the base layer module (like BertLayer), not all existing layers. Even so, these simplifications can always make exceptions.<|||||>@lucasleesw I'm also wondering about your pp implementation. could you let me know? I used deepspeed pp in the beginning, but now we are implementing the same method with sagemaker pp.<|||||>@hyunwoongko You are right, thanks again for your inspiration.
Our implementation will be available very soon, we look forward for your advice. <|||||>@lucasleesw I will upload a PR for tensor parallel mapping today. It would be great if you could reply to make a more general PR. How did you deal with the fused attention module (in the gpt2, transfo_xl)? it means attention layer that has the size like `linear(3 * dim, dim)`. and If we create GPT2 with `EncoderDecoderModel`, then GPT2 has cross attention (q_attn) which is `linear(2 * dim, dim)`. These shouldn't be handled simply because they are all appended with q, k, and v (or k and v for cross attention). How did you deal with them? Were you able to automate this without some mapping?<|||||>So we have at least 3 possible "consumers" of additional model metadata at the moment: @hyunwoongko, @lucasleesw and @RezaYazdaniAminabadi - so perhaps instead of maintaining 3 different tables, would you agree on having one that contains all the fields that you need? and you can discuss between yourselves how you prefer to call those. We can give the new names a period of "experimental and a subject to change" until the dust settles and then they will get carved in stone at a later date to support backward compatibility. And I'm sure there will be other consumers for that type of metadata.
I don't see any reason not to have all the desired components written out explicitly, instead of being derived automatically. There is absolutely no reason to take a risk here; this is software engineering and not a stock market.
I propose to start with having a dedicated file for it with the first few models and then down the road we can see if it makes sense to move these into their model files. I just want to create minimal disturbance to the models code until we are ready to do so.<|||||>I opened a PR about tensor parallel mappings !<|||||>@stas00 For TP, it will go with the megatron-lm's way to do the parallelism in a python way if I understood correctly. If that's the case, it leaves us the opportunity to support the tpu and etc since the only question is about the allgather&allreduce API.
If that's case, I'd like to move this direction ahead for transformer. I'm not sure where we are now and what's the right branch to start. It will be great if you can share the end-2-end impl and I can start from there.<|||||>At the moment we have 2 projects that support TP (tensor parallelism):
- oslo https://github.com/tunib-ai/oslo
- Deepspeed-Inference https://www.deepspeed.ai/tutorials/inference-tutorial/
Both are not yet integrated into transformers. Oslo we are just slow to integrate since I'm busy with BigScience and @jaketae is backing me up and has started to work on the integration. Deepspeed-Inference is still a work in progress on the core, and I have some initial PR that integrates it but there are some hanging issues as HF Trainer is not MPU-aware yet.
So at the moment Deepspeed-ZeRO is the only solid and working solution for scalability on the free side, and Sagemaker on the paid side (though I have never tried the latter myself).
PP is much more difficult, and we are leaving it to the end, in hope that pytorch will provide us a new much easier to use PP-api that is somewhat similar to sagemaker's paper https://arxiv.org/abs/2111.05972
<|||||>@stas00
1. OSLO has the MPU, and this is compatible with deepspeed, ds-inference and megatron. If you need mpu, how about using this? maybe `from oslo import MPU` could work with ds-inference.
2. I wonder about your ds-inference integration plan. We need to integrate it without `Trainer` (because it's not about training). What's your plan? Since OSLO TP can be used for both training and inference, we need to discuss how to provide it from the inference view.
3. I almost have implemented sagemaker-like PP internally, but I am not currently integrating it into the main branch. Because it can interfere with TP integration. So, when the TP integration work is finished, the PP will be merged into the main branch.<|||||>I was just saying that it doesn't have MPU at the moment ;) And it's needed to sync the ds-inference tp-processes I think. But my mind is the BigScience at the moment so I don't have the brain cycles for indepth analysis at the moment.
Making ds-inference integration depend on oslo would be odd, but it could be ok at the beginning and eventually have an internal one - it's just one module that's already written.
--------
why integrate ds-inference w/o Trainer? Trainer is just a name for both inference and training. |
transformers | 13,689 | closed | New Wav2Vec2 padding has slightly backward breaking changes | The PR: https://github.com/huggingface/transformers/pull/13650 introduced some quite tricky backwards breaking changes that we should try to fix.
The problem is the following: A user might directly use `feature_extractor.pad(...)` instead of `feature_extractor(...)` to just pad already preprocessed inputs in, *e.g.* a data collator.
The following code correctly returned `torch.float32` before merging the PR, while the new PR returns `torch.float64`, which is slightly breaking and can lead to errors in current fine-tuning Wav2Vec2 scripts:
```python
from transformers import Wav2Vec2FeatureExtractor
import numpy as np
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
rand_input = np.ones((100,), dtype=np.float64)
out = extractor.pad([{"input_values": rand_input}], return_tensors="pt")
print(out.input_values.dtype) # <- this should be `torch.float32`
```
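Until this is fixed, a temporary workaround (just a suggestion, not the eventual library fix) is to cast explicitly after padding, e.g. in a data collator:
```python
import torch

out = extractor.pad([{"input_values": rand_input}], return_tensors="pt")
out["input_values"] = out["input_values"].to(torch.float32)  # restore the previous dtype
```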
Here is a colab showing how the "old" version works correctly: https://colab.research.google.com/drive/10TlRWvwKx34UORmYdCFAyMWKUU3OtPRf?usp=sharing
Here is a colab showing how the "new" version works incorrectly:
https://colab.research.google.com/drive/1cXGuG4Rnypmivdm-vdE-61BA1f4hC4e8?usp=sharing | 09-21-2021 23:38:16 | 09-21-2021 23:38:16 | @anton-l - could you maybe look into it? :-) It's quite a tricky backwards compatible bug and we should have had tests to catch this problem. Would be great if you could try to open a PR to fix it :-)<|||||>Good catch! This is due to how pytorch converts float numpy arrays vs python lists:
* torch.float32 for python lists by default: `torch.tensor([1.2, 2.3]).dtype # torch.float32`
* `np.array([1.2, 2.3]).dtype # np.float64`
* source dtype for numpy arrays: `torch.tensor(np.array([1.2, 2.3])).dtype # torch.float64` |
transformers | 13,688 | closed | [FlaxWav2Vec2] Revive Test | Revive test as fixed in https://github.com/huggingface/transformers/commit/8565d38f3015e3fd83288eb6a21015fba694fe62 | 09-21-2021 22:14:28 | 09-21-2021 22:14:28 | cc @sgugger for notification |
transformers | 13,687 | closed | Allow only textual inputs to VisualBert | # What does this PR do?
This PR fixes #12827. Sorry for the delay.
@patrickvonplaten @patil-suraj | 09-21-2021 20:53:36 | 09-21-2021 20:53:36 | |
transformers | 13,686 | closed | Fix FNet reference to tpu short seq length | # What does this PR do?
This PR fixes #13684.
Should there also be a test for this particular case (TPU usage, less than 4096 seq length) ?
@LysandreJik @patrickvonplaten | 09-21-2021 20:25:59 | 09-21-2021 20:25:59 | @gchhablani are you btw. working on a flax integration of FNet :thinking: -> I've seen an attempt in this PR #12454 but it was closed. Would be really interesting to train FNet on TPU - GPU is working for me, but I only have one available... |
transformers | 13,685 | closed | Wav2vec2 pretraining | How can I pretrain the base wav2vec2 model using the *transformers.Wav2Vec2ForPreTraining* class on my own data?
I saw the example given on the official website -https://huggingface.co/transformers/model_doc/wav2vec2.html#wav2vec2forpretraining
But it wasn't of much help. Is there any notebook or script which I can follow? | 09-21-2021 19:34:55 | 09-21-2021 19:34:55 | https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_pretrain.py
Does this script helps you ?<|||||>cc @patrickvonplaten <|||||>Hey @aniket7joshi,
I'm currently working on it. Hope to have a good blog/notebook in 2,3 weeks!<|||||>I'll let you know :-)<|||||>Hey @patrickvonplaten , Any update regarding this blog really looking forward to it. <|||||>Wav2vec2 Pretraining at the moment only works for PyTorch. I want to make it work for Flax to be able to showcase it nicely in a google colab with 8 TPUs. Still working on it! <|||||>Hey @patrickvonplaten Can you help me as to how I use the script here: https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_pretrain.py
As input, I have a folder of audio files based on which I want to pretrain my model. I don't want to use any datasets available online or on huggingface.
TIA :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Planning to look into this again in 2 weeks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,684 | closed | modeling_fnet.py config option | https://github.com/huggingface/transformers/blob/b7d264be0d70d5309dbf24a6c7ebba1e073bdda5/src/transformers/models/fnet/modeling_fnet.py#L177
Should refer to config.tpu_short_seq_length
| 09-21-2021 19:30:22 | 09-21-2021 19:30:22 | Maybe of interest to @gchhablani :)<|||||>So sorry about that @ontocord.
I will fix it immediately. Thanks a lot for pointing it out.<|||||>I made a few more fixes. See this notebook. Basically I moved the buffer for doing fft to the encoder class, and made it a parameter of the module. in this way you can fix problems with cuda vs non cuda. the parameter weren't converting to cuda. i also made one fix relating to dtype conversion so you can run in half mode. You can do a diff and feel free to accept and PR it if you wish!
https://colab.research.google.com/drive/19A-qrta2yqVcfSjUtjEv6w1HqOyYXuEo#scrollTo=olgITNUmyQxA
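The device/dtype part of that change boils down to using registered buffers. A minimal sketch of the pattern (assumed for illustration, not the exact notebook code):
```python
import torch
import torch.nn as nn

class MixingWithBuffer(nn.Module):
    """Illustrative only: a precomputed matrix stored via register_buffer follows
    .cuda()/.half() calls automatically, unlike a plain tensor attribute."""

    def __init__(self, seq_len: int):
        super().__init__()
        self.register_buffer("mixing_matrix", torch.eye(seq_len))

    def forward(self, hidden_states):
        # cast to the input dtype so fp16 inference also works
        return self.mixing_matrix.to(hidden_states.dtype) @ hidden_states
```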
<|||||>As an aside @gchhablani, did you run any speed tests. My small test didn't show much diff w/ bert. but I suppose the authors said that training is faster. I supposed I have to test it on something longer?
https://arxiv.org/abs/2105.03824<|||||>@ontocord I ran fine-tuning script on GLUE and the FNet model is faster than BERT. It takes 70% time (train + eval). The original ratios mentioned are only training and model is in Flax.
If you are talking about fourier transform only, then I didn't perform a test of speed for the three different ways they are implemented - GPU, TPU short sequence, TPU long sequence. I assume it will be slower because of PyTorch.
An interesting comparison could be creating a HuggingFace Flax model and comparing with the original.
@patrickvonplaten what do you think?<|||||>Could you all share the fine-tuning script or a colab that shows the run? 70% of the time of BERT or 70% faster than BERT?
<|||||>@ontocord It is 70% of the time of BERT in our case, but our script is different, and we used PyTorch. Torch does not have `vmap` as of now, because of which I expect slow down in the performance.
You can see the runs (go to Metrics) in the model cards: https://huggingface.co/models?other=fnet-bert-base-comparison<|||||>@ontocord I checked your changes, maybe you can open a PR and we can discuss? It'll be easier to involve others as well.<|||||>Sorry didn't see your message. Looks like you already did this... Great job :) |
transformers | 13,683 | closed | pipeline fill_mask.py - needs to convert input_ids to cpu before calling numpy | https://github.com/huggingface/transformers/blob/b7d264be0d70d5309dbf24a6c7ebba1e073bdda5/src/transformers/pipelines/fill_mask.py#L127
Should be
tokens = input_ids.cpu().numpy()
```
from transformers import AutoTokenizer, AutoModel, AutoModelForMaskedLM, pipeline
#tokenizer = AutoTokenizer.from_pretrained("google/fnet-base")
#model = AutoModelForMaskedLM.from_pretrained("google/fnet-base").cuda()
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").cuda().half()
unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer, device=0)
unmasker("Hello I'm a [MASK] model.")
[
{"sequence": "hello i'm a new model.", "score": 0.12073223292827606, "token": 351, "token_str": "new"},
{"sequence": "hello i'm a first model.", "score": 0.08501081168651581, "token": 478, "token_str": "first"},
{"sequence": "hello i'm a next model.", "score": 0.060546260327100754, "token": 1037, "token_str": "next"},
{"sequence": "hello i'm a last model.", "score": 0.038265593349933624, "token": 813, "token_str": "last"},
{"sequence": "hello i'm a sister model.", "score": 0.033868927508592606, "token": 6232, "token_str": "sister"},
]
```
Will produce the right result with the change. But will raise an error without the change:
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. | 09-21-2021 19:15:04 | 09-21-2021 19:15:04 | Indeed! Would you like to open a PR to fix this issue?<|||||>Maybe after I finish finding some more bugs... I'm looking at fnet and found another bug. I'm tryin to get fnet working in half mode...
<|||||>Exactly same issue.
https://colab.research.google.com/drive/1F7XxC1tTVCALBS6X8DEtEPZwXvznuLs7?usp=sharing
Reproducable at Colab.
Don't know just that line is issue, or more big issues are coming...
I'm looking on it too!
<|||||>Hi,
This was normally fixed on master (well the exact proposed example does not work because the model is `half()` and `softmax` is not implemented for `f16` in PyTorch (you can either use `f32` or override the `_forward` method to cast things back to `f32` after it went through the model.
Does that help ?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This was fixed on master, feel free to reopen. |
transformers | 13,682 | closed | bug in movement-pruning | ## Environment info
- `transformers` version: 4.10.2
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
@VictorSanh, @JetRunner
## Information
Model I am running the `masked_run_glue.py` on RTE datasets (I see similar results also on WNLI). I added one line to subsample the dataset for faster iteration of debugging as below:
```
# subsample the training set for faster debugging iterations
if args.max_train_samples is not None:
    sampled_indices = list(range(len(train_dataset)))
    random.shuffle(sampled_indices)
    sampled_indices = sampled_indices[: args.max_train_samples]
    train_dataset = torch.utils.data.Subset(train_dataset, sampled_indices)
```
Here is the command I run:
```
python masked_run_glue.py --task_name rte --do_train --do_eval --max_seq_length 128 --num_train_epochs 3 --output_dir test --do_lower_case --model_type masked_bert --model_name_or_path bert-base-uncased --warmup_steps 500 --learning_rate 3e-5 --mask_scores_learning_rate 1e-2 --initial_threshold 1 --final_threshold 0.15 --initial_warmup 1 --final_warmup 2 --pruning_method topK --mask_init constant --mask_scale 0. --data_dir data/RTE/ --overwrite_output_dir --per_gpu_train_batch_size 64 --max_train_samples 100
```
No matter how many examples I set, the results are always the `exact` the same as below:
```
09/21/2021 18:27:44 - INFO - __main__ - ***** Eval results *****
09/21/2021 18:27:44 - INFO - __main__ - acc = 0.5270758122743683
09/21/2021 18:27:44 - INFO - __main__ - eval_avg_entropy = 1.8548108
```
To me, there must be a bug causing it.
Thanks for your help in advance @VictorSanh
## Expected behavior
The code needs to return different results for different train_datasets given. | 09-21-2021 18:34:38 | 09-21-2021 18:34:38 | I believe you are getting a random performance (RTE is a binary classification dataset) which means the loss has diverged during the training. Could you increase *significantly* the number of steps you are training for? You can take inspiration from these [hyper-parameters](https://docs.google.com/spreadsheets/d/17JgRq_OFFTniUrz6BZWW_87DjFkKXpI1kYDSsseT_7g/edit?usp=sharing).
In my experience, if you prune too fast (like you are doing if you don't have enough steps of fine-pruning), the loss will likely diverge at some point.
<|||||>Hi @VictorSanh
Thank you for coming back to me, I confirm I ran the experiment for num_steps = 2000 and this is getting the same results and equal to the one posted above for both num_samples=100 and num_samples=1000. I greatly appreciate your help and any suggestions on this. thanks <|||||>Hi @VictorSanh
I run the method for a very large number of steps (30K steps), and still sampling for N=100 and N=1000 both gives the exact same results posted above. I think there might be a bug causing it, I appreciate a lot your help.
- Also for the commands given in the repo, you have set initial_threshold=0 final_threshold=0.1 for soft-pruning, but this is 1 and 0.15 for topK one, basically the final one is less than the start for TopK while this is reversed for soft-version. I am confused how to set the threshold value, could you give me some intuition how one can do it? Maybe this helps to solve the issue thanks <|||||>> Hi @VictorSanh
> Thank you for coming back to me, I confirm I ran the experiment for num_steps = 2000 and this is getting the same results and equal to the one posted above for both num_samples=100 and num_samples=1000. I greatly appreciate your help and any suggestions on this. thanks
Could you look at the tensorboard and more particularly the training loss? and then report back<|||||>> * Also for the commands given in the repo, you have set initial_threshold=0 final_threshold=0.1 for soft-pruning, but this is 1 and 0.15 for topK one, basically the final one is less than the start for TopK while this is reversed for soft-version. I am confused how to set the threshold value, could you give me some intuition how one can do it? Maybe this helps to solve the issue thanks
For topK, the threshold value is the percentage of remaining weights (100% at the beginning, 15% at the end).
For the soft version, the threshold corresponds to the \lambda_{mvp} described in the paper.<|||||>Dear @VictorSanh
Thank you for the advice, I will try with it, do you mind also giving me some advice on how one can set initial warmup and final_warmup? as for the threshold values, although this does not make sense perhaps, but I see soft pruning gives a better results with setting initial threshold to 1. I see the final results in the doc you shared, could you also kindly share the range of values you tried, this is hard for me to understand how to set warmup and threshold for different methods. I appreciate any input on this.
thanks <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,681 | closed | [Trainer] Make sure shown loss in distributed training is correctly averaged over all workers | # What does this PR do?
Currently, only the loss of the first worker is shown in distributed training. However, I think it would be cleaner to show the averaged loss over all workers, since in some setups (e.g. CTC speech recognition) the loss can vary quite a bit from worker to worker.
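A sketch of the idea (not necessarily the exact change made to the Trainer):
```python
import torch
import torch.distributed as dist

def distributed_mean(loss: torch.Tensor) -> torch.Tensor:
    # average the logged loss over all workers before reporting it
    loss = loss.detach().clone()
    dist.all_reduce(loss, op=dist.ReduceOp.SUM)
    return loss / dist.get_world_size()
```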
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-21-2021 16:45:32 | 09-21-2021 16:45:32 | What do you think @sgugger , @stas00 ? Any good ideas of how to write a test for it?<|||||>> This would made things inconsistent, since all other metrics are for rank 0 and aren't averaged or even measured.
All metrics (apart from the speed/RAM metrics) are computed on gathered predictions and labels, so this would actually make things more consistent. The validation loss for instance is the validation loss across all processes.<|||||>Oh, if it's just speed/mem metrics, then of course go for it. I wonder if we should then flag in the reports that speed/mem metrics are for gpu0. But we can tackle it in another PR.<|||||>Verified that the new line works for distributed training. Should I add a test here @sgugger ? |
transformers | 13,680 | closed | Update modeling_flax_wav2vec2.py | conv kernel_size to Tuple,
Flax Version 0.3.5 breaking change, https://github.com/google/flax/releases/tag/v0.3.5
PR https://github.com/huggingface/transformers/pull/13393
test failed link - https://app.circleci.com/pipelines/github/huggingface/transformers/28187/workflows/e0df5370-aebd-4fe7-99fb-ff49e2ddcec8/jobs/276846
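For context, the change presumably boils down to passing sequences instead of bare ints to the convolution layers, roughly like this (a sketch of the kind of change, not the exact diff):
```python
import flax.linen as nn

conv = nn.Conv(
    features=512,
    kernel_size=(10,),  # Flax >= 0.3.5 requires a sequence here; previously kernel_size=10
    strides=(5,),
)
```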
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten | 09-21-2021 16:37:20 | 09-21-2021 16:37:20 | Thanks a lot for the PR @kamalkraj!
@patil-suraj - I checked that FlaxViT and FlaxCLIP already use Tuples so this PR should cover all Flax Conv modules<|||||>Thanks for the quick fix @kamalkraj ! |
transformers | 13,679 | closed | Fix non-negligible difference between GPT2 and TFGP2 | # What does this PR do?
Fix #13666 | 09-21-2021 16:33:40 | 09-21-2021 16:33:40 | The following test should have caught that error: `test_pt_tf_model_equivalence` in `modeling_tf_common.py` - would you like to check why it was passing even though it shouldn't have been?<|||||>>
>
> The following test should have caught that error: `test_pt_tf_model_equivalence` in `modeling_tf_common.py` - would you like to check why it was passing even though it shouldn't have been?
Sure. Will do it tomorrow :)<|||||>Thank you @ydshieh, this is very helpful.<|||||>Hi, @LysandreJik ,
In `test_modeling_tf_gpt2.py` , `TFGPT2ModelTester` uses a config with `"gelu"` activation.
https://github.com/huggingface/transformers/blob/8e908c8c74f556a82534f4cf1e7a1b4f7b55d24c/tests/test_modeling_tf_gpt2.py#L56
In `test_pt_tf_model_equivalence.py`, both tf & pytorch models are created using this same config, and therefore both use `"gelu"`.
Since the issue in `TFGPT2Model` was that it used `self.act = get_tf_activation("gelu")`, the test would pass.
The issue would be observed when the config uses an activation function other than `"gelu"`, which is the case for the pretrained GPT2 (`"gelu_new"` is used).
https://huggingface.co/gpt2/blob/main/config.json
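(Presumably the fix then amounts to reading the activation from the config instead of hard-coding it, something like the sketch below; this is an illustration of the kind of change, not the exact diff:)
```python
# read the activation from the model config rather than hard-coding "gelu"
self.act = get_tf_activation(config.activation_function)
```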
So `test_pt_tf_model_equivalence` is fine in my opinion, but it might be good to add other tests to prevent this situation (using pretrained models' configs, but reduce the hidden size & no. of layers).
(well, this doesn't 100% guarantee, if an attribute in a pretrained config happens to be the same as a hard coded attribute in a model).<|||||>@ydshieh, thank you for your detailed analysis, this is perfect! I understand the limitation of the test and this edge case.
Merging this PR! |
transformers | 13,678 | closed | Modified TF train_step | Starting a new cleaned-up branch for this one | 09-21-2021 14:45:53 | 09-21-2021 14:45:53 | |
transformers | 13,677 | closed | CUDA out of memory even for GPT-NEO-125M | ## Environment info
- `transformers` version: 4.11.0.dev0
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
- Deepspeed version: 0.5.3
### Who can help
@StellaAthena @sgugger @patil-suraj
## Information
I am running the scripts on a machine with 128G CPU memory and 4 RTX 2080 (11GB).
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Running example training script with
```bash
transformers/examples/pytorch/language-modeling$ python run_clm.py --model_name_or_path EleutherAI/gpt-neo-125M --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm --overwrite_output_dir
```
And got OOM error. I guess RTX2080 should be enough to fit a 125M model?
And besides I tried deepspeed with this command on the machine with 4 RTX 2080
```bash
transformers/examples/pytorch/language-modeling$ deepspeed --num_gpus 4 run_clm.py --model_name_or_path EleutherAI/gpt-neo-125M --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm --deepspeed ../../../tests/deepspeed/ds_config_zero3.json --overwrite_output_dir
```
Still got OOM error. I tried to vary `-num_gpus` from 1 to 4.
Use gpt-2, gpt-2-medium, gpt-neo-1.3B, gpt-neo-2.7B give the same OOM error (with or without deepspeed).
## Expected behavior
The example script should run without producing an error.
| 09-21-2021 14:36:33 | 09-21-2021 14:36:33 | Please use the [forums](https://discuss.huggingface.co/) for questions like this, as we use the issues for bugs and feature requests only. You should probably reduce the sequence length you are using (with `--block_size`) to avoid the OOM issue, since it defaults to a large number, or reduce the batch size.<|||||>Ok thanks, sorry I was unaware of the forums. |
transformers | 13,676 | closed | [GPT-J] Use the `float16` checkpoints in integration tests | This PR switches GPTJ checkpoints in the integration tests to fp16 to test if they're able to run on our daily CI.
At the moment, fp32 checkpoints are timing out either during model downloads or initialization:
```
600.01s call tests/test_modeling_gptj.py::GPTJModelTest::test_batch_generation
600.00s call tests/test_modeling_gptj.py::GPTJModelLanguageGenerationTest::test_lm_generate_gptj
600.00s call tests/test_modeling_gptj.py::GPTJModelLanguageGenerationTest::test_gptj_sample_max_time
600.00s call tests/test_modeling_gptj.py::GPTJModelTest::test_model_from_pretrained
600.00s call tests/test_modeling_gptj.py::GPTJModelLanguageGenerationTest::test_gptj_sample
```
Note that this doesn't guarantee reproducibility of the old tests (some tokens may be different), but it could help with **caching the models** on the runner to avoid timeouts.
:warning: The tests should be revisited once more, once https://github.com/huggingface/transformers/pull/13466 is merged | 09-21-2021 14:26:00 | 09-21-2021 14:26:00 | `test_gptj_sample()` and `test_gptj_sample_max_time()` were disabled due to GPU OOM during more than one call to `.generate()` |
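For illustration, a minimal sketch of loading a GPT-J-style checkpoint directly in half precision; the model id and the `revision="float16"` branch are assumptions for the example, not taken from this PR:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loading in fp16 roughly halves the download size and the memory needed,
# which is what makes caching and running the slow tests on a CI runner feasible.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",   # hypothetical checkpoint for the sketch
    revision="float16",      # assumes the repo publishes a half-precision branch
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
```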
transformers | 13,675 | closed | Typo "UNKWOWN" -> "UNKNOWN" | # What does this PR do?
Fix typo "UNKWOWN" -> "UNKNOWN"
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 09-21-2021 12:20:53 | 09-21-2021 12:20:53 | |
transformers | 13,674 | closed | UnicodeDecodeError while loading pretrained model from AutoModel.from_pretrained() | ## Environment info
- `transformers` version: 4.11.0.dev0
- Platform: Ubuntu/Linux
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0
- Tensorflow version (GPU?): 2.6.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
I wanted to train a custom language model on a Devanagari dataset. I trained the model with the google-research BERT repo, using the scripts there.

This was the result after running run_pretraining.py.
Then I used the Hugging Face script convert_bert_original_tf_checkpoint_to_pytorch.py to convert the model to a PyTorch model.
A PyTorch model was produced by the script.
Then I used this script to load the model with the Hugging Face `AutoModel`:
```python
from transformers import AutoModel
from transformers import AutoTokenizer

save_directory = "/home/info/Documents/language_model_nlp/bert_language_model/language_model_pytorch/devanagari_language_model.bin"
tokenizer = AutoTokenizer.from_pretrained("/home/info/Documents/language_model_nlp/bert_language_model/devanagari_tokenizer")
model = AutoModel.from_pretrained(save_directory, from_pt=True)
```
I got the following error:

| 09-21-2021 11:57:50 | 09-21-2021 11:57:50 | The path for the `AutoModel` should be to a directory pointing to a `pytorch_model.bin` and to a `config.json`. Since you're pointing to the `.bin` file directly, the configuration cannot be loaded.<|||||>> The path for the `AutoModel` should be to a directory pointing to a `pytorch_model.bin` and to a `config.json`. Since you're pointing to the `.bin` file directly, the configuration cannot be loaded.
I am giving the path of the model that I got when I ran the convert_bert_original_tf_checkpoint_to_pytorch.py script on the original model produced by the language model training. I am confused about which model file I should provide to the AutoModel.from_pretrained() function.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
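As a minimal sketch of the advice above (the directory name is an example, not the reporter's actual path), `from_pretrained` should be pointed at a folder holding both files rather than at the `.bin` file itself:
```python
from transformers import AutoConfig, AutoModel

model_dir = "language_model_pytorch"            # must contain pytorch_model.bin and config.json
config = AutoConfig.from_pretrained(model_dir)  # reads model_dir/config.json
model = AutoModel.from_pretrained(model_dir)    # reads model_dir/pytorch_model.bin
```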
transformers | 13,673 | closed | retain_graph=True required when using a custom BigBird-based model | ## Environment info
- `transformers` version: 4.10.2
- Platform: Linux-3.10.0-1160.25.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.9.1 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Single device, multi-GPU set up with 4x NVIDIA A100s (40GB) (I'm also using gradient accumulation on top)
### Who can help
Models:
- Not entirely sure who's familiar with BigBird, ... maybe @patrickvonplaten?
Library:
- Since it could be related to the Trainer ... also @sgugger?
## Information
Model I am using (Bert, XLNet ...): Custom model based on BigBirdForPreTraining
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## Error Message
My custom model class contains the following forward function:
```
class ProtSTonKGsForPreTraining(BigBirdForPreTraining):
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
masked_lm_labels=None,
ent_masked_lm_labels=None,
prot_masked_lm_labels=None,
return_dict=None,
head_mask=None,
):
"""Perform one forward pass for a given sequence of text_input_ids + ent_input_ids + prot_input_ids.
Due to having more than two parts (and a RoBERTa base in the default BigBird model), the NSP objective is
omitted in this forward function.
:param input_ids: Concatenation of text + KG (random walk) + protein sequence embeddings
:param attention_mask: Attention mask of the combined input sequence
:param token_type_ids: Token type IDs of the combined input sequence
:param masked_lm_labels: Masked LM labels for only the text part
:param ent_masked_lm_labels: Masked entity labels for only the KG part
:param prot_masked_lm_labels: Masked protein labels for only the protein part
:param return_dict: Whether the output should be returned as a dict or not
:param head_mask: Used to cancel out certain heads in the Transformer
:return: Loss, prediction_logits in a BigBirdForPreTrainingOutputWithPooling format
"""
# 1. Use the LM backbone to get the pre-trained token embeddings
# batch x number_text_tokens x hidden_size
# The first element of the returned tuple from the LM backbone forward() pass is the sequence of hidden states
text_embeddings = torch.cat(
[
self.lm_backbone(input_ids[:, i*(self.kg_start_idx//3): (i+1)*(self.kg_start_idx//3)])[0]
for i in range(3)
],
dim=1,
)
# 2. Use the KG backbone to obtain the pre-trained entity embeddings
# batch x number_kg_tokens x hidden_size
ent_embeddings = torch.stack(
[
# for each numeric index in the random walks sequence: get the embedding vector from the KG backbone
torch.stack([self.kg_backbone[i.item()] for i in j])
# for each example in the batch: get the random walks sequence
for j in input_ids[:, self.kg_start_idx : self.prot_start_idx]
],
)
# 3. Use the Prot backbone to obtain the pre-trained entity embeddings
# batch x number_prot_tokens x hidden_size
prot_embeddings_original_dim = self.prot_backbone(input_ids[:, self.prot_start_idx:])[0]
prot_embeddings = self.prot_to_lm_hidden_linear(prot_embeddings_original_dim)
# Concatenate token, KG and prot embeddings obtained from the LM, KG and prot backbones and cast to float
# batch x seq_len x hidden_size
inputs_embeds = (
torch.cat(
[
text_embeddings,
ent_embeddings.to(text_embeddings.device),
prot_embeddings.to(text_embeddings.device),
],
dim=1,
)
.type(torch.FloatTensor)
.to(self.device)
)
# Get the hidden states from the basic STonKGs Transformer layers
# batch x seq_len x hidden_size
outputs = self.bert(
inputs_embeds=inputs_embeds,
encoder_attention_mask=attention_mask,
return_dict=True,
)
# batch x seq_len x hidden_size
sequence_output, pooled_output = outputs[:2]
# Generate the prediction scores (mapping to text and entity vocab sizes + NSP) for the training objectives
# prediction_scores = Text MLM, entity "MLM" and protein "MLM" scores
prediction_scores, _ = self.cls(sequence_output, pooled_output)
# The custom STonKGsELMPredictionHead returns a triple of prediction scores for tokens, entities,
# and protein sequences, respectively
(
token_prediction_scores,
entity_predictions_scores,
prot_predictions_scores,
) = prediction_scores
# Calculate the loss
total_loss = None
if (
masked_lm_labels is not None
and ent_masked_lm_labels is not None
and prot_masked_lm_labels is not None
):
loss_fct = nn.CrossEntropyLoss()
# 1. Text-based MLM
masked_lm_loss = loss_fct(
token_prediction_scores.view(-1, self.config.vocab_size),
masked_lm_labels.view(-1),
)
# 2. Entity-based masked "language" (entity) modeling
ent_masked_lm_loss = loss_fct(
entity_predictions_scores.view(-1, self.config.kg_vocab_size),
ent_masked_lm_labels.view(-1),
)
# 3. Protein-based masked "language" (entity) modeling
prot_masked_lm_loss = loss_fct(
prot_predictions_scores.view(-1, self.config.prot_vocab_size),
prot_masked_lm_labels.view(-1),
)
# Total loss = the sum of the individual training objective losses
total_loss = masked_lm_loss + ent_masked_lm_loss + prot_masked_lm_loss
if not return_dict:
output = prediction_scores + outputs[2:]
return ((total_loss,) + output) if total_loss is not None else output
return BigBirdForPreTrainingOutputWithPooling(
loss=total_loss,
prediction_logits=prediction_scores,
hidden_states=sequence_output,
attentions=outputs.attentions,
pooler_output=pooled_output,
)
```
and I am using this model in a Trainer:
```
# Initialize the Trainer
trainer = Trainer(
model=stonkgs_model,
args=training_args,
train_dataset=pretraining_data,
)
train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
```
This results in the following error:
```
Traceback (most recent call last):
File "stonkgs_pretraining.py", line 234, in <module>
pretrain_stonkgs()
File "/home/hbalabin/software/conda/envs/stonkgs/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/hbalabin/software/conda/envs/stonkgs/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/hbalabin/software/conda/envs/stonkgs/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/hbalabin/software/conda/envs/stonkgs/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "stonkgs_pretraining.py", line 220, in pretrain_stonkgs
train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
File "/home/hbalabin/software/conda/envs/stonkgs/lib/python3.8/site-packages/transformers/trainer.py", line 1284, in train
tr_loss += self.training_step(model, inputs)
File "/home/hbalabin/software/conda/envs/stonkgs/lib/python3.8/site-packages/transformers/trainer.py", line 1799, in training_step
self.scaler.scale(loss).backward()
File "/home/hbalabin/software/conda/envs/stonkgs/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/hbalabin/software/conda/envs/stonkgs/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
Variable._execution_engine.run_backward(
RuntimeError: Trying to backward through the graph a second time (or directly access saved variables after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved variables after calling backward.
```
## Expected behavior
My guess is that this error message is caused by loss.backward() being called multiple times (maybe there is something wrong with the gradient accumulation procedure). The expected behavior would be that the backward pass is applied to the graph only once, i.e. that loss.backward() is called only once per training step.
Has anyone seen this error before? Thanks a lot in advance!! :hugs:
| 09-21-2021 11:47:44 | 09-21-2021 11:47:44 | You should debug your custom model on your own training loop first (pass a batch, compute the loss and call backward), as it's more likely to come from your custom model than the `Trainer`.<|||||>Makes sense, I'll try that out and see if I can get any insights from that. Thanks! :smile: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
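A stand-alone sketch (deliberately unrelated to the custom BigBird model) of how the same RuntimeError surfaces when a computation graph is backpropagated twice — this is the kind of minimal training-loop check suggested above:
```python
import torch

layer = torch.nn.Linear(8, 2)
loss = layer(torch.randn(4, 8)).sum()
loss.backward()    # the first backward pass frees the saved graph
# loss.backward()  # uncommenting this raises "Trying to backward through the graph a second time"
```
If a single forward/backward on one batch of the custom model already fails this way outside the Trainer, the graph reuse is happening inside the model itself.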
transformers | 13,672 | closed | [examples/flax] use Repository API for push_to_hub | # What does this PR do?
This PR updates all flax examples scripts to use the `Repository` API for `push_to_hub` and removes all instructions to manually create/clone a repo in the READMEs. | 09-21-2021 11:22:01 | 09-21-2021 11:22:01 | > Thanks a lot for taking care of this - can we leave the symbolic links though so that people can continue following the steps like a recipe?
@patrickvonplaten the issue with this is that we need to pass "./" or "." as the `output_dir`, which does not work with `Repository`.
What these changes do is create a directory for the model/tokenizer/config in `/tmp`, save the `tokenizer` and `config` there, and use that path as the `output_dir`. So users should still be able to follow the steps; the only difference is that the script will be run from its own directory rather than creating a symlink.<|||||>Thanks @LysandreJik for pointing that out; we could change the path from `/tmp` to something else. Some of our PyTorch examples also use `/tmp`, so we should correct those as well, no?
> Also Repository(".", clone_from="xxx") should work!
It does! Issue is
```python
if training_args.hub_model_id is None:
repo_name = get_full_repo_name(Path(training_args.output_dir).name, token=training_args.hub_token)
else:
repo_name = training_args.hub_model_id
repo = Repository(training_args.output_dir, clone_from=repo_name)
```
if `hub_model_id` is not passed, `repo_name` is derived from `output_dir`, and when the path is `"."`, then `path.name` returns an empty string, which then results in this error
`requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/repos/create - Only regular characters and '-', '_', '.' accepted`<|||||>Maybe the `Path(training_args.output_dir).name` should be updated to `Path(training_args.output_dir).absolute().name` in that case?<|||||>Aah, yeah! Thanks Sylvain :) |
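A quick illustration of the `Path.name` behaviour discussed in this thread; the directory name in the second comment is just an example:
```python
from pathlib import Path

print(Path(".").name)             # "" -> yields an invalid repo name and the 400 error above
print(Path(".").absolute().name)  # e.g. "language-modeling"
```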
transformers | 13,671 | closed | AttributeError: 'T5ForConditionalGeneration' object has no attribute 'linear' | ## Environment info
- `transformers` version: 4.9.1
- Platform: Linux-5.4.0-84-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?:NO
- Using distributed or parallel set-up in script?:NO
Models:
- model_name = "flexudy/t5-small-wav2vec2-grammar-fixer"
Library:
- text generation: @patrickvonplaten
Documentation: @sgugger
## Information
I am trying to do static quantization on the T5 model (flexudy/t5-small-wav2vec2-grammar-fixer) to reduce the inference time.
## code used
```
import torch
import transformers
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = "flexudy/t5-small-wav2vec2-grammar-fixer"
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device)
model.eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_fused = torch.quantization.fuse_modules(model,[['linear', 'linear']])
```
## Output
AttributeError: 'T5ForConditionalGeneration' object has no attribute 'linear'
## Expected behavior
Model modules should be fused for the quantization
| 09-21-2021 11:13:41 | 09-21-2021 11:13:41 | Hey @pradeepdev-1995,
I'm not super familiar with `torch.quantization.fuse_models(...)`. Could you explain a bit more what those lines do:
```
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_fused = torch.quantization.fuse_modules(model,[['linear', 'linear']])
```
?<|||||>@patrickvonplaten
could you please check the official documentation https://pytorch.org/docs/stable/quantization.html<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
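Not part of the original thread, but as a hedged sketch for context: dynamic quantization targets the `nn.Linear` modules of a seq2seq model directly and does not involve a `fuse_modules` step at all:
```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("flexudy/t5-small-wav2vec2-grammar-fixer")
model.eval()
# Convert only the Linear layers to int8 kernels; activations are quantized on the fly.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```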
transformers | 13,670 | closed | DPR AutoModel loading incorrect architecture for DPRContextEncoders | ## Environment info
- `transformers` version: 4.10.2
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Model type `dpr`: @LysandreJik @patrickvonplaten @lhoestq
## Information
Model I am using:
* https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base
* https://huggingface.co/facebook/dpr-question_encoder-single-nq-base
## To reproduce
Loading a DPR context encoder `DPRContextEncoder` using `AutoModel.from_pretrained` is actually loading `DPRQuestionEncoder` instead, and later fails.
Steps to reproduce the behavior:
`AutoModel.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base')`
```
File "venv/lib/python3.7/site-packages/transformers/modeling_utils.py", line 579, in _init_weights
raise NotImplementedError(f"Make sure `_init_weigths` is implemented for {self.__class__}")
NotImplementedError: Make sure `_init_weigths` is implemented for <class 'transformers.models.dpr.modeling_dpr.DPRQuestionEncoder'>
```
Note in the above that it's trying to use the `DPRQuestionEncoder` even though the config for this context encoder is correct and points to `architecture=DPRContextEncoder`.
Using explicitly the `DPRContextEncoder.from_pretrained` works just fine, so it looks like this is somewhere in `AutoModel`.
`DPRContextEncoder.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base')`
## Expected behavior
Using `AutoModel.from_pretrained` should pick the correct architecture for a `DPRContextEncoder`.
| 09-21-2021 11:10:29 | 09-21-2021 11:10:29 | Unfortunately, `AutoModel` and its variants currently only support 1-to-1 model mapping according to the model name e.g. `DPR`. So in this case, the one model that maps to AutoModel is `DPRQuestionEncoder`<|||||>Ok, that kind of makes sense 🙃 Is there an easy way to change that or `DPR` models so it also looks at the architecture in the config?<|||||>To the best of my knowledge, this would be a major change of auto factory because the mapping file defines all `Auto-` models all together, not for each specific model. Only modifying `DPR`-related models might break the consistency of them.<|||||>@joshdevins - could you check whether the PR linked above solves the issue? <|||||>@patrickvonplaten Sorry, I realise now that there are two problems. Your PR fixes the problem that they didn't implement `_init_weights`, so that error is now gone. The `AutoModel` problem is still that `AutoModel.load_pretrained` is selecting `DPRQueryEncoder` even when the model architecture (as specified also in the `config.json`) is actually `DPRContextEncoder`.
```python
import torch
import transformers
model_id = "facebook/dpr-ctx_encoder-single-nq-base"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
input_ids = tokenizer("This is an example sentence.", return_tensors="pt")["input_ids"]
auto_model = transformers.AutoModel.from_pretrained(model_id)
context_model = transformers.DPRContextEncoder.from_pretrained(model_id)
auto_output = auto_model(input_ids)
context_output = context_model(input_ids)
```
```python
> type(auto_model)
transformers.models.dpr.modeling_dpr.DPRQuestionEncoder
> type(context_model)
transformers.models.dpr.modeling_dpr.DPRContextEncoder
> torch.all(torch.eq(auto_output["pooler_output"], context_output["pooler_output"]))
tensor(False)
```<|||||>Note that my workaround is basically this 🤷
```python
config = AutoConfig.from_pretrained(model_id)
getattr(transformers, config.architectures[0]).from_pretrained(model_id)
```<|||||>@joshdevins - ah yeah I think we can't really do anything against the second problem the way it is implemented now...maybe it might makes sense to implement a `AutoModel.from_pretrained(...)` that relies on `config.architectures` in the future...<|||||>I guess that makes sense. I wonder if this is the only model that has this scenario? It seems the way `sentence-transformers` does things also makes sense. They have a second config containing all the pooling and normalization layers after the transformer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,669 | closed | LEDForSequenceClassification example throws a ValueError on missing decoder_input_ids/embeds | ## Environment info
- `transformers` version: 4.9.0
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0
### Who can help
@patrickvonplaten @beltagy
## Information
LEDForSequenceClassification example throws an error on line 207 of modeling_led.py:
```
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
## To reproduce
Run the code from the documentation of [LEDForSequenceClassification](https://huggingface.co/transformers/model_doc/led.html#ledforsequenceclassification)
```
>>> from transformers import LEDTokenizer, LEDForSequenceClassification
>>> import torch
>>> tokenizer = LEDTokenizer.from_pretrained('allenai/led-base-16384')
>>> model = LEDForSequenceClassification.from_pretrained('allenai/led-base-16384')
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
``` | 09-21-2021 09:52:59 | 09-21-2021 09:52:59 | The same example in [BartForSequenceClassification](https://huggingface.co/transformers/model_doc/bart.html#bartforsequenceclassification) works just fine.<|||||>Yes, however this is simply because BART can automatically generate `decoder_input_ids` for training: https://arxiv.org/abs/1910.13461 this would never be used for a downstream task even for BART. E.g. See this comment: https://github.com/huggingface/transformers/blob/a3ded170e22b37027dab456a12ff2f523c99d998/src/transformers/models/bart/modeling_bart.py#L594. Now LED was never pretrained from scratch and therefore there is no use case where the `decoder_input_ids` should **not** be provided. See comment here: https://github.com/huggingface/transformers/blob/a3ded170e22b37027dab456a12ff2f523c99d998/src/transformers/models/led/modeling_led.py#L1510 for comparison. => In LED we should always provide the `decoder_input_ids` in the forward pass |
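A minimal sketch of the usage implied by the comment above, passing `decoder_input_ids` explicitly (the classification head of the raw pretrained checkpoint is randomly initialised, so the loss value itself is not meaningful here):
```python
import torch
from transformers import LEDTokenizer, LEDForSequenceClassification

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForSequenceClassification.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# LED does not build decoder_input_ids automatically, so pass them explicitly here.
outputs = model(
    **inputs,
    decoder_input_ids=inputs["input_ids"],
    labels=torch.tensor([1]),
)
```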
transformers | 13,668 | closed | [AutoTokenizer] Allow creation of tokenizers by tokenizer type | # What does this PR do?
This PR enables the `Case #4` as discussed here: https://github.com/huggingface/transformers/pull/13623#issuecomment-923112500
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-21-2021 08:59:10 | 09-21-2021 08:59:10 | |
transformers | 13,667 | closed | switch to inference_mode from no_grad | # What does this PR do?
Use `torch.inference_mode()` for pipelines.
https://pytorch.org/docs/stable/generated/torch.inference_mode.html
`with torch.no_grad()`

`with torch.inference_mode()`

Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @Narsil | 09-21-2021 06:26:38 | 09-21-2021 06:26:38 | Hi @kamalkraj,
Thanks for this! Do you mind sharing which hardware this was done on?
We ran some tests too and found very little difference between the 2 modes (<1%).
We decided against putting that line in because `transformers` needs to support older versions of `pytorch` which do not have `inference_mode` (so we would need an extra switch, and would have to maintain and later remove it). Because the performance difference was so small, we decided it was not worth the investment.
If we can show performance differences, we're more than happy to add such switches.
Here is a small benchmark I ran on a GTX970:
```python
from transformers import pipeline
from transformers.pipelines.base import KeyDataset
import datasets
import tqdm
pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")
print("New style of pipeline")
for i, out in tqdm.tqdm(enumerate(pipe(KeyDataset(dataset, "file"))), total=100):
# print(out)
if i >= 100:
break
```
And the results are
`no_grad`:
```
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
The model 'Wav2Vec2ForCTC' is not supported for automatic-speech-recognition. Supported models are [(<class 'transformers.models.speech_encoder_decoder.configuration_speech_encoder_decoder.SpeechEncoderDecoderConfig'>, <class 'transformers.models.speech_encoder_decoder.modeling_speech_encoder_decoder.SpeechEncoderDecoderModel'>), (<class 'transformers.models.speech_to_text.configuration_speech_to_text.Speech2TextConfig'>, <class 'transformers.models.speech_to_text.modeling_speech_to_text.Speech2TextForConditionalGeneration'>), (<class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'>, <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'>), (<class 'transformers.models.hubert.configuration_hubert.HubertConfig'>, <class 'transformers.models.hubert.modeling_hubert.HubertForCTC'>)].
Reusing dataset superb (/home/nicolas/.cache/huggingface/datasets/superb/asr/1.9.0/b185de2966d0d6025cd53df6b41f89e2d100ee17139797b793f62b2c7c7612bd)
New style of pipeline
100%|███████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:08<00:00, 12.22it/s]
real 0m21.202s
user 0m23.972s
sys 0m17.884s
```
`inference_mode`:
```
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
The model 'Wav2Vec2ForCTC' is not supported for automatic-speech-recognition. Supported models are [(<class 'transformers.models.speech_encoder_decoder.configuration_speech_encoder_decoder.SpeechEncoderDecoderConfig'>, <class 'transformers.models.speech_encoder_decoder.modeling_speech_encoder_decoder.SpeechEncoderDecoderModel'>), (<class 'transformers.models.speech_to_text.configuration_speech_to_text.Speech2TextConfig'>, <class 'transformers.models.speech_to_text.modeling_speech_to_text.Speech2TextForConditionalGeneration'>), (<class 'transformers.models.wav2vec2.configuration_wav2vec2.Wav2Vec2Config'>, <class 'transformers.models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForCTC'>), (<class 'transformers.models.hubert.configuration_hubert.HubertConfig'>, <class 'transformers.models.hubert.modeling_hubert.HubertForCTC'>)].
Reusing dataset superb (/home/nicolas/.cache/huggingface/datasets/superb/asr/1.9.0/b185de2966d0d6025cd53df6b41f89e2d100ee17139797b793f62b2c7c7612bd)
New style of pipeline
100%|███████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:08<00:00, 12.30it/s]
real 0m21.570s
user 0m23.568s
sys 0m17.807s
```
I ran the benchmark multiple times and basically the numbers are exactly the same for each mode.<|||||>Hi @Narsil,
The above screenshot I shared is using TITAN RTX. I have also tested the same on A100. Similar performance as TITAN RTX
But when I use the below benchmark, I don't really see any performance improvement. (Only tested on A100)
getting the same performance as you shared.
> Here is a small benchmark I ran on a GTX970:
>
> ```python
> from transformers import pipeline
> from transformers.pipelines.base import KeyDataset
> import datasets
> import tqdm
>
> pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
> dataset = datasets.load_dataset("superb", name="asr", split="test")
>
> print("New style of pipeline")
> for i, out in tqdm.tqdm(enumerate(pipe(KeyDataset(dataset, "file"))), total=100):
> # print(out)
> if i >= 100:
> break
> ```
- `transformers` version: 4.11.0.dev0
- Platform: Linux-5.4.0-84-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0+cu111 (True)
<|||||>https://twitter.com/PyTorch/status/1437838236529868803/photo/

2<|||||>Thanks you @kamalkraj.
I checked the numbers and had similar results for `distilbert-base-uncased-finetuned-sst-2-english` as you.
the odd part is that I also had similar total runtime numbers on a much smaller GPU.
I think this model is super small and so the pipeline does showcase a bit more the difference in `inference_mode`.
But only because we're not feeding it enough, so it's starving (and probably inference_mode sync is faster ?)
As the tweet mentions:
> Note that the highest speedups are for lightweight operations that are bottlenecked by the tracking overhead.
However, I am currently thinking that maybe we're just underutilizing the GPU for this, and re-enabling batching could speed up this use case, which would result in similar times for both `no_grad` and `inference_mode` again (both much faster than the current implementation, but with batching, which has its own set of caveats at inference time).
So my opinion, is that we should:
1- Enable the switch, as it is indeed faster in some cases by more than what was anticipated (but the current PR needs to be modified to support older `torch` versions)
2- Add again `batch_size` support for pipeline but with a sane default of `1`, clear explanations of the caveats (alignment issues can really blow up the results and worsen inference times) in the docs and enable users to speed up on small-models with pipelines where data is sufficiently aligned. This was already planned, but having a clear use case will definitely help !
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale<|||||>Thank you for your work @kamalkraj! |
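A sketch of the kind of backward-compatible switch discussed in this thread (not the implementation that was eventually merged): fall back to `no_grad` on torch versions that predate `inference_mode`:
```python
import torch

inference_context = (
    torch.inference_mode if hasattr(torch, "inference_mode") else torch.no_grad
)

model = torch.nn.Linear(4, 2)  # stand-in for a pipeline's model
with inference_context():
    output = model(torch.randn(1, 4))
```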
transformers | 13,666 | closed | non-negligible difference between GPT2 and TFGPT2 | ## Environment info
- `transformers` version: 4.11.0.dev0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
## Information
(I provide a potential fix at the end)
Model I am using: TFGPT2Model
In the current TFGPT2 model file, the activation used in `class TFMLP` is fixed to be `gelu`, as shown in the following line
https://github.com/huggingface/transformers/blob/ea92136597c49a20c5e2c31ef20ccec1693a8858/src/transformers/models/gpt2/modeling_tf_gpt2.py#L177
However, in PyTorch version GPT2, it is set to `ACT2FN[config.activation_function]`, see
https://github.com/huggingface/transformers/blob/ea92136597c49a20c5e2c31ef20ccec1693a8858/src/transformers/models/gpt2/modeling_gpt2.py#L278
I checked the git history and found that in earlier versions of `modeling_tf_gpt2.py`, the function `gelu` is defined within the model file itself, but it is in fact the current version's `gelu_new`.
[Early version gelu]
https://github.com/huggingface/transformers/blob/1487b840d3457bf8b0f1fcacd02d3a2fae407fe5/src/transformers/modeling_tf_gpt2.py#L46
[Early version gelu being used]
https://github.com/huggingface/transformers/blob/1487b840d3457bf8b0f1fcacd02d3a2fae407fe5/src/transformers/modeling_tf_gpt2.py#L164
[Current gelu_new]
https://github.com/huggingface/transformers/blob/ea92136597c49a20c5e2c31ef20ccec1693a8858/src/transformers/activations_tf.py#L34
This causes a larger difference between `TFGPT2` and `GPT2` (I observed this while working on #13222).
## To reproduce
```
import numpy as np
import torch
import tensorflow as tf
from transformers import GPT2Model, TFGPT2Model
pt_gpt2 = GPT2Model.from_pretrained('gpt2').to('cpu')
tf_gpt2 = TFGPT2Model.from_pretrained('gpt2')
input_ids = [list(range(13))]
pt_input_ids = torch.tensor(input_ids, dtype=torch.int32).to('cpu')
tf_input_ids = tf.constant(input_ids, dtype=tf.int32)
pt_out = pt_gpt2(pt_input_ids, output_hidden_states=True)
tf_out = tf_gpt2(tf_input_ids, output_hidden_states=True)
for idx, (pt_h, tf_h) in enumerate(zip(pt_out.hidden_states, tf_out.hidden_states)):
layer_name = 'embedding layer' if idx == 0 else f'hidden layer {idx}'
pt_h = pt_h.detach().numpy()
tf_h = tf_h.numpy()
print(f"difference within {layer_name}: {np.max(np.abs(pt_h - tf_h))}")
```
gives the outputs
```
difference within embedding layer: 0.0
difference within hidden layer 1: 0.01474761962890625
difference within hidden layer 2: 0.0545654296875
difference within hidden layer 3: 0.09014892578125
difference within hidden layer 4: 0.109375
difference within hidden layer 5: 0.09521484375
difference within hidden layer 6: 0.093017578125
difference within hidden layer 7: 0.0931396484375
difference within hidden layer 8: 0.09295654296875
difference within hidden layer 9: 0.09246826171875
difference within hidden layer 10: 0.09307861328125
difference within hidden layer 11: 0.09442138671875
difference within hidden layer 12: 0.10050582885742188
```
## Expected behavior
The difference should be even smaller (< 1e-3).
## Potential fix
Change
https://github.com/huggingface/transformers/blob/ea92136597c49a20c5e2c31ef20ccec1693a8858/src/transformers/models/gpt2/modeling_tf_gpt2.py#L177
to `self.act = get_tf_activation(config.activation_function)`.
Rerun the above debugging script gives:
```
difference within embedding layer: 0.0
difference within hidden layer 1: 3.0517578125e-05
difference within hidden layer 2: 6.103515625e-05
difference within hidden layer 3: 0.000732421875
difference within hidden layer 4: 0.00048828125
difference within hidden layer 5: 0.000244140625
difference within hidden layer 6: 0.000244140625
difference within hidden layer 7: 0.0001220703125
difference within hidden layer 8: 0.0001220703125
difference within hidden layer 9: 6.103515625e-05
difference within hidden layer 10: 6.103515625e-05
difference within hidden layer 11: 7.62939453125e-05
difference within hidden layer 12: 7.62939453125e-05
```
But this probably will break users' fine-tuned models?
**[Extra question]**: Currently, is there some enforced check for each model between its different versions' (PyTorch/TF/Flax) results? | 09-21-2021 05:31:28 | 09-21-2021 05:31:28 | |
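Regarding the extra question, an illustrative helper (the tolerance is an assumption, not a library constant) for comparing hidden states across frameworks:
```python
import numpy as np

def assert_pt_tf_close(pt_tensor, tf_tensor, tol=1e-3):
    """Compare a PyTorch tensor and a TensorFlow tensor elementwise."""
    diff = np.max(np.abs(pt_tensor.detach().numpy() - tf_tensor.numpy()))
    assert diff < tol, f"PT/TF outputs differ by {diff}"
```
With the proposed fix applied, every pair of hidden states in the reproduction script above would pass this check at `tol=1e-3`.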
transformers | 13,665 | closed | [SinusoidalPositionalEmbedding] incorrect dtype when resizing in `forward` | This PR fixes a potential performance issue in general and a failure under Deepspeed when the following models are used under mixed precision with positional embedding resizing at `forward` time:
- speech_to_text
- m2m_100
- fsmt
Currently when `SinusoidalPositionalEmbedding.forward` is called if it resizes the embeddings it ignores the original correct dtype and forces the embeddings into `fp32`, so the inputs are in `fp32` now.
I detected the issue with deepspeed, which doesn't use `amp` but forces the model into `fp16` and then of course if the input is in the wrong dtype we get:
```
deepspeed examples/pytorch/translation/run_translation.py --train_file tests/fixtures/tests_samples/wmt_en_ro/train.json --source_lang en --target_lang ro --model_name_or_path hf-internal-testing/tiny-random-m2m_100 --do_train --max_train_samples 4 --per_device_train_batch_size 2 --num_train_epochs 1 --fp16 --report_to none --overwrite_output_dir --deepspeed tests/deepspeed/ds_config_zero2.json --output_dir /tmp/tmpi4k4wz8s --save_steps 1
[...]
File "/mnt/nvme1/code/huggingface/transformers-ds-model-zoo-2/src/transformers/models/m2m_100/modeling_m2m_100.py", line 393, in forward
hidden_states = self.final_layer_norm(hidden_states)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 173, in forward
return F.layer_norm(
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/torch/nn/functional.py", line 2346, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type Float but found Half
```
So `hidden_states` ends up being `fp32` instead of `fp16` because the pos_emb is `fp32`.
I checked all models matching `SinusoidalPositionalEmbedding` and all the others that aren't modified by this PR don't do dynamic resizing at run time.
I haven't checked non-`SinusoidalPositionalEmbedding` - perhaps those have an issue too.
The test will be in https://github.com/huggingface/transformers/pull/12695 as soon as this PR gets merged.
@patil-suraj, @LysandreJik, @sgugger
| 09-21-2021 03:10:02 | 09-21-2021 03:10:02 | |
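An illustrative repro of the dtype mismatch described in this PR (not the actual patch): a positional-embedding table rebuilt on the fly defaults to `float32`, so it has to be cast back to the dtype of the hidden states:
```python
import torch

hidden_states = torch.zeros(2, 5, 16, dtype=torch.float16)  # fp16 model, as under DeepSpeed
resized_table = torch.randn(10, 16)                         # freshly built table is float32
positions = resized_table[:5].to(hidden_states.dtype)       # cast keeps the whole pass in fp16
hidden_states = hidden_states + positions                   # no Float/Half mismatch downstream
```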
transformers | 13,664 | closed | T5ForConditionalGeneration.from_pretrained load pytorch *.pt checkpoint fails | I am using T5ForConditionalGeneration and want to load a custom *.pt checkpoint:
```python
model = T5ForConditionalGeneration.from_pretrained(my_custom_checkpoint)
model = model.to(device)
```
However, I am getting the error below:
File "/home/usr/lib/python3.7/site-packages/transformers/modeling_utils.py", line 962, in from_pretrained
**kwargs,
File "/home/usr/lib/python3.7/site-packages/transformers/configuration_utils.py", line 372, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/usr/anaconda2/envs/maml_adaptation/lib/python3.7/site-packages/transformers/configuration_utils.py", line 423, in get_config_dict
config_dict = cls._dict_from_json_file(resolved_config_file)
File "/home/usr/anaconda2/envs/maml_adaptation/lib/python3.7/site-packages/transformers/configuration_utils.py", line 506, in _dict_from_json_file
text = reader.read()
File "/home/usr/anaconda2/envs/maml_adaptation/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
What is causing this error? I can load the *.pt checkpoint fine using torch.load, but how do I do it using T5ForConditionalGeneration? Thank you! | 09-21-2021 01:12:33 | 09-21-2021 01:12:33 | The `from_pretrained` method expects a `pytorch_model.bin` file in the checkpoint folder; if you have saved your checkpoint under a different name, then you could initialize the model with the config and load the weights from the `state_dict`:
```python
model = T5ForConditionalGeneration.from_config("....")
model.load_state_dict(torch_state_dict)
# save the model with save_pretrained, so next time it can be loaded using from_pretrained
model.save_pretrained("....")
```
Also looking at the stack trace, there seems to be some issue with your config file, could you maybe check if it contains valid JSON ?
<|||||>Thank you for your response. How do I convert from a *.pt checkpoint to a *.bin checkpoint in Pytorch? Does the transformers library provide any such utility function? Thank you!<|||||>@Crista23, Simply rename your `.pt` checkpoint into `pytorch_model.bin` - it's a different name for the same file.<|||||>@LysandreJik Thank you for the answer! I have renamed the checkpoint I am trying to load from *.pt to *.bin, but I get the same UnicodeDecodeError.<|||||>`UnicodeDecodeError` error seems related to `config.json`, could you verify that it contains valid JSON, try to just load the json or maybe post the file, so we could take a look.<|||||>Hi @patil-suraj , thanks for your response. Where is this config.json supposed to be found? I keep on looking and cannot find it among my files.<|||||>It should be in the directory where you saved your checkpoint. the `save_pretrained` saves that file in the given directory.<|||||>@patil-suraj thank you for the clarification! Unfortunately there is no config.json file in the saved checkpoints directory. What I am doing is take t5-small, adapt it for my own purposes using the fairseq library, and then try to load the latest fairseq checkpoint into T5ForConditionalGeneration (which gives me the error) - the format in which fairseq saves checkpoints is a directory containing *.pt checkpoints only. <|||||>I see in that case, see if you can save the model using `.save_pretrained` as well, which will let you use `from_pretrained`.
If it's a t5-small model then you could load the config and pass it to `from_pretrained`, for example
```python
config = T5Config.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("path_to_model_dir", config=config)
```
this assumes you have at least `pytorch_model.bin` in the model dir. If not then rename the `.pt` file to `.bin` file as Lysandre suggested.<|||||>Thank you so much @patil-suraj, that fixed it! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,663 | closed | Image classification script overrides preprocessing configuration with script defaults | As observed by @stas00 when working on the DeepSpeed model zoo, the image classification script overrides the preprocessor configuration values with the script defaults.
Running the following command:
```
python examples/pytorch/image-classification/run_image_classification.py
--output_dir output_dir
--model_name_or_path hf-internal-testing/tiny-random-vit
--dataset_name hf-internal-testing/cats_vs_dogs_sample
--do_train
--do_eval
--learning_rate 1e-4
--per_device_train_batch_size 2
--per_device_eval_batch_size 1
--remove_unused_columns False
--overwrite_output_dir True
--dataloader_num_workers 16
--metric_for_best_model accuracy
--max_steps 10
--train_val_split 0.1
--seed 42
```
Will result in the following error:
```
Traceback (most recent call last):
File "examples/pytorch/image-classification/run_image_classification.py", line 360, in <module>
main()
File "examples/pytorch/image-classification/run_image_classification.py", line 334, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/trainer.py", line 1302, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/trainer.py", line 1817, in training_step
loss = self.compute_loss(model, inputs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/trainer.py", line 1849, in compute_loss
outputs = model(**inputs)
File "/home/lysandre/Workspaces/Python/transformers/.env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/models/vit/modeling_vit.py", line 642, in forward
outputs = self.vit(
File "/home/lysandre/Workspaces/Python/transformers/.env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/models/vit/modeling_vit.py", line 543, in forward
embedding_output = self.embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
File "/home/lysandre/Workspaces/Python/transformers/.env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/models/vit/modeling_vit.py", line 112, in forward
embeddings = self.patch_embeddings(pixel_values, interpolate_pos_encoding=interpolate_pos_encoding)
File "/home/lysandre/Workspaces/Python/transformers/.env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/models/vit/modeling_vit.py", line 152, in forward
raise ValueError(
ValueError: Input image size (224*224) doesn't match model (30*30).
```
Updating the script command to include `--image_size=30` works - but the configuration of that model is already set to 30: https://huggingface.co/hf-internal-testing/tiny-random-vit/blob/main/preprocessor_config.json#L16
cc @NielsRogge @nateraw | 09-21-2021 00:38:44 | 09-21-2021 00:38:44 | |
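A sketch of reading the resolution from the checkpoint's own preprocessing configuration instead of a script default; the expected value for this checkpoint comes from the `preprocessor_config.json` linked above:
```python
from transformers import ViTFeatureExtractor

feature_extractor = ViTFeatureExtractor.from_pretrained("hf-internal-testing/tiny-random-vit")
print(feature_extractor.size)  # 30 -> the resolution the training transforms should use
```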
transformers | 13,662 | closed | Add ESM to huggingface | # What does this PR do?
Adding ESM-1b to huggingface following the steps in https://huggingface.co/transformers/add_new_model.html
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-21-2021 00:15:52 | 09-21-2021 00:15:52 | @sgugger , thanks for the feedback!
There are two common tests that I'm failing; do you have any insight into what the proper fix would be?
```
❯ pytest tests/test_modeling_esm.py --disable-warnings
==================================================== test session starts ====================================================
platform linux -- Python 3.7.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /private/home/jasonliu/work-huggingface/transformers-dev, configfile: setup.cfg
plugins: dash-1.21.0, forked-1.3.0, xdist-2.3.0, timeout-1.4.2, hydra-core-1.1.0
collected 67 items
tests/test_modeling_esm.py .....................................s..............FF.....sss..ss. [100%]
========================================================= FAILURES ==========================================================
______________________________________ ESMModelTest.test_save_load_fast_init_from_base ______________________________________
self = <tests.test_modeling_esm.ESMModelTest testMethod=test_save_load_fast_init_from_base>
def test_save_load_fast_init_from_base(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
> base_class = MODEL_MAPPING[config.__class__]
tests/test_modeling_common.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = _LazyAutoMapping(), key = <class 'transformers.models.esm.configuration_esm.ESMConfig'>
def __getitem__(self, key):
> model_type = self._reverse_config_mapping[key.__name__]
E KeyError: 'ESMConfig'
src/transformers/models/auto/auto_factory.py:513: KeyError
_______________________________________ ESMModelTest.test_save_load_fast_init_to_base _______________________________________
self = <tests.test_modeling_esm.ESMModelTest testMethod=test_save_load_fast_init_to_base>
def test_save_load_fast_init_to_base(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
> base_class = MODEL_MAPPING[config.__class__]
tests/test_modeling_common.py:253:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = _LazyAutoMapping(), key = <class 'transformers.models.esm.configuration_esm.ESMConfig'>
def __getitem__(self, key):
> model_type = self._reverse_config_mapping[key.__name__]
E KeyError: 'ESMConfig'
src/transformers/models/auto/auto_factory.py:513: KeyError
================================================== short test summary info ==================================================
FAILED tests/test_modeling_esm.py::ESMModelTest::test_save_load_fast_init_from_base - KeyError: 'ESMConfig'
FAILED tests/test_modeling_esm.py::ESMModelTest::test_save_load_fast_init_to_base - KeyError: 'ESMConfig'
============================== 2 failed, 59 passed, 6 skipped, 30 warnings in 99.74s (0:01:39) ==============================
```<|||||>It doesn't look like you added your model in the configuration_auto mappings, just the modeling_auto mappings. That's why you get this error.<|||||>Thanks! I think this is ready for review again (rebased to upstream)<|||||>Hey @liujas000,
Can we help you in any way to get this PR merged? :-)<|||||>@patrickvonplaten sorry for the delay; I will land this week!<|||||>Test failure seems unrelated<|||||>Thanks a lot for making this PR more or less mergeable @liujas000 . I think there are just some final comments from @sgugger and @patrickvonplaten to be taken care of and the PR is good to go :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'd love to use this model, what still needs to be done to get this merged?<|||||>cc @liujas000 @Rocketknight1 <|||||>@gianhiltbrunner We're still waiting on an internal review from Facebook from the contributors, I believe! I'll let you know if there's any update.<|||||>I would also be very happy if this gets merged! Any progress?<|||||>Any updates here? |
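For readers who hit the same `KeyError: 'ESMConfig'`: the fix pointed out above amounts to registering the new config in the auto mappings. The sketch below only illustrates the kind of entries meant — the exact mapping variables should be checked against `src/transformers/models/auto/configuration_auto.py` for the version in use, so treat the names as assumptions:
```python
from collections import OrderedDict

# Illustrative entries only; the real mappings already contain every other model type.
CONFIG_MAPPING_NAMES = OrderedDict(
    [
        # ... existing ("model_type", "ConfigClass") pairs ...
        ("esm", "ESMConfig"),
    ]
)

MODEL_NAMES_MAPPING = OrderedDict(
    [
        # ... existing ("model_type", "Pretty name") pairs ...
        ("esm", "ESM"),
    ]
)
```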
transformers | 13,661 | closed | Finetuning BART on a multi-input sequence to sequence task | I finetuned bart-base on a sequence to sequence task and I have the following questions:
a) Currently I structured the input and output for the bart model in "t5-style" by adding prefixes in front of each piece of input. For bart how should I give multiple inputs (or train it to return multiple outputs) to the model (is there special token to separate inputs, should I continue the t5-style prefixes, etc.)? Also, how would I do this for gpt-2/gpt-neo?
b) When finetuned with prefixes, the target data is formatted with "output: ......", however, the finetuned-bart returns "outputoutput: ......". Why is this repetition occurring? Also, does the Bart tokenizer automatically add the eos token?
c) Also, does the trainer API automatically handle ```adjust_logits_during_generation``` and ```decoder_start_token_id``` as discussed in this [post](https://discuss.huggingface.co/t/bart-base-rouge-scores/683)?
Could @patil-suraj or @patrickvonplaten help with this? This is my first project training an nlp model, and I would really appreciate any information you can offer regarding my questions. | 09-21-2021 00:15:27 | 09-21-2021 00:15:27 | Hi there! It would be better if you post this on the [forum](https://discuss.huggingface.co/) instead since this is a much more general question and not an issue. You can tag me on the forum using `@valhalla` :)
Use issues to report bugs or for feature requests. Thanks!
<|||||>Thanks for your reply! I will close this issue and repost it on the forum.
EDIT:
@patil-suraj I have posted this on the huggingface forum [here](https://discuss.huggingface.co/t/finetuning-bart-on-a-multi-input-sequence-to-sequence-task/10201?u=nr1). Can you please take a look at it? Thank you! |
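For illustration only, one way the multi-input formatting asked about in a) is often handled in practice is to pack the fields into a single source string before tokenization. The prefixes and field names below are placeholders, not something BART was pretrained with, and this is a sketch rather than an official recommendation from this thread:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

question = "What colour is the sky?"
context = "On a clear day the sky appears blue."

# Pack both inputs into one source sequence; BART's tokenizer adds <s> ... </s> itself.
source = f"question: {question} context: {context}"
target = "output: blue"

model_inputs = tokenizer(source, truncation=True, return_tensors="pt")
model_inputs["labels"] = tokenizer(target, truncation=True, return_tensors="pt").input_ids
```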
transformers | 13,660 | closed | Fine-Tuning Wav2Vec2 with PyTorch DDP | ## Environment info
- `transformers` version: 4.11.0.dev0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.20
- JaxLib version: 0.1.71
- Using GPU in script?: yes (8 or 1)
- Using distributed or parallel set-up in script?: yes
### Who can help
@sgugger @stas00 @anton-l
## Problem:
I'm running some experiments on fine-tuning a [pretrained XLSR-Wav2Vec2 model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [Turkish dataset of Common Voice](https://huggingface.co/datasets/common_voice).
The fine-tuning script is an updated version of the existing [`run_common_voice.py`](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py) script that can be seen in this PR: https://github.com/huggingface/transformers/blob/97936d3aacc04f6253ff178415b8a57768fc8ce6/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
It leverages the `Trainer` for CTC training of Wav2Vec2.
I'm running the training script for both distributed training (as follows):
```bash
python -m torch.distributed.launch \
--nproc_per_node 8 run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="tr" \
--output_dir="./wav2vec2-large-xlsr-turkish-demo-dist" \
--overwrite_output_dir \
--num_train_epochs="30" \
--per_device_train_batch_size="4" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--save_steps="400" \
--eval_steps="100" \
--logging_steps="1" \
--save_total_limit="3" \
--fp16 \
--freeze_feature_extractor \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--do_train --do_eval
```
and single-GPU training:
```bash
CUDA_VISIBLE_DEVICES="0" python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="tr" \
--output_dir="./wav2vec2-large-xlsr-turkish-demo" \
--overwrite_output_dir \
--num_train_epochs="30" \
--per_device_train_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--save_steps="400" \
--eval_steps="100" \
--logging_steps="1" \
--save_total_limit="3" \
--freeze_feature_extractor \
--gradient_checkpointing \
--fp16 \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--do_train --do_eval
```
As you can see the only difference between the two scripts is that distributed training (DDP) does **not** use gradient checkpointing and has a "per-GPU" batch size of 4 resulting in an effective batch size of 32, whereas the single GPU training has a "per-GPU" batch size of 16 and uses gradient accumulation of 2 (and gradient checkpointing). So the training scripts are more or less identical in terms of learning rate decay, optimizer, effective batch size, ...
Now what is quite surprising to me is that single-GPU training works very well. Here is a report with the most important metrics of the run: https://wandb.ai/patrickvonplaten/huggingface/reports/Wav2Vec2-1-GPU-V100--VmlldzoxMDQwNzI0?accessToken=5xhtxrgy59l7dl2sds08bfk8xq1l30uf1ae0i5lio2r7dpx43vzxufsjmxkkbkig
while distributed training doesn't work at all - here a report of the run: https://wandb.ai/patrickvonplaten/huggingface/reports/Wav2Vec2-DistributedDataParallel-DDP-8-GPU-V100--VmlldzoxMDQwMDU3?accessToken=rsxt5n2s31bfg3kmbtvb982zcqlg8hby7mrjniftnx4n87kephus81zeaj92xfbu
While Wav2Vec2's CTC loss isn't super stable the single-GPU script is quite robust to changes in the batch size, learning rate, random seed (I've tried a bunch of slight changes and the script always manages to push the training/eval loss below 1 and yield a reasonable word error rate in the beginning. On the other hand the distributed script doesn't seem to work at all (tried out a variety of dropout rates, learning rates, batch sizes, layerdrop, ....) -> none of them converge.
That's quite surprising to me as the scripts should in theory be more or less the same.
Some possible reasons I thought could be:
- In distributed training the gradients are computed for each process/gpu separately and then averaged (reduced). However this is slightly different to single GPU training because of the following:
For a single GPU, each input sample in the batch can have a different number of losses, e.g. if the labels are `[["Hello my name is"], ["hey <pad> <pad> <pad>"]]` then on a single GPU the loss is correctly averaged (5 words = 5 losses = sum(losses) / 5). However in DDP the losses are averaged locally and then the gradients are averaged globally, which would mean (1st GPU: 4 words = 4 losses & 2nd GPU: 1 word = 1 loss) => (sum(losses_gpu1) / 4 + sum(losses_gpu2) / 1) / 2, which is not the same as on a single GPU. (A small numeric sketch of this mismatch is given right after this list.) However, I've also played around with `group_by_length` - in this case the inputs per batch should be roughly similar and there shouldn't be a problem, but that didn't help either. I've also summed the losses and scaled the gradients correctly - see: https://github.com/huggingface/transformers/blob/97936d3aacc04f6253ff178415b8a57768fc8ce6/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L286 which also didn't help. Related issue/discussion on PyTorch: https://discuss.pytorch.org/t/average-loss-in-dp-and-ddp/93306
- Could gradient-checkpointing be the reason? I really don't see how this could make a difference though...
- Could gradient accumulation be the reason? I also tried out using gradient accumulation of 2 in distributed training and batch size 2 per GPU for DDP which didn't help either.
- Another specialty about Wav2Vec2 is that the first 7 Conv layers are frozen here: https://github.com/huggingface/transformers/blob/ea92136597c49a20c5e2c31ef20ccec1693a8858/examples/research_projects/wav2vec2/run_common_voice.py#L454 which calls this function: https://github.com/huggingface/transformers/blob/ea92136597c49a20c5e2c31ef20ccec1693a8858/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1416. Could it be that in DDP just one of the eight models does that, but not the rest? (Actually I could easily check this...)
- Could it be that fp16 amp scaling somehow messes differently in DDP than in single GPU training?
- ... other possible reasons?
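A small numeric sketch of the mismatch described in the first bullet — averaging per-GPU means is not a global mean, while summing the losses and rescaling by `world_size / num_losses` (as in the `CTCTrainer` snippet discussed further down) recovers it. This is plain arithmetic, not a claim about the actual root cause:
```python
# Example from the first bullet: 4 label tokens on GPU 1, 1 label token on GPU 2.
losses_gpu1 = [2.0, 4.0, 6.0, 8.0]
losses_gpu2 = [10.0]

# Single-GPU behaviour with ctc_loss_reduction="mean": one global mean over 5 losses.
global_mean = (sum(losses_gpu1) + sum(losses_gpu2)) / 5                     # 6.0

# What DDP gradient averaging corresponds to when every GPU takes a local mean.
mean_of_means = (sum(losses_gpu1) / 4 + sum(losses_gpu2) / 1) / 2           # 7.5

# Sum reduction plus the world_size / num_losses rescaling recovers the global mean.
world_size, num_losses = 2, 5
rescaled = (sum(losses_gpu1) + sum(losses_gpu2)) / world_size * (world_size / num_losses)  # 6.0

print(global_mean, mean_of_means, rescaled)
```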
@stas00 @sgugger - Have you previously heard about this kind of problem before (that single GPU works but DDP doesn't?). Think it's very hard to debug or dive into this problem, but I thought maybe you have some useful next step debugging strategies or tips!
@anton-l - have you used DDP training during the Wav2Vec2 sprint? I've pretty much only used single GPU training which works well, but not DDP... have you had similar problems before?
I've also tried running DDP on other/bigger datasets without success, so I'm a bit confused why it doesn't work here. Think the CTCLoss: https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html is quite special and definitely more prone to instabilities than simple CE loss, but it's still very surprising to me that I can get single-GPU training working rather easily, but DDP not at all.
Some things I was planning on trying out next:
- Running DDP with just 2 GPUs to see whether the loss becomes more unstable as the number of GPUs grows...
Do you guys have maybe any other good debugging strategies ?
| 09-20-2021 23:39:50 | 09-20-2021 23:39:50 | > * Could gradient-checkpointing be the reason? I really don't see how this could make a difference though...
I'm dealing with a somewhat similar report here: https://github.com/huggingface/transformers/issues/13653 - and I requested the user to repeat the experiment with gradient-checkpointing off. I don't think this feature is used a lot, perhaps we have bugs in there? I don't remember using it ever myself in `transformers` - we have been using it in Megatron-Deepspeed all the time. so it's a bit of an unknown to me on the HF side.
> Could gradient accumulation be the reason?
gradient accumulation should be a pretty solid feature.
- I'd inspect whether `LayerDrop` and/or `weight_norm` could be impacting this under DDP.
Any difference if you use a similar Deepspeed setup? If you remember I had to tweak LayerDrop to synchronize the gpus there. Perhaps the data goes out of sync under DDP due to this feature? Do we know that it even works under DDP?
> Do you guys have maybe any other good debugging strategies ?
1. Making a tiny 2-gpu setup where you can clearly and quickly observe the different rate at loss diminishing (so you have 1 and 2-gpu setup) - so smaller model so it's a fast cycle
2. Removing all the configurable suspects, so you start with a baseline of 1 gpu and 2-gpu DDP giving very similar converging rate and adding those features back one by one or in groups until you see a clear difference emerging
<|||||>Thanks for your reply @stas00! Regarding `gradient_checkpointing` - the problem is that it **does work** with gradient checkpointing on single GPU, but **does not work** with or without gradient checkpointing on multi-GPU.
`Layerdrop` is disabled as well. `weight_norm` could definitely be a reason! I'll verify this!<|||||>Just ran both experiments also in full fp32 precision and it's the same problem:
Single GPU: https://wandb.ai/patrickvonplaten/huggingface/reports/Wav2Vec2-1-GPU-V100-FP32--VmlldzoxMDQxODMw
8 GPUs: https://wandb.ai/patrickvonplaten/huggingface/reports/Wav2Vec2-DistributedDataParallel-DDP-8-GPU-V100-FP32--VmlldzoxMDQxODI2
=> So it also doesn't really seem to be related to `amp` or gradient scaling<|||||>DDP with just 2 GPUs also doesn't work. Will dive a bit deeper into it then - think it's quite important to get DDP working for Wav2Vec2<|||||>I have some question around that piece of code:
```
# divide gradients by number of labels
if self.args.world_size > 1:
num_losses = (inputs["labels"] >= 0).sum()
dist.all_reduce(num_losses)
constant = self.args.world_size / num_losses
self.multiply_grads(model.module.parameters(), constant)
loss *= constant
```
It's only executed in a distributed setup but you don't multiply all gradients by `1 / num_losses` when there is only one process. I have no idea of the size of your labels tensors and the number of positive elements it has, but it seems like it could be big (batch_size * sequence_length - the number of padded elements at first glance?)<|||||>Yeah, in the training runs above, I actually disabled this (I just use `Trainer` instead of `CTCTrainer`).
I copied that code more or less from fairseq's Trainer. The idea here is to only use `ctc_loss_reduction="mean"` in the single GPU setup, but then use `ctc_loss_reduction="sum"` in the DDP setup and sum all losses and later scale the gradients correctly.
With this code, all local copies of the model will get a gradient d(sum(loss_1)) /d(params) -> so that the average reduced gradient is d(sum(loss_1) + sum(loss_2) + ....)/8 * d(params)) with 8 being the world_size. Then I multiply by 8 and divide by all losses (batch_size * seq_length of gpu1 + batch_size * seq_length of gpu_2 + ....) which should then make the gradients identical to `ctc_loss_reduction="mean"` in single GPU setup.
I tried this out a couple of times, but it didn't solve the problem and also given that the sequence lengths in common voice are quite similar just using `ctc_loss_reduction="mean"` should work fine as well (see the first possible reason for a bug above).
=> So in short I currently don't use that code as I just use `Trainer` instead of `CTCTrainer` (sorry should probably have commented out `CTCTrainer` completely - see: https://github.com/huggingface/transformers/blob/97936d3aacc04f6253ff178415b8a57768fc8ce6/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L511<|||||>Issues is solved. Will post a more detailed reason as an explanation<|||||>This was the problem essentially: https://github.com/huggingface/transformers/pull/13620#discussion_r714154511
Every process created a different vocabulary<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,659 | closed | Add push_to_hub to no_trainer examples | # What does this PR do?
This PR adds the `--push_to_hub` flag to the `no_trainer` examples in the Transformers repository. For now, it only deals with `run_glue_no_trainer.py` but once everyone has commented, the final version will be duplicated on the other examples. | 09-20-2021 22:08:28 | 09-20-2021 22:08:28 | this is great!<|||||>Also, since the `blocking` argument to `push_to_hub` was only introduced in `v0.0.17`, maybe we should add this in the requirements.<|||||>@patil-suraj The requirement `huggingface_hub >= 0.0.17` is in the install requirements for Transformers now.
transformers | 13,658 | closed | Syntax for from_pretrained proxies (downloading model behind corp proxy) | Facing trouble using from_pretrained
```
import torch
from transformers import DistilBertModel
from transformers import AutoModel, AutoTokenizer, BertTokenizer
torch.set_grad_enabled(False)
bert_distil = DistilBertModel.from_pretrained('distilbert-base-cased', proxies={'proxy-abc.xyz.com:123'} )
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased', proxies={'http': 'proxy-abc.xyz.com:123'})
input_pt = tokenizer.encode_plus(
'This is a sample input to demonstrate performance of distiled models especially inference time',
return_tensors="pt"
)
%time _ = bert_distil(input_pt['input_ids'])
%time _ = model_pt(input_pt['input_ids'])
```
(hf) username@st-rocket-lake-nv3090:~/dev/hf$ **set | grep http**
**http_proxy=http://proxy-abc.xyz.com:123/**
-----------------------
Error: ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. | 09-20-2021 21:50:43 | 09-20-2021 21:50:43 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Same issue here, if I use models like this
`instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl",
model_kwargs={"device": "cpu"})`
I am behind a company proxy and have set it in the environment variables, which works fine for pip. There is a dirty workaround, namely to set verify=False in the 'file_download.py'. But that can't be it, right? 😏
Is it possible to react like pip, take the environment variables for the proxy (http_proxy proxy address, https_proxy ...)? It'd be much appreciated.
Edit: I just found out about the offline option, which offers an ok-workaround: [Offline Mode](https://huggingface.co/docs/transformers/v4.28.1/en/installation#offline-mode)
In case anybody stumbles over this issue: Works for me in my run configuration of the IDE. |
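For reference, a minimal sketch of the dictionary form the `proxies` argument expects — the same shape `requests` uses. The proxy address below is just the placeholder from the report above, and both schemes are listed since the Hub is reached over HTTPS:
```python
from transformers import AutoModel, AutoTokenizer

# Placeholder proxy taken from the report above.
proxies = {
    "http": "http://proxy-abc.xyz.com:123",
    "https": "http://proxy-abc.xyz.com:123",
}

model = AutoModel.from_pretrained("distilbert-base-cased", proxies=proxies)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased", proxies=proxies)
```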
transformers | 13,657 | closed | Make gradient_checkpointing a training argument | # What does this PR do?
This PR reworks the logic behind gradient checkpointing. It is currently set as a configuration argument, which is annoying because:
- it's not easily discoverable
- when someone pushes a model trained with gradient checkpointing activated to the Hub, that model keeps this gradient checkpointing even if new users don't want to use it.
That's why this PR deprecates the `gradient_checkpointing` argument in any config and adds:
- a method `gradient_checkpointing_enable` to `PreTrainedModel` to activate gradient checkpointing
- a training argument for the users using the `Trainer` API that will call that `gradient_checkpointing` method.
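A minimal sketch of how the two new entry points are meant to be used (the model name is a placeholder):
```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# Either enable it directly on the model ...
model.gradient_checkpointing_enable()

# ... or let the Trainer do it via the new training argument.
training_args = TrainingArguments(output_dir="out", gradient_checkpointing=True)
```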
Internally, the implementation still relies on the config as it's the easiest place to set something that needs to pass several layers of a model (if we have a `BertForMaskedLM` for instance, the actual gradient checkpointing only applies to the `BertEncoder` inside the `BertModel` inside that `BertForMaskedLM`) but that argument is made private and not saved to the model Hub. | 09-20-2021 19:28:58 | 09-20-2021 19:28:58 | I took the liberty to also document this feature in https://huggingface.co/transformers/performance.html and pushed it here, so if you rename the method please adjust the doc as well. Thank you!<|||||>I'm not very happy about keeping `gradient_checkpointing` in the config internally as it adds IMO significantly more complexity to what a user has to know now about model configurations. Before this PR, every configuration parameter that one sees in `configuration_utils.py` is stored when saving the configuration file. If we introduce now private configuration parameters that are not saved when the model is saved, it forces users to learn/understand a new exception and makes the code harder to understand/read.
I'm very much in favor of removing `gradient_checkpointing` from the config, but the better option IMO is not to go over the config anymore at all but to provide `_disable_gradient_checkpointing`, `_enable_gradient_checkpointing` functions to all sub-modules. It's much more work, but IMO there are also much more upsides to having this approach. <|||||>> I'm not very happy about keeping gradient_checkpointing in the config internally as it adds IMO significantly more complexity to what a user has to know now about model configurations. Before this PR, every configuration parameter that one sees in configuration_utils.py is stored when saving the configuration file. If we introduce now private configuration parameters that are not saved when the model is saved, it forces users to learn/understand a new exception and makes the code harder to understand/read.
I am not following since this is all private. The user does not have to know anything about model configurations for this option. I'm also not sure which new exceptions you are mentioning?
> I'm very much in favor of removing `gradient_checkpointing` from the config, but the better option IMO is not to go over the config anymore at all but to provide `_disable_gradient_checkpointing`, `_enable_gradient_checkpointing` functions to all sub-modules. It's much more work, but IMO there are also much more upsides to having this approach.
Note that those submodules are often not even `PreTrainedModel`, so we will have to add those functions manually to a tons of `nn.Module`. For backward compatibility, we will also need to still have something stored in the config, since the config can't call the method `gradient_checkpointing_enable` on the model, so this effort is a bit pointless before v5 in the sense that there will be private parameters not saved anyway.
In any case, if this second approach is selected, I would still urge to merge this PR as soon as possible to avoid any merge conflict or many user diverging from the templates. We can then change the internal implementation on the models added more progressively.<|||||>I'm just a bit worried that we'll start using the "private" configuration parameters of `PreTrainedConfig` just as a way to easily pass flags to all the `nn,Modules` even though those parameters shouldn't be in the config at all. For me the configuration should really just be static configuration and not serve any other purpose than defining the model architecture.
For a user that just looks at the configuration on the hub this PR is great, but for users that actually looks into the code, adding a `NO_SAVE_CONFIG_KEYS` option to `PreTrainedConfig` adds a new layer of complexity for the reader to understand. This could be avoided IMO.
Think we should be able to add a single method to the `BertPreTrainedModel` like this:
```python
def _enable_gradient_checkpointing(self):
model = self
if hasattr(model, self.base_model_prefix):
model = getattr(model, self.base_model_prefix)
# set gradient checkpointing to True in the encoder
model.encoder.gradient_checkpointing = True
```
=> this should work just fine no?
Given that we will have to leave it in the config anyways until v5, I'm fine with leveraging the config I guess - I just don't think it's good practice to introduce "special" configuration parameters with `NO_SAVE_CONFIG_KEYS`<|||||>If we leave the config as is, as proposed by Patrick, should we perhaps discuss the ability for the user to choose what goes into the published model's config? We are sort of trying to do DWIM (do what I mean) and magically have the published model have all the right settings.
So adding to the model saving interface our default filters which for example will automatically disable `gradient_checkpointing` and then allowing users to override those if they need to? So we have the ease of use of having sensible defaults and then allow users to override any of the defaults?
In the current PR the user has no control over `NO_SAVE_CONFIG_KEYS`
And we won't need to wait till v5 to do so.<|||||>@stas00 This is out of scope of this PR (which does not contain the `NO_SAVE_CONFIG_KEYS` anymore btw, to address Patrick's comments), so maybe the discussion should be moved elsewhere? <|||||>I was just following up to Patrick's comment. I have no problem with not discussing it here. |
transformers | 13,656 | closed | [WIP] Model type tokenizer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-20-2021 16:28:16 | 09-20-2021 16:28:16 | |
transformers | 13,655 | closed | Add Speech AutoModels | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds two auto models for speech:
- one for CTC (Wav2Vec2, Hubert)
- one for Seq2Seq (SpeechEncoderDecoder, Speech2text)
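A rough usage sketch of the two new auto classes (the checkpoints below are just examples and are not part of this PR):
```python
from transformers import AutoModelForCTC, AutoModelForSpeechSeq2Seq

# CTC-style models (e.g. Wav2Vec2, Hubert) do a single forward pass.
ctc_model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Seq2seq-style models (e.g. Speech2Text, SpeechEncoderDecoder) decode with generate().
s2s_model = AutoModelForSpeechSeq2Seq.from_pretrained("facebook/s2t-small-librispeech-asr")
```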
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-20-2021 14:56:36 | 09-20-2021 14:56:36 | > yields
I'm not really in favor of aligning pipelines and `AutoModels` 1-to-1 to be honest. IMO there should be a 1-to-1 alignment between model classes `Wav2Vec2For...` and `AutoModelFor...`, but not between `AutoModelFor...` and the pipelines.
In this case here an auto model class called `AutoModelForAutomaticSpeechRecognition` would be problematic as:
1. It would include models with different API's: `Wav2Vec2ForCTC` does one forward pass and leverages just the `forward()` method whereas `SpeechEncoderDecoder` leverages the `generate()` function. I think we don't want to follow this practice anymore
2. It could lead to problems as soon as there are multiple `Wav2Vec2...` classes that can do ASR
On the other hand, I agreed with @Narsil that for the pipelines it doesn't really make sense to have `AutomaticSpeechRecognitionWithCTC/ WithSeq2Seq / ...` as the target group of the pipelines shouldn't have to know the difference between CTC / WithSeq2Seq / ...
More generally, I believe that the `AutoModel...` classes should follow our "barebone"/"no magic"/"easy-to-read code" design whereas the pipelines rather fall in the category "very user-friendly"/"complexity is absorbed at the cost of hard to read/understand code".
So I'm not really in favor of the 1-to-1 alignment I think.
=> Would maybe be a good idea to have a chat about this more generally though! <|||||>I agree with @patrickvonplaten actually.
In my mind, `AutoModelFor...` designates a kind of Head on a particular model, which includes some specific weights and API.
It should have nothing to do with `pipelines` and needs to be completely separated (at least models shouldn't care about pipelines).
On the other hand, for pipelines, relying on `AutoModelFor..` in a 1-1 fashion is tempting, but it does fall apart for `Seq2Seq` (`text-generation`, `translation`, `summarization`) (1-n) and again for `ForCTC` and `ForSpeechSeq2Seq` for instance. (n-1).
For those, it really seems that it just doesn't make any sense to keep the 1-1 relationship. But what we do want is a 1-1 relationship
between the `AutoModel` and the widget being displayed and that's possible because there is a sane default for `Seq2Seq` which is `text-generation` (we called it `text2text-generation` but really it's exactly the same API as `text-generation`).
Forcing 1-1 here would mean forcing model developers to care about pipelines and enabling all potential uses for a given head. It would also mean that users wouldn't necessarily know which `AutoModelFor..` to use as there would now be aliases. In addition, it would mean we would be forced to unnecessarily add new tasks `automatic-speech-recognition-ctc` and `automatic-speech-recognition-seq2seq`.
Both are highly undesirable IMHO.
Keeping the current architecture is alright I think.
In case of (n-1) mapping, there needs to be a default task that works (`text-generation` will always work for instance).
And in case of (1-n) mapping, adding switches within the pipeline is OK IMO. As long as there's 1 switch only, and the rest of the pipeline is common it's working fine, but we might want to consider other designs as the pipelines for different `AutoModelFor` start to differ significantly (like https://github.com/huggingface/transformers/pull/13622?notification_referrer_id=MDE4Ok5vdGlmaWNhdGlvblRocmVhZDI0MzAwMDQwODg6MjA0MzIx#pullrequestreview-758714543). This is a code architecture issue that is very doable IMO.
|
transformers | 13,654 | closed | Update modeling_tf_deberta.py | Fixed expand_dims axis
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | 09-20-2021 14:39:26 | 09-20-2021 14:39:26 | |
transformers | 13,653 | closed | DeepSpeed and HF trainer return hugely different losses and perplexity | I have been trying to pre-train GPT-2 models with HF Trainer and Deepspeed, but have noticed large differences between HF trainer's final loss and perplexity vs. that of Deepspeed trainer.
For the GPT-2 (100M) model on the Wikitext-2-raw dataset on 4 A100 80GB GPUs, with the same batch size of 32 per GPU:
HF trainer returns:
```
{'loss': 5.5565, 'learning_rate': 3.6842105263157892e-06, 'epoch': 46.32}
{'loss': 5.5723, 'learning_rate': 3.1578947368421056e-06, 'epoch': 46.84}
{'loss': 5.5063, 'learning_rate': 2.631578947368421e-06, 'epoch': 47.37}
{'loss': 5.5367, 'learning_rate': 2.105263157894737e-06, 'epoch': 47.89}
{'loss': 5.5109, 'learning_rate': 1.5789473684210528e-06, 'epoch': 48.42}
{'loss': 5.5516, 'learning_rate': 1.0526315789473685e-06, 'epoch': 48.95}
{'loss': 5.5457, 'learning_rate': 5.263157894736843e-07, 'epoch': 49.47}
{'loss': 5.5739, 'learning_rate': 0.0, 'epoch': 50.0}
100%|#########################################| 950/950 [27:00<00:00, 1.28s/it][INFO|trainer.py:1391] 2021-09-20 13:44:34,292 >>
***** train metrics *****
epoch = 50.0
train_loss = 6.118
train_runtime = 0:27:16.45
train_samples = 2318
train_samples_per_second = 70.824
train_steps_per_second = 0.581
09/20/2021 13:44:39 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2209] 2021-09-20 13:44:39,362 >> ***** Running Evaluation *****
[INFO|trainer.py:2211] 2021-09-20 13:44:39,362 >> Num examples = 240
[INFO|trainer.py:2214] 2021-09-20 13:44:39,362 >> Batch size = 8
100%|#############################################| 8/8 [00:00<00:00, 8.73it/s]
***** eval metrics *****
epoch = 50.0
eval_loss = 5.9233
eval_runtime = 0:00:00.92
eval_samples = 240
eval_samples_per_second = 260.484
eval_steps_per_second = 8.683
perplexity = 373.6332
```
While the DeepSpeed-enabled trainer returns:
```
{'loss': 23.5148, 'learning_rate': 2.631578947368421e-06, 'epoch': 47.37}
{'loss': 23.2578, 'learning_rate': 2.105263157894737e-06, 'epoch': 47.89}
{'loss': 23.187, 'learning_rate': 1.5789473684210528e-06, 'epoch': 48.42}
{'loss': 23.3219, 'learning_rate': 1.0526315789473685e-06, 'epoch': 48.95}
{'loss': 23.1348, 'learning_rate': 5.263157894736843e-07, 'epoch': 49.47}
{'loss': 23.293, 'learning_rate': 0.0, 'epoch': 50.0}
100%|#########################################| 950/950 [35:25<00:00, 1.65s/it][INFO|trainer.py:1391] 2021-09-20 07:29:19,207 >>
***** train metrics *****
epoch = 50.0
train_loss = 31.7806
train_runtime = 0:35:25.78
train_samples = 2318
train_samples_per_second = 54.521
train_steps_per_second = 0.447
09/20/2021 07:29:29 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2209] 2021-09-20 07:29:29,273 >> ***** Running Evaluation *****
[INFO|trainer.py:2211] 2021-09-20 07:29:29,273 >> Num examples = 240
[INFO|trainer.py:2214] 2021-09-20 07:29:29,273 >> Batch size = 8
100%|#############################################| 8/8 [00:01<00:00, 6.59it/s]
***** eval metrics *****
epoch = 50.0
eval_loss = 21.939
eval_runtime = 0:00:01.40
eval_samples = 240
eval_samples_per_second = 171.321
eval_steps_per_second = 5.711
perplexity = 3372914502.3404
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.0.dev0
- Platform: Linux-5.4.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
- Deepspeed version: '0.5.3'
-
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- deepspeed: @stas00
-->
## Information
GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Slightly modified clm [script](https://github.com/vinhngx/transformers/blob/deepspeed-test/examples/pytorch/language-modeling/my_run_clm.py) to allow training from scratch and gradient checkpointing
2. Native trainer launch:
```
LOG_DIR = "./models/gpt2-small-a100"
!rm -rf $LOG_DIR
cmd = """python3 -m torch.distributed.launch --nproc_per_node=4 run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--eval_steps=10 \
--logging_steps=10 \
--save_steps=200 \
--save_total_limit=1 \
--fp16=true \
--per_device_train_batch_size=32 \
--output_dir {} \
--num_train_epochs=50 \
--overwrite_output_dir
""".format(LOG_DIR)
!$cmd
```
3. DeepSpeed launcher:
```
LOG_DIR = "./models/gpt2-deepspeed-a100"
!rm -rf $LOG_DIR
cmd = """deepspeed --num_gpus=4 run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--eval_steps=10 \
--logging_steps=10 \
--save_steps=200 \
--fp16=true \
--per_device_train_batch_size=32\
--output_dir {} \
--save_total_limit=1 \
--num_train_epochs=50 \
--overwrite_output_dir=true \
--deepspeed=deepspeed-gp2-A100.config.json
""".format(LOG_DIR)
! $cmd
```
With the following deepspeed config file:
```
%%writefile deepspeed-gp2-A100.config.json
{
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 100,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Matching loss and perplexity between HF & deepspeed trainer.
<!-- A clear and concise description of what you would expect to happen. -->
| 09-20-2021 14:21:14 | 09-20-2021 14:21:14 | Very interesting report, @vinhngx. Perfect details too.
The 2 setups look very similar (identical). I don't see anything standing out.
I don't currently have an access to a similar hardware setup. I'm curious what happens if you disable gradient checkpointing? (and reduce the bs to fit the higher memory usage). Surely the diverging results if any should appear very quickly - no need to run for more than a few minutes.
Also, I realize that all work I have done was using DS optimizer and scheduler. It was only recently that the Deepspeed team said it's ok to use both as external ones. So perhaps that new ability has some issues? Does it make a large positive difference (making the results similar) if you enable the DS opt+sched by adding:
```
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
```
finally there is the mixed precision section, which could slightly impact things as well. Usually I use the following:
```
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
```
Finally if you could attach both log files (as a file attachment) - which might help uncover something I'm missing.
Thank you!<|||||>Thanks @stas00 . I carry out a quick experiment on a DGX-1V with 8xV100 32GB, disabling gradient checkpointing this time. BS is reduced to 4 for both HF trainer and DeepSpeed.
The stark difference remains. This is for the smallest GPT model after 1 epoch on Wikitext-2.
DeepSpeed trainer always starts with a huge loss (228 vs. 9.8). Full log and notebook for reproduction is [here](https://github.com/vinhngx/transformers/blob/deepspeed-test/examples/pytorch/language-modeling/GPT-small-V100-debug.ipynb)
Will test other suggestions.
HF trainer:
```
***** train metrics *****
epoch = 1.0
train_loss = 8.7321
train_runtime = 0:00:38.80
train_samples = 2318
train_samples_per_second = 59.737
train_steps_per_second = 1.881
09/21/2021 00:59:20 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2209] 2021-09-21 00:59:20,591 >> ***** Running Evaluation *****
[INFO|trainer.py:2211] 2021-09-21 00:59:20,592 >> Num examples = 240
[INFO|trainer.py:2214] 2021-09-21 00:59:20,592 >> Batch size = 8
100%|#############################################| 4/4 [00:01<00:00, 3.54it/s]
***** eval metrics *****
epoch = 1.0
eval_loss = 7.9691
eval_runtime = 0:00:01.15
eval_samples = 240
eval_samples_per_second = 208.103
eval_steps_per_second = 3.468
perplexity = 2890.3225
```
DeepSpeed:
```***** train metrics *****
epoch = 1.0
train_loss = 80.659
train_runtime = 0:01:34.65
train_samples = 2318
train_samples_per_second = 24.489
train_steps_per_second = 0.771
09/21/2021 01:02:05 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2209] 2021-09-21 01:02:05,291 >> ***** Running Evaluation *****
[INFO|trainer.py:2211] 2021-09-21 01:02:05,291 >> Num examples = 240
[INFO|trainer.py:2214] 2021-09-21 01:02:05,291 >> Batch size = 8
100%|#############################################| 4/4 [00:01<00:00, 3.51it/s]
***** eval metrics *****
epoch = 1.0
eval_loss = 45.3922
eval_runtime = 0:00:01.60
eval_samples = 240
eval_samples_per_second = 149.585
eval_steps_per_second = 2.493
perplexity = 5.171071334294747e+19
```<|||||>Adding some more experiments, which don't seem to help DeepSpeed convergence:
- No amp: https://github.com/vinhngx/transformers/blob/deepspeed-test/examples/pytorch/language-modeling/GPT-small-V100-debug-noAMP.ipynb
- Using DeepSpeed optimizer+LR scheduler: https://github.com/vinhngx/transformers/blob/deepspeed-test/examples/pytorch/language-modeling/GPT-small-V100-debug-DS-optimizer.ipynb<|||||>Added a quick test for fairscale. It behaves well.
https://github.com/vinhngx/transformers/blob/deepspeed-test/examples/pytorch/language-modeling/GPT-small-V100-debug-fairscale.ipynb<|||||>Thank you for the code and logs and the experiments I asked you to run, @vinhngx! This helps a lot to see what's being run. But I'm yet to see the culprit.
@samyam, any chance you could have a look at one of the log files linked above. @vinhngx has done a ton of different variations and the problem is the same in all of them. And that's Deepspeed's training first step's loss is an order of magnitude larger than the same training w/o Deepspeed (i.e. plain HF Trainer). Thank you!
<|||||>@stas00, @vinhngx this is very strange. The configs I looked at above have ZeRO Stage 3 enabled. Could you please try i) without any ZeRO, and ii) with ZeRO Stage 2, and see if the difference still persists? I wonder if this model is hitting a corner case in ZeRO Stage 3 where the communication hooks are not triggered properly.<|||||>@samyam good guess. With zero-2, the loss looks inline with the native HF trainer, as well as HF trainer+fairscale.
```
***** train metrics *****
epoch = 1.0
train_loss = 8.5525
train_runtime = 0:00:34.08
train_samples = 2318
train_samples_per_second = 67.998
train_steps_per_second = 2.141
09/23/2021 04:11:04 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2209] 2021-09-23 04:11:04,653 >> ***** Running Evaluation *****
[INFO|trainer.py:2211] 2021-09-23 04:11:04,653 >> Num examples = 240
[INFO|trainer.py:2214] 2021-09-23 04:11:04,653 >> Batch size = 8
100%|#############################################| 4/4 [00:00<00:00, 7.36it/s]
***** eval metrics *****
epoch = 1.0
eval_loss = 7.2266
eval_runtime = 0:00:00.56
eval_samples = 240
eval_samples_per_second = 422.895
eval_steps_per_second = 7.048
perplexity = 1375.4861
```
https://github.com/vinhngx/transformers/blob/deepspeed-test/examples/pytorch/language-modeling/GPT-small-V100-debug-DS-optimizer-zero2.ipynb<|||||>@vinhngx, probably the best way to proceed is to repost this issue at https://github.com/microsoft/DeepSpeed since as you have discovered with @samyam's help this is something to do with deepspeed core, rather than HF integration.<|||||>Alright. Submitted a bug for DeepSpeed here: https://github.com/microsoft/DeepSpeed/issues/1402
Thanks for looking into this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
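For anyone reproducing the ZeRO stage-2 check that resolved the comparison above, a minimal sketch of the config change (everything not shown is assumed unchanged from the posted config; `offload_param` is dropped because parameter offload is a stage-3 feature):
```python
import json

zero2_config = {
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "contiguous_gradients": True,
        "reduce_bucket_size": "auto",
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

with open("deepspeed-gpt2-zero2.config.json", "w") as f:
    json.dump(zero2_config, f, indent=2)
```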
transformers | 13,652 | closed | Use `transformers` models as Spark estimators | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
It would be great to have the possibility to use `transformers` models and pipelines for inference on massive Spark `DataFrames`.
Given a model trained the standard way, one would have a syntax of this kind (or any other):
```python
from transformers.spark import SparkWrapper
from transformers import BertForSequenceClassification
model = BertForSequenceClassification(...)
estimator = SparkWrapper(model)
estimator.fit(trainingData)
```
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
`huggingface/transformers` models **and** tokenizers are great, but it seems they are not suitable for making inference on Big Data (example use case: I have a classifier I wanna use to make predictions on millions of sentences).
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
[Gluon NLP based on Apache MXNet](https://nlp.gluon.ai/) and [Spark NLP](https://nlp.johnsnowlabs.com/) seem to be the only NLP libraries that run natively on Spark. The export from `huggingface/transformers` models/tokenizers to them is not a one-liner, when it's possible. Maybe a solution for my problem would be to write conversion scripts, but I find it interesting to have the spark logic built in `transformers` anyway.
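For what it's worth, a pattern that already works today without any library changes is to wrap a `transformers` pipeline in a PySpark pandas UDF; the sketch below assumes PySpark >= 3.0 with pyarrow installed, and the checkpoint and column names are placeholders:
```python
import pandas as pd
from pyspark.sql.functions import pandas_udf
from transformers import pipeline

@pandas_udf("string")
def classify(texts: pd.Series) -> pd.Series:
    # The pipeline is built inside the UDF for simplicity; caching it would be an optimization.
    clf = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")
    preds = clf(texts.tolist(), truncation=True)
    return pd.Series([p["label"] for p in preds])

# df = spark.createDataFrame([("great movie",), ("terrible plot",)], ["text"])
# df.withColumn("label", classify("text")).show()
```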
@sgugger @LysandreJik @patrickvonplaten | 09-20-2021 12:26:50 | 09-20-2021 12:26:50 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,651 | closed | some error when I finetune wav2vec2 by rum_common_voice.py | when I run run rum_common_voice.py with **--max_train_samples 100 \ --max_val_samples 10,**
I get the following error:
```
Traceback (most recent call last):
  File "rum_common_voice.py", line 537, in <module>
    main()
  File "rum_common_voice.py", line 404, in main
    train_dataset = train_dataset.select(range(data_args.max_train_samples))
AttributeError: 'DatasetDict' object has no attribute 'select'
```
 | 09-20-2021 12:25:41 | 09-20-2021 12:25:41 | Hello! Could you link to `run_common_voice.py`?<|||||>Ping @patrickvonplaten related to https://github.com/huggingface/transformers/pull/13620<|||||>Hey @xzwworkplace :-),
I haven't encountered this error before - every dataset object should have a `select` method. What version of `datasets` are you using? And could you maybe add a google colab to reproduce the error?<|||||>> Hey @xzwworkplace :-),
>
> I haven't encountered this error before - every dataset object should have a `select` method. What version of `datasets` are you using? And could you maybe add a google colab to reproduce the error?
**Thanks for the reply. I'm using datasets==1.12.1. I'm a student from China and I don't know how to use Google Colab, so here is the run_common_voice.py I modified.**
```python
train_csv = "/media/shiyanshi/E/2020_XZW/fairseq/work/train/together.txt.csv"
dev_csv = "/media/shiyanshi/E/2020_XZW/fairseq/work/dev/together.txt.csv"
test_csv = "/media/shiyanshi/E/2020_XZW/fairseq/work/test/together.txt.csv"
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# Detecting last checkpoint.
last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
last_checkpoint = get_last_checkpoint(training_args.output_dir)
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
"Use --overwrite_output_dir to overcome."
)
elif last_checkpoint is not None:
logger.info(
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
+ f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
# Set the verbosity to info of the Transformers logger (on main process only):
if is_main_process(training_args.local_rank):
transformers.utils.logging.set_verbosity_info()
logger.info("Training/evaluation parameters %s", training_args)
# Set seed before initializing model.
set_seed(training_args.seed)
## Get the datasets:
train_dataset = datasets.load_dataset(
'csv', data_files={'train': train_csv}, cache_dir='/media/shiyanshi/E/2020_XZW/fairseq/work/huggingface'
)
eval_dataset = datasets.load_dataset('csv', data_files={'test': test_csv}, cache_dir='/media/shiyanshi/E/2020_XZW/fairseq/work/huggingface')
# Create and save tokenizer
#chars_to_ignore_regex = f'[{"".join(data_args.chars_to_ignore)}]'
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'
def remove_special_characters(batch):
batch["text"] = re.sub(chars_to_ignore_regex, "", batch["text"]).lower() + " "
return batch
train_dataset = train_dataset.map(remove_special_characters, remove_columns=["id"])
eval_dataset = eval_dataset.map(remove_special_characters, remove_columns=["id"])
def extract_all_chars(batch):
all_text = " ".join(batch["text"])
vocab = list(set(all_text))
return {"vocab": [vocab], "all_text": [all_text]}
vocab_train = train_dataset.map(
extract_all_chars,
batched=True,
batch_size=-1,
keep_in_memory=True,
remove_columns=train_dataset.column_names["train"]
)
vocab_test = train_dataset.map(
extract_all_chars,
batched=True,
batch_size=-1,
keep_in_memory=True,
remove_columns=train_dataset.column_names["train"]
)
vocab_list = list(set(vocab_train["train"]["vocab"][0]) | set(vocab_test["train"]["vocab"][0]))
vocab_dict = {v: k for k, v in enumerate(vocab_list)}
vocab_dict["|"] = vocab_dict[" "]
del vocab_dict[" "]
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
print(vocab_dict)
with open("vocab.json", "w") as vocab_file:
json.dump(vocab_dict, vocab_file)
tokenizer = Wav2Vec2CTCTokenizer(
"./vocab.json",
unk_token="[UNK]",
pad_token="[PAD]",
word_delimiter_token="|",
cache_dir='/media/shiyanshi/E/2020_XZW/fairseq/work/huggingface'
)
feature_extractor = Wav2Vec2FeatureExtractor(
feature_size=1, sampling_rate=16_000, padding_value=0.0, do_normalize=True, return_attention_mask=True, cache_dir='/media/shiyanshi/E/2020_XZW/fairseq/work/huggingface'
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
model = Wav2Vec2ForCTC.from_pretrained(
model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
activation_dropout=model_args.activation_dropout,
attention_dropout=model_args.attention_dropout,
hidden_dropout=model_args.hidden_dropout,
feat_proj_dropout=model_args.feat_proj_dropout,
mask_time_prob=model_args.mask_time_prob,
gradient_checkpointing=model_args.gradient_checkpointing,
layerdrop=model_args.layerdrop,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer),
)
if data_args.max_train_samples is not None:
train_dataset = train_dataset.select(range(data_args.max_train_samples))
if data_args.max_val_samples is not None:
eval_dataset = eval_dataset.select(range(data_args.max_val_samples))
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays and tokenize the targets.
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["file"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
batch["sampling_rate"] = 16_000
batch["target_text"] = batch["text"]
return batch
train_dataset = train_dataset.map(
speech_file_to_array_fn,
remove_columns=train_dataset.column_names["train"],
num_proc=data_args.preprocessing_num_workers,
)
eval_dataset = eval_dataset.map(
speech_file_to_array_fn,
remove_columns=eval_dataset.column_names["test"],
num_proc=data_args.preprocessing_num_workers,
)
def prepare_dataset(batch):
# check that all files have the correct sampling rate
assert (
len(set(batch["sampling_rate"])) == 1
), f"Make sure all inputs have the same sampling rate of {processor.feature_extractor.sampling_rate}."
batch["input_values"] = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0]).input_values
# Setup the processor for targets
with processor.as_target_processor():
batch["labels"] = processor(batch["target_text"]).input_ids
return batch
train_dataset = train_dataset.map(
prepare_dataset,
remove_columns=train_dataset.column_names["train"],
batch_size=training_args.per_device_train_batch_size,
batched=True,
num_proc=data_args.preprocessing_num_workers,
)
eval_dataset = eval_dataset.map(
prepare_dataset,
remove_columns=eval_dataset.column_names["test"],
batch_size=training_args.per_device_train_batch_size,
batched=True,
num_proc=data_args.preprocessing_num_workers,
)
# Metric
wer_metric = datasets.load_metric("/media/shiyanshi/E/2020_XZW/fairseq/work/wer.py")
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
if model_args.freeze_feature_extractor:
model.freeze_feature_extractor()
# Data collator
data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)
# Initialize our Trainer
trainer = CTCTrainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset['train'] if training_args.do_train else None,
eval_dataset=eval_dataset['test'] if training_args.do_eval else None,
tokenizer=processor.feature_extractor,
)
# Training
if training_args.do_train:
if last_checkpoint is not None:
checkpoint = last_checkpoint
elif os.path.isdir(model_args.model_name_or_path):
checkpoint = model_args.model_name_or_path
else:
checkpoint = None
# Save the feature_extractor and the tokenizer
if is_main_process(training_args.local_rank):
processor.save_pretrained(training_args.output_dir)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
trainer.save_model()
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
# Evaluation
results = {}
if training_args.do_eval:
logger.info("*** Evaluate ***")
metrics = trainer.evaluate()
max_val_samples = data_args.max_val_samples if data_args.max_val_samples is not None else len(eval_dataset)
metrics["eval_samples"] = min(max_val_samples, len(eval_dataset))
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
return results
```
<|||||>> Ping @patrickvonplaten related to #13620
Thanks for the reply. I also get an error (CUDA out of memory). Can these two hyperparameters help solve this problem?
My audio files are no longer than 15 s. I have seen two other issues, https://github.com/huggingface/transformers/issues/10366 and https://github.com/huggingface/transformers/issues/10965, but I don't know how to pass one chunk at a time to the model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>We will soon support energy-based chunking for Wav2Vec2. But essentially when you get an OOM error it means that your input sequences are too long<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@xzwworkplace - can you try filtering out sequences that are too long or chunk them?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @anton-l - this should be rather easy to solve with a chunking method provided by us very soon :-)<|||||>thank you very much!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
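For readers who hit the same two problems as this thread (the `DatasetDict` error and CUDA OOM on long clips), here is a hedged sketch of the fixes — `train_csv` and the `"speech"` column follow the modified script pasted above, and the 15 s threshold is just an example.

```python
import datasets

# `load_dataset` with `data_files` returns a DatasetDict, so pick the split before calling `select`
dsets = datasets.load_dataset("csv", data_files={"train": train_csv})
train_dataset = dsets["train"].select(range(100))

# After mapping the audio into batch["speech"], drop clips longer than ~15 s at 16 kHz
# (or chunk them) to reduce the chance of CUDA out-of-memory errors:
max_len = 15 * 16_000
train_dataset = train_dataset.filter(lambda ex: len(ex["speech"]) <= max_len)
```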
transformers | 13,650 | closed | [SequenceFeatureExtractor] Rewrite padding logic from pure python to numpy | # What does this PR do?
Resolves #13539
Since speech models universally use Numpy `float32` arrays as input features (standard way of representing waveforms), it was decided to rewrite `SequenceFeatureExtractor` from pure python lists (akin to traditional tokenizers) to numpy arrays. It will also help with solving some inconsistent normalization issues (#13538, #13585) due to `float->np.float32` conversions.
The feature extractor itself is still dtype-agnostic (can pad `np.float64` in the future if needed), while the model-specific feature extractors were updated to only work with `np.float32` | 09-20-2021 12:16:58 | 09-20-2021 12:16:58 | Also did you notice a speed-up for larger inputs?<|||||>@patrickvonplaten the benchmarking results are pretty promising:
1. Input lengths from 8000 to 16000 (1 sec max), batch size 64, feature_extractor only:
- Python: 52.1 ms ± 2.35 ms
- Numpy: **32.1 ms** ± 1.13 ms
2. Input lengths from 8000 to 160000 (10 sec max), batch size 64, feature_extractor only:
- Python: 276 ms ± 950 µs
- Numpy: **68.2 ms** ± 491 µs
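(A micro-benchmark along the lines of the sketch below should give comparable numbers; the exact batch construction and timing loop used for the figures above may differ.)

```python
import time
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16_000, padding_value=0.0)
batch = [np.random.randn(np.random.randint(8_000, 160_000)).astype(np.float32) for _ in range(64)]

start = time.perf_counter()
for _ in range(10):
    extractor(batch, sampling_rate=16_000, padding=True, return_tensors="np")
print((time.perf_counter() - start) / 10)  # average seconds per padded batch
```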
<|||||>Great job @anton-l - feel free to merge! |
transformers | 13,649 | closed | [FLAX] Question Answering Example | # What does this PR do?
Flax Question Answering Example
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @patil-suraj @sgugger | 09-20-2021 10:29:14 | 09-20-2021 10:29:14 | @patil-suraj
Done changes according to your review. |
transformers | 13,648 | closed | Auto model for conditional generation | # 🚀 Feature request
Most task heads (CausalLM, QuestionAnswering, ...) can be loaded without importing the model-specific architecture thanks to the `AutoModel` classes, yet for `ConditionalGeneration` you need to import the specific class.
Is there a technical reason preventing such an implementation?
| 09-20-2021 10:00:38 | 09-20-2021 10:00:38 | If you can confirm that this would be a good plus, I can push my modification for this.<|||||>Hello! These models are available in `AutoModelForSeq2SeqLM`!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
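As noted in the comments, seq2seq generation models are already covered by `AutoModelForSeq2SeqLM`; a minimal sketch, with an example checkpoint:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # resolves to T5ForConditionalGeneration

inputs = tokenizer("translate English to German: Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```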
transformers | 13,647 | closed | [run_summarization] fix typo | # What does this PR do?
Fix typo | 09-20-2021 07:41:56 | 09-20-2021 07:41:56 | |
transformers | 13,646 | closed | fix research_projects/mlm_wwm readme.md examples | The variables in the run example are not correct.
@wlhgtc
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-20-2021 05:13:52 | 09-20-2021 05:13:52 | |
transformers | 13,645 | closed | metric = load_metric("sacrebleu") exception | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:4.10.2
- Platform:win 10
- Python version:3.7
- PyTorch version (GPU?):cpu 1.9
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
run_translation.py
```
[INFO|modeling_utils.py:1533] 2021-09-20 11:18:57,844 >> All the weights of MT5ForConditionalGeneration were initialized from the model checkpoint at D:\learn\torch_learn\models\mt5_small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MT5ForConditionalGeneration for predictions without further training.
09/20/2021 11:18:59 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\DELL\.cache\huggingface\datasets\json\default-c219a2f78cd5565c\0.0.0\d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50\cache-be5ce54c6a2ffc39.arrow
09/20/2021 11:19:00 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\DELL\.cache\huggingface\datasets\json\default-c219a2f78cd5565c\0.0.0\d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50\cache-6663dd6caea415f1.arrow
Traceback (most recent call last):
File "D:/learn/torch_learn/pytorch/translation/run_translation.py", line 618, in <module>
main()
File "D:/learn/torch_learn/pytorch/translation/run_translation.py", line 490, in main
metric = load_metric("sacrebleu")
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\datasets\load.py", line 819, in load_metric
dataset=False,
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\datasets\load.py", line 510, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\datasets\utils\file_utils.py", line 299, in cached_path
use_auth_token=download_config.use_auth_token,
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\datasets\utils\file_utils.py", line 568, in get_from_cache
headers=headers,
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\datasets\utils\file_utils.py", line 474, in http_head
max_retries=max_retries,
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\datasets\utils\file_utils.py", line 395, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\requests\adapters.py", line 449, in send
timeout=timeout
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\urllib3\connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\urllib3\connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\urllib3\connection.py", line 359, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\urllib3\connection.py", line 506, in _connect_tls_proxy
ssl_context=ssl_context,
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\urllib3\util\ssl_.py", line 453, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "C:\Users\DELL\anaconda3\envs\test\lib\site-packages\urllib3\util\ssl_.py", line 495, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "C:\Users\DELL\anaconda3\envs\test\lib\ssl.py", line 423, in wrap_socket
session=session
File "C:\Users\DELL\anaconda3\envs\test\lib\ssl.py", line 827, in _create
raise ValueError("check_hostname requires server_hostname")
ValueError: check_hostname requires server_hostname
```
| 09-20-2021 03:19:31 | 09-20-2021 03:19:31 | |
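The `_connect_tls_proxy` frames in the traceback above suggest an HTTPS proxy is configured; with urllib3 1.26+ this combination is a known source of this `ValueError`. One possible workaround is sketched below — the proxy host/port are placeholders and this is only a hedged suggestion, not a confirmed fix for this exact setup:

```python
import os

# Point the proxy variables at an http:// URL (instead of https://) before downloading the metric
os.environ["HTTP_PROXY"] = "http://your.proxy.host:8080"
os.environ["HTTPS_PROXY"] = "http://your.proxy.host:8080"

from datasets import load_metric

metric = load_metric("sacrebleu")
```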
transformers | 13,644 | closed | Change https:/ to https:// to dataset GitHub repo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-19-2021 21:38:00 | 09-19-2021 21:38:00 | Should fix https://github.com/huggingface/transformers/issues/13635 |
transformers | 13,643 | closed | Fix typo distilbert doc to code link | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-19-2021 21:29:15 | 09-19-2021 21:29:15 | Should fix https://github.com/huggingface/transformers/issues/13638<|||||>What about https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/retribert.rst ?
Linking to same Distil folder path, what's the correct one it should point to ? |
transformers | 13,642 | closed | Report test durations in scheduled CI tests | The scheduled pytorch tests start to hit the 6-hour timeout for self-hosted CI runners (see [**Job execution time**](https://docs.github.com/en/actions/reference/usage-limits-billing-and-administration#usage-limits)). At the moment, `GPT-J` is the main source of 10-minute `pytest` timeouts (50 minutes in total), but there are some other tests that could probably be improved time-wise or pruned altogether.
To investigate the slowdowns in the CI, I propose enabling test duration tracking (at least for a while), which generates logs like the following at the end of pytest runs:
```
=================== slowest durations ===================================
600.02s call tests/test_modeling_gptj.py::GPTJModelTest::test_model_from_pretrained
600.01s call tests/test_modeling_gptj.py::GPTJModelTest::test_batch_generation
600.00s call tests/test_modeling_gptj.py::GPTJModelLanguageGenerationTest::test_gptj_sample
284.58s call tests/test_modeling_gpt_neo.py::GPTNeoModelLanguageGenerationTest::test_batch_generation
80.10s call tests/test_modeling_fsmt.py::FSMTModelIntegrationTests::test_inference_no_head
78.26s call tests/test_modeling_fsmt.py::FSMTModelIntegrationTests::test_translation_direct_1_ru_en
72.04s call tests/test_modeling_fsmt.py::FSMTModelIntegrationTests::test_translation_direct_2_en_de
70.72s call tests/test_modeling_fsmt.py::FSMTModelIntegrationTests::test_translation_direct_3_de_en
37.69s call tests/test_modeling_funnel.py::FunnelModelIntegrationTest::test_inference_model
30.18s call tests/test_modeling_flaubert.py::FlaubertModelIntegrationTest::test_inference_no_head_absolute_embedding
25.47s call tests/test_modeling_encoder_decoder.py::BertGenerationEncoderDecoderModelTest::test_real_model_save_load_from_pretrained
21.86s call tests/test_modeling_blenderbot.py::Blenderbot3BIntegrationTests::test_generation_from_short_input_same_as_parlai_3B
19.56s call tests/test_modeling_encoder_decoder.py::GPT2EncoderDecoderModelTest::test_bert2gpt2_summarization
19.56s call tests/test_modeling_encoder_decoder.py::BertGenerationEncoderDecoderModelTest::test_roberta2roberta_summarization
...
11.36s call tests/test_modeling_bart.py::BartModelIntegrationTests::test_cnn_summarization_same_as_fairseq
11.12s call tests/test_modeling_bart.py::BartModelIntegrationTests::test_base_mask_filling
10.40s call tests/test_modeling_gpt_neo.py::GPTNeoModelLanguageGenerationTest::test_lm_generate_gpt_neo
10.01s call tests/test_modeling_albert.py::AlbertModelTest::test_model_outputs_equivalence
(7320 durations < 10s hidden. Use -vv to show these durations.)
```
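(A report like the one above can also be produced locally with pytest's built-in duration tracking; the sketch below assumes pytest >= 6.2, which introduced `--durations-min`, and that it is run from the repository root.)

```python
import pytest

# Show all durations, but hide tests faster than 10 seconds
pytest.main(["tests", "--durations=0", "--durations-min=10.0"])
```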
- "Short report" generation should not get affected by this (@LysandreJik correct me if I'm wrong?)
- Tests with runtimes shorter than 10s will not get reported, to keep the logs manageable. | 09-19-2021 18:33:52 | 09-19-2021 18:33:52 | I believe this is already taken care of by the reports (see the `--make_reports` argument right before you add the `--durations` argument).
In order to check these out you can go to a test's summary and download the artifact:

There, you'll find a `tests_xxx_durations.txt` containing the durations of the run.
This does not output a result when the tests timeout, however - does this addition do? I don't think so as the run gets canceled before outputting pytest's results<|||||>> This does not output a result when the tests timeout, however - does this addition do?
Ah, no, the reports do exactly the same thing, thank you for pointing it out @LysandreJik! Closing this PR then :slightly_smiling_face: |
transformers | 13,639 | closed | Fix mT5 documentation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The mT5 documentation was incomplete. This pull request completes it and also changes MT5 to mT5.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 09-19-2021 16:01:59 | 09-19-2021 16:01:59 | |
transformers | 13,638 | closed | Dead link in docs | https://huggingface.co/transformers/model_doc/distilbert.html
In this page, under the paragraph,
_This model was contributed by victorsanh. This model jax version was contributed by kamalkraj. The original code can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research-projects/distillation)._
The link to the original code is broken. | 09-19-2021 15:49:06 | 09-19-2021 15:49:06 | Fixed by #13643 |
transformers | 13,637 | closed | Use torch.unique_consecutive to check elements are same | # What does this PR do?
We use `torch.unique` here only to check whether all elements have the same value.
Therefore, we can use [`torch.unique_consecutive`](https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html) here instead.
This function eliminates all but the first element from every consecutive group of equivalent elements.
For example, applying it to `[1, 2, 2, 1]` results in `[1, 2, 1]`.
As you can see, this is enough to check whether all elements have the same value.
Since `torch.unique_consecutive` does less work, it is much faster.
On my computer, it is 25x faster on GPU and 15x faster on CPU.
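A toy illustration of the check (the tensors and values are just examples):

```python
import torch

labels = torch.tensor([3, 3, 3, 3])
assert torch.unique(labels).numel() == 1              # old check
assert torch.unique_consecutive(labels).numel() == 1  # new, cheaper check

# unique_consecutive only collapses *consecutive* duplicates:
print(torch.unique_consecutive(torch.tensor([1, 2, 2, 1])))  # tensor([1, 2, 1])
```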
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten | 09-19-2021 09:35:36 | 09-19-2021 09:35:36 | |
transformers | 13,636 | closed | [Fix]Make sure the args tb_writer passed to the TensorBoardCallback works | # What does this PR do?
Add an `if` check in the `on_train_begin` function of the `TensorBoardCallback` to make sure the `tb_writer` argument passed to `__init__` won't be overwritten.
I wanted to pass a tb_writer to the `TensorBoardCallback`, so that I could continue using it after training. But the passed tb_writer didn't work.
```python
tb_writer = SummaryWriter(log_dir="some_dir")
cb = TensorBoardCallback(tb_writer)
trainer = Trainer(
# not important
callbacks=[cb],
)
trainer.train()
tb_writer.add_scalar("other metrics", 0.99, 0) # didn't work
```
After checking the code of `TensorBoardCallback`, I found that the tb_writer passed to the `__init__` would be overwritten by the `on_train_begin` function with a new one no matter what.
By adding an `if` statement before the init, this bug should be fixed.
```python
if self.tb_writer is None: # check
self._init_summary_writer(args, log_dir)
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/5329 (a similar issue but is "wontfix" and out-of-date)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 09-18-2021 17:07:38 | 09-18-2021 17:07:38 | |
transformers | 13,635 | closed | [typo] invalid URL | In <https://github.com/huggingface/transformers/blob/master/docs/source/training.rst?plain=1#L36>
```
We will use the `🤗 Datasets <https:/github.com/huggingface/datasets/>`__ library to download and preprocess the IMDB
```
Correct URL should be <https://github.com/huggingface/datasets/>. | 09-18-2021 12:32:28 | 09-18-2021 12:32:28 | Closed by #13644 |
transformers | 13,634 | closed | Add the fast implementation of `BlenderbotTokenizer` | # 🚀 Feature request
As is the case for other models' tokenizers, add a fast implementation of `BlenderbotTokenizer`.
## Motivation
To have faster tokenization for Blenderbot models. (Also, the implementation should be pretty straightforward considering the similarity to the `RobertaTokenizer`/`RobertaTokenizerFast`.)
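One possible shape for the fast class, mirroring how other fast tokenizers are derived from their Roberta counterparts — this is only a sketch of the idea, not the final design:

```python
from transformers import BlenderbotTokenizer, RobertaTokenizerFast


class BlenderbotTokenizerFast(RobertaTokenizerFast):
    # Reuse the byte-level BPE machinery from RobertaTokenizerFast
    slow_tokenizer_class = BlenderbotTokenizer

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        # Blenderbot only appends </s> to a single sequence (no <s> prefix)
        return token_ids_0 + [self.eos_token_id]
```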
## Your contribution
I would like to have a look at this and will be glad to add that. | 09-17-2021 20:50:59 | 09-17-2021 20:50:59 | That sounds great @stancld, we would love a PR!<|||||>@LysandreJik - I found a minor issue in the formatting of `tokenizer_config.json` at https://huggingface.co/facebook/blenderbot-3B/blob/main/tokenizer_config.json, where is `"add_prefix_space": "true"` instead of `"add_prefix_space": true`. This leads to the error (see below) during the slow->fast (tokenizer) conversion. It can be handled in the `converter` source code, though I believe it might be better to update a config file. Is there a way of how to send a PR to HF's hub?
Error:
```
TypeError: Can't convert 'true' to PyBool
```<|||||>Ah ! There's no way to do that as of now - let me handle that for you.<|||||>Should be done with [`huggingface#c468b23`](https://huggingface.co/facebook/blenderbot-3B/blob/main/tokenizer_config.json)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>still open #13720 |
transformers | 13,633 | closed | [Flax] Add FlaxBlenderbot | # What does this PR do?
This PR adds a Flax implementation of Blenderbot.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
<hr>
### TODOs:
- fix PT-Flax model equivalence
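For the PT-Flax equivalence TODO above, a minimal check could look like the sketch below (the checkpoint name and tolerance are illustrative, and `FlaxBlenderbotForConditionalGeneration` is the class added by this PR):

```python
import numpy as np
from transformers import (
    BlenderbotForConditionalGeneration,
    BlenderbotTokenizer,
    FlaxBlenderbotForConditionalGeneration,
)

name = "facebook/blenderbot-400M-distill"
tok = BlenderbotTokenizer.from_pretrained(name)
np_inputs = tok("Hello there", return_tensors="np")
pt_inputs = tok("Hello there", return_tensors="pt")

fx_model = FlaxBlenderbotForConditionalGeneration.from_pretrained(name, from_pt=True)
pt_model = BlenderbotForConditionalGeneration.from_pretrained(name)

fx_logits = fx_model(input_ids=np_inputs["input_ids"], decoder_input_ids=np_inputs["input_ids"]).logits
pt_logits = pt_model(input_ids=pt_inputs["input_ids"], decoder_input_ids=pt_inputs["input_ids"]).logits
print(np.abs(np.asarray(fx_logits) - pt_logits.detach().numpy()).max())  # expect roughly < 1e-3
```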
## Who can review?
@patrickvonplaten @patil-suraj | 09-17-2021 20:44:16 | 09-17-2021 20:44:16 | @patrickvonplaten I would like to kindly ping for a review. :) I've been struggling to achieve the pt-flax equivalence, however, I cannot find that difference/bug in this new flax implementation.
Thanks a lot! :) <|||||>Hey @stancld,
Thanks a lot for the PR! The difference between PT and Flax in your PR is actually very small (< 0.1), so it might very well be that the implementation is correct!
I'll try to take a deeper look at the end of next week. Could you try one last thing:
add print statements such as:
`print("PT", hidden_states.sum())` in PyTorch
and
`print("FX", hidden_states.sum())` in Flax
before the word embeddings, after the word embeddings, each encoder transformer layer, before the decoder word embeddings, the decoder attention layers, ... to see when the activations start to diverge. If it happens gradually, it might very well be that the model is correct and there is just a small numerical difference. If it happens all of a sudden at some point, then there might be a subtle bug.<|||||>@patrickvonplaten Thank you for the tip! I'll have a look :) <|||||>Hello @patrickvonplaten, I ran a few tests and one output is below. There is some level of divergence, but I'm not sure if it's too severe. I'm going to check the Flax code today once again :)
```
===PyTorch===
---Encoder---
PT first hidden-states: tensor(-1.2589)
PT encoder after self-attn: tensor(0.5862)
PT encoder: tensor(-0.7895)
PT encoder after self-attn: tensor(0.0465)
PT encoder last hidden states before norm: tensor(-0.2601)
PT encoder last hidden states after norm: tensor(0.)
---Decoder---
PT decoder after self-attn: tensor(1.1000)
PT decoder after cross-attn: tensor(0.1547)
PT decoder: tensor(-0.0142)
PT decoder after self-attn: tensor(0.9638)
PT decoder after cross-attn: tensor(1.7759)
PT decoder: tensor(2.7198)
PT decoder last hidden states before norm: tensor(2.7198)
PT decoder last hidden states after norm: tensor(-5.7220e-06)
PT output: tensor(-5.7220e-06)
===Flax===
---Encoder---
FX first hidden-states: -1.2589027
FX encoder after self-attn: 0.59013414
FX encoder: -0.7862803
FX encoder after self-attn: 0.04762589
FX encoder last hidden states before norm: -0.25001374
FX encoder last hidden states after norm: 4.053116e-06
---Decoder---
FX decoder after self-attn: 1.1029385
FX decoder after cross-attn: 0.15325405
FX decoder: -0.013041288
FX decoder after self-attn: 0.96520036
FX decoder after cross-attn: 1.7912248
FX decoder last hidden states before norm: 2.735363
FX decoder last hidden states after norm: -1.1697412e-06
FX output: -1.1697412e-06
```<|||||>@patrickvonplaten Thank you very much for spotting the problem! :] <|||||>Tests on master seem to be broken currently :-/
But I think the PR is good to go. @patil-suraj could you maybe take a look once you're back (and maybe rebase to master with @stancld to fix the CircleCI runner)?<|||||>Awesome - I let you merge @patil-suraj once you're back :-) |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.