repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 15,646 | closed | Add `decoder_kwargs` to send to LM on asr pipeline. | # What does this PR do?
Fixes #15484
Linked to https://github.com/g8a9/transformers/commit/1f25864edf0319a9984b3051b420b86e287d949b
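A rough sketch of the usage this enables (hedged: the checkpoint name, the exact keyword plumbing and the `beam_width` value are illustrative, assuming a Wav2Vec2 checkpoint that ships a pyctcdecode language model):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-base-100h-with-lm")

# `decoder_kwargs` are forwarded to the beam-search decoder of the language model.
output = asr("sample.flac", decoder_kwargs={"beam_width": 100})
print(output["text"])
```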
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 02-14-2022 14:24:21 | 02-14-2022 14:24:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for adding this feature. Should we maybe add @g8a9 as a co-author here?
Nice catch ! Done. |
transformers | 15,645 | closed | Re-export `KeyDataset`. | # What does this PR do?
Fixes #15524
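For context, a minimal sketch of the usage this re-export is meant to support (dataset and model names are illustrative, and I'm assuming the restored import path is `transformers.pipelines`):
```python
from datasets import load_dataset
from transformers import pipeline
from transformers.pipelines import KeyDataset  # also available under transformers.pipelines.pt_utils

dataset = load_dataset("imdb", split="test")
pipe = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")

# Stream a single dataset column through the pipeline without materializing a list.
for prediction in pipe(KeyDataset(dataset, "text")):
    print(prediction)
```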
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 02-14-2022 11:47:37 | 02-14-2022 11:47:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,644 | closed | Fix typo on examples/pytorch/question-answering | # What does this PR do?
Fix typo on examples/pytorch/question-answering/README.md (cna -> can).
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-14-2022 02:49:30 | 02-14-2022 02:49:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,643 | closed | Fix TFSequenceSummary's activation | [TFSequenceSummary](https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/modeling_tf_utils.py#L1910) set its `activation` only when `config.summary_activation == "tanh"`.
https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/modeling_tf_utils.py#L1960
However, [ElectraConfig](https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/models/electra/configuration_electra.py#L137) has default value `"gelu"` for `summary_activation`, and therefore [TFElectraForMultipleChoice](https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/models/electra/modeling_tf_electra.py#L1427) doesn't use activation at all for `self.sequence_summary`.
This is different from PyTorch version: [SequenceSummary](https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/modeling_utils.py#L2198)
https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/modeling_utils.py#L2242
(which has no condition on `tanh`).
This inconsistency also makes the extended pt/tf equivalence test fail for the `electra` model.
This PR fixes the above issue.
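For illustration, a minimal sketch of the intended behaviour (not the exact diff of this PR): build the activation from whatever `config.summary_activation` specifies, mirroring the PyTorch `SequenceSummary`.
```python
from transformers.activations_tf import get_tf_activation

# Sketch only: inside TFSequenceSummary.__init__, instead of checking
# `config.summary_activation == "tanh"`, honor any configured activation string.
activation_string = getattr(config, "summary_activation", None)
self.has_activation = activation_string is not None
if self.has_activation:
    self.activation = get_tf_activation(activation_string)  # e.g. "gelu" for Electra
```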
TF: @gante @Rocketknight1
cc @patil-suraj for his somewhat related work on adding `ElectraForMultipleChoice` (#4954) with `gelu` for `summary_activation` | 02-13-2022 19:02:11 | 02-13-2022 19:02:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM too - @ydshieh are you ready for me to merge this?<|||||>> LGTM too - @ydshieh are you ready for me to merge this?
Yes, go ahead :-) Thanks! |
transformers | 15,642 | closed | GPT-NeoX-20B Integration | # 🚀 Feature request
Over at EleutherAI we've recently released a 20 billion parameter autoregressive gpt model (see [gpt-neox](https://github.com/EleutherAI/gpt-neox) for a link to the weights). It would be great to get this into transformers!
## Motivation
The gpt-neox library is not quite as user-friendly as transformers, and is designed for efficient large-scale training rather than for inference. So we think integrating it into transformers would be great for accessibility.
## Your contribution
Personally, I can dedicate a bit of time into integrating this model. We already have a [PR to convert the weights to HF format](https://github.com/EleutherAI/gpt-neox/pull/480), although:
1) I suspect you would want to introduce a new model class for it.
2) It is not merged / thoroughly tested with all possible configurations yet. GPT-NeoX has a bunch of configuration options, and it might be more straightforward to focus on *just* introducing a model class for GPT-NeoX-20B (which should largely be similar to GPT-J, with some caveats, see next section)
## Difficulties
- Whilst we do have a script to merge [model parallel checkpoints](https://github.com/EleutherAI/gpt-neox/pull/466) that will be merged soon, in larger models we see some performance loss when merging mp ranks, due to there being some very slight differences in the replicated parameters between ranks (see the thread above for more details). If we want to integrate GPT-NeoX-20B as a model to be used on a single GPU, we need to figure out how to address this. I'm not sure if this bug is specific to neox, or was introduced in megatron or deepspeed, but I believe the bigscience team [are also looking at merging mp models](https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/239), so I guess we'll find out soon.
- If we do integrate the model without model parallelism - it will be too large to run on most consumer GPUs. During inference, it takes ~45GB of GPU memory to run, and during training much more.
| 02-13-2022 17:29:43 | 02-13-2022 17:29:43 | Hey @sdtblck, this is great, we'd be super excited to support GPT-NeoX in the library. From our side, I believe @mrm8488 was excited about working on the implementation, and @patil-suraj and myself are happy to help out as well.
Let us know how you'd like to collaborate. <|||||>So, i'd be happy to write a basic model class based around GPT-J, but I think we need to decide on how to address the model parallelism issue first. Should the model be written with mp=2 by default? Or would you prefer to write it as a merged model, and try to address the issues with merging somehow?<|||||>@stas00, would you have any recommendation regarding this question?<|||||>We have several of WIP tensor parallelism (TP) projects happening that will eventually address this kind of challenges:
- oslo: TP
- Deepspeed-Inference == TP
but the first one is not yet integrated and the second is still being worked on. But I'm pretty sure they still expect a single file checkpoint.
So **a single model 20B will currently work with Deepspeed-ZeRO over either multi-gpu or with cpu/nvme-offload no problem**, so I'd say that the low hanging fruit is to do mp=1.
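For reference, a minimal sketch of the kind of ZeRO-3 + CPU-offload configuration meant here (values are illustrative, not a tuned recipe):
```python
# Illustrative DeepSpeed ZeRO-3 config with CPU offload for a single ~20B checkpoint;
# pass it via the HF Trainer's `deepspeed` argument or to deepspeed.initialize().
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu", "pin_memory": True},
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 1,
}
```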
And as @sdtblck said, Tunji is working on merging the checkpoints - I think he is pretty close to completing at least the many-to-one path. Maybe try his branch and see if you get better results on checkpoint consolidation?
----------------
wrt, TP, let's ask the project owners:
- @hyunwoongko, could you please comment on the feasibility of running GPT-NeoX-20B on oslo (perhaps for now as a standalone use until it's integrated into transformers)
- @RezaYazdaniAminabadi, could you please comment on the feasibility of running GPT-NeoX-20B on Deepspeed-Inference
And ideally if you have code samples to run that would be very helpful.
Thank you, both!
<|||||>@stas00 Fundamentally OSLO is designed for transformers. So you can easily parallelize gpt-neox by just adding 3 lines of mapping once it is integrated into transformers. But it will be a bit difficult to use before being integrated into transformers.<|||||>Didn't parallelformers work independently of HF Transformers?<|||||>Both oslo and parallelformers are designed for the transformers. But if you want both to support gpt-neox, I can edit my code. So please tell me if you need it.
The process of porting neox to transformers is very important from a user's point of view. If I can help with this process, I will be happy to participate.<|||||>To summarize the conversation, let's first write it as a merged model following @stas00's comment as a first step, leveraging the DeepSpeed integration?
If the GPT-NeoX model is similar to GPT-J but contains a few differences, we recommend creating a new class, as you have highlighted above. We've recently introduced a new tool, [`add-new-model-like`](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model#add-new-model-like-command), which should help creating an exact replica of GPT-J for you to tweak as you wish.
Let us know if you'd like for us to help in any way @sdtblck.<|||||>I am also trying to setup a conversion pipeline between models that are trained on GPT-NEOX and transformers. GPT-NEOX offers different settings for position embedding and normalizations, however GPT-J is only written for rotary embedding. if we could create a general purpose pipeline, then it would be much easier to integrate new models with the transformers. <|||||>> Both oslo and parallelformers are designed for the transformers. But if you want both to support gpt-neox, I can edit my code. So please tell me if you need it.
If you are replying to me, Kevin, I was only asking if either oslo and parallelformers can already support gpt-neox directly before oslo is integrated. So that we then have more than one solution to give to users. If not, all is good, we have deepspeed-zero and once oslo is integrated then there will be at least 2 solutions (or 3 if it works with sagemaker as well).<|||||>I can help with creating a conversion pipeline. <|||||>> And as @sdtblck said Tunji is work on merging the checkpoints - I think he is pretty close to completing at least the many-to-one path. May be try his branch and see if you get better results on checkpoint consolidation?
@sdtblck, progress has been slow but thankfully consistent. Our strategy has been to provide various checkpoint manipulation routines inside deepspeed for client scripts to utilize. Below are links to current status
deepspeed [branch](https://github.com/microsoft/DeepSpeed/tree/olruwase/elastic-ckpt-refresh), [unit tests](https://github.com/microsoft/DeepSpeed/blob/olruwase/elastic-ckpt-refresh/tests/unit/test_reshape_checkpoint.py), and [utils folder](https://github.com/microsoft/DeepSpeed/tree/olruwase/elastic-ckpt-refresh/deepspeed/checkpoint)
bigscience [branch](https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/ds_ckpt_reshape) and [clients](https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/ds_ckpt_reshape/tools/convert_checkpoint/deepspeed_to_deepspeed.py).
Do let me know if you find this useful.
<|||||>Hi @stas00,
Sorry I did not see your message earlier. I will look into this and let you know if we can run inference through ds-inference.
Thanks,
Reza<|||||>Hi Stas, I checked the implementation, and we have all the kernels to run this through ds-inference. I will add a policy for this and send a PR at DeepSpeed side.
Best,
Reza<|||||>Thanks a lot, Reza!<|||||>How's this going? I will actively participate in this If there are not enough people.<|||||>Hi,
I have a version of the 20B model with TP=1 written up here: https://github.com/zphang/minimal-gpt-neox-20b/blob/main/minimal20b/model.py
It uses the checkpoint merging script from https://github.com/EleutherAI/gpt-neox/pull/466. As mentioned in the PR and Sid above, there seems to be a slight performance regression from merging TP=2 to TP=1 weights, due to some small differences between the duplicated layer weights.
If we're okay with working off a slightly worse TP=1 version of the model (while we investigate the issue), I am happy to submit a PR for adding GPT-NeoX-20B. (Should the model be uploaded under the EleutherAI org? Should we make it clear that this is slightly worse than the TP=2 version?)<|||||>@zphang is it possible the repo is private?<|||||>Sorry! Should be public now<|||||>Great work to all the people involved in making this available. Since there has been no update for a while, I wanted to check if there are plans to add the Huggingface hub in the near term, and if help is needed with that?<|||||>I've found the issue with TP=1 checkpoints, and should be ready to write the 20B implementation in Transformers.
Quick question: Is it possible to upload multiple files for the layer weights to the Model Hub? Given how big the model is, I'm planning to split up the weights by layers.<|||||>Hey @zphang, since https://github.com/huggingface/transformers/pull/16343, yes it is! Now, when saving the model with `save_pretrained`, you can specify a `max_shard_size` parameter. It will try and split up the weights in files that have a maximum size that you have specified here.
We recommend staying under the upper bound of 20GB, as files above that limit won't get distributed by cloudfront, resulting in very long download speed. We have put a default of `10GB` for each shard.
This is a brand new feature that is only available on the `main` branch of the repository, so you'll need to install from source; and, as always, we're very eager to hear your feedback when using that brand new feature!<|||||>Follow-up question: I'm tweaking the implementation for 20B currently, and have the option to either use more optimized code similar to the Megatron implementation (e.g. adding biases at the last moment) or something that reads more like standard Pytorch (just using F.linear)
Is there a preference for a more straightforward implementation or more performant implementation? ~~To give an idea of the performance difference, in local testing it's something on the order of 1.5s vs. 2s/it.~~ Edit: Ignore the previous comparison, I was looking at the wrong numbers.
Also, given how large the weights are, even having the model be initialized in fp32 might be quite a burden. Since the model itself is natively fp16, would it make sense for me to call `.half()` when instantiating the internal modules?<|||||>I'm working on an implementation here: https://github.com/zphang/transformers/tree/neox20b. Still WIP, but targeting a PR later today/tomorrow.<|||||>Exciting, thanks for working on it @zphang! Let us know if we can help.
> Is there a preference for a more straightforward implementation or more performant implementation?
It depends how much complexity is necessart for the performant implementation, and how platform-dependant the result is. The difference you mention doesn't look like it adds unnecessary complexity, so I would go with the performant approach.
> Also, given how large the weights are, even having the model be initialized in fp32 might be quite a burden. Since the model itself is natively fp16, would it make sense for me to call .half() when instantiating the internal modules?
For GPT-J, I believe we chose to go with two branches on the model repo, one which contains the fp16 weights and one that contains the fp32 weights, so that users may choose whichever weights they prefer.<|||||>Added a PR here: https://github.com/huggingface/transformers/pull/16659
The model implementation should be fully working, based on quick tests on both LM-eval-harness testing (3.680 vs 3.676 perplexity compared to the NeoX implementation) and `.generate`. The model weights should be up on the model hub at `EleutherAI/gpt-neox-20b'.
I'm a little less certain about the tokenizer implementation. So far I've been working off a `tokenizers` Tokenizer object, so I'm not so sure about the slow tokenizer implementation.
Model is large so unfortunately it takes:
- About 1 minute to initialize the model weights
- About 1+ minutes to load the weights in (not including downloading them)
First time adding a model, so there are likely things I've left out/mistakes I made. Please let me know!<|||||>I get `Killed` rather than a `MemoryError` trying to load the weights using `AutoModelForCausalLM`, maybe there is some kind of a memory leak? I was trying to load the model into CPU with `>300GB` of memory.
<|||||>That's typically either cgroups or oom-killer, typically you should see the details in `/var/sys/log` and may be the output of `dmesg` - so it doesn't matter how much memory you have, what matters is how tightly these are configured to kill programs that consume more residential cpu memory than they are configured to allow.
But regardless:
1. shard this large checkpoint first as I have shown here:
https://github.com/huggingface/transformers/issues/16616#issuecomment-1103449592
2.
and switch to `transformers@main` as just this morning memory usage for sharded loading got more efficient https://github.com/huggingface/transformers/pull/16844
<|||||>Thanks for the tip on `cgroups` and `oom-killer`, I will inquire about these limits.
I think this model is already sharded into pieces `<1GB` each (see repo here: https://huggingface.co/EleutherAI/gpt-neox-20b/tree/main), so that seems less of an issue.
(more details here: #16659; https://github.com/huggingface/transformers/pull/16659#issuecomment-1104442722)
<|||||>OK, got past the memory issue, it was an issue on my end (slurm job scheduler).
This may be an issue:
```python
File ..., in load_tokenizer(model_name_or_path='./gpt-neox-20b', **kwargs={'cache_dir': ...})
16 def load_tokenizer(model_name_or_path: str = None, **kwargs) -> AutoTokenizer:
---> 17 return AutoTokenizer.from_pretrained(model_name_or_path, **kwargs)
model_name_or_path = './gpt-neox-20b'
kwargs = {'cache_dir': ...}
File .../lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py:525, in AutoTokenizer.from_pretrained(cls=<class 'transformers.models.auto.tokenization_auto.AutoTokenizer'>, pretrained_model_name_or_path='./gpt-neox-20b', *inputs=(), **kwargs={'_from_auto': True, 'cache_dir': ...})
522 tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate)
524 if tokenizer_class is None:
--> 525 raise ValueError(
526 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
527 )
528 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
530 # Otherwise we have to be creative.
531 # if model is an encoder decoder, the encoder tokenizer class is used by default
ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported.
```<|||||>I got this error while loading the model in GPU (Quadro RTX 8000 with 48GB memory):
```
Traceback (most recent call last):
File "infer.py", line 23, in <module>
beam_output = model.generate(
File "/gpt_neo/transformers/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/gpt_neo/transformers/src/transformers/generation_utils.py", line 1306, in generate
return self.sample(
File "/gpt_neo/transformers/src/transformers/generation_utils.py", line 1922, in sample
outputs = self(
File "/gpt_neo/transformers/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/gpt_neo/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 621, in forward
outputs = self.gpt_neox(
File "/gpt_neo/transformers/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/gpt_neo/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 513, in forward
outputs = layer(
File "/gpt_neo/transformers/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/gpt_neo/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 317, in forward
attention_layer_outputs = self.attention(
File "/gpt_neo/transformers/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/gpt_neo/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 155, in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
File "/gpt_neo/transformers/src/transformers/models/gpt_neox/modeling_gpt_neox.py", line 220, in _attn
raise RuntimeError()
RuntimeError
```
This is also the whole code I used:
```
model = AutoModelForCausalLM.from_pretrained(PATH, local_files_only=True).half().cuda()
tokenizer = GPTNeoXTokenizerFast.from_pretrained(PATH, local_files_only=True)
input_ids=tokenizer.encode("This is the input text", return_tensors="pt",add_special_tokens=False).cuda()
beam_output = model.generate(
input_ids=input_ids,
max_length=input_ids.shape[1]+30,
min_length=input_ids.shape[1]+5,
early_stopping=True,
num_return_sequences=4,
do_sample=True
)
```
Can anyone run the model on GPU?<|||||>@farzanehnakhaee70 when you use the generate function, there is some overhead GPU memory used, especially since num_return_sequences=4 and max_length > 30.
I know it doesn't say cuda out of memory, but it seems most likely to me.
Perhaps you could try something simpler like
```
input_ids=tokenizer.encode("text", return_tensors="pt",add_special_tokens=False).cuda()
model(input_ids)
```
To minimize memory usage and see if that works.
Then maybe
```
beam_output = model.generate(
input_ids=input_ids,
max_length=input_ids.shape[1]+5,
min_length=input_ids.shape[1]+5,
early_stopping=True,
num_return_sequences=1,
do_sample=True
)
```
Hope that works.<|||||>Thanks @ViktorThink
Unfortunately it doesn't solve the issue.
<|||||>I get the following error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
2 tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
File ~/Documents/GitHub/prompting/env/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py:423, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
421 kwargs["_from_auto"] = True
422 if not isinstance(config, PretrainedConfig):
--> 423 config, kwargs = AutoConfig.from_pretrained(
424 pretrained_model_name_or_path, return_unused_kwargs=True, trust_remote_code=trust_remote_code, **kwargs
425 )
426 if hasattr(config, "auto_map") and cls.__name__ in config.auto_map:
427 if not trust_remote_code:
File ~/Documents/GitHub/prompting/env/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:672, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
670 return config_class.from_pretrained(pretrained_model_name_or_path, **kwargs)
671 elif "model_type" in config_dict:
--> 672 config_class = CONFIG_MAPPING[config_dict["model_type"]]
673 return config_class.from_dict(config_dict, **kwargs)
674 else:
675 # Fallback: use pattern matching on the string.
File ~/Documents/GitHub/prompting/env/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py:387, in _LazyConfigMapping.__getitem__(self, key)
385 return self._extra_content[key]
386 if key not in self._mapping:
--> 387 raise KeyError(key)
388 value = self._mapping[key]
389 module_name = model_type_to_module_name(key)
KeyError: 'gpt_neox'
```
is this no longer supported?<|||||>Zphangs PR hasn't been merged with huggingface transformers yet, so you need to install transformers like this:
`pip3 install git+https://github.com/zphang/transformers.git@neox20b
`
Then it should be possible to download.<|||||>Hi everybody,
Is there any updates for the problem I had? Could anyone run the model on GPU?
Just an update from my side is that whenever I run the model in full precision, I got the memory error for which can not allocate the memory needed.<|||||>I'm trying to get this to run on multiple GPUs (8) using deepspeed, zero3, fp16. I'm hitting an out-of-memory error I believe. Machine has ~400GiB RAM. Process is killed with exit code -9.<|||||>What if the current setup for loading a pretrained model directly into fp16 in Transformers? I think the issue may be that the weights are being loaded in fp32 before calling `.half()`?<|||||>Is there an easy way to call `.half()` before loading weights?<|||||>Hi everyone, the model run on gpu with an A40 using the following code:
```{python}
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", torch_dtype=torch.float16)
model = model.to("cuda:0")
```
`.half()` is leading to the same issue reported by @farzanehnakhaee70
<|||||>Hello! I was trying to run the model on smaller GPUs (T4) using accelerate. I added one config to get rid of the 'gpt-neox' key-error (straight.
Script:
```
import torch
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
from huggingface_hub import snapshot_download
weights_path = "EleutherAI/gpt-neox-20b"
from accelerate import init_empty_weights, dispatch_model, infer_auto_device_map, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, AutoModelForSeq2SeqLM
config = AutoConfig.from_pretrained(weights_path)
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config)
tokenizer = AutoTokenizer.from_pretrained(weights_path)
#not sure if this is needed
#model.tie_weights()
device_map = infer_auto_device_map(model, no_split_module_classes=["GPTNeoXLayer"], max_memory={0:12000000000,1:12000000000,2:12000000000,3:12000000000,4:12000000000,5:12000000000,6:12000000000,7:12000000000}, dtype=torch.float16)
load_checkpoint_and_dispatch(
model,
weights_path,
device_map=device_map,
offload_folder=None,
dtype=torch.float16,
offload_state_dict=True
)
prompt = 'Huggingface is'
input_tokenized = tokenizer(prompt, return_tensors="pt")
output = model.generate(input_tokenized["input_ids"].to(0), do_sample=True)
output_text = tokenizer.decode(output[0].tolist())
```
The error I get is the same as @farzanehnakhaee70.
Somebody knows how to get the model working with accelerate?
<|||||>Much appreciated @gaetanlop . That really works.
The other problem is that the previous error still existed while using deepSpeed and also accelerate (as what @thies1006 mentioned). Do you have any solution for that? |
transformers | 15,641 | closed | Update bad_words_ids usage | This PR updates the `bad_words_ids` strategy (follow the PR #15343) and usage.
@patrickvonplaten @LysandreJik | 02-13-2022 13:38:21 | 02-13-2022 13:38:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,640 | closed | Add support for ONNX-TensorRT conversion for GPT-J6B (and possible bug in rotary embedding) | ### Who can help
@patil-suraj
## Information
Model I am using: GPT-J
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## Description
I opened this issue for two reasons:
1. This is not strictly a bug report, rather a change that enables converting this model to ONNX and then parsing it using the current TensorRT ONNX parser.
2. Possible implementation bug in GPT-J.
## Details
1. When exporting GPT-J to ONNX using the latest version (v4.16.2), one of the ops that is exported is [SplitToSequence](https://github.com/onnx/onnx/blob/main/docs/Operators.md#SplitToSequence) (along with more Sequence* ops) that is currently not supported in the [TensorRT ONNX parser](https://github.com/onnx/onnx-tensorrt/blob/master/docs/operators.md).
This is entirely due to just 1 line of code that uses `torch.repeat_interleave`. ([relevant line](https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/models/gptj/modeling_gptj.py#L67))
```
sin, cos = map(lambda t: t[None, offset : x.shape[1] + offset, None, :].repeat_interleave(2, 3), sincos)
```
By replacing `lambda t` with this:
```
lambda t: t.view(-1, 1).repeat(1, 2).view(seq_len, -1)[None, offset : x.shape[1] + offset, None, :]
```
we get the exact same output tensors but now exporting to ONNX doesn't include any Sequence* ops, and TensorRT can parse it successfully.
The suggested function is even faster, although probably not critical in this huge model (benched only on CPU):
```
original: 106 µs ± 20.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
suggested: 32.4 µs ± 6.55 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
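As a quick way to check that claim, one can inspect the exported graph for `Sequence*` ops (sketch only; `model` and `dummy_input` stand for whatever GPT-J export setup you already use):
```python
import onnx
import torch

# Export with your usual inputs/opset; the names here are placeholders.
torch.onnx.export(model, (dummy_input,), "gptj.onnx", opset_version=13)

graph = onnx.load("gptj.onnx").graph
sequence_ops = {node.op_type for node in graph.node if "Sequence" in node.op_type}
print(sequence_ops)  # should be empty once repeat_interleave is replaced
```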
2. I was following the implementation in EleutherAI for rotary positional embeddings and I'm trying to understand if this is a bug or I'm simply missing something (would love an explanation if you can spare the time) but there (EleutherAI) they implement this function (rotary positional embedding) using `torch.cat` instead of `torch.repeat_interleave`, as can be seen [here](https://github.com/EleutherAI/gpt-neox/blob/b30afd1d0a1d06220be9b5f2c9c9c1523defba96/megatron/model/positional_embeddings.py#L41).
If I'm not missing something, the EleutherAI version transforms a tensor from
```
[[1,2,3],
[4,5,6]]
```
to
```
[[1,2,3,1,2,3],
[4,5,6,4,5,6]]
```
and HF version (using repeat_interleave):
```
[[1,2,3],
[4,5,6]]
```
to
```
[[1,1,2,2,3,3],
[4,4,5,5,6,6]]
```
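For concreteness, here is a small snippet reproducing the two patterns described above:
```python
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]])

# EleutherAI (gpt-neox) style: concatenate the tensor with itself along the last dim.
print(torch.cat((x, x), dim=-1))       # [[1, 2, 3, 1, 2, 3], [4, 5, 6, 4, 5, 6]]

# HF GPT-J style: repeat each element twice along the last dim.
print(x.repeat_interleave(2, dim=-1))  # [[1, 1, 2, 2, 3, 3], [4, 4, 5, 5, 6, 6]]
```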
Can anyone confirm the current implementation is indeed correct? Because otherwise `cat` and `repeat_interleave` are very different, and the rest of the implementation doesn't take it into account. | 02-13-2022 13:04:20 | 02-13-2022 13:04:20 | Maybe also of interest to @lewtun @michaelbenayoun <|||||>Hey @tomerip thank you for this very detailed and informative feedback!
Regarding point (1), I agree that it would be nice to have a work around for the absent `SplitToSequence` op in TensorRT.
To proceed, I think we'd need to try the following:
1. Validate that the proposed change doesn't have a negative impact on fine-tuning / vanilla inference (i.e. on model metrics like accuracy etc). GPT-J is a popular model, so we need to be sure that we don't introduce a subtle error.
2. Implement an ONNX config for this architecture (see [here](https://huggingface.co/docs/transformers/serialization#exporting-a-model-for-an-unsupported-architecture) for a guide)
3. Validate that the proposed change to the modeling file works for the supported `features` (i.e. tasks) defined by step (2)
4. [Nice to have] Validate that the proposed change also works for ONNX Runtime. They support the [`SplitToSequence` op](https://github.com/microsoft/onnxruntime/blob/df841ee87d0e3c9b814dabaadb4f0490e374a3e6/docs/OperatorKernels.md), so my guess is that the proposed change would also work (although I haven't checked)
I can help out with steps 2-4, but would like to first know what the impact of step (1) is. Regarding your point (2), I'll defer to @patil-suraj who is the expert on the GPT-J models 😃 <|||||>Hi @lewtun, thanks for replying.
I agree that the first step should be making sure the change keeps everything the same.
Since the new `lambda_t` function returns _exactly_ the same outputs, and the underlying ops are just repeats, I don't think the grads will be different. In terms of performance, in my original post we can see it's even slightly faster (although probably negligible anyway).
In this short code we can see that it returns the same outputs:
```
import torch
import random
random.seed(42)
def fixed_pos_embedding(x, seq_dim=1, seq_len=None): # no changes in this function
dim = x.shape[-1]
if seq_len is None:
seq_len = x.shape[seq_dim]
inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2) / dim))
sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(seq_len), inv_freq).to(x.device).float()
return torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)
def test_lambda_t(batch_size, num_heads, seq_len, head_features, num_blocks, offset):
x1 = torch.randn(batch_size, num_heads, seq_len, head_features) # (batch, head, seq_length, head_features)
x2 = torch.randn(batch_size, num_blocks, num_heads, seq_len, head_features) # (batch, blocks, head, block_length, head_features)
for x in [x1, x2]:
sincos = fixed_pos_embedding(x, 1, seq_len=seq_len)
lambda_old = lambda t: t[None, offset : x.shape[1] + offset, None, :].repeat_interleave(2, 3)
lambda_new = lambda t: t.view(-1, 1).repeat(1, 2).view(seq_len, -1)[None, offset : x.shape[1] + offset, None, :]
sin_old, cos_old = map(lambda_old, sincos)
sin_new, cos_new = map(lambda_new, sincos)
if ((sin_old == sin_new).all() and (cos_old == cos_new).all()) == False:
return False
return True
def test_lambda_t_with_params(num_iter=10):
for _ in range(num_iter):
params = {
'batch_size': random.randint(1, 8),
'num_heads': random.randint(4, 16),
'seq_len': random.randint(32, 2048),
'head_features': random.randint(64, 512),
'num_blocks': random.randint(4, 12),
'offset': random.randint(0, 4),
}
if not test_lambda_t(**params):
return False
return True
test_lambda_t_with_params()
```
Regarding steps 2-3, actually since I hit other issues with TensorRT and GPT-J at the moment, I moved to use DeepSpeed (that currently also has some other problems with GPT-J :D), so I will probably not pursue the ONNX path soon, although in the future I intend to.
Still if @patil-suraj can comment on point (2) it would be great!<|||||>Will look into this and let you know tomorrow.<|||||>Hey @tomerip thanks a lot for the detailed explanation and for showing that the suggested change produces identical outputs!
If @patil-suraj agrees, would you like to open a PR to implement this? I can then take care of the ONNX export :)<|||||>Hi @lewtun,
Sounds great, I'd be happy to open a PR.<|||||>Hey @tomerip ! Thanks a lot for this, feel free to open a PR!
> Can anyone confirm the current implementation is indeed correct
It is correct. In GPTNeoX it is different because it uses `[seq, batch, heads, hdim]` format for `qkv` tensors but in transformers it is `[batch, seq, heads, hdim]`<|||||>Hey @patil-suraj,
Should we keep this issue open so @lewtun can take care of the ONNX export? i.e. steps 2-4 in his comment above.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>The GPT-J export was added by a community member in https://github.com/huggingface/transformers/pull/16274 |
transformers | 15,639 | closed | Pooled results for DistilBert | When I look at the architecture of Bert and DistilBert, I notice that Bert has a pooler layer while DistilBert does not. As a result, the output shape of DistilBert is [batch_size, seq_length, hidden_size] while the shape of Bert's pooled output is [batch_size, hidden_size]. I wonder why this is the case and whether it is possible to add a pooler layer for DistilBert?
My code
```
from transformers import AutoModel

bert = AutoModel.from_pretrained('bert-base-uncased')
distilbert = AutoModel.from_pretrained('distilbert-base-uncased')
```
Related question: https://github.com/huggingface/transformers/issues/1328 | 02-13-2022 05:31:07 | 02-13-2022 05:31:07 | I think BERT's pooler layer is used for next sentence prediction (NSP - one of the objective functions in BERT pretraining), see `BertForNextSentencePrediction` (and it is then used for downstream classification tasks). Since `DistilBERT` doesn't use NSP, there is no pooler layer.
However, you can simply use `DistilBertForSequenceClassification` if you intend to use the pooler layer to perform a sequence classification task.<|||||>Thanks, @ydshieh. I followed your recommendation and found it very helpful. `DistilBertForSequenceClassification` works fine, but its last layer, the classifier layer, has dim=2. I wonder if there is a flexible way to change the output layer to the default dim = 768? <|||||>You can do something like
```
import torch
from transformers import DistilBertForSequenceClassification
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=768)
logits = model(torch.ones(3, 5, dtype=torch.int32)).logits
# verify the shape
print(logits.size())
```
But `SequenceClassification` models' `num_labels` is for the number of labels in a sequence classification task, and the output is not to be considered as `pooler output`. You are using `768`, and I think you really want to have a `pooler` output. I am not sure what kind of task you are performing, **but if it is not sequence classification, and you need a `pooler` output from `DistilBert` for your specific task**, you can just create a custom model like:
```
from torch import nn
from transformers import DistilBertModel, DistilBertPreTrainedModel


class MyDistilBert(DistilBertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.config = config

        self.distilbert = DistilBertModel(config)
        # Use the name `pooler` to make the naming more meaningful for your need
        # self.pre_classifier = nn.Linear(config.dim, config.dim)
        self.pooler = nn.Linear(config.dim, config.dim)
        # Remove if your task is not sequence classification
        # self.classifier = nn.Linear(config.dim, config.num_labels)
```
See the implementation of `DistilBertForSequenceClassification` for more details
https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/models/distilbert/modeling_distilbert.py#L667-L775
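Alternatively, a minimal sketch of pooling the base model's output directly, if all you need is a 768-dim sentence vector (the pooling choice is up to you):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    last_hidden = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

cls_pooled = last_hidden[:, 0]         # vector at the [CLS] position, (batch, 768)
mean_pooled = last_hidden.mean(dim=1)  # mean over tokens, (batch, 768)
```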
<|||||>`DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=768)` works. Thanks a lot. I am trying to fine tune a DistilBert model for text classification. I added some head layers on top of the pre-trained model, and then train these layers while freezing DistilBert model layers. |
transformers | 15,638 | closed | fix bug for the log of RNG states are not properly loaded lead to exception. | # What does this PR do?
Fixes a bug where the log message for RNG states not being properly loaded raised an exception, because the misspelled `logger.infor` was called instead of `logger.info`.
The bug was found in #14994.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-12-2022 15:36:50 | 02-12-2022 15:36:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,637 | closed | fix bug for the log of RNG states are not properly loaded exception. | # What does this PR do?
Fixes a bug where the log message for RNG states not being properly loaded raised an exception.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-12-2022 15:23:32 | 02-12-2022 15:23:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,636 | closed | Report only the failed imports in `requires_backends` | # What does this PR do?
Minor QoL improvement. Current checks in `requires_backends` seem to dump all import failures at once even when only a subset fails. This PR just makes it so that only the relevant messages are communicated to the user.
Based on #11100 I think this is the only file that needs changing (no additional doc updates or some such).
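Roughly, the intended behaviour looks like this (a sketch, not the exact diff; it assumes `BACKENDS_MAPPING` keeps its existing shape of `(availability_check, error_message)` pairs):
```python
def requires_backends(obj, backends):
    if not isinstance(backends, (list, tuple)):
        backends = [backends]
    name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
    checks = (BACKENDS_MAPPING[backend] for backend in backends)
    # Only collect messages for the backends whose availability check actually fails.
    failed = [msg.format(name) for available, msg in checks if not available()]
    if failed:
        raise ImportError("".join(failed))
```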
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@sgugger
| 02-12-2022 10:37:47 | 02-12-2022 10:37:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,635 | closed | Loading a fairseq trained wav2vec2 model with transformers | ```
>>> import transformers
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
>>> processor = Wav2Vec2Processor.from_pretrained("wav2vec_small.pt")
```
gives the error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/tmp/scratch/venv/lib/python3.8/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py", line 110, in from_pretrained
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/tmp/scratch/venv/lib/python3.8/site-packages/transformers/feature_extraction_utils.py", line 304, in from_pretrained
feature_extractor_dict, kwargs = cls.get_feature_extractor_dict(pretrained_model_name_or_path, **kwargs)
File "/tmp/scratch/venv/lib/python3.8/site-packages/transformers/feature_extraction_utils.py", line 423, in get_feature_extractor_dict
text = reader.read()
File "/usr/lib/python3.8/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
```
The file i'm trying to load is the wav2vec2 base model from the paper. | 02-12-2022 03:01:02 | 02-12-2022 03:01:02 | Hey @tensorfoo, there is a [conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py) that can be used to convert original fairseq checkpoints to HF transformer format. Does this solve your issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,634 | closed | Register feature extractor | # What does this PR do?
This PR adds the register method to AutoFeatureExtractor, to allow the user to register their own feature extractor classes, like for models, configurations and tokenizers. | 02-11-2022 22:19:31 | 02-11-2022 22:19:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,633 | closed | Register feature extractor | # What does this PR do?
This PR adds the register method to `AutoFeatureExtractor`, to allow the user to register their own feature extractor classes, like for models, configurations and tokenizers (it's based on #15630 so it's better to review this once the other PR is merged). | 02-11-2022 21:03:53 | 02-11-2022 21:03:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,632 | closed | Add push to hub to feature extractor | # What does this PR do?
This PR adds support for `push_to_hub` on the feature extractors, following the same API as models, configurations and tokenizers. | 02-11-2022 20:00:16 | 02-11-2022 20:00:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,631 | closed | Remove redundant error logging in from_pretrained() method | # What does this PR do?
This PR removes the redundant error logging in the `from_pretrained()` method of:
* Configs
* Tokenizers
* Models
* Feature extractors
I also took the liberty of removing the redundant logging from the file utils. If a genuine error is raised, the user will still see it since the corresponding `EnvironmentError` is raised and provides a helpful message.
## Why do this?
In PRs #15620 and #15625 we observed that the `AutoModel.from_pretrained()` method displays a 404 error in the logs when PyTorch and TensorFlow are installed in the same environment and:
* The `transformers.onnx` package is used to export a pure TensorFlow model
* The `pipeline()` function is used to load a pure TensorFlow model
Examples of specific log messages are shown below:
```bash
# Displays 404 Client Error: Entry Not Found for url: https://huggingface.co/keras-io/transformers-qa/resolve/main/pytorch_model.bin
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```
```python
from transformers import pipeline
# Displays 404 Client Error: Entry Not Found for url: https://huggingface.co/keras-io/transformers-qa/resolve
p = pipeline("question-answering", model="keras-io/transformers-qa")
```
The reason these logs appear is that both the `transformers.onnx` package and the `pipeline()` function attempt to first load the model weights in an `AutoModel` class (see [here](https://github.com/huggingface/transformers/blob/fcb0f7439754f1b830ebac2070a0c8a8f003f7c5/src/transformers/onnx/features.py#L306) and [here](https://github.com/huggingface/transformers/blob/fcb0f7439754f1b830ebac2070a0c8a8f003f7c5/src/transformers/pipelines/base.py#L305)). If the model repo only has TensorFlow weights, the error is logged, but the API resolves it by using either `from_tf=True` or instantiating a `TFAutoModel`.
These logs are potentially confusing for end-users and it's not clear we need them since we raise `EnvironmentError` for "real" errors and provide a pretty clear message on what went wrong.
If we have to keep these logs, one alternative is to rewrite the `transformers.onnx` and `pipeline()` logic so that the error isn't raised for these edge cases (both frameworks installed, trying to load a pure TF model). I'd rather not do that, so happy to hear other suggestions! | 02-11-2022 19:59:16 | 02-11-2022 19:59:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> If we're removing them, there are plenty others, in the flax and tf modeling file, the config utils, tokenization utils and feature extractor utils.
I just want to make sure that @lewtun is agreeable to that - as sometimes a small improvement can turn into multi-day project - we could also merge this and do another PR for an extended clean up if you don't feel you want to expand the scope. IMHO, of course.<|||||>Checking the history, it was introduced by @julien-c in #8324
Do you remember if there was a specific reason to have both the error log and the error raised Julien? For context, we're dealing with errors logged when they shouldn't be if we do a try catch (try to load a PyTorch model, then if it fails try to load the TF model associated like in the pipeline),<|||||>@sgugger I think it was just the library's logging style at the time of the linked PR. No objection to changing this on my side!<|||||>Hi @stas00 @sgugger thanks for the feedback and historical context!
I did a search for all `logging.error(err)` instances and removed them - please let me know if there's some other modules I should be looking at as well. If that's it, then I think this PR is ready for another pass 😃 |
transformers | 15,630 | closed | Custom feature extractor | # What does this PR do?
This PR adds support for custom feature extractor in the Hub, with the same APIs as for models, configurations and tokenizers. | 02-11-2022 18:59:30 | 02-11-2022 18:59:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,629 | closed | Fix _configuration_file argument getting passed to model | # What does this PR do?
Currently, the following code fails:
```
from transformers import LayoutLMv2ForTokenClassification
model = LayoutLMv2ForTokenClassification.from_pretrained('microsoft/layoutxlm-base', num_labels=10)
```
or any code using this checkpoint.
This is because the `_configuration_file` argument passed to the config ends up in the kwargs passed to the model. This PR fixes that and adds a test to make sure we don't regress.
Pb was reported in #15451 | 02-11-2022 17:15:20 | 02-11-2022 17:15:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,628 | closed | make style produces a lot of reformatted lines | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.5
## Information
I am at commit `3fae83d2` (in the `master` branch). When I run `make style`, there are tons of files being reformatted (with no local changes on my side).
```
reformatted examples\pytorch\speech-pretraining\run_wav2vec2_pretraining_no_trainer.py
reformatted examples\research_projects\jax-projects\wav2vec2\run_wav2vec2_pretrain_flax.py
reformatted examples\research_projects\onnx\summarization\bart_onnx\generation_onnx.py
reformatted examples\research_projects\distillation\run_squad_w_distillation.py
reformatted examples\research_projects\wav2vec2\run_pretrain.py
reformatted examples\research_projects\movement-pruning\masked_run_glue.py
reformatted examples\research_projects\pplm\run_pplm.py
reformatted examples\research_projects\movement-pruning\masked_run_squad.py
reformatted examples\research_projects\lxmert\modeling_frcnn.py
reformatted src\transformers\generation_beam_search.py
reformatted examples\research_projects\visual_bert\modeling_frcnn.py
reformatted src\transformers\generation_flax_utils.py
reformatted src\transformers\generation_tf_utils.py
reformatted src\transformers\models\bart\tokenization_bart.py
reformatted src\transformers\modeling_tf_utils.py
....(many more)
```
Most of them are something like: `1 / (d_head**0.5)` --> `(d_head ** 0.5)`.
(not very sure if this is a problem only on my machine)
### Who can help
@sgugger @LysandreJik | 02-11-2022 16:58:25 | 02-11-2022 16:58:25 | It looks like you haven't updated your black version. Make sure to run `pip install -e .[quality]` again.<|||||>@sgugger Thanks, I am thinking maybe it's the version and trying to verify -- and you already gave the answer! |
transformers | 15,627 | closed | unable to get past create_vocabulary_from_data when using the concatenate_datasets() method to train on a combined dataset | ## Environment info
`transformers` version: 4.17.0.dev0
- Platform:Ubuntu 20.04.3 LTS
- Python version: 3.8.10
- PyTorch version (GPU?):1.10.2+cu113
- Tensorflow version (GPU?): not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @anton-l
Models:
- Wav2Vec2
Library:
- Speech
## Information
Model I am using : facebook/wav2vec2-xls-r-300m
The problem arises when using:
* [ ] the official example script: https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
The tasks I am working on is:
* [ ] dataset: using `concatenate_datasets([common_voice_dataset, openslr_dataset])` to concatenate the common_voice and openslr datasets from the Hugging Face Hub
## To reproduce
Steps to reproduce the behavior:
Download multiple Marathi or Tamil datasets, use the `concatenate_datasets()` method to concatenate them, and feed the result into `raw_datasets['train']` in the official training script.
I have been running the official script `run_speech_recognition_ctc.py` as provided in
[https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition)
The same code has been tested on Tamil, concatenating the openslr and common_voice datasets, and the same problem persists.
1. `python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="mr" \
--output_dir="./wav2vec2-common_voice-mr" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--length_column_name="input_length" \
--save_steps="400" \
--eval_steps="100" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_encoder \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--push_to_hub \
--do_train --do_eval `
## Error Message
```
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.64ba/s]
0%| | 0/1 [00:00<?, ?ba/s]
Traceback (most recent call last):
File "bnb.py", line 751, in <module>
main()
File "bnb.py", line 480, in main
vocab_dict = create_vocabulary_from_data(
File "bnb.py", line 299, in create_vocabulary_from_data
vocabs = datasets.map(
File "/home/stephen/anaconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 494, in map
{
File "/home/stephen/anaconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 495, in <dictcomp>
k: dataset.map(
File "/home/stephen/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2102, in map
return self._map_single(
File "/home/stephen/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stephen/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stephen/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper
out = func(self, *args, **kwargs)
File "/home/stephen/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2498, in _map_single
writer.write_batch(batch)
File "/home/stephen/anaconda3/lib/python3.8/site-packages/datasets/arrow_writer.py", line 498, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 1702, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1314, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 9 named vocab expected length 306 but got length 1
```
The same script works if you do `.train_test_split()` on the concatenated dataset and pass the resulting splits to `['train']` and `['eval']`.
| 02-11-2022 16:54:57 | 02-11-2022 16:54:57 | The issue was caused because the columns in train and eval didn't align; closing this issue. |
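Following up on that resolution, a sketch of one way to make the column sets line up before concatenation; the dataset configs and column names below are assumptions and should be adapted, and the same trimming needs to be applied to the eval split so that the train and eval columns match:
```python
from datasets import concatenate_datasets, load_dataset

common_voice = load_dataset("common_voice", "mr", split="train+validation")
openslr = load_dataset("openslr", "SLR64", split="train")

# Keep only the columns both datasets share so their schemas line up
shared_columns = ["audio", "sentence"]
common_voice = common_voice.remove_columns(
    [c for c in common_voice.column_names if c not in shared_columns]
)
openslr = openslr.remove_columns(
    [c for c in openslr.column_names if c not in shared_columns]
)

train_dataset = concatenate_datasets([common_voice, openslr])
```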
transformers | 15,626 | closed | [Fix doc example] FlaxVisionEncoderDecoder | # What does this PR do?
Fix some missing imports and checkpoint names in some of `FlaxVisionEncoderDecoder`'s doc examples.
@patil-suraj
| 02-11-2022 16:34:16 | 02-11-2022 16:34:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,625 | closed | Enable ONNX export when PyTorch and TensorFlow installed in the same env | # What does this PR do?
This PR implements one of the proposals discussed in #15620 to enable the ONNX export of pure TensorFlow models (i.e. model repos without PyTorch weights) when TensorFlow and PyTorch are installed in the same environment.
When both frameworks are installed, we default to using an `AutoModel` class (instead of `TFAutoModel`), and then use the `from_tf=True` argument of the `from_pretrained()` method to load the TensorFlow weights.
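In other words, the fallback boils down to something like the following standalone sketch (not the PR's exact code path):
```python
from transformers import AutoModelForQuestionAnswering

# The repo only hosts TensorFlow weights; `from_tf=True` loads them into the
# PyTorch auto class that the ONNX export expects.
model = AutoModelForQuestionAnswering.from_pretrained("keras-io/transformers-qa", from_tf=True)
```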
With this PR, it is now possible to export models like the following without error:
```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```
The user will see a 404 error in the logs from the `from_pretrained()` method:
```
404 Client Error: Entry Not Found for url: https://huggingface.co/keras-io/transformers-qa/resolve/main/pytorch_model.bin
```
but we can handle this in a separate PR if users report confusion.
Closes #15620 | 02-11-2022 14:50:33 | 02-11-2022 14:50:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the reviews 😍 !<|||||>> 404 Client Error: Entry Not Found for url: https://huggingface.co/keras-io/transformers-qa/resolve/main/pytorch_model.bin
> but we can handle this in a separate PR if users report confusion.
Well, I'm reporting this right away. We need to design a way not to report misleading and alarming info and not deal with it in a reactive way. We already have too much logging noise so this just makes things worse. And most users won't report this and just be worried.
Thanks.<|||||>Thanks for the feedback @stas00 - the 404 error is also displayed in the `pipeline()` function with e.g.
```python
from transformers import pipeline
# Load a pure TensorFlow model => see 404 Client Error in logs, but pipeline loads fine
p = pipeline("question-answering", model="keras-io/transformers-qa")
```
I'm happy to take a look at both cases and propose a fix
<|||||>That would be very awesome of you, Lewis! Thank you! |
transformers | 15,624 | closed | Mark "code in the Hub" API as experimental | # What does this PR do?
This PR marks the API to push code in the Hub as experimental. | 02-11-2022 14:39:09 | 02-11-2022 14:39:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,623 | closed | Add TF implementation of GPT-J | # What does this PR do?
This PR adds a TensorFlow implementation of `GPT-J` models
Fixes #15583
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@LysandreJik @patrickvonplaten | 02-11-2022 14:29:14 | 02-11-2022 14:29:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15623). All of your documentation changes will be reflected on that endpoint.<|||||>Super exciting! cc @Rocketknight1 and @gante<|||||>Hey @stancld 👋 Thank you for adding much-needed TensorFlow love ❤️ I will be your TensorFlow reviewer, expect a review by tomorrow morning<|||||>Hey @stancld 👋 I noticed you haven't replied to my review -- are you still interested in continuing this PR? :) If so, is there any way I can be of help? <|||||>Hello @gante, sorry for being silent for the last two weeks as I was busy with other projects. I'm gonna try to address your comments tomorrow O:] <|||||>@gante Bit late but working on the PR now :]<|||||>> @gante Bit late but working on the PR now :]
lovely 👌 lmk if you need a hand with anything, let's push this beauty to the finish line<|||||>I have uploaded the TF weights https://huggingface.co/EleutherAI/gpt-j-6B/blob/main/tf_model.h5
@gante Do you know how to save the fp16 weights in TF ? We need those for the `float16` branch. <|||||>> @gante Do you know how do you save the fp16 weights in TF ? We need those for the `float16` branch.
@patil-suraj negative 😬 @Rocketknight1, do you know how to do it?<|||||>Not easy to do in TF, unfortunately! Keras really wants you to do mixed precision, with full-precision weights. I don't think there's any "native" way to convert a `Model` to use `float16` weights except with `TFLite` stuff, or just manually converting the weight arrays yourself.<|||||>Okay, thanks for the answer.
> or just manually converting the weight arrays yourself.
so should we manually create fp16 weights for the `float16` branch ? Not sure if that affects anything in TF.<|||||>I'm not sure, basically - I don't know how you'd load them into the model class without converting them to float32. Keras has some options for forcing dtypes but I don't think they're very well-supported, so as long as we want the model in Keras and not raw TF then I don't really know how to do this. Can we just drop the `float16` branch for TF for now?<|||||>Gently pinging everyone here :) Is this PR good for merge ?<|||||>There are two outstanding issues that require light changes (@stancld):
1. https://github.com/huggingface/transformers/pull/15623#discussion_r826236349
2. https://github.com/huggingface/transformers/pull/15623#discussion_r826227556
Other than that, good to go IMO 👍
<|||||>> There are two outstanding issues that require light changes (@stancld):
>
> 1. [Add TF implementation of GPT-J #15623 (comment)](https://github.com/huggingface/transformers/pull/15623#discussion_r826236349)
> 2. [Add TF implementation of GPT-J #15623 (comment)](https://github.com/huggingface/transformers/pull/15623#discussion_r826227556)
>
> Other than that, good to go IMO 👍
@gante I'm gonna solve these issues now O:]<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@gante All the remaining issues/comments should be resolved now :] Thanks a lot for your guidance! O:]<|||||>@stancld amazing, thank you so much for this contribution! Adding these models is always a tough task, especially the last mile, so I appreciate your effort 🤗 and I'm sure the community does as well.
@patil-suraj @Rocketknight1 are you cool with me merging this PR? We won't be able to call `resize_token_embeddings` (and I have an action point to enable that), but other than that seems good to go!<|||||>Good to merge for me! Maybe just override the `resize_token_embeddings` method and raise `NotImplementedError` with a message. But don't feel strongly about this.<|||||>(@Rocketknight1 approved on slack, merging now to get a boost on Friday afternoon good vibes) |
transformers | 15,622 | closed | Add support for bitsandbytes | # What does this PR do?
Fixes #14819
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stas00 @sgugger @TimDettmers
## Status
* [ ] Need to instrument CI to install bnb (binary package so a bit trickier than normal dependency)
* [x] I have implemented a CLI parameter to support `bitsandbytes`
* [ ] I did not write any documentation yet
* [ ] I followed @TimDettmers 's [suggestion](https://github.com/huggingface/transformers/issues/14819#issuecomment-1016017746) to override the embedding layers. However, I am unsure about a couple of things:
* Does the override need to happen before the model is loaded onto the GPU as the official documentation [describes for other overrides](https://github.com/facebookresearch/bitsandbytes/blob/main/howto_config_override.md)?
* Are there any pitfalls to my current approach to identifying `Embedding` layers? It seems to work fine for `RoBERTa` and for `GPT-2`.
* [x] So far, I've used `run_mlm.py` and `run_clm.py` from the examples directory to check that the code runs. Using RTX A6000 GPUs, I see
| Model | visible devices | optimizer | per device batch size | GPU memory |
| :--- | :---- | :---- | :----: | ---: |
| gpt2-large | 0 | adamw_torch | 2 | 48638MiB / 49140MiB |
| gpt2-large | 0 | adamw_bnb | 2 | 42412MiB / 49140MiB |
| gpt2-large | 0,1 | adamw_torch | 1 | 30724MiB / 49140MiB <br> 21040MiB / 49140MiB |
| gpt2-large | 0,1 | adamw_torch | 2 | OOM |
| gpt2-large | 0,1 | adamw_bnb | 1 | 26820MiB / 49140MiB <br> 21042MiB / 49140MiB |
| gpt2-large | 0,1 | adamw_bnb | 2 | 44458MiB / 49140MiB <br> 36906MiB / 49140MiB |
| 02-11-2022 13:01:13 | 02-11-2022 13:01:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15622). All of your documentation changes will be reflected on that endpoint.<|||||>> I followed @TimDettmers 's https://github.com/huggingface/transformers/issues/14819#issuecomment-1016017746 to override the embedding layers. However, I am unsure about a couple of things:
>
> Does the override need to happen before the model is loaded onto the GPU as the official documentation [describes for other overrides](https://github.com/facebookresearch/bitsandbytes/blob/main/howto_config_override.md)?
This requirement would be a problem in some use-cases e.g. with Deepspeed ZeRO-3 which pre-loads the model directly on GPU during `from_pretrained`. But given that Deepspeed already shards the optim states over multiple-gpus it doesn't really need BNB. So we might need to instrument a counter-indication of BNB X DS-ZeRO3.
I'm not sure about other cases where a model ends up on GPU - if I'm not mistaken DS is the only one, @sgugger ?
> Are there any pitfalls to my current approach to identifying Embedding layers? It seems to work fine for RoBERTa and for GPT-2.
Commented here:
https://github.com/huggingface/transformers/pull/15622#discussion_r804948681
<|||||>Normally, when creating the optimizer, the model has been moved to the proper device already, except in the following cases:
- model parallelism (it has been split across devices already)
- deepseed and fairscale
- evaluation-only in fp16/bf16 full eval.<|||||>> Normally, when creating the optimizer, the model has been moved to the proper device already, except in the following cases:
>
> * model parallelism (it has been split across devices already)
so this is not an exception, as it's not on cpu, we just don't do it in the Trainer, but the modeling code does.
> * deepseed and fairscale
yes, except deepspeed zero3 where it's already moved to gpu - we just don't do it in trainer.
> * evaluation-only in fp16/bf16 full eval.
Check - but it's irrelevant to the optimizer.
So to summarize Sylvain's list of exceptions - in the general case the model should be already on GPU.
So we need to wait for Tim to let us know if that's a problem or whether it has a work around.<|||||>@TimDettmers, if you get a chance could you please address some of the questions to you so that this PR can be unblocked and BNB integration added to the HF Trainer? Thank you! <|||||>> > I followed @TimDettmers 's [#14819 (comment)](https://github.com/huggingface/transformers/issues/14819#issuecomment-1016017746) to override the embedding layers. However, I am unsure about a couple of things:
> > Does the override need to happen before the model is loaded onto the GPU as the official documentation [describes for other overrides](https://github.com/facebookresearch/bitsandbytes/blob/main/howto_config_override.md)?
>
> This requirement would be a problem in some use-cases e.g. with Deepspeed ZeRO-3 which pre-loads the model directly on GPU during `from_pretrained`. But given that Deepspeed already shards the optim states over multiple-gpus it doesn't really need BNB. So we might need to instrument a counter-indication of BNB X DS-ZeRO3.
>
> I'm not sure about other cases where a model ends up on GPU - if I'm not mistaken DS is the only one, @sgugger ?
>
> > Are there any pitfalls to my current approach to identifying Embedding layers? It seems to work fine for RoBERTa and for GPT-2.
>
> Commented here: [#15622 (comment)](https://github.com/huggingface/transformers/pull/15622#discussion_r804948681)
The new implementation of the override no longer depends on when the model is transferred to the GPU or when the override is registered. It takes the following signature:
```python
GlobalOptimManager.get_instance().register_module_override(module, 'weight', {'optim_bits': 32})
```
where `weight` is the parameter name to override. In this case, one can use this on the embedding layer (skipping the positional embedding).
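A minimal sketch of how this override could be applied to a model's word embeddings; the name-based filter for positional embeddings is an assumption and depends on the model's module naming:
```python
import torch.nn as nn
from bitsandbytes.optim import GlobalOptimManager


def register_embedding_overrides(model, optim_bits=32):
    # Keep full-precision optimizer state for word-embedding weights while the
    # remaining parameters use the 8-bit optimizer states.
    manager = GlobalOptimManager.get_instance()
    for name, module in model.named_modules():
        if isinstance(module, nn.Embedding) and "position" not in name:
            manager.register_module_override(module, "weight", {"optim_bits": optim_bits})
```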
<|||||>> Normally, when creating the optimizer, the model has been moved to the proper device already, except in the following cases:
>
> * model parallelism (it has been split across devices already)
> * deepseed and fairscale
> * evaluation-only in fp16/bf16 full eval.
These issues should be resolved with the new parameter override which is independent of when the parameters are transferred to the device.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@stas00 I caught up with your work on testing and with @sgugger's subsequent requests. Is there anything else I should do on this PR for it to be ready to merge?<|||||>Just waiting for @sgugger to have one last look after moving the `require_*` decorators to `testing_utils.py` and I think this is good to be merged.<|||||>Hi there!
Jumping up on this PR after it has been merged
It appears that 2 tests of this PR are not passing:
```
def test_bnb_adam8bit_no_bnb(self):
args = TrainingArguments(optim=OptimizerNames.ADAMW_BNB, output_dir="None")
# Pretend that bnb does not exist, even if installed. By setting bnb to None, importing
# bnb will fail even if bnb is installed.
with patch.dict("sys.modules", {"bnb.optim": None}):
with self.assertRaises(ValueError):
Trainer.get_optimizer_cls_and_kwargs(args)
```
This test should be fixed in https://github.com/huggingface/transformers/pull/18584, as the failure was caused by a very small typo. As for the second test,
```
test_run_seq2seq_bnb
```
I suspect it has never been run on our side: it is the only test in `transformers` that requires `bitsandbytes`, and it was always skipped because we did not install bitsandbytes on our Docker image until PR #17901 (see for example this [page](https://github.com/huggingface/transformers/runs/7664727805?check_suite_focus=true): `tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_bnb SKIPPED [ 21%]`).
But I did not manage to reproduce the failing test (it passes on my testing VM both with the latest `bitsandbytes` and with the version we use on the Docker image, `bitsandbytes==0.31.5`).
cc @TimDettmers @manuelciosici @stas00 <|||||>Thank you, @younesbelkada
@ydshieh, do you think it'd be OK to add `bitsandbytes` to the nightly tests workflow so that bnb tests run?
the installation is just cuda-version specific:
```
pip install bitsandbytes-cudaXXX
```
https://github.com/facebookresearch/bitsandbytes#requirements--installation<|||||>@stas00 I could add it and see how things go.
But @younesbelkada added it to the scheduled CI (which means to run on GPU) with
```
RUN python3 -m pip install -i https://test.pypi.org/simple/ bitsandbytes==0.31.5
```
I am a bit confused by why there was no `cuda` there.<|||||>Thanks @stas00 and @ydshieh !
Adding the CUDA version is not needed anymore, and the Facebook repo is not the one we should refer to from now on, according to @TimDettmers.
pip install bitsandbytes should be sufficient for now (I have to update the Dockerfile though)<|||||>Here is the repo we have to refer to: https://github.com/TimDettmers/bitsandbytes<|||||>oh, ok, I missed that you already added it, nothing to do then.
@TimDettmers, would it be possible to archive the original repo and post a link to the new repo on top of its README, since otherwise users will have no idea to use the new repo instead. thank you!<|||||>Also note that we are linking to the old repo:
```
examples/research_projects/robust-speech-event/README.md:[bitsandbytes](https://github.com/facebookresearch/bitsandbytes) to replace the
docs/source/en/perf_train_gpu_one.mdx:- 2 bytes * number of parameters for 8-bit AdamW optimizers like [bitsandbytes](https://github.com/facebookresearch/bitsandbytes)
docs/source/en/perf_train_gpu_one.mdx:On the other hand [8bit BNB optimizer](https://github.com/facebookresearch/bitsandbytes) can save 3/4 of memory normally used by a typical AdamW optimizer if it is configured to quantize all optimizer states, but in some situations only some optimizer states are quintized and then more memory is used. XXX: update once https://github.com/huggingface/transformers/pull/15622 is merged.
docs/source/en/perf_train_gpu_one.mdx:In contrast to the previous approaches is this one not integrated into the [`Trainer`] as a simple flag. We need to install the 8-bit optimizer and then pass it as a custom optimizer to the [`Trainer`]. Follow the installation guide in the Github [repo](https://github.com/facebookresearch/bitsandbytes) to install the `bitsandbytes` library that implements the 8-bit Adam optimizer.
```
@TimDettmers, should we fix those to point to the new repo instead?<|||||>> But I did not managed to reproduce the failing test (it is passing on my testing VM with the latest bitsandbytes + the one we use on the Docker image (bitsandbytes==0.31.5).
Hi @younesbelkada, are you running inside a docker container on a VM similar to the CI runners (Nvidia T4)?
Could you try to get the values `gpu_peak_mem_orig` and `gpu_peak_mem_bnb`?
<|||||>Hi @ydshieh ! I am running on a VM similar to CI runners, let me re try to reproduce as you suggested<|||||>The test is passing on my VM.. for the VM I get:
```
gpu_peak_mem_orig
243490816
gpu_peak_mem_bnb
8053248
```
But I re-ran the test with `CUDA_VISIBLE_DEVICES=0` (running the test on a single GPU) and the test failed. Maybe the test was designed for a multi-GPU setup. Do you think we should just add the `@require_torch_multigpu` decorator to this test?
EDIT: I saw that even on multi-gpu the test was failing on the docker container<|||||>In a single GPU setup:
```
gpu_peak_mem_bnb
510833664
gpu_peak_mem_orig
509707264
``` |
transformers | 15,621 | closed | TPU slow finetuning T5-base | Hi,
I am trying to train a T5-base on Colab with the TPU. I am using the official code to fine-tune T5-base (on my dataset), but training with the TPU is extremely slow! I'm using the official [code](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py).
I am attaching the colab code with the various libraries I have installed: [notebook](https://colab.research.google.com/drive/1g4PYEOa0zhJnMaThPZVSQepn_wr9md4f#scrollTo=XNVao-sd4fbH).
Also, if I try to increase the batch size to >= 64, I get a memory error, as there seem to be only about 8 GB available.
Can someone help me? Thank you! | 02-11-2022 12:00:05 | 02-11-2022 12:00:05 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>@LysandreJik Done days ago, but no answer yet<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,620 | closed | [RFC] Add framework argument to ONNX export | # What does this PR do?
This PR addresses an edge case introduced by #13831 where the ONNX export fails if:
* Both `torch` _and_ `tensorflow` are installed in the same environment
* The user tries to export a _pure_ TensorFlow model (i.e. a model repo without PyTorch weights)
Here is an example that fails to export on the `master` branch:
```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```
<details><summary>Traceback</summary>
```
404 Client Error: Entry Not Found for url: https://huggingface.co/keras-io/transformers-qa/resolve/main/pytorch_model.bin
Traceback (most recent call last):
File "/Users/lewtun/git/transformers/src/transformers/modeling_utils.py", line 1358, in from_pretrained
resolved_archive_file = cached_path(
File "/Users/lewtun/git/transformers/src/transformers/file_utils.py", line 1904, in cached_path
output_path = get_from_cache(
File "/Users/lewtun/git/transformers/src/transformers/file_utils.py", line 2108, in get_from_cache
_raise_for_status(r)
File "/Users/lewtun/git/transformers/src/transformers/file_utils.py", line 2031, in _raise_for_status
raise EntryNotFoundError(f"404 Client Error: Entry Not Found for url: {request.url}")
transformers.file_utils.EntryNotFoundError: 404 Client Error: Entry Not Found for url: https://huggingface.co/keras-io/transformers-qa/resolve/main/pytorch_model.bin
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/lewtun/miniconda3/envs/transformers/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/lewtun/miniconda3/envs/transformers/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/lewtun/git/transformers/src/transformers/onnx/__main__.py", line 77, in <module>
main()
File "/Users/lewtun/git/transformers/src/transformers/onnx/__main__.py", line 51, in main
model = FeaturesManager.get_model_from_feature(args.feature, args.model)
File "/Users/lewtun/git/transformers/src/transformers/onnx/features.py", line 307, in get_model_from_feature
return model_class.from_pretrained(model)
File "/Users/lewtun/git/transformers/src/transformers/models/auto/auto_factory.py", line 447, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/Users/lewtun/git/transformers/src/transformers/modeling_utils.py", line 1394, in from_pretrained
raise EnvironmentError(
OSError: keras-io/transformers-qa does not appear to have a file named pytorch_model.bin but there is a file for TensorFlow weights. Use `from_tf=True` to load this model from those weights.
```
</details>
The reason this fails is because the `FeaturesManager.get_model_class_for_feature()` method uses the `_TASKS_TO_AUTOMODELS` mapping to determine which autoclass (e.g `AutoModel` vs `TFAutoModel`) to return for a given task. This mapping relies on the following branching logic:
```python
if is_torch_available():
_TASKS_TO_AUTOMODELS = {
"default": AutoModel, ...
}
elif is_tf_available():
_TASKS_TO_AUTOMODELS = {
"default": TFAutoModel, ...
}
else:
_TASKS_TO_AUTOMODELS = {}
```
As a result, if a user has `torch` and `tensorflow` installed, we return an `AutoModel` class instead of the desired `TFAutoModel` class. In particular, Colab users cannot export pure TensorFlow models because `torch` is installed by default.
## Proposal
To address this issue, I've introduced a new `framework` argument in the ONNX CLI and extended `_TASKS_TO_AUTOMODELS` to be a nested `dict` when both frameworks are installed. With this change, one can now export pure TensorFlow models with:
```bash
python -m transformers.onnx --model=keras-io/transformers-qa --framework=tf onnx/
```
Similarly, pure PyTorch models can be exported as follows:
```bash
python -m transformers.onnx --model=lewtun/bert-finetuned-squad --framework=pt onnx/
```
And checkpoints with both sets of weights also works:
```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```
Although the implementation works, I'm not entirely happy with it because `_TASKS_TO_AUTOMODELS` changes (flat vs nested) depending on the installation environment, and this feels hacky.
## Alternative solution 1
Thanks to a tip from @stas00, one solution is to change nothing and get the user to specify which framework they're using as an environment variable, e.g.
```bash
USE_TORCH=0 USE_JAX=0 USE_TF=1 python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```
If we adopt this approach, we could provide a warning when both `torch` and `tensorflow` are installed and suggest an example like the one above.
## Alternative solution 2
It occurred to me that we can solve this with a simple try/except in `FeaturesManager.get_model_from_feature()` as follows:
```python
def get_model_from_feature(feature: str, model: str) -> Union[PreTrainedModel, TFPreTrainedModel]:
# By default we return `AutoModel` if `torch` and `tensorflow` are installed
model_class = FeaturesManager.get_model_class_for_feature(feature)
try:
model = model_class.from_pretrained(model)
except OSError:
# Load the TensorFlow weights in `AutoModel`
model = model_class.from_pretrained(model, from_tf=True)
return model
```
The user will still see a 404 error in the logs
```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
# 404 Client Error: Entry Not Found for url: https://huggingface.co/keras-io/transformers-qa/resolve/main/pytorch_model.bin
```
but the conversion to ONNX will work once the TensorFlow weights are loaded in the `AutoModel` instance. Note: this solution seems to be similar to the one adopted in the `pipeline()` function, e.g.
```python
from transformers import pipeline
# Load a pure TensorFlow model => see 404 Client Error in logs, but pipeline loads fine
p = pipeline("question-answering", model="keras-io/transformers-qa")
```
The advantage of this approach is that the user doesn't have to manually specify a `--framework` arg, i.e. it "just works". The only drawback I see is that there might be differences between the `torch.onnx` and `tf2onnx` packages used for the ONNX export, and by using `torch.onnx` as the default we may mislead users on where to debug their exports. However, this is probably a rare case and could be revisited if users report problems.
Feedback on which approach is preferred is much appreciated! | 02-11-2022 08:50:41 | 02-11-2022 08:50:41 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15620). All of your documentation changes will be reflected on that endpoint.<|||||>> I personally think the solution 2 would be better for the user, as it "just works". We can investigate the provenance of the error log and try to remove it if it's an issue, but it would be better than adding a new arg :-)
Thanks for the feedback @sgugger ❤️ ! Having thought about it a bit more, I agree that solution 2 is the simplest and less error-prone: I've opened a PR for this here https://github.com/huggingface/transformers/pull/15625 |
transformers | 15,619 | closed | Why is my train_samples_per_second graph growing? | I'm training T5 so I'll tag @patrickvonplaten in the off chance it's something specific and not an obvious thing I'm missing.
Why is my train_samples_per_second graph linearly growing? I didn't set an increasing batch size, so it should be constant, right?
<img width="365" alt="image" src="https://user-images.githubusercontent.com/31738272/153556974-46775320-cadb-4327-8ac0-31a3ab231026.png">
| 02-11-2022 08:10:35 | 02-11-2022 08:10:35 | Hey @drunkinlove,
Could you specify which training you used exactly? Do you use the official HF Trainer : https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer? <|||||>@patrickvonplaten Hi! I did indeed, here are the arguments I specified:
- output_dir="checkpoints",
- logging_dir="logs",
- logging_strategy="steps",
- logging_steps=100,
- per_device_train_batch_size=4,
- num_train_epochs=100,
- max_steps=10000,
- warmup_steps=5000,
- weight_decay=0.01,
- evaluation_strategy="steps",
- save_strategy="steps",
- eval_steps=5000,
- save_steps=5000,
- save_total_limit=100,
- fp16=False,
- gradient_accumulation_steps=1
Am I even reading this graph correctly? It's supposed to be 'amount of training samples seen per second' vs 'total seconds passed', right? It looks really suspicious, I'm considering logging the time after each step myself to make my own graph.<|||||>Hey @drunkinlove,
I looked a bit into it. This graph should actually not show a line really but just a single scalar value for a single training run. If you look into this code line: https://github.com/huggingface/transformers/blob/b87c044c79a408c0a1e7f7b046b5b4ce999c2d0e/src/transformers/trainer.py#L1546,
you can see that the `train_samples_per_second` metric is computed only once during training (after the training is finished) and the tensorboard should only show a single scalar as it is done here: https://huggingface.co/patrickvonplaten/xls-r-300m-sv-cv8/tensorboard
So, I don't really understand how it is possible that you see a linear graph there. Do you have a link to your repo on the Hub?
The metric `train_samples_per_second` literally just says "How many training samples were treated per second" so it's a very easy formula: `number of all training samples` / `total time of training`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,618 | closed | DDP training hangs with `run_glue.py` and `run_seq2seq.py` | ## Environment info
- `transformers` version: 4.16.2
- Platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger @patrickvonplaten @stas00
## Information
Model I am using (Bert, XLNet ...): Bert for `run_glue.py`, UnifiedQA for `run_seq2seq_qa.py`
The problem arises when using:
* [x] the official example scripts:
[run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py), [run_seq2seq_qa.py](https://github.com/huggingface/transformers/commits/master/examples/pytorch/question-answering/run_seq2seq_qa.py) (I may have fixed some bugs in the latter).
* [ ] my own modified scripts:
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: GLUE, SQUAD
* [ ] my own task or dataset:
## To reproduce
The issue is that training proceeds as expected on a single GPU, but when run on multiple GPUs, it simply hangs.
Trying to debug using `py-spy` and `faulthandler`, the first issue I observed was in blocks using `main_process_first` context manager introduced by [#12367](https://github.com/huggingface/transformers/pull/12367). Eg. [run_glue.py:L406](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/examples/pytorch/text-classification/run_glue.py#L406).
Removing this context manager allowed the script to proceed but now it gets stuck at [torch.nn.parallel.distributed:578](https://github.com/pytorch/pytorch/blob/302ee7bfb604ebef384602c56e3853efed262030/torch/nn/parallel/distributed.py#L578) in `DistributedDataParallel`'s `__init__()`.
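For context, a simplified sketch of what that context manager does under the hood (the real implementation in `TrainingArguments` handles more launchers and edge cases); hanging here and hanging in DDP's `__init__` both point at the same inter-process barrier/communication primitives:
```python
from contextlib import contextmanager

import torch.distributed as dist


@contextmanager
def main_process_first(local_rank):
    # Replicas wait at a barrier while the main process runs the block (e.g. a
    # cached `datasets.map`); the main process then waits for the replicas to catch up.
    is_main = local_rank in (-1, 0)
    try:
        if not is_main:
            dist.barrier()
        yield
    finally:
        if is_main and local_rank != -1:
            dist.barrier()
```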
1. DDP on GLUE:
Command:
```
CUDA_VISIBLE_DEVICES=1,2 python -m torch.distributed.launch \
--nproc_per_node 2 run_glue.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 --report_to none \
--output_dir mnli_output/ --log_level debug --log_level_replica warning
```
Trace from faulthandler:
```
Thread 0x00007f592d2d7340 (most recent call first):
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 578 in __init__
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/transformers/trainer.py", line 1051 in _wrap_model
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/transformers/trainer.py", line 1223 in train
File "/home/shivag5/Documents/research/qa/unifiedqa/pytorch/run_glue.py", line 493 in main
File "/home/shivag5/Documents/research/qa/unifiedqa/pytorch/run_glue.py", line 575 in <module>
```
2. DDP on Squad:
Command:
```
CUDA_VISIBLE_DEVICES=1,2 python -m torch.distributed.launch \
--nproc_per_node 2 run_seq2seq_qa.py \
--model_name_or_path t5-small \
--dataset_name squad_v2 \
--context_column context \
--question_column question \
--answer_column answers \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--report_to none \
--output_dir runs/squad-multigpu-test \
--overwrite_output_dir
--log_level debug --log_level_replica warning
```
Similar trace.
<details><summary>After a while it times out and ends with:</summary>
```
[E ProcessGroupNCCL.cpp:587] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=1800000) ran for 1802784 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:587] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=1800000) ran for 1802688 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=1800000) ran for 1802784 milliseconds before timing out.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 1323928 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 1 (pid: 1323929) of binary: /home/shivag5/miniconda3/envs/transformers/bin/python
Traceback (most recent call last):
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/shivag5/miniconda3/envs/transformers/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
========================================================
run_glue.py FAILED
--------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
--------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-02-10_19:37:42
rank : 1 (local_rank: 1)
exitcode : -6 (pid: 1323929)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 1323929
========================================================
```
</details>
I have also tried running without `torch.distributed.launch`, e.g. `CUDA_VISIBLE_DEVICES=1,2 python run_seq2seq_qa.py [args]`. This produces the error from #14128 and also gets stuck right at the start of training (presumably at a different point, as it does manage to produce the progress bar, unlike DDP).
## Expected behavior
The training with multiple GPUs hangs even with example scripts run with the suggested commands. | 02-11-2022 03:49:14 | 02-11-2022 03:49:14 | I'm testing with transformers@master.
You're using `CUDA_VISIBLE_DEVICES=1,2` - is it intentionally because you don't want to use the gpu 0? I trust that you wanted to use gpus 1 and 2 and skip gpu 0.
If you don't want to select specific gpus you don't need to use `CUDA_VISIBLE_DEVICES`. See: https://huggingface.co/docs/transformers/master/en/main_classes/trainer#specific-gpus-selection
Now to your report:
1. the first one works just fine for me - no hanging - try `transformers@master` version?
I did:
```
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \
examples/pytorch/text-classification/run_glue.py --model_name_or_path \
bert-large-uncased-whole-word-masking --task_name mnli --do_train --do_eval \
--max_seq_length 128 --per_device_train_batch_size 2 --learning_rate 2e-5 \
--num_train_epochs 3.0 --report_to none --output_dir mnli_output/ --log_level \
debug --log_level_replica warning
```
<|||||>2. the 2nd one doesn't work even with 1 gpu, it fails:
```
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node 1 \
examples/pytorch/question-answering/run_seq2seq_qa.py --model_name_or_path \
t5-small --dataset_name squad_v2 --context_column context --question_column \
question --answer_column answers --do_train --do_eval \
--per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 \
--max_seq_length 384 --doc_stride 128 --report_to none --output_dir \
runs/squad-multigpu-test --overwrite_output_dir --log_level debug \
--log_level_replica warning
[...]
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 678, in <module>
main()
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 516, in main
eval_dataset = eval_examples.map(
File "/home/stas/anaconda3/envs/py38-pt110/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2102, in map
return self._map_single(
File "/home/stas/anaconda3/envs/py38-pt110/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt110/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt110/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper
out = func(self, *args, **kwargs)
File "/home/stas/anaconda3/envs/py38-pt110/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2498, in _map_single
writer.write_batch(batch)
File "/home/stas/anaconda3/envs/py38-pt110/lib/python3.8/site-packages/datasets/arrow_writer.py", line 498, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 1613, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1232, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 4 named labels expected length 1007 but got length 1000
```
I think a similar issue is addressed here: https://github.com/huggingface/datasets/issues/1817
As I'm not familiar with this script I will let its creator address this second issue. Then we can check the hanging issue once it can be run. @karthikrangasai, could you please have a look at why this script breaks with `transformers@master`? Thank you!
The relevant part of my setup:
```
datasets 1.18.3
transformers 4.17.0.dev0
arrow 1.1.1
pyarrow 5.0.0
```
<|||||>> You're using CUDA_VISIBLE_DEVICES=1,2 - is it intentionally because you don't want to use the gpu 0? I trust that you wanted to use gpus 1 and 2 and skip gpu 0.
Yeah, I didn't want to use GPU 0 as it was being used by another job.
</br>
> the first one works just fine for me - no hanging - try transformers@master version?
I haven't tried the master version - tried 4.16.2 and I think one of 4.15's. I'll try the master too. Did it work for you even with the `main_process_first` context manager?
</br>
> the 2nd one doesn't work even with 1 gpu, it fails:
Yeah, to get past this (bug?) I commented-out lines [456-457](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/examples/pytorch/question-answering/run_seq2seq_qa.py#L456) and lines [465-474](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/examples/pytorch/question-answering/run_seq2seq_qa.py#L465).<|||||>> > You're using CUDA_VISIBLE_DEVICES=1,2 - is it intentionally because you don't want to use the gpu 0? I trust that you wanted to use gpus 1 and 2 and skip gpu 0.
>
> Yeah, I didn't want to use GPU 0 as it was being used by another job.
Thank you for confirming that
> > the first one works just fine for me - no hanging - try transformers@master version?
>
> I haven't tried the master version - tried 4.16.2 and I think one of 4.15's. I'll try the master too. Did it work for you even with the `main_process_first` context manager?
Why won't it? It's just a set of `barrier` calls that syncs several processes.
I wonder - is it somehow possible that your gpus can't talk to each other? So any time you run into a `barrier` things fail?
Please try this diagnostics tool:
https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-gpu-test.py
> > the 2nd one doesn't work even with 1 gpu, it fails:
>
> Yeah, to get past this (bug?) I commented-out lines [456-457](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/examples/pytorch/question-answering/run_seq2seq_qa.py#L456) and lines [465-474](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/examples/pytorch/question-answering/run_seq2seq_qa.py#L465).
Oh, cool. I followed your instructions and there is no hanging.
Since you have 2 unrelated programs hang, I'd say you have a hardware issue there.<|||||>Thanks for sharing the diagnostics script. I ran it on two nodes on our cluster. First, on the one that I have been using for all the above experiments, I ran it like this:
```
CUDA_VISIBLE_DEVICES=4,5,6,7 python -m torch.distributed.run --nproc_per_node 4 --nnodes 1 torch-distributed-gpu-test.py
```
It got stuck at the first barrier (L65) as indicated by `faulthandler`. Since I couldn't remove the CUDA_VISIBLE_DEVICES setting on this node (some gpus were being used), to ablate, I ran the script on another node with all gpus idle - same command minus the env var. This time it hung at the second barrier (L77).
This does indicate communication issues b/w the GPUs but I also know people in my group who have been running distributed training on these nodes, albeit without Huggingface Trainer. Lemme check with them and the sysads to see if there is actually an issue. Thanks for the help till now!<|||||>This is very interesting.
Were those people in your group running pytorch or tensorflow and if pytorch which version (specifically if they were using a different version than you).
I'd say your next call is debugging NCCL, so you can set
```
NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node=4
```
and see if you catch any diagnostics in there. You may have some information that you need in those logs.
also `NCCL_DEBUG_SUBSYS=ALL`, see: https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-debug-subsys
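(If it's easier to set these from the launching script rather than on the command line, a tiny hedged equivalent - the variables just need to be set before the first NCCL communicator is created:)
```python
import os

# Same effect as prefixing the command with NCCL_DEBUG=... ; must happen before
# torch.distributed.init_process_group() triggers the first NCCL call.
os.environ.setdefault("NCCL_DEBUG", "INFO")
os.environ.setdefault("NCCL_DEBUG_SUBSYS", "ALL")  # e.g. "INIT,NET" for a narrower view
```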
If the NCCL debug won't give you enough hints to understand the problem, the best next plan of action is to file an Issue at
https://github.com/pytorch/pytorch as this Issue is clearly not related to the HF Trainer or Transformers. Use my script in your report and not the example code as the devs would want a simple reproducible code.
Once you find a resolution please do share it with us. I have only encountered this kind of issue when some nodes on an HPC were getting stuck and needed rebooting, so this looks like a potentially new issue that I haven't encountered yet.
I hope it's something very simple, like an odd firewall rule or something. Perhaps https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/env.html#nccl-socket-ifname IF "eth" sometimes doesn't work due to firewall rules, then you can try to set this env var: `NCCL_SOCKET_IFNAME=lo` (`ifconfig` to see what interfaces are visible).
<|||||>Thanks, @stas00 for the suggestions. I ran the script with the envvars you suggested. Here are the output logs:
```NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 8 --nnodes 1 torch-distributed-gpu-test.py```: [log0.txt](https://github.com/huggingface/transformers/files/8055103/log0.txt)
```NCCL_DEBUG_SUBSYS=NET NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 8 --nnodes 1 torch-distributed-gpu-test.py```: [log1.txt](https://github.com/huggingface/transformers/files/8055099/log1.txt)
```NCCL_DEBUG_SUBSYS=ALL NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 8 --nnodes 1 torch-distributed-gpu-test.py```: [log2.txt](https://github.com/huggingface/transformers/files/8055097/log2.txt)
Since I can't interpret much of it, do you think these suggest communication issues between the GPUs? I did notice messages like `Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)`, but then it seems to be able to create all the communication channels eventually.<|||||>It's pretty clear from the logs that "Could not enable P2P between dev A and dev B" is your problem - I don't have this if I try it on my setup - I'm not sure how you derived that:
> but then it seems to be able to create all communication channels eventually.
which part of the log points to that success? `Init COMPLETE` refers to the set up of each "process" but doesn't say anything about its ability to inter-connect.
Googling it, it appears to be a very rarely reported issue. I see some older pytorch reports, and one report had success with switching to cuda-11 - I'm not sure if yours is 10 or 11. But I'm pretty sure this is not cuda related. Most likely it's some kind of firewall rule that causes this issue.
Also are you using the gpus directly or via some kind of a virtual machine or docker?
In your first log file one rank reports OK, in the 3rd one 2 ranks do, which means that some processes can connect to all, but not all - and since some succeeded, that means all can send but some cannot receive. So again, potentially this is some firewall receive-deny rule, maybe? But it also appears to be inconsistent - the same ranks don't succeed across different runs. A pairwise check like the sketch below could help narrow down whether it is the sending or the receiving side that fails.
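(A hypothetical pairwise check - not from this thread - again launched via `torch.distributed.run`; NCCL point-to-point send/recv needs a reasonably recent pytorch, >= 1.8:)
```python
import torch
import torch.distributed as dist

# Rank 0 sends to each other rank and waits for an echo, which separates
# "cannot send" from "cannot receive" for each peer.
dist.init_process_group("nccl")
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank)
t = torch.ones(1, device="cuda")
for other in range(1, world):
    if rank == 0:
        dist.send(t, dst=other)
        dist.recv(t, src=other)
        print(f"rank 0 <-> rank {other}: OK", flush=True)
    elif rank == other:
        dist.recv(t, src=0)
        dist.send(t, dst=0)
```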
What happens if you try to use the loopback device `lo` instead of the `eno0` as I suggested above.
```
NCCL_SOCKET_IFNAME=lo NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 8 --nnodes 1 torch-distributed-gpu-test.py
```
I assume you have it.
Do you use a firewall? and if so check its logs - could be in `/var/log/syslog`, or for ufw `/var/log/ufw.log`. I'd check `/var/log/syslog` anyway for any error reports.
I'd also try subsets instead of all 8 ranks and perhaps you will see which ranks have an issue? e.g. does `--nproc_per_node 2` succeed?
<|||||>Thanks for the clarification. I didn't fully understand the logs and was hoping you'd have a better idea. I thought what I did because of the `Connected all trees` and `Connected all rings` messages, and because there were multiple `Could not enable P2P` messages for every pair of devices. So the hypothesis was that maybe it makes multiple attempts to create the p2p connections, printing an output on each failure, but eventually succeeds.
Here are the logs for the other settings you asked for (not much difference):
<details><summary>
`NCCL_SOCKET_IFNAME=lo NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 8 --nnodes 1 torch-distributed-gpu-test.py`
</summary>
```
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
ava-s0:537553:537553 [0] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:537553:537553 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:537553:537553 [0] NCCL INFO NET/IB : No device found.
ava-s0:537553:537553 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:537553:537553 [0] NCCL INFO Using network Socket
NCCL version 2.10.3+cuda11.3
ava-s0:537559:537559 [6] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:537559:537559 [6] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:537559:537559 [6] NCCL INFO NET/IB : No device found.
ava-s0:537559:537559 [6] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:537559:537559 [6] NCCL INFO Using network Socket
ava-s0:537556:537556 [3] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:537556:537556 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:537556:537556 [3] NCCL INFO NET/IB : No device found.
ava-s0:537556:537556 [3] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:537556:537556 [3] NCCL INFO Using network Socket
ava-s0:537557:537557 [4] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:537557:537557 [4] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:537557:537557 [4] NCCL INFO NET/IB : No device found.
ava-s0:537557:537557 [4] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:537557:537557 [4] NCCL INFO Using network Socket
ava-s0:537555:537555 [2] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:537555:537555 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:537555:537555 [2] NCCL INFO NET/IB : No device found.
ava-s0:537555:537555 [2] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:537555:537555 [2] NCCL INFO Using network Socket
ava-s0:537554:537554 [1] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:537554:537554 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:537554:537554 [1] NCCL INFO NET/IB : No device found.
ava-s0:537554:537554 [1] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:537554:537554 [1] NCCL INFO Using network Socket
ava-s0:537560:537560 [7] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:537560:537560 [7] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:537560:537560 [7] NCCL INFO NET/IB : No device found.
ava-s0:537560:537560 [7] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:537560:537560 [7] NCCL INFO Using network Socket
ava-s0:537558:537558 [5] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:537558:537558 [5] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:537558:537558 [5] NCCL INFO NET/IB : No device found.
ava-s0:537558:537558 [5] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:537558:537558 [5] NCCL INFO Using network Socket
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537558:537635 [5] NCCL INFO Trees [0] 6/-1/-1->5->4 [1] 6/-1/-1->5->4
ava-s0:537558:537635 [5] NCCL INFO Setting affinity for GPU 5 to ffff0000,ffff0000
ava-s0:537559:537623 [6] NCCL INFO Trees [0] 7/-1/-1->6->5 [1] 7/-1/-1->6->5
ava-s0:537559:537623 [6] NCCL INFO Setting affinity for GPU 6 to ffff0000,ffff0000
ava-s0:537560:537628 [7] NCCL INFO Trees [0] -1/-1/-1->7->6 [1] -1/-1/-1->7->6
ava-s0:537560:537628 [7] NCCL INFO Setting affinity for GPU 7 to ffff0000,ffff0000
ava-s0:537553:537622 [0] NCCL INFO Channel 00/02 : 0 2 3 4 5 7 6 1
ava-s0:537554:537627 [1] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0
ava-s0:537555:537626 [2] NCCL INFO Trees [0] 3/-1/-1->2->1 [1] 3/-1/-1->2->1
ava-s0:537553:537622 [0] NCCL INFO Channel 01/02 : 0 2 3 4 5 7 6 1
ava-s0:537554:537627 [1] NCCL INFO Setting affinity for GPU 1 to ffff,0000ffff
ava-s0:537556:537624 [3] NCCL INFO Trees [0] 4/-1/-1->3->2 [1] 4/-1/-1->3->2
ava-s0:537555:537626 [2] NCCL INFO Setting affinity for GPU 2 to ffff,0000ffff
ava-s0:537557:537625 [4] NCCL INFO Trees [0] 5/-1/-1->4->3 [1] 5/-1/-1->4->3
ava-s0:537553:537622 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
ava-s0:537556:537624 [3] NCCL INFO Setting affinity for GPU 3 to ffff,0000ffff
ava-s0:537557:537625 [4] NCCL INFO Setting affinity for GPU 4 to ffff0000,ffff0000
ava-s0:537553:537622 [0] NCCL INFO Setting affinity for GPU 0 to ffff,0000ffff
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537559:537623 [6] NCCL INFO Channel 00 : 6[b1000] -> 1[1b000] via direct shared memory
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537553:537622 [0] NCCL INFO Channel 00 : 0[1a000] -> 2[3d000] via P2P/IPC
ava-s0:537555:537626 [2] NCCL INFO Channel 00 : 2[3d000] -> 3[3e000] via direct shared memory
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537559:537623 [6] NCCL INFO Channel 01 : 6[b1000] -> 1[1b000] via direct shared memory
ava-s0:537553:537622 [0] NCCL INFO Channel 01 : 0[1a000] -> 2[3d000] via P2P/IPC
ava-s0:537555:537626 [2] NCCL INFO Channel 01 : 2[3d000] -> 3[3e000] via direct shared memory
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537557:537625 [4] NCCL INFO Channel 00 : 4[88000] -> 5[89000] via direct shared memory
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537557:537625 [4] NCCL INFO Channel 01 : 4[88000] -> 5[89000] via direct shared memory
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537558:537635 [5] NCCL INFO Channel 00 : 5[89000] -> 7[b2000] via P2P/IPC
ava-s0:537554:537627 [1] NCCL INFO Channel 00 : 1[1b000] -> 0[1a000] via direct shared memory
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537558:537635 [5] NCCL INFO Channel 01 : 5[89000] -> 7[b2000] via P2P/IPC
ava-s0:537554:537627 [1] NCCL INFO Channel 01 : 1[1b000] -> 0[1a000] via direct shared memory
ava-s0:537556:537624 [3] NCCL INFO Channel 00 : 3[3e000] -> 4[88000] via direct shared memory
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537556:537624 [3] NCCL INFO Channel 01 : 3[3e000] -> 4[88000] via direct shared memory
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537558:537635 [5] NCCL INFO Connected all rings
ava-s0:537560:537628 [7] NCCL INFO Channel 00 : 7[b2000] -> 6[b1000] via direct shared memory
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537558:537635 [5] NCCL INFO Channel 00 : 5[89000] -> 6[b1000] via direct shared memory
ava-s0:537560:537628 [7] NCCL INFO Channel 01 : 7[b2000] -> 6[b1000] via direct shared memory
ava-s0:537558:537635 [5] NCCL INFO Channel 01 : 5[89000] -> 6[b1000] via direct shared memory
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537557:537625 [4] NCCL INFO Connected all rings
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537555:537626 [2] NCCL INFO Connected all rings
ava-s0:537559:537623 [6] NCCL INFO Connected all rings
ava-s0:537557:537625 [4] NCCL INFO Could not enable P2P between dev 4(=88000) and dev 5(=89000)
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537560:537628 [7] NCCL INFO Connected all rings
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537560:537628 [7] NCCL INFO Could not enable P2P between dev 7(=b2000) and dev 6(=b1000)
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537557:537625 [4] NCCL INFO Channel 00 : 4[88000] -> 3[3e000] via direct shared memory
ava-s0:537559:537623 [6] NCCL INFO Channel 00 : 6[b1000] -> 7[b2000] via direct shared memory
ava-s0:537559:537623 [6] NCCL INFO Could not enable P2P between dev 6(=b1000) and dev 7(=b2000)
ava-s0:537554:537627 [1] NCCL INFO Connected all rings
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537557:537625 [4] NCCL INFO Channel 01 : 4[88000] -> 3[3e000] via direct shared memory
ava-s0:537556:537624 [3] NCCL INFO Connected all rings
ava-s0:537559:537623 [6] NCCL INFO Channel 01 : 6[b1000] -> 7[b2000] via direct shared memory
ava-s0:537560:537628 [7] NCCL INFO Connected all trees
ava-s0:537560:537628 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
ava-s0:537560:537628 [7] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:537553:537622 [0] NCCL INFO Connected all rings
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537559:537623 [6] NCCL INFO Channel 00 : 6[b1000] -> 5[89000] via direct shared memory
ava-s0:537553:537622 [0] NCCL INFO Channel 00 : 0[1a000] -> 1[1b000] via direct shared memory
ava-s0:537553:537622 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:537559:537623 [6] NCCL INFO Channel 01 : 6[b1000] -> 5[89000] via direct shared memory
ava-s0:537553:537622 [0] NCCL INFO Channel 01 : 0[1a000] -> 1[1b000] via direct shared memory
ava-s0:537554:537627 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537558:537635 [5] NCCL INFO Channel 00 : 5[89000] -> 4[88000] via direct shared memory
ava-s0:537558:537635 [5] NCCL INFO Could not enable P2P between dev 5(=89000) and dev 4(=88000)
ava-s0:537558:537635 [5] NCCL INFO Channel 01 : 5[89000] -> 4[88000] via direct shared memory
ava-s0:537558:537635 [5] NCCL INFO Connected all trees
ava-s0:537558:537635 [5] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
ava-s0:537558:537635 [5] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:537559:537623 [6] NCCL INFO Connected all trees
ava-s0:537559:537623 [6] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
ava-s0:537559:537623 [6] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537556:537624 [3] NCCL INFO Channel 00 : 3[3e000] -> 2[3d000] via direct shared memory
ava-s0:537556:537624 [3] NCCL INFO Could not enable P2P between dev 3(=3e000) and dev 2(=3d000)
ava-s0:537556:537624 [3] NCCL INFO Channel 01 : 3[3e000] -> 2[3d000] via direct shared memory
ava-s0:537554:537627 [1] NCCL INFO Channel 00 : 1[1b000] -> 2[3d000] via direct shared memory
ava-s0:537554:537627 [1] NCCL INFO Channel 01 : 1[1b000] -> 2[3d000] via direct shared memory
ava-s0:537557:537625 [4] NCCL INFO Connected all trees
ava-s0:537557:537625 [4] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
ava-s0:537557:537625 [4] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537553:537622 [0] NCCL INFO Connected all trees
ava-s0:537553:537622 [0] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
ava-s0:537553:537622 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:537555:537626 [2] NCCL INFO Could not enable P2P between dev 2(=3d000) and dev 3(=3e000)
ava-s0:537555:537626 [2] NCCL INFO Channel 00 : 2[3d000] -> 1[1b000] via direct shared memory
ava-s0:537555:537626 [2] NCCL INFO Channel 01 : 2[3d000] -> 1[1b000] via direct shared memory
ava-s0:537556:537624 [3] NCCL INFO Connected all trees
ava-s0:537556:537624 [3] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
ava-s0:537556:537624 [3] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:537554:537627 [1] NCCL INFO Connected all trees
ava-s0:537554:537627 [1] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
ava-s0:537554:537627 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:537555:537626 [2] NCCL INFO Connected all trees
ava-s0:537555:537626 [2] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 8/8/512
ava-s0:537555:537626 [2] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:537553:537622 [0] NCCL INFO comm 0x7f3450002fb0 rank 0 nranks 8 cudaDev 0 busId 1a000 - Init COMPLETE
ava-s0:537557:537625 [4] NCCL INFO comm 0x7f2bc0002fb0 rank 4 nranks 8 cudaDev 4 busId 88000 - Init COMPLETE
ava-s0:537559:537623 [6] NCCL INFO comm 0x7fbe4c002fb0 rank 6 nranks 8 cudaDev 6 busId b1000 - Init COMPLETE
ava-s0:537553:537553 [0] NCCL INFO Launch mode Parallel
ava-s0:537556:537624 [3] NCCL INFO comm 0x7f75d0002fb0 rank 3 nranks 8 cudaDev 3 busId 3e000 - Init COMPLETE
ava-s0:537554:537627 [1] NCCL INFO comm 0x7f40ec002fb0 rank 1 nranks 8 cudaDev 1 busId 1b000 - Init COMPLETE
ava-s0:537558:537635 [5] NCCL INFO comm 0x7f5688002fb0 rank 5 nranks 8 cudaDev 5 busId 89000 - Init COMPLETE
ava-s0:537555:537626 [2] NCCL INFO comm 0x7faf08002fb0 rank 2 nranks 8 cudaDev 2 busId 3d000 - Init COMPLETE
ava-s0:537560:537628 [7] NCCL INFO comm 0x7f9f28002fb0 rank 7 nranks 8 cudaDev 7 busId b2000 - Init COMPLETE
[ava-s0.ics.uci.edu-0] is OK (global rank: 0/8)
```
</details>
<details><summary>
`NCCL_SOCKET_IFNAME=lo NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py`
</summary>
```
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
ava-s0:538447:538447 [0] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:538447:538447 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:538447:538447 [0] NCCL INFO NET/IB : No device found.
ava-s0:538447:538447 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:538447:538447 [0] NCCL INFO Using network Socket
NCCL version 2.10.3+cuda11.3
ava-s0:538448:538448 [1] NCCL INFO Bootstrap : Using lo:127.0.0.1<0>
ava-s0:538448:538448 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:538448:538448 [1] NCCL INFO NET/IB : No device found.
ava-s0:538448:538448 [1] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0>
ava-s0:538448:538448 [1] NCCL INFO Using network Socket
ava-s0:538447:538499 [0] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538447:538499 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538447:538499 [0] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538447:538499 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538448:538500 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538448:538500 [1] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538448:538500 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538448:538500 [1] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538448:538500 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
ava-s0:538448:538500 [1] NCCL INFO Setting affinity for GPU 1 to ffff,0000ffff
ava-s0:538447:538499 [0] NCCL INFO Channel 00/02 : 0 1
ava-s0:538447:538499 [0] NCCL INFO Channel 01/02 : 0 1
ava-s0:538447:538499 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
ava-s0:538447:538499 [0] NCCL INFO Setting affinity for GPU 0 to ffff,0000ffff
ava-s0:538448:538500 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538448:538500 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538447:538499 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538447:538499 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538447:538499 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538447:538499 [0] NCCL INFO Channel 00 : 0[1a000] -> 1[1b000] via direct shared memory
ava-s0:538447:538499 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538447:538499 [0] NCCL INFO Channel 01 : 0[1a000] -> 1[1b000] via direct shared memory
ava-s0:538448:538500 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538448:538500 [1] NCCL INFO Channel 00 : 1[1b000] -> 0[1a000] via direct shared memory
ava-s0:538448:538500 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538448:538500 [1] NCCL INFO Channel 01 : 1[1b000] -> 0[1a000] via direct shared memory
ava-s0:538448:538500 [1] NCCL INFO Connected all rings
ava-s0:538448:538500 [1] NCCL INFO Connected all trees
ava-s0:538448:538500 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
ava-s0:538448:538500 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:538447:538499 [0] NCCL INFO Connected all rings
ava-s0:538447:538499 [0] NCCL INFO Connected all trees
ava-s0:538447:538499 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
ava-s0:538447:538499 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:538447:538499 [0] NCCL INFO comm 0x7f4afc002fb0 rank 0 nranks 2 cudaDev 0 busId 1a000 - Init COMPLETE
ava-s0:538447:538447 [0] NCCL INFO Launch mode Parallel
ava-s0:538448:538500 [1] NCCL INFO comm 0x7f16ac002fb0 rank 1 nranks 2 cudaDev 1 busId 1b000 - Init COMPLETE
[ava-s0.ics.uci.edu-1] is OK (global rank: 1/2)
```
</details>
<details><summary>
`NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py`
</summary>
```
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
ava-s0:538740:538740 [0] NCCL INFO Bootstrap : Using eno0:128.195.10.163<0>
ava-s0:538740:538740 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:538740:538740 [0] NCCL INFO NET/IB : No device found.
ava-s0:538740:538740 [0] NCCL INFO NET/Socket : Using [0]eno0:128.195.10.163<0>
ava-s0:538740:538740 [0] NCCL INFO Using network Socket
NCCL version 2.10.3+cuda11.3
ava-s0:538741:538741 [1] NCCL INFO Bootstrap : Using eno0:128.195.10.163<0>
ava-s0:538741:538741 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
ava-s0:538741:538741 [1] NCCL INFO NET/IB : No device found.
ava-s0:538741:538741 [1] NCCL INFO NET/Socket : Using [0]eno0:128.195.10.163<0>
ava-s0:538741:538741 [1] NCCL INFO Using network Socket
ava-s0:538740:538760 [0] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538740:538760 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538740:538760 [0] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538740:538760 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538741:538761 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538741:538761 [1] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538741:538761 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538741:538761 [1] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538740:538760 [0] NCCL INFO Channel 00/02 : 0 1
ava-s0:538740:538760 [0] NCCL INFO Channel 01/02 : 0 1
ava-s0:538741:538761 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
ava-s0:538740:538760 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
ava-s0:538741:538761 [1] NCCL INFO Setting affinity for GPU 1 to ffff,0000ffff
ava-s0:538740:538760 [0] NCCL INFO Setting affinity for GPU 0 to ffff,0000ffff
ava-s0:538741:538761 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538740:538760 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538740:538760 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538741:538761 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538740:538760 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538741:538761 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538741:538761 [1] NCCL INFO Channel 00 : 1[1b000] -> 0[1a000] via direct shared memory
ava-s0:538741:538761 [1] NCCL INFO Could not enable P2P between dev 1(=1b000) and dev 0(=1a000)
ava-s0:538741:538761 [1] NCCL INFO Channel 01 : 1[1b000] -> 0[1a000] via direct shared memory
ava-s0:538740:538760 [0] NCCL INFO Channel 00 : 0[1a000] -> 1[1b000] via direct shared memory
ava-s0:538740:538760 [0] NCCL INFO Could not enable P2P between dev 0(=1a000) and dev 1(=1b000)
ava-s0:538740:538760 [0] NCCL INFO Channel 01 : 0[1a000] -> 1[1b000] via direct shared memory
ava-s0:538741:538761 [1] NCCL INFO Connected all rings
ava-s0:538741:538761 [1] NCCL INFO Connected all trees
ava-s0:538741:538761 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
ava-s0:538741:538761 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:538740:538760 [0] NCCL INFO Connected all rings
ava-s0:538740:538760 [0] NCCL INFO Connected all trees
ava-s0:538740:538760 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/512
ava-s0:538740:538760 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
ava-s0:538741:538761 [1] NCCL INFO comm 0x7f5690002fb0 rank 1 nranks 2 cudaDev 1 busId 1b000 - Init COMPLETE
ava-s0:538740:538760 [0] NCCL INFO comm 0x7fc454002fb0 rank 0 nranks 2 cudaDev 0 busId 1a000 - Init COMPLETE
ava-s0:538740:538740 [0] NCCL INFO Launch mode Parallel
[ava-s0.ics.uci.edu-0] is OK (global rank: 0/2)
```
</details>
> Also are you using the gpus directly or via some kind of a virtual machine or docker?
I am using them directly. As for cudatoolkit, it seems my conda environment has 11.3 but nvidia-smi shows 11.2. SO seems to [indicate](https://stackoverflow.com/a/53504578/4953139) that they are separate API versions (runtime and driver respectively) and don't need to be the same but could this be the issue? I also checked that the runtime version was compatible with driver [here](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html). Output from nvidia-smi:
`NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2`
I don't have sudo access so I cannot open `/var/log/syslog`, and there doesn't seem to be a `/var/log/ufw.log`. I am not sure if there is a firewall. Given the limited access, if it's clear to you that there is a communication/hardware issue, I might have to reach out to the sysads now.<|||||>Perhaps you're correct. And given that in your previous reports different ranks succeeded (1st log: rank 0, 2nd log: ranks 5, 6 - or something like that), I wonder if this is some sort of intermittent failure.
So even 2 gpus fail.
And `lo` didn't help either.
The story with cuda is this:
1. pytorch comes with its own cuda version which is only the runtime
2. then you have a system-wide cuda version that is used by `nvidia-smi` or if you were to build a CUDA extension (like deepspeed)
3. I think - the NVIDIA driver doesn't depend on CUDA - you can run the driver just fine w/o the CUDA toolkit installed.
therefore the CUDA versions usually don't have to match as long as the major version is the same. So I don't think there is anything to worry about.
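(A quick, purely illustrative way to see what the runtime side bundled with pytorch reports:)
```python
import torch

print("torch:", torch.__version__)
print("CUDA runtime bundled with torch:", torch.version.cuda)   # e.g. 11.3
print("NCCL bundled with torch:", torch.cuda.nccl.version())
# The 11.2 shown by `nvidia-smi` is the driver-side CUDA version; as noted above,
# only the major versions need to line up.
```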
Your next step is to file an Issue with https://github.com/pytorch/pytorch/issues where there are multiple NCCL experts and who will hopefully guide you to a resolution of this very unusual problem. It'd help for you to summarize what you have tried and share the logs.
As mentioned earlier do go for the short script as the reproduction script in your report - as nobody is going to try to debug your issue with a transformers example.
I'm looking forward to learning from your journey, @Shivanshu-Gupta <|||||>This might be completely unrelated to this issue, but I was closely monitoring this thread as I was having very similar issues and I suspected it was an nccl-related error. After days of digging I fixed my issue by turning off ACS ([1](https://forums.developer.nvidia.com/t/multi-gpu-peer-to-peer-access-failing-on-tesla-k80/39748/17) [2](https://github.com/NVIDIA/nccl/issues/19#issuecomment-213223260)).
The links filtered lspci with "plx" but my machine didn't have this, so I just manually checked the output of `lspci -vvv` for "ACSCtl". I took note of the sources whenever ACSCtl was NOT in this form (there should be a - after each part):
`ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-`
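(For anyone repeating this check, a small illustrative helper - not from the original post - that prints the devices whose ACSCtl line still has any capability enabled; it needs to run as root, same as `sudo lspci -vvv`:)
```python
import re
import subprocess

out = subprocess.run(["lspci", "-vvv"], capture_output=True, text=True).stdout
device = None
for line in out.splitlines():
    if line and not line[0].isspace():
        device = line.split()[0]               # e.g. "cb:08.0"
    match = re.search(r"ACSCtl:\s*(.*)", line)
    if match and "+" in match.group(1):
        print(device, match.group(1).strip())  # candidates for `setpci ... f2a.w=0000`
```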
For each source that had ACSCtl on, I disabled it using a command like this
`sudo setpci -s cb:08.0 f2a.w=0000`
I did this a total of 21 times (I'm not sure if all of it was necessary). This way I made it so that every line in `sudo lspci -vvv | grep -i ACSCtl` would be `ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-`
And after that, I fixed my weeks-long nightmare of trying to get DDP to work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@Shivanshu-Gupta Do you by chance have an AMD CPU? If so, check [this](https://github.com/stas00/toolbox/issues/1) out. |
transformers | 15,617 | closed | Fix typo in speech2text2 doc | # What does this PR do?
Fix the speech2text2 documentation to be consistent with the code.
Following the doc and calling `model.generate(input_ids=...)` throws an error.
Using `model.generate(inputs=inputs["input_values"], ...)` works - a hedged sketch of the working call is shown below.
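Sketch of the corrected call, following the checkpoint used in the official example; `waveform` is assumed to be a raw 16 kHz mono audio array and is not defined here:
```python
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel

model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

# waveform: 1-D float array of raw audio sampled at 16 kHz (placeholder)
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(inputs=inputs["input_values"], attention_mask=inputs["attention_mask"])
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```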
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-11-2022 02:47:19 | 02-11-2022 02:47:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,616 | closed | Implementation of activations as pytorch modules | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/15364
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
A few questions relevant to this PR:
1. Are there any known barriers to using `inplace` versions of these activation functions? If you are not aware of any, I can push an update for this, so we could squeeze in some additional performance gains.
2. Given that currently all implemented activation functions are stateless, it’s safe to initialize them directly in `ACT2FN`. If at some point activation functions with state are introduced (or something similar to what @stas00 mentioned in https://github.com/huggingface/transformers/issues/15364#issuecomment-1026240845), this approach has to be updated by initializing these activations during model construction or via the `get_activation` function. This update would be needed even without this PR, i.e. also when activations were implemented as plain functions.
3. Are `quick_gelu` and `gelu_fast` implementations necessary now that the pytorch has a built-in implementation (`nn.functional.gelu`) which should be sufficiently fast? | 02-11-2022 01:37:54 | 02-11-2022 01:37:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I visually checked that nothing was left out - I wonder whether we have all these activation functions tested - have you by chance grepped if all entries in the ACT2FN table appear at least once in the tests?
I've added a few tests that were missing for activations, so now they should all appear at least once.
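(For readers, a minimal illustration of the kind of coverage check meant here - the real tests live in the library's activation test module, this is just a sketch:)
```python
import torch
from transformers.activations import get_activation

# Every name that should resolve via ACT2FN / get_activation and preserve shapes.
for name in ["gelu", "gelu_new", "gelu_fast", "quick_gelu", "silu", "mish", "relu", "tanh", "sigmoid", "linear"]:
    act = get_activation(name)
    out = act(torch.randn(2, 8))
    assert out.shape == (2, 8), name
```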
> (note for future PRs: in general changing the order of the original code chunks and doing changes to it at the same time makes it much more difficult to review, but no need to change anything as I have already verified it - just had to scroll up and down a lot)
Thanks a lot for the tip, I will definitely keep it in mind in the future.
And regarding the inplace and jitted versions, I agree it's better to move this in a new thread. I will try to benchmark these variations so we can have some numbers to help us decide about the best way to proceed. <|||||>The error is unrelated.<|||||>Merging now, will pay attention to the slow tests to ensure nothing is changed due to this.<|||||>This seems to have broken the `run_translation` script:
```
============================= FAILURES SHORT STACK =============================
______________________ ExamplesTests.test_run_translation ______________________
self = <[AttributeError("'SiLUActivation' object has no attribute '_modules'") raised in repr()] SiLUActivation object at 0x7f5db0d053d0>
name = '_parameters'
def __getattr__(self, name: str) -> Union[Tensor, 'Module']:
if '_parameters' in self.__dict__:
_parameters = self.__dict__['_parameters']
if name in _parameters:
return _parameters[name]
if '_buffers' in self.__dict__:
_buffers = self.__dict__['_buffers']
if name in _buffers:
return _buffers[name]
if '_modules' in self.__dict__:
modules = self.__dict__['_modules']
if name in modules:
return modules[name]
> raise AttributeError("'{}' object has no attribute '{}'".format(
type(self).__name__, name))
E AttributeError: 'SiLUActivation' object has no attribute '_parameters'
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py:1177: AttributeError
```<|||||>This is fixed by #15718 (it's currently breaking the inference API on some models). |
transformers | 15,615 | closed | Fix broken link in CTRL docs | Fixes broken link in CTRL docs (see #14969). | 02-11-2022 00:02:38 | 02-11-2022 00:02:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,614 | closed | Fix grammar in tokenizer_summary docs | "to make ensure" is redundant. minor grammar nit.
# What does this PR do?
Fixes typo in docs.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-10-2022 22:30:06 | 02-10-2022 22:30:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,613 | closed | Flax Speech-Encoder-Decoder Model | # What does this PR do?
Currently, Speech-Encoder-Decoder Models are only available in PyTorch. This PR adds the **Flax** Speech-Encoder-Decoder Model and all accompanying tests.
The accuracy of the Flax Speech-Encoder-Decoder Model is validated through cross-test comparison with the corresponding PyTorch Speech-Encoder-Decoder Model. The final logit values agree to within a `4e-2` threshold, making certain that the Flax Model operates as intended. | 02-10-2022 16:55:21 | 02-10-2022 16:55:21 | The issue being encountered occurs when the model weights are initialized: the `encoder_hidden_states` don’t match the shape of the `decoder_hidden_states`, meaning that the cross-attention layers throw an error when the decoder is initialised. I have been inspecting the code to try and establish where the two deviate in shape. For reference, running this code produces the error:
```py
from transformers import FlaxSpeechEncoderDecoderModel
# Initialize a wav2vec2-2-gpt2 from pretrained wav2vec2 and gpt2 models
# Note that the cross-attention layers will be randomly initialized
model_fx = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained("patrickvonplaten/dummy_wav2vec2_with_adapter", "gpt2", encoder_from_pt=True)
```<|||||>> The issue being encountered occurs when the model weights are initialized: the `encoder_hidden_states` don’t match the shape of the `decoder_hidden_states`, meaning that the cross-attention layers throw an error when the decoder is initialised. I have been inspecting the code to try and establish where the two deviate in shape. For reference, running this code produces the error:
>
> ```python
> from transformers import FlaxSpeechEncoderDecoderModel
> # Initialize a wav2vec2-2-gpt2 from pretrained wav2vec2 and gpt2 models
> # Note that the cross-attention layers will be randomly initialized
> model_fx = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained("patrickvonplaten/dummy_wav2vec2_with_adapter", "gpt2", encoder_from_pt=True)
> ```
This already looks like a pretty edge-case loading. Could we maybe in a first step try to see whether the following works:
```py
from transformers import FlaxSpeechEncoderDecoderModel
# Initialize a wav2vec2-2-gpt2 from pretrained wav2vec2 and gpt2 models
# Note that the cross-attention layers will be randomly initialized
model_fx = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained("patrickvonplaten/dummy_wav2vec2_with_adapter", "gpt2")
```
?
Meaning that the weights don't have to be converted from PT to Flax. For this you probably first need to do:
```py
from transformers import FlaxWav2Vec2Model
model = FlaxWav2Vec2Model.from_pretrained("patrickvonplaten/dummy_wav2vec2_with_adapter", from_pt=True)
model.push_to_hub("<your-url>")
```<|||||>Thanks for the reply! Converting the PT weights to Flax and re-running the script still throws the error:
```py
from transformers import FlaxSpeechEncoderDecoderModel
# Initialize a wav2vec2-2-gpt2 from pretrained **Flax** wav2vec2 and gpt2 models
# Note that the cross-attention layers will be randomly initialized
model_fx = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained("sanchit-gandhi/dummy_flax_wav2vec2_with_adapter", "gpt2")
```
The first forward pass in initialising the model is performed successfully: the shape of both the `encoder_hidden_states` and `decoder_hidden_states` is `(1, 1, 768)`. In the second forward pass, the `encoder_hidden_states` are `(1, 2, 768)`, and the `decoder_hidden_states` are `(1, 1024, 768)`. Not sure where this discrepancy in `seq_len` has come about. They should both be `(1, 1, 768)` I believe.<|||||>@sanchit-gandhi,
Just pushed a fix for the problem. The reason was that the encoder downsamples the seq_len while the decoder does not. This means that we can't have the same encoder and decoder input lengths.
The script still fails, but now because the attention_mask is not correct. Could you take a look here? We should implement `_get_feature_vector_attention_mask` now as you correctly noted :-) <|||||>Quick question about our PT-Flax cross-test thresholds. I noticed that in the `test_modeling_flax_common.py`, we use a threshold of 4e-2 when comparing Flax and PT outputs: https://github.com/huggingface/transformers/blob/60ba48205e4ec070780e8cb8d461421b77432bad/tests/test_modeling_flax_common.py#L192 whereas in `test_modeling_flax_encoder_decoder.py` the threshold is set much more aggressively at 1e-5: https://github.com/huggingface/transformers/blob/60ba48205e4ec070780e8cb8d461421b77432bad/tests/test_modeling_flax_encoder_decoder.py#L261 Is there a reason why the threshold is set so much lower in the latter?<|||||>> that
Good question! The reason is mostly due to different frameworks and how the matrix multiplication is done. In JAX a LAX-backend implementation of `matmul()` is used which approximates the output matrix often instead of doing the actual computation - see here: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.matmul.html<|||||>So if we see differences of like 0.05 we are still pretty sure that the model is correctly implemented in Flax |
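For reference, a minimal sketch of what such a relaxed cross-framework check can look like, using BERT as a stand-in (the real tests live in `test_modeling_flax_common.py`; the model name and tolerance here are only illustrative):
```py
import numpy as np
import torch
from transformers import BertModel, BertTokenizerFast, FlaxBertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
inputs = tokenizer("a short test sentence", return_tensors="np")

fx_model = FlaxBertModel.from_pretrained("bert-base-uncased")
pt_model = BertModel.from_pretrained("bert-base-uncased").eval()

fx_hidden = fx_model(**inputs).last_hidden_state
with torch.no_grad():
    pt_hidden = pt_model(**{k: torch.tensor(v) for k, v in inputs.items()}).last_hidden_state

# relaxed 4e-2 absolute tolerance to absorb XLA's approximate matmul on some backends
assert np.allclose(np.asarray(fx_hidden), pt_hidden.numpy(), atol=4e-2)
```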
transformers | 15,612 | closed | TF: Add informative warning for inexistent CPU backprop ops | # What does this PR do?
Some TF operations do not support backpropagation on CPU, namely convolutions with the `groups` argument set ([related issue on Keras](https://github.com/keras-team/keras/issues/15713)). Sadly, the exception thrown is very uninformative and makes no mention of hardware problems if there is no GPU on the system (but it does if the machine has a GPU). This PR adds an informative warning in case a user loads the affected models.
Fixes #15114
Fixes #15059
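A hedged sketch of the kind of load-time check that can emit such a warning (not the exact code added in this PR; the threshold value and wording are assumptions):
```py
import logging

import tensorflow as tf

logger = logging.getLogger(__name__)


def warn_if_grouped_conv_on_cpu(groups: int) -> None:
    # grouped convolutions (groups > 1) currently have no CPU backprop kernel
    if groups > 1 and len(tf.config.list_physical_devices("GPU")) == 0:
        logger.warning(
            "This model uses grouped convolutions, which are not supported for "
            "backpropagation on CPU. Training will likely fail unless a GPU/TPU is available."
        )


warn_if_grouped_conv_on_cpu(groups=16)  # hypothetical value taken from a model config
```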
| 02-10-2022 16:54:38 | 02-10-2022 16:54:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Example of the warning:
<img width="1512" alt="image" src="https://user-images.githubusercontent.com/12240844/153624301-9b171959-66c1-4a24-bb59-e86673192e6a.png">
<|||||>(Merging as soon as tests pass, as this just adds a warning) |
transformers | 15,611 | closed | Small clean up generate | # What does this PR do?
This PR does a small clean-up for PyTorch's generate by removing unused duplicated code and ordering the code a bit better. | 02-10-2022 16:47:25 | 02-10-2022 16:47:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,610 | closed | T5 AttributeError: 'T5Encoder' object has no attribute 'main_input_name' | I trained a T5 model and when I ran the inference it gave me error as mentioned in title. I installed required package as shown below. It was working yesterday but now it returned me this attribute error.
```
!pip install sentencepiece==0.1.95
!pip install torch
!pip install transformers
!pip install pytorch_lightning==0.7.5
```
```py
import torch
from transformers import T5ForConditionalGeneration,T5Tokenizer
def get_model_output(sentence,model):
c_output = []
text = sentence + ' </s>'
max_len = 256
encoding = tokenizer.encode_plus(text,pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cpu"), encoding["attention_mask"].to("cpu")
# set top_k = 50 and set top_p = 0.95 and num_return_sequences = 3
beam_outputs= model.generate(
input_ids=input_ids, attention_mask=attention_masks,
max_length=64,
num_beams=10,
no_repeat_ngram_size=2,
do_sample = False,
num_return_sequences=5,
early_stopping=True
)
final_outputs =[]
for beam_output in beam_outputs:
sent = tokenizer.decode(beam_output, skip_special_tokens=True,clean_up_tokenization_spaces=True)
if sent.lower() != sentence.lower() and sent not in final_outputs:
final_outputs.append(sent)
for final_output in final_outputs:
c_output.append(final_output)
return c_output[0]
```
Here is the error message:
```
[/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py](https://localhost:8080/#) in generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, stopping_criteria, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
1067 # otherwise model_input_name is None
1068 # all model-specific keyword inputs are removed from `model_kwargs`
-> 1069 inputs_tensor, model_input_name, model_kwargs = self._prepare_model_inputs(inputs, bos_token_id, model_kwargs)
1070 batch_size = inputs_tensor.shape[0]
1071
[/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py](https://localhost:8080/#) in _prepare_model_inputs(self, inputs, bos_token_id, model_kwargs)
392 self.config.is_encoder_decoder
393 and hasattr(self, "encoder")
--> 394 and self.encoder.main_input_name != self.main_input_name
395 ):
396 input_name = self.encoder.main_input_name
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in __getattr__(self, name)
1176 return modules[name]
1177 raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1178 type(self).__name__, name))
1179
1180 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
AttributeError: 'T5Encoder' object has no attribute 'main_input_name'
```
Can anyone suggest? Thanks in advance! | 02-10-2022 16:46:18 | 02-10-2022 16:46:18 | try to Reduce the version of transformers<|||||>Hey, there is some information lacking. Could you please provide:
- A fully working code example: there is no call to `get_model_output` so we're unaware of what is the model and the sentence
- The result of `transformers-cli env`
- What result did you expect
Thank you.<|||||>I got the same issue,the version of my transformer is 4.16.2.
Solved the issue by reducing the version to 4.10.0.
<|||||>> I got the same issue,the version of my transformer is 4.16.2.
> Solved the issue by reducing the version to 4.10.0.
Thanks a lot.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I meet the same problem for transformers 4.17.0. However, the rest of the project depends on newer transformers version. How should I change the code accordingly to avoid this error?<|||||>Reduce version of transformers
<|||||>> However, the rest of the project depends on newer transformers version. How should I change the code accordingly to avoid this error?
I met the same issue. Have you solved it? 😭 <|||||>I have had a lot of trouble trying to get `fastT5` to work. Eventually, this issue kept coming up. It took a while to figure out exactly what I needed to just run the conversion from T5 to ONNX. Like @magichz86 noted, downgrading transformers works. I took these steps:
```
conda create -n fastT5
```
```
conda activate fastT5
```
```
conda install python=3.9
```
```
pip install transformers==4.10.0
```
```
pip install fastT5
```
I had to use a python environment just for these two libraries, as there were conflicts with others. Also, python 3.9 or lower is needed. |
transformers | 15,609 | closed | Documentation of DataCollatorForLanguageModeling | # 🚀 Feature request
Would you consider adding to the following tip:
"For best performance, this data collator should be used with a dataset having items that are dictionaries or BatchEncoding, with the "special_tokens_mask" key, ...."
The reason for doing so, for example: "So that special mask tokens won't be masked".
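For example, a short sketch of the intended usage (the model name is just an example):
```py
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

# returning the special tokens mask up front lets the collator skip [CLS]/[SEP] when masking
examples = [
    tokenizer(text, return_special_tokens_mask=True)
    for text in ["The first example.", "A second, slightly longer example."]
]
batch = collator(examples)
print(batch["input_ids"].shape, batch["labels"].shape)
```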
| 02-10-2022 16:43:05 | 02-10-2022 16:43:05 | Hey! Would you like to open a PR?<|||||>Yes, I'll do that<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,608 | closed | ❓ T5 pre-training dataset | Hi there,
I am working with the T5 model and, before proceeding, I need to account for the actual datasets that it has been pre-trained on.
In the [T5v1.1 doc page](https://huggingface.co/docs/transformers/model_doc/t5v1.1) it says "Note: T5 Version 1.1 was only pre-trained on [C4](https://huggingface.co/datasets/c4) excluding any supervised training." hinting at the fact that T5 checkpoint is pre-trained on supervised tasks too. This is confirmed by its ability to work out-of-the-box really well on MNLI. However, in the [model page](https://huggingface.co/t5-small) on the hub it reports only C4 as a pre-training dataset.
- Is there an error in the T5 model description page on the hub?
- (general question) is there any source (I can cite) that specifies which data has been seen by the checkpoint available on the hub?
Thanks a lot in advance for your help.
| 02-10-2022 16:31:41 | 02-10-2022 16:31:41 | I recently ran into the same issue and can confirm your findings. I believe [this](https://huggingface.co/t5-small) version of T5 is fine-tuned in a multitask setup which you can learn more about from the [T5 paper](https://arxiv.org/abs/1910.10683). This is why it works super well on many downstream tasks. However if you are looking for a T5 model that is only pre-trained on C4 then I suggest this [v1.1 checkpoint](https://huggingface.co/google/t5-v1_1-base)
I also think that it would be great to add more info about how the multitask model is fine-tuned to avoid this confusion.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry to answer so late here.
Yes the following checkpoints:
- https://huggingface.co/t5-small
- https://huggingface.co/t5-base
- https://huggingface.co/t5-large
- https://huggingface.co/t5-3b
- https://huggingface.co/t5-11b
have **not** only been pretrained on C4, but if I recall correctly on all datasets of the downstream tasks as described in the section `2.3 Downstream Tasks` of the paper: https://arxiv.org/pdf/1910.10683.pdf as part of the multi-task pretraining strategy as well. Happy to update the model cards!
@craffel - do you have a list of all datasets used for the pretraining of the first versions of T5 (the ones linked above) by any chance? <|||||>Hey, yes, it's the datasets listed in 2.3. Note that we also train on DPR (which is not a dataset we evaluate on) and we don't train on WNLI (though we do evaluate on it).<|||||>Thanks @craffel - documenting this on the model cards :-)<|||||>Thank you very much @patrickvonplaten and @craffel for acting on this issue. Super appreciated!<|||||>Updated all of them<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,607 | closed | fixed pipeline code | The PR fixes the doc issue in [#15578](https://github.com/huggingface/transformers/issues/15578)
| 02-10-2022 15:16:32 | 02-10-2022 15:16:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,606 | closed | Add aws studio notebooks | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-10-2022 15:10:58 | 02-10-2022 15:10:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>https://github.com/huggingface/moon-landing/pull/2077 should improve the UI<|||||>I can't check the result, do you have any way of adding a screenshot of what it will give?<|||||>It will look like this (also, github renders it almost identically [here](https://github.com/huggingface/transformers/blob/cb5ef2e0ae27f09f2427aad155a119a2b22433e6/notebooks/README.md))
<img width="1000" alt="Screenshot 2022-02-11 at 09 42 07" src="https://user-images.githubusercontent.com/11827707/153561148-29ecf65e-26eb-4dc3-9b7d-b0359d2e62c4.png">
<|||||>Rebased it 🙂<|||||>Thanks, all good! |
transformers | 15,605 | closed | Auto tokenizer for question-context tokenization | @patrickvonplaten, @patil-suraj, @LysandreJik
Hello!
I have trained this [tokenizer](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling#train-tokenizer-2)
I have a question answering task using T5 and I need the question and context to be tokenized as T5Tokenizer does. I mean `question_ids</s>context_ids</s><pad>`. I did the following:
```
tokenizer = AutoTokenizer.from_pretrained(path_to_tokenizer_json_file, config=AutoConfig.from_pretrained(path_to_model_config))
query_tok = 'Do you go to school?'
doc_tok = 'What could be a second sequence to the uninteresting text, What could \
be a second sequence to the uninteresting text'
query_tok = " ".join(query_tok.split()[:min(5,len(query_tok.split()))])
doc_tok = " ".join(doc_tok.split()[:min(9,len(doc_tok.split()))])
print("query_tok: ", query_tok)
print("doc_tok: ", doc_tok)
ids = tokenizer(
query_tok+'</s>',
doc_tok+'</s>',
max_length=15,
padding='max_length',
truncation="only_second", # need to experiement
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt"
)
print("ids: ",ids)
print("ids shape: ",ids['input_ids'].shape)
print("Len ids: ",len(ids['input_ids'].flatten()), len(ids['input_ids'].flatten()) - 9)
print(tokenizer.decode([id for id in ids['input_ids'].flatten()]))
```
the result was
> query_tok: Do you go to school?
> doc_tok: What could be a second sequence to the uninteresting
> ids: {'input_ids': tensor([[ 3, 11, 36, 316, 475, 16, 156, 187, 1, 219, 312, 63,
> 3, 8, 114, 1906, 16, 4, 352, 6347, 483, 38, 0, 0,
> 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
> 0]])}
> ids shape: torch.Size([1, 25])
> Len ids: 25 16
> do you go to school?</s> what could be a second sequence to the uninteresting<pad><pad><pad>
>
How to make the tokenizer add </s> to the end of the context before padding or after truncation?
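One possible workaround, sketched here with `t5-small` as a stand-in for the custom tokenizer (truncate first, then append the EOS id yourself before padding — the details are assumptions, not tested against the trained tokenizer):
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
query_tok = "Do you go to school?"
doc_tok = "What could be a second sequence to the uninteresting"

# truncate to max_length - 1 so there is always room for a final </s>
enc = tokenizer(query_tok, doc_tok, truncation="only_second", max_length=14)
input_ids = enc["input_ids"]
if input_ids[-1] != tokenizer.eos_token_id:
    input_ids = input_ids + [tokenizer.eos_token_id]

padded = tokenizer.pad({"input_ids": [input_ids]}, padding="max_length", max_length=15, return_tensors="pt")
print(tokenizer.decode(padded["input_ids"][0]))
```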
| 02-10-2022 14:52:32 | 02-10-2022 14:52:32 | Hey @Arij-Aladel,
Sorry I'm having trouble to fully understand your question here. Could you clarify a bit what output you would expect? E.g. in your opinion what should:
```bash
print(tokenizer.decode([id for id in ids['input_ids'].flatten()]))
```
print out?<|||||>@patrickvonplaten Thanks for your response. Sorry for not clear explanation. I hope this notebook will explain better than words the differences between T5tokenizer and autotokenizer. https://colab.research.google.com/drive/1Gro30z9tbyUUxXR4-bGtKhJzwnEVsXR4?usp=sharing<|||||>Hey @Arij-Aladel,
Could you please try to give a minimal reproducible code snippet that clearly shows where the problem is? So far I've understood that there is a problem with tokenizer (probably T5) in the context of question answering, but I don't know exactly what the problem is.
The notebook starts by importing `pytorch-lightning` which is not related to this issue, which makes me stop looking into the google colab.
Please keep in mind that we are getting 100s of pings on GitHub each day and we need to optimize our time when solving issues. It would be very nice if you could try to have a minimal reproducible code snippet here.
It's always nice to add a code example just like you did above, but then please make sure it is as easy to understand as possible and that everybody can copy-paste and run it.
E.g. copying your issue above and adding comments to each line which should be improved so that we can better help you
```py
# I cannot use `path_to_tokenizer_json_file` -> please give me something that I can run locally
tokenizer = AutoTokenizer.from_pretrained("t5-small")
query_tok = 'Do you go to school?'
doc_tok = 'What could be a second sequence to the uninteresting text, What could \
be a second sequence to the uninteresting text'
# Why do you do this `min(.....)` stuff here??? Please remove this if it's not relevant to the issue
# or explain why. This is definitely not a "normal" case
query_tok = " ".join(query_tok.split()[:min(5,len(query_tok.split()))])
doc_tok = " ".join(doc_tok.split()[:min(9,len(doc_tok.split()))])
print("query_tok: ", query_tok)
print("doc_tok: ", doc_tok)
# Why do you append '</s>` here?
ids = tokenizer(
query_tok+'</s>',
doc_tok+'</s>',
max_length=15,
padding='max_length',
truncation="only_second", # need to experiement
return_attention_mask=True,
add_special_tokens=True,
return_tensors="pt"
)
print("ids: ",ids)
print("ids shape: ",ids['input_ids'].shape)
# Why do I need to know the length - why `-9`?
print("Len ids: ",len(ids['input_ids'].flatten()), len(ids['input_ids'].flatten()) - 9)
# Why do you do `[id in id in ids["..."]]` instead of `batch_decode(...)`?
print(tokenizer.decode([id for id in ids['input_ids'].flatten()]))
```<|||||>@patrickvonplaten https://drive.google.com/drive/folders/1bQVwilQ3RAah7kvnzLSKbdx92s6bv0wX?usp=sharing
Is my question clear now?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,604 | closed | Add local and TensorFlow ONNX export examples to docs | # What does this PR do?
This PR builds on #13831 to show examples of:
* Exporting a local checkpoint to ONNX
* Exporting a pure TensorFlow checkpoint to ONNX
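For instance (paths and model names below are placeholders; see the rendered docs for the exact wording):
```bash
# export a checkpoint saved locally with save_pretrained()
python -m transformers.onnx --model=local-pt-checkpoint onnx/

# export a pure TensorFlow checkpoint from the Hub
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```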
| 02-10-2022 14:41:27 | 02-10-2022 14:41:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the tip @sgugger - I'd totally forgotten about this neat feature of the docs :) I've now implemented the change, so should be good for another pass |
transformers | 15,603 | closed | Fix Seq2SeqTrainer for VisionEncoderDecoderModel | # What does this PR do?
Fixes #15526
The Seq2SeqTrainer always passes `attention_mask` to the `generate` method in its `prediction_step`. However, this results in an error for instances of `VisionEncoderDecoderModel`, which don't accept `attention_mask` in their forward method:
```
[/usr/local/lib/python3.7/dist-packages/transformers/trainer_seq2seq.py](https://localhost:8080/#) in prediction_step(self, model, inputs, prediction_loss_only, ignore_keys)
173 generation_inputs,
174 attention_mask=inputs.get("attention_mask", None),
--> 175 **gen_kwargs,
176 )
177 # in case the batch is shorter than max length, the output should be padded
[/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py](https://localhost:8080/#) in decorate_context(*args, **kwargs)
26 def decorate_context(*args, **kwargs):
27 with self.__class__():
---> 28 return func(*args, **kwargs)
29 return cast(F, decorate_context)
30
[/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py](https://localhost:8080/#) in generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, stopping_criteria, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
1087 # and added to `model_kwargs`
1088 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
-> 1089 inputs_tensor, model_kwargs, model_input_name
1090 )
1091
[/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py](https://localhost:8080/#) in _prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name)
505 encoder_kwargs["return_dict"] = True
506 encoder_kwargs[model_input_name] = inputs_tensor
--> 507 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
508
509 return model_kwargs
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() got an unexpected keyword argument 'attention_mask'
```
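A simplified, hedged sketch of the behaviour described in the next sentence (not the actual `Seq2SeqTrainer` code):
```py
def build_gen_kwargs(inputs: dict, max_length: int = 64, num_beams: int = 4) -> dict:
    gen_kwargs = {"max_length": max_length, "num_beams": num_beams}
    # vision inputs (pixel_values) carry no attention_mask, so only forward it when present
    if "attention_mask" in inputs:
        gen_kwargs["attention_mask"] = inputs["attention_mask"]
    return gen_kwargs


print(build_gen_kwargs({"pixel_values": "..."}))                  # no attention_mask key
print(build_gen_kwargs({"input_ids": [], "attention_mask": []}))  # attention_mask forwarded
```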
This PR fixes that by adding "attention_mask" to `gen_kwargs` if they are in the inputs. | 02-10-2022 14:32:20 | 02-10-2022 14:32:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,602 | closed | Fix deepspeed-zero-inference hashlink | # What does this PR do?
Followup to https://github.com/huggingface/transformers/pull/15601
Fixes a typo in the hashlinks in the deepspeed.mdx doc file
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-10-2022 14:20:38 | 02-10-2022 14:20:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for helping me to resolve this issue, @mishig25
Same as the reason for not applying https://github.com/huggingface/transformers/pull/15601 these tags are there for redirects from other pages so we can't remove them.
The original problem:
I happened to create a new header that was mapped already to an older tag and that's why I am getting both:
https://huggingface.co/docs/transformers/master/main_classes/deepspeed#zero-inference
https://huggingface.co/docs/transformers/master/main_classes/deepspeed#deepspeed-zero-inference
ending up in the same section.
Thank you for catching where the problem came from but I am already resolving it here: https://github.com/huggingface/transformers/pull/15585 by renaming the newly added header to be more distinct.
so closing this one. |
transformers | 15,601 | closed | Rm unneeded `<a =''>` | # What does this PR do?
For hashlink generation, `<a id='seq_imdb'></a>` is not used.
Instead, hashlinks are generated from the markdown headings using [these rules](https://github.com/huggingface/doc-builder/blob/21972fd444094cd6c7414968c6a58935047c5dd1/src/doc_builder/build_doc.py#L264-L266)
cc: @stas00 this pr will fix https://huggingface.slack.com/archives/C02GLJ5S0E9/p1644426635823319
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-10-2022 10:58:19 | 02-10-2022 10:58:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This will breaking existing links to those anchors (when they don't match the section name just after), which is the reason why they are here.<|||||>ooh I see 😊
then I'll push another PR that only fixes the deepspeed.mdx issue mentioned [here](https://huggingface.slack.com/archives/C02GLJ5S0E9/p1644490837265109?thread_ts=1644426635.823319&cid=C02GLJ5S0E9)
transformers | 15,600 | closed | [deepspeed docs] Correct JSON format | # What does this PR do?
This PR corrects the JSON format.
@stas00 @LysandreJik | 02-10-2022 10:51:20 | 02-10-2022 10:51:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your work on this, @stas00 |
transformers | 15,599 | closed | Run ViT examples on PyTorch XLA out of the box | Check if Vision Transformers (ViT) model and its example script are supported on PyTorch/XLA out of the box. If not, produce the code to bring the needed support. | 02-10-2022 10:08:17 | 02-10-2022 10:08:17 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15599). All of your documentation changes will be reflected on that endpoint.<|||||>It turns out the model is mostly supported.
I had to make a couple of change to successfully run the model on the TPU using PyTorch/XLA. Below is the command to repro:
```
python examples/pytorch/xla_spawn.py --num_cores 1 examples/pytorch/image-classification/run_image_classification.py \
--dataset_name cats_vs_dogs --output_dir ./cats_vs_dogs_outputs/ \
--remove_unused_columns False --do_train --do_eval --learning_rate 2e-4 \
--num_train_epochs 5 --per_device_train_batch_size 32 --per_device_eval_batch_size 32 \
--logging_strategy steps --logging_steps 10 --evaluation_strategy epoch --save_strategy epoch \
--load_best_model_at_end True --save_total_limit 3 --seed 1337
```<|||||>Hi @sgugger.
Basic questions: I just locally ran the code quality test. Not sure if I understand what it requires to correct the issue given the small size of my code change. How do you usually handle this issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I dropped the ball here.
@steventk-g can you please follow up on this PR and help land it in the HF stack?
transformers | 15,598 | closed | gpt2 error using torch.jit.trace | ## Environment info
- `transformers` version: 4.16.2
- Platform: ubuntu
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10.0a0+0aef44c
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
#### the environment is built from a docker image from `https://github.com/NVIDIA/Torch-TensorRT`, CONTAINER VERSION ==21.10
### Who can help
@patrickvonplaten, @LysandreJik
## To reproduce
issue reported before in #8263 #3954
```
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
tokens=tokenizer('The cat is on the table.', return_tensors='pt')['input_ids']
with torch.jit.optimized_execution(True):
traced_model = torch.jit.trace(model, tokens)
```
## information
```
/opt/conda/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py:196: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
attn_weights = attn_weights / (float(value.size(-1)) ** 0.5)
Traceback (most recent call last):
File "generate_speed.py", line 282, in <module>
tokenizer, eod_id, sep_id, unk_id, model = load_model()
File "generate_speed.py", line 252, in load_model
traced_model = torch.jit.trace(model, tokens)
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py", line 741, in trace
return trace_module(
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_trace.py", line 958, in trace_module
module._c._create_method_from_trace(
RuntimeError: Tracer cannot infer type of CausalLMOutputWithCrossAttentions(loss=None, logits=tensor([[[ 1.6039, -14.8921, -12.7946, ..., -12.5804, -13.8293, -13.4610],
[ -1.9447, -17.5292, -15.7436, ..., -15.3740, -17.2827, -16.3414],
[ -2.1778, -15.4163, -13.4095, ..., -12.6063, -14.2943, -14.2544],
[ -1.2752, -13.6237, -12.9004, ..., -12.0724, -11.4089, -11.1407]]],
device='cuda:0'), past_key_values=((tensor([[[[ 1.3004e+00, -9.0571e-02, -1.1026e+00, ..., -4.1700e-01,
2.2270e-01, -1.0456e-02],
[-1.3599e+00, -9.8881e-01, 3.2034e-01, ..., 2.1444e-03,
...
[-4.8185e-01, 6.5437e-01, -1.7764e-01, ..., 1.1841e+00,
2.4035e-01, 7.0395e-01]]]], grad_fn=<PermuteBackward0>))), hidden_states=None, attentions=None, cross_attentions=None)
:Dictionary inputs to traced functions must have consistent type. Found Tensor and Tuple[Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor], Tuple[Tensor, Tensor]]
``` | 02-10-2022 09:53:41 | 02-10-2022 09:53:41 | dose anyone knows how to solve this? many thanks!<|||||>use `return_dict=False`<|||||>Hello! I recommend reading the following, which mentions what must be done to run models with JIT: https://huggingface.co/docs/transformers/v4.16.2/en/serialization#torchscript<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>try specifying torchscript is True when loading the model
`model = GPT2LMHeadModel.from_pretrained('gpt2', torchscript=True)`<|||||>I have the same problem, how to solve it?<|||||>Same as me,can anyone solve this problem?<|||||>GPT2LMHeadModel.from_pretrained("gpt2", torchscript=True).eval() |
transformers | 15,597 | closed | How to implement generate function for seperate encoder decoder T5 model? | I was working on optimizing the T5 model. The version of the transformer I am using is 4.8. For optimization, I separated the model into encoder and decoder with LM(Language modeling) head. Earlier for generation, I just passed input_ids, attention_mask,max_length, and num_beams. But now since the model is separated into encoder and decoder I can't use generate function. But I am able to generate logits using the below code:
encoder_last_hidden_state = t5_trt_encoder(input_ids=input_ids)
outputs = t5_trt_decoder(input_ids, encoder_last_hidden_state)
But how can I use these logits to generate sequences? I mean, I am unclear how I can use the encoder and decoder to write a class which supports the generate function?
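For reference, one commonly used pattern is to run the encoder once and hand its output to `generate` via `encoder_outputs`, so the decoding loop is still handled by `generate`. A hedged sketch with plain PyTorch T5 (the TensorRT encoder/decoder wrappers would slot in where noted; the names here are assumptions):
```py
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

inputs = tokenizer("translate English to German: The cat is on the table.", return_tensors="pt")
with torch.no_grad():
    # this is where an optimized (e.g. TensorRT) encoder would be called instead
    encoder_hidden = model.encoder(input_ids=inputs.input_ids).last_hidden_state

outputs = model.generate(
    input_ids=inputs.input_ids,
    attention_mask=inputs.attention_mask,
    encoder_outputs=BaseModelOutput(last_hidden_state=encoder_hidden),  # generate() skips its own encoder pass
    max_length=64,
    num_beams=4,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```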
----
note: edited by @stas00 to remove a dozen of tags | 02-10-2022 08:48:14 | 02-10-2022 08:48:14 | Hi! Thanks for the issue. But please don't tag every maintainer on an issue. In the issue template you can find who to tag depending on the model or feature.
Also for such general questions please use the [forum ](https://discuss.huggingface.co/). We use issues for bug reports and feature requests. Thank you !
<|||||>Hi @patil-suraj. Thanks for letting me know about the community. I was trying to optimize the T5 based QA-QG model from your repository using https://github.com/NVIDIA/TensorRT/tree/main/demo/HuggingFace repo. Here they have separated the model into encoder and decoder with LM(Language model) for optimization. But when I am doing inference in the following way:
max_length = 64
decoder_input_ids = torch.full(
(1, 1), tokenizer.convert_tokens_to_ids(tokenizer.pad_token), dtype=torch.int32
)
encoder_last_hidden_state = t5_trt_encoder(input_ids=input_ids)
outputs = t5_trt_decoder.greedy_search(
input_ids=decoder_input_ids,
encoder_hidden_states=encoder_last_hidden_state,
stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length)])
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
I am getting an empty result. I tried with an unoptimized torch model too but the results are the same. Hence I thought maybe that can be resolved if somehow I am able to use logits from these in generate function. Hence I asked the question.
The same prediction method works for translation case.
The tokenizer is the same as Valhalla/t5-small-qa-qg-hl
Please help.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,596 | closed | Add example batch size to all commands | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As discussed offline with @stas00 and @sgugger, this PR adds the default batch_size to the example commands to make the reader aware of how the batch size can be set.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-10-2022 07:53:26 | 02-10-2022 07:53:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,594 | closed | [research_projects] deal with security alerts | for 2 weeks now dependabot keeps on complaining about old PL versions hardcoded in research_projects that have a vulnerability.
This PR may break those abandoned examples but at least gives the developers some peace of mind.
Perhaps it's time to move research_projects that have been abandoned to their own repo.
@LysandreJik | 02-10-2022 04:39:16 | 02-10-2022 04:39:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Good idea, @LysandreJik - done
Please review with whitespace-ignore filter on:
https://github.com/huggingface/transformers/pull/15594/files?diff=split&w=1 |
transformers | 15,593 | closed | Fix bugs of s2t fairseq model converting | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This fix bugs in s2t fairseq model converting into HF one.
Fixes
- Fix typo in add_argument
- Fix `_Missing key(s) in state_dict_` error while loading fairseq model weights into HF model. This specifcally fixes issue cased by fairseq model's fixed positional embedding in both encoder and decoder.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-10-2022 02:20:28 | 02-10-2022 02:20:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15593). All of your documentation changes will be reflected on that endpoint.<|||||>> @patil-suraj it's also possible that the position embedding weights are present when converting from Fairseq no? Or are they always missing?
Sinusoidal embeddings in fairseq are not registered as parameters, so they are always missing from the `state_dict`<|||||>> > @patil-suraj it's also possible that the position embedding weights are present when converting from Fairseq no? Or are they always missing?
>
> Sinusoidal embeddings in fairseq are not registered as parameters, so they are always missing from the `state_dict`
@patrickvonplaten good point, technically positional embedding weights can be present if fairseq model is trained using learned embedding. Say if you give `learned=True` as a 4th argument in [PositionalEmbedding function](https://github.com/pytorch/fairseq/blob/5f2515e676985efd2944f6c87fe1b086cb94bdee/fairseq/modules/positional_embedding.py#L12~L16) which is being called in [encoder](https://github.com/pytorch/fairseq/blob/5f2515e676985efd2944f6c87fe1b086cb94bdee/fairseq/models/speech_to_text/s2t_transformer.py#L331~#L333) and [decoder](https://github.com/pytorch/fairseq/blob/5f2515e676985efd2944f6c87fe1b086cb94bdee/fairseq/models/transformer/transformer_decoder.py#L97~#L103).
@patil-suraj , kindly correct me if I am wrong.
So in short, depending on training h-parameter settings, `missing` lists can be 4 cases.
`[], ["encoder.embed_positions.weights"], ["decoder.embed_positions.weights"], ["encoder.embed_positions.weights", "decoder.embed_positions.weights"]`, although if the model is trained straight-forward way(using fixed positional embedding), both of encoder, decoder positional embedding weights must be missing.
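A hedged sketch of what tolerating exactly these expected-missing keys could look like (key names taken from the lists above; the actual conversion script may differ):
```py
from typing import Iterable

EXPECTED_MISSING = {
    "encoder.embed_positions.weights",
    "decoder.embed_positions.weights",
}


def check_missing_keys(missing: Iterable[str], unexpected: Iterable[str]) -> None:
    """Allow only the fixed positional embedding weights to be absent."""
    surprising = set(missing) - EXPECTED_MISSING
    if surprising or list(unexpected):
        raise ValueError(f"missing: {sorted(surprising)}, unexpected: {sorted(unexpected)}")


# e.g. after: missing, unexpected = hf_model.model.load_state_dict(fairseq_state_dict, strict=False)
check_missing_keys(["encoder.embed_positions.weights"], [])  # one of the four accepted cases
```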
Based on @patrickvonplaten's suggestion, I have modified the code to cover all those cases.
Thank you for suggestion.
Please review reflected changes @patil-suraj @patrickvonplaten
<|||||>hmm I kind of did something wrong with git by force-push ing it. Please let me know if it can be a problem :( <|||||>@patil-suraj could you take a final look here and merge if ok?<|||||>Hey guys, I have trained s2t model with fairseq and for some reason conversion still fails. https://github.com/huggingface/transformers/blob/main/src/transformers/models/speech_to_text/convert_s2t_fairseq_to_tfms.py#L56 here it seems args is actually none. It seems that ['cfg']['model'] is a thing code actually requires. with this change conversion is successful but I am missing preprocessor_config.json can anyone help me with this one?<|||||>@patil-suraj could you take a quick look here?<|||||>Also while we are at it, conversion does not take into account about model.vocab file, the easiest way I find possible to successfully transcribe audio with hf was to copy spm_unigram10000.vocab and rename it to vocab.json, also grapped this one as a pre processor https://huggingface.co/facebook/s2t-small-librispeech-asr/blob/main/preprocessor_config.json. I think vocab should be saved with fairseq pt or script should require its path at least... also Speech2TextForConditionalGeneration generate function does not take input_ids it takes inputs. I will be happy to help in any way ;) |
transformers | 15,592 | closed | Update TAPAS script to convert TAPAS-NQ weights (re-attempt #11907) | Closes https://github.com/huggingface/transformers/issues/14494
This is a reattempt of #11907, as it became stale and was closed. I had to use the latest version and copy the code over.
Instructions:
* Clone fork
* go to `transformers/`
* `pip install torch pandas tensorflow`
* `pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.0+102.html` (or whatever your cu version is, just do `pip show torch` to find out)
* Download the retriever medium or large weights with HN from the official source: https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md
* Run this:
```
# Medium model
python src/transformers/models/tapas/convert_tapas_original_tf_checkpoint_to_pytorch.py --task RETRIEVAL --tf_checkpoint_path tapas_nq_hn_retriever_medium/model.ckpt --tapas_config_file tapas_nq_hn_retriever_medium/bert_config.json --pytorch_dump_path tapas_od/ --vocab_file tapas_nq_hn_retriever_medium/vocab.txt
# large model
python src/transformers/models/tapas/convert_tapas_original_tf_checkpoint_to_pytorch.py --task RETRIEVAL --tf_checkpoint_path tapas_nq_hn_retriever_large/model.ckpt --tapas_config_file tapas_nq_hn_retriever_large/bert_config.json --pytorch_dump_path tapas_nq_large/ --vocab_file tapas_nq_hn_retriever_large/vocab.txt
```
## Results
https://huggingface.co/xhlu/tapas-nq-hn-retriever-medium-1
https://huggingface.co/xhlu/tapas-nq-hn-retriever-medium-0
https://huggingface.co/xhlu/tapas-nq-hn-retriever-large-1
https://huggingface.co/xhlu/tapas-nq-hn-retriever-large-0 | 02-10-2022 02:13:37 | 02-10-2022 02:13:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15592). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@xhluca Thanks for this contribution! Can you elaborate how to use this for inference?<|||||>This script isn't meant for inference. You should use load the regular TAPAS model. I have uploaded them to Huggingface already so you can find my name and use them if you don't want to run this script. |
transformers | 15,591 | closed | T5 encoder past_key_values | Currently only the T5 decoder can use `past_key_values`, but not the encoder. Why?
https://github.com/huggingface/transformers/blob/644ec052332beb160276ceaf9cc9ab6bbf0b89d2/src/transformers/models/t5/modeling_t5.py#L631
My use case is Li & Liang (2021) style prefix tuning (https://arxiv.org/abs/2101.00190), which I think can be implemented by providing `past_key_values` to the encoder.
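For concreteness, a hedged sketch of the BERT-style pattern I have in mind (the prefix shapes and the widened attention mask are the key details; this is illustrative, not the prefix-tuning repo's code):
```py
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

inputs = tokenizer("A short input.", return_tensors="pt")
batch_size, prefix_len = inputs.input_ids.shape[0], 5
n_layers = model.config.num_hidden_layers
n_heads = model.config.num_attention_heads
head_dim = model.config.hidden_size // n_heads

# one (key, value) pair per layer, each of shape (batch, heads, prefix_len, head_dim)
past_key_values = tuple(
    (torch.randn(batch_size, n_heads, prefix_len, head_dim), torch.randn(batch_size, n_heads, prefix_len, head_dim))
    for _ in range(n_layers)
)
# the attention mask has to cover the prefix positions as well
attention_mask = torch.cat(
    [torch.ones(batch_size, prefix_len, dtype=inputs.attention_mask.dtype), inputs.attention_mask], dim=1
)
out = model(input_ids=inputs.input_ids, attention_mask=attention_mask, past_key_values=past_key_values)
print(out.last_hidden_state.shape)
```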
And if `past_key_values` is not a viable option, are there other simple ways to implement this with T5? | 02-10-2022 00:58:24 | 02-10-2022 00:58:24 | The reason is that the encoder has no need to cache keys and values; all we need to do is cache the encoder output hidden states and then we can reuse those for cross-attention. Ultimately, the T5 encoder is only used one time during generation - just one forward pass at the very beginning of generation and then we have the hidden states and are good to go on that front. On the other hand, the decoder does benefit from caching keys and values, as otherwise we'd have to recompute them every time step during generation (unlike the encoder, the decoder has to do a forward pass every single step).
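To make the contrast above concrete, here is a small sketch using the public T5 API (the model name and shapes are only illustrative):

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer("translate English to German: Hello", return_tensors="pt")
encoder_outputs = model.encoder(**enc)  # the encoder runs a single time

decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
out = model(
    encoder_outputs=encoder_outputs,
    decoder_input_ids=decoder_input_ids,
    use_cache=True,
)
# one (self-attn key, self-attn value, cross-attn key, cross-attn value) tuple per decoder layer,
# reused and extended by one position at every generation step
print(len(out.past_key_values), out.past_key_values[0][0].shape)
```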
> And if past_key_values is not a viable option, are there other simple ways to implement this with T5?
I'm not familiar with that paper, so I might not be of any help. However, you do have access to the encoder hidden states, which are essentially just a higher dimensional version of the key and value states, so you could look into ways to use those instead. Alternatively, you could modify the T5 code to output the key and value states computed in the encoder.<|||||>Yes, I understand this motivation. But `past_key_values` can also be used to implement this method I cited. It is perhaps an unorthodox use of this attribute, but I don't think it should be forbidden. Re. encoder hidden states: `past_key_values` effectively prepends several trainable tokens *at each layer* for T5 to attend to -- I don't think manipulating the hidden states can achieve this purpose. And yes I could modify the T5 code, but that would be inelegant. Especially since I need to modify `T5Attention`, I would have to modify (or inherit) all other T5 classes that use it, either directly or indirectly.<|||||>> Re. encoder hidden states: past_key_values effectively prepends several trainable tokens at each layer for T5 to attend to -- I don't think manipulating the hidden states can achieve this purpose.
Do you mean in Huggingface's T5 implementation or in the paper you're trying to implement? The HF T5 implementation simply projects the previous layer's hidden states into key and value states without doing anything special for each layer. So to me it sounds like you might have to customize `T5Attention` for your use-case anyway.<|||||>Also, I think the maintainers missed this thread, so perhaps make a feature request for passing passing past keys and values to the encoder? I do think you're correct that they _could_ be passed to the encoder (`T5Attention` would have to be modified to concatenate them to the present keys and values, as in the decoder). Or you could tag one of the maintainers in this thread.<|||||>I was referring to the paper which could be implemented by passing `past_key_values` to the encoder, allowing us to not have to change `T5Attention` at all. This has been done for e.g. BERT-like models in this way by the authors and other people, but not T5 due to this restriction.
Maybe @LysandreJik since we briefly talked about this on slack?<|||||>Hey @ZhaofengWu,
Sorry I thought I answered here before. I do agree with what @dblakely says here mostly.
Could you maybe open a PR to showcase a bit what changes we would have to do to satisfy your use case?
IMO this is very much an edge case and I've never seen anybody use `past_key_values` states in BERT. The only reason we have this implemented in `BertAttention` is because BERT can be used as a decoder<|||||>@patrickvonplaten Sorry for the late reply. I think the only change needed would be to remove this assertion. Re. past work using `past_key_values` in BERT to provide a prefix, see the original implementation of the prefix tuning paper (https://github.com/XiangLi1999/PrefixTuning), and a follow-up work P-tuning-v2's (https://github.com/THUDM/P-tuning-v2).<|||||>Oh, and another thing is that currently `past_key_values` passes to a T5 model is only given to the decoder. This is workaroundable for my purpose by manually computing `encoder_outputs`, with `past_key_values`, and giving it to the T5 model. But ideally there can be something like `{encoder,decoder}_past_key_values`. Though I would understand if you believe this is too hacky and are unwilling to support this.<|||||>> @patrickvonplaten Sorry for the late reply. I think the only change needed would be to remove this assertion. Re. past work using `past_key_values` in BERT to provide a prefix, see the original implementation of the prefix tuning paper (https://github.com/XiangLi1999/PrefixTuning), and a follow-up work P-tuning-v2's (https://github.com/THUDM/P-tuning-v2).
Ok if we just need to change the assertion to a warning than I think that this would be fine :-) Would you like to open a PR for this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Oh, and another thing is that currently `past_key_values` passes to a T5 model is only given to the decoder. This is workaroundable for my purpose by manually computing `encoder_outputs`, with `past_key_values`, and giving it to the T5 model. But ideally there can be something like `{encoder,decoder}_past_key_values`. Though I would understand if you believe this is too hacky and are unwilling to support this.
Hi Zhaofeng,
I'm also trying to apply the prefix-prompt idea to T5 model and facing the issues similar to what you had. Could you elaborate a bit more on how did you pass "past_key_values" to the T5 encoder?
Thanks,
JC<|||||>I ended up writing a whole stack of T5ModelWithPrefix/T5EncoderWithPrefix/... I hoped it would be easier but I don't think the current transformers setup allows that.<|||||>> I ended up writing a whole stack of T5ModelWithPrefix/T5EncoderWithPrefix/... I hoped it would be easier but I don't think the current transformers setup allows that.
Hi Zhaofeng, would it be possible to share your implementations? I couldn't either make it work based on the current transformers' implementation for T5. Or could you shed more lights on what core changes you made?
<|||||>> I ended up writing a whole stack of T5ModelWithPrefix/T5EncoderWithPrefix/... I hoped it would be easier but I don't think the current transformers setup allows that.
Hello, I also have the same problem, could you share your implementations?Thank you very much.<|||||>Sure, I should be able to share it along with some other project-specific code after the EMNLP decisions, which will be in less than a month. I will let you know when that happens.<|||||>Thank you very much.
<|||||>In case someone is still looking for T5 with prefix-tuning implementation: https://github.com/HKUNLP/UnifiedSKG/blob/main/models/prompt/modeling_t5.py<|||||>thanks
> In case someone is still looking for T5 with prefix-tuning implementation: https://github.com/HKUNLP/UnifiedSKG/blob/main/models/prompt/modeling_t5.py
Thanks!<|||||>@yushuinanrong @BAOOOOOM Sorry for the delay, but if it's still helpful, our implementation can be found at https://github.com/allenai/better-promptability/blob/main/better_promptability/models/t5_with_prefix.py |
transformers | 15,590 | closed | Fix ASR pipelines from local directories with wav2vec models that have language models attached | # What does this PR do?
Fixes #15589.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: https://discord.com/channels/879548962464493619/930507803339137024/939885949633064960
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten, @Narsil
| 02-09-2022 23:40:07 | 02-09-2022 23:40:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Sure thing, I'll add the test tomorrow 👍🏼 <|||||>@versae you need to protect your test with @require_torch or @require_pyctcdecode (not sure what was failing)<|||||>@Narsil, thanks! Sorry for all the mess with the commits. I'm trying to figure out what's going on. You guys have a pretty serious CI setup and I don't even have a local GPU. I'll mark it as WIP until I think is ready.<|||||>@versae No worries, it is indeed a pretty involved test suite.
If you want to run it, you don't need to run it entirely though:
```
RUN_PIPELINE_TESTS=1 pytest -sv tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_with_local_lm_fast
```
or
```
RUN_PIPELINE_TESTS=1 pytest -sv tests/test_pipelines_automatic_speech_recognition.py
```
These only run on CPU for instance.
This should run only the test you are creating, making it much easier to debug. And don't hesitate to ask for help if you don't understand what's happening; the `@require` decorators are easy to miss, and hard to understand once they fail.
Thanks for the PR! <|||||>Alright, I think it's ready now. Thanks y'all for all the awesome libraries you open source! |
transformers | 15,589 | closed | ASR pipelines won't load local Wav2Vec models with language models attached | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help
@patrickvonplaten, @Narsil.
## Information
Model I am using: Wav2Vec2 with KenLM
The problem arises when using:
* [x] the official example scripts: Any script using the ASR pipeline trying to load from a local directory a Wav2Vec2 model with a language model attached, as in for example [eval.py](https://github.com/huggingface/transformers/blob/master/examples/research_projects/robust-speech-event/eval.py)
* [ ] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: `robust-speech-event`
* [ ] my own task or dataset
## To reproduce
Steps to reproduce the behavior:
1. Download `eval.py` script
2. Clone a model repo that contains a language model
3. Run the script with the model in a local directory
4. It tries to download the model from the hub even though it should load locally
```bash
$ git clone https://huggingface.co/NbAiLab/wav2vec2-xls-r-1b-npsc-bokmaal-low-27k
$ cd wav2vec2-xls-r-1b-npsc-bokmaal-low-27k
$ python eval.py --model_id ./ --dataset NbAiLab/NPSC --config 16K_mp3_bokmaal --split test --log_outputs
Reusing dataset npsc (/home/user/.cache/huggingface/datasets/NbAiLab___npsc/16K_mp3_bokmaal/1.0.0/fab8b0517ebc9c0c6f0d019094e8816d5537f55d965f2dd90750349017b0bc69)
Traceback (most recent call last):
File "/home/user/wav2vec2-xls-r-1b-npsc-bokmaal-low-27k/eval.py", line 151, in <module>
main(args)
File "/home/user/wav2vec2-xls-r-1b-npsc-bokmaal-low-27k/eval.py", line 98, in main
asr = pipeline("automatic-speech-recognition", model=args.model_id, device=args.device)
File "/home/user/audio/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 628, in pipeline
decoder = BeamSearchDecoderCTC.load_from_hf_hub(model_name, allow_regex=allow_regex)
File "/home/user/audio/lib/python3.9/site-packages/pyctcdecode/decoder.py", line 771, in load_from_hf_hub
cached_directory = snapshot_download(model_id, cache_dir=cache_dir, **kwargs)
File "/home/user/audio/lib/python3.9/site-packages/huggingface_hub/snapshot_download.py", line 144, in snapshot_download
model_info = _api.model_info(repo_id=repo_id, revision=revision, token=token)
File "/home/user/audio/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 912, in model_info
r.raise_for_status()
File "/home/user/audio/lib/python3.9/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models//revision/main
```
## Expected behavior
It should not try to download anything when the model is a path to a local directory.
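As a stopgap, a hedged workaround sketch (not the library fix itself, and assuming this transformers version exposes `Wav2Vec2ProcessorWithLM` and the ASR pipeline's `decoder` argument) is to build the pipeline pieces from the local directory explicitly so nothing is fetched from the Hub:

```python
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM, pipeline

local_dir = "./"  # the cloned model repo
processor = Wav2Vec2ProcessorWithLM.from_pretrained(local_dir)
model = AutoModelForCTC.from_pretrained(local_dir)

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    decoder=processor.decoder,
)
```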
| 02-09-2022 23:31:57 | 02-09-2022 23:31:57 | |
transformers | 15,588 | closed | [ViTMAE] Add link to script | # What does this PR do?
This PR adds a link to the PyTorch example script in the docs of ViTMAE. | 02-09-2022 21:52:27 | 02-09-2022 21:52:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,587 | closed | Expand tutorial for custom models | # What does this PR do?
This PR extends the tutorial for custom models by showing how to write such a custom model, with an example of wrapping a model from the timm library. | 02-09-2022 21:30:17 | 02-09-2022 21:30:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,586 | closed | Add SimMIM | # What does this PR do?
This PR adds an `AutoModelForMaskedImageModeling` API to the library with a corresponding `run_mim.py` script.
It makes it possible to pre-train the following Transformer-based vision models:
* ViT
* DeiT
* BEiT
* Swin Transformer
based on Microsoft's cool (and simple) [SimMIM](https://arxiv.org/abs/2111.09886) paper (note that this won't work for convolution based models such as ConvNeXT).
This requires a tiny tweak in each modeling file, unfortunately. However, it's quite minimal, namely I added the ability to add a `mask_token` trainable parameter. This is a bit different compared to NLP models, which have the mask token inside their embedding layer (`nn.Embedding`).
The `run_mim.py` script is directly inspired by `run_mlm.py`, but adapted to work for vision.
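A rough sketch of the intended usage (the class and argument names are the ones proposed in this PR, so treat this as illustrative rather than final):

```python
import torch
from transformers import ViTForMaskedImageModeling

model = ViTForMaskedImageModeling.from_pretrained("google/vit-base-patch16-224-in21k")

pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a real image batch
num_patches = (model.config.image_size // model.config.patch_size) ** 2  # 196 for ViT-B/16
bool_masked_pos = torch.randint(0, 2, (1, num_patches)).bool()  # random SimMIM-style mask

outputs = model(pixel_values=pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.loss)  # pixel reconstruction loss on the masked patches
```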
| 02-09-2022 20:55:13 | 02-09-2022 20:55:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Updated the PR to not include BEiT. |
transformers | 15,595 | closed | Dynamic padding did not work as expected for custom audio dataset | ## Describe the bug
Hello everyone, following-up of [this post](https://discuss.huggingface.co/t/dynamic-padding-not-working-for-audio-custom-dataset/14532) and [official blog post](https://huggingface.co/blog/fine-tune-wav2vec2-english) on fine-tuning Wav2Vec model. Turns out it did not pad correctly w.r.t. input features.
## Steps to reproduce the bug
```python
import soundfile as sf
from datasets import load_dataset

def map_speech_to_array(batch):
"""
map the wav file to audio signals
:param batch: the loaded dataset, with audio file location as "column"
:type batch: datasets.dataset_dict.DatasetDict
"""
speech_array, sampling_rate = sf.read(batch["audio_loc"])
batch["speech"] = speech_array
batch["sampling_rate"] = sampling_rate
batch["audio_loc"] = batch["audio_loc"]
batch["text"] = batch["text"]
return batch
def prepare_dataset(batch):
"""
data preprocess with Wav2Vec customized processor
:param batch: the loaded dataset
:type batch: datasets.dataset_dict.DatasetDict
:param processor: the customized
:type processor: transformers.models.wav2vec2.processing_wav2vec2.Wav2Vec2Processor
"""
batch["input_values"] = processor(batch["speech"], sampling_rate=batch["sampling_rate"]).input_values[0]
with processor.as_target_processor():
labels = processor(batch["text"]).input_ids
batch["labels"] = labels
return batch
loaded_dt = load_dataset(
'csv',
data_files={"train": "../manifest/train.csv",
"test": "../manifest/test.csv"})
loaded_dt = loaded_dt.map(
map_speech_to_array)
loaded_dt = loaded_dt.map(prepare_dataset)
train_dt = loaded_dt["train"]
```
I tried the following code for padding investigation
```python
tokenizer = Wav2Vec2CTCTokenizer(
"../vocab/vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|",
padding=True, truncation=True)
feature_extractor = Wav2Vec2FeatureExtractor(
feature_size=1, sampling_rate=SAMPLE_RATE, padding_value=0.0,
do_normalize=True, return_attention_mask=False,
padding=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
train_dt =train_dt.map(map_speech_to_array)
train_dt = train_dt.map(prepare_dataset)
input_features = []
label_features = []
for i, item in enumerate(train_dt):
input_features.append({"input_values":train_dt[i]["input_values"]})
label_features.append({"input_ids":train_dt[i]["labels"]})
print(len(label_features[0]["input_ids"]))
batch = processor.pad(
input_features,
padding=True,
return_tensors="pt")
print(batch)
with processor.as_target_processor():
labels_batch = processor.pad(
label_features,
padding=True,
return_tensors="pt")
```
## Expected results
The code above is line-by-line breakdown of `DataCollatorCTCWithPadding` provided in the official blog. It should start to fine-tune Wav2Vec model.
## Actual results
```
labels_batch = processor.pad(
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py", line 150, in pad
return self.current_processor.pad(*args, **kwargs)
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2776, in pad
padding_strategy, _, max_length, _ = self._get_padding_truncation_strategies(
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2312, in _get_padding_truncation_strategies
if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):
TypeError: '<' not supported between instances of 'NoneType' and 'int'
```
I got a very similar error when I used `Trainer`
```python
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-base",
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id)
model.to(DEVICE)
model.freeze_feature_extractor()
data_collator = DataCollatorCTCWithPadding(
processor=processor, padding=True)
training_args = TrainingArguments(
output_dir="../fine-tuned/wav2vec",
group_by_length=True,
per_device_train_batch_size=32,
evaluation_strategy="steps",
num_train_epochs=30,
fp16=True,
gradient_checkpointing=True,
save_steps=500,
eval_steps=500,
logging_steps=500,
learning_rate=1e-4,
weight_decay=0.005,
warmup_steps=1000,
save_total_limit=2,
push_to_hub=False)
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=loaded_dt["train"],
eval_dataset=loaded_dt["test"],
tokenizer=processor.feature_extractor)
trainer.train()
```
Which returns:
```
trainer.train()
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/transformers/trainer.py", line 1306, in train
for step, inputs in enumerate(epoch_iterator):
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "wav2vec_test.py", line 73, in __call__
labels_batch = self.processor.pad(
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py", line 150, in pad
return self.current_processor.pad(*args, **kwargs)
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2776, in pad
padding_strategy, _, max_length, _ = self._get_padding_truncation_strategies(
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2312, in _get_padding_truncation_strategies
if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):
TypeError: '<' not supported between instances of 'NoneType' and 'int'
0%|
```
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-4.15.0-166-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
Any suggestions? Thanks in advance. | 02-09-2022 19:57:10 | 02-09-2022 19:57:10 | Hi @changyeli, thanks for reporting.
According to the TypeError message you get, `self.pad_token_id` is None, and the code line
```
File "/home/lixx3013/anaconda3/envs/toolkit/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2312, in _get_padding_truncation_strategies
if padding_strategy != PaddingStrategy.DO_NOT_PAD and (not self.pad_token or self.pad_token_id < 0):
```
is trying to compare it with the int value 0.
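A quick way to check this on your side (a sketch reusing the paths from your snippet): if `[PAD]`/`[UNK]` are missing from the custom `vocab.json`, the tokenizer can end up with `pad_token_id=None`, which triggers exactly this `TypeError`.

```python
import json

from transformers import Wav2Vec2CTCTokenizer

with open("../vocab/vocab.json") as f:
    vocab = json.load(f)
print("[PAD]" in vocab, "[UNK]" in vocab)

tokenizer = Wav2Vec2CTCTokenizer(
    "../vocab/vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
print(tokenizer.pad_token, tokenizer.pad_token_id)  # pad_token_id must not be None
```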
I think this is an issue with `transformers` instead of `datasets`. I'm transferring this issue to the transformers team.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,585 | closed | [deepspeed docs] misc additions | This PR:
- documents a new Deepspeed optimization param
- fixes a header (had 2 with similar names)
- adds new troubleshooting entry
@sgugger | 02-09-2022 19:55:35 | 02-09-2022 19:55:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,584 | closed | [Flax] abstract init model | # What does this PR do?
| 02-09-2022 18:04:53 | 02-09-2022 18:04:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15584). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,583 | closed | Add TF implementation of GPT-J model | # 🚀 Feature request
Add TF implementation of GPT-J model
## Motivation
Narrow the gap between PT and TF implementations.
## Your contribution
I'd like to work on this. | 02-09-2022 17:16:32 | 02-09-2022 17:16:32 | Good luck.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>active #15623 |
transformers | 15,582 | closed | train_file and validation_file in jsonl format with examples/pytorch/language-modeling/run_clm.py | I'm using `./examples/pytorch/language-modeling/run_clm.py` providing the arguments _--train_file_ and _--validation_file_ with the **json** extension, however the underlying file is really a **jsonl** file. This is confusing as I need to rename my jsonl files.
Since run_clm.py is compatible with jsonl files, is it possible to include the *jsonl* extension [here](https://github.com/huggingface/transformers/blob/eed3186b79d7ef9187113849f9ee7882bdd359fd/examples/pytorch/language-modeling/run_clm.py#L186) and [here](https://github.com/huggingface/transformers/blob/eed3186b79d7ef9187113849f9ee7882bdd359fd/examples/pytorch/language-modeling/run_clm.py#L189)? | 02-09-2022 17:15:49 | 02-09-2022 17:15:49 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,581 | closed | Flax model much slower than Pytorch | Hi,
I am running the same QuestionAnswering model using Pytorch and Flax, but I am seeing quite a bad performance for Flax.
| Model | Time |
|--------------- |-------- |
| Pytorch (GPU) | 30 ms |
| Flax (GPU) | 4 sec |
| Flax (TPU) | 15 sec |
I have warmed the jit function, and I think I have put the model and data on the device in both cases. What am I doing wrong here?
Notebook: https://colab.research.google.com/drive/18ouY4S9rYtlEC1dht_w8WU4fFKnP9yhP#scrollTo=FLfV3ymlwRJC
## Environment info
- `transformers` version: 4.16.2
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.4.0 (gpu)
- Jax version: 0.2.25
- JaxLib version: 0.1.71
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): BERT (FlaxBertForQuestionAnswering, BertForQuestionAnswering)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run [this notebook](https://colab.research.google.com/drive/18ouY4S9rYtlEC1dht_w8WU4fFKnP9yhP#scrollTo=FLfV3ymlwRJC) on Google Colab (GPU runtime)
## Expected behavior
About the same performance for Pytorch as for Flax.
| 02-09-2022 16:47:14 | 02-09-2022 16:47:14 | cc @patil-suraj think we need to improve our docs here.
@yorickvanzweeden the very first time you run `run_model_jit(jax_inputs_batched)`, the model is compiled which takes time. However the second time you run it, it should be pretty fast. Could you maybe run the code say 20 times for both PyTorch and Flax and then check if the Flax is still as slow?<|||||>@patrickvonplaten Thank you for your swift response!
I noticed caching on the inputs, so I have two inputs. When I do something like this, I still get about 4 seconds versus 40 ms.
```python
# PyTorch
%timeit -r 20 -n 1 model(**inputs_gpu) # <-- 20x repeat
> 1 loop, best of 20: 20.5 ms per loop
%timeit -r 1 -n 1 model(**inputs_gpu2)
> 1 loop, best of 1: 39.1 ms per loop
# Flax
%timeit -r 20 -n 1 run_model_jit(jax_inputs_batched) # <-- 20x repeat
> 1 loop, best of 20: 44.7 µs per loop
%timeit -r 1 -n 1 run_model_jit(jax_inputs_batched2)
> 1 loop, best of 1: 3.83 s per loop
```
Also, please note that I am using `FlaxBertForQuestionAnsweringModule`, and not `FlaxBertForQuestionAnswering` directly. I would like to use the module directly.
But also when using `FlaxBertForQuestionAnswering.from_pretrained`, I get the second input in seconds (5.75 sec).
I updated [the notebook](https://colab.research.google.com/drive/18ouY4S9rYtlEC1dht_w8WU4fFKnP9yhP#scrollTo=d5Ic_18Affx0) for the 20x repeat, and using the model rather than the module.
<|||||>Hey @yorickvanzweeden,
Cool, so it seems that once the flax inputs are cached, the forward pass is very fast no?
Now the problem is that for the second input `jax_inputs_batched2` the code is recompiled again. The most important thing when dealing with `jax.jit(...)` is to remember the following if you `jax.jit(...)` to create a "jitted" function, *e.g.* `run_model_jit(...)` and then run the function, *e.g.* `run_model_jit(jax_inputs_batched2)`, the following happens under the hood:
1. JAX checks if it can find a compiled binary that corresponds to the function you jitted `run_model_jit(...)` **and** the input shape you forwarded.
2. If there is no match, then JAX compiles the function + input shape and we repeat step 1. If there is a match then the compiled binary is executed.
In your case for `run_model_jit(jax_inputs_batched2)` JAX does find a compiled binary, but the compiled binary corresponds to `run_model_jit` + `jax_inputs_batched.shape` and while you again use `run_model_jit`, `jax_inputs_batched2.shape` != `jax_inputs_batched.shape` which means that the function is compiled again. To remedy this you should make sure that all your inputs have the exact same shape, *e.g.* by padding the input. This way you won't trigger a new recompilation.<|||||>Hi @patrickvonplaten,
Thank you for the explanation. The different shapes indeed resulted in recompilation. It is now about 15% faster than PyTorch :)
Thanks for the help! |
transformers | 15,580 | closed | Fix tests hub failure | # What does this PR do?
This PR fixes the flaky tests in the test_hub job.
TL;DR: the two tests changed push to the Hub with async commits which is the source of two flaky errors and one at-exit error:
- 1st flaky error: the pushes may not be finished before we try to access the repo, so we wait for the push in progress to be finished
- 2nd flaky error: the pushes may not all be done, since a commit is skipped when the push in progress is not finished
At-exit error: when exiting the test, `huggingface_hub` accesses the `is_done` property of some of the commands (even if they are finished), which calls another time the `post_method` of those command (`lfs_prune`) on a repo that does not exist anymore. This is fixed on the `huggingface_hub` side by the PR mentioned above. | 02-09-2022 15:57:05 | 02-09-2022 15:57:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,579 | closed | Click new version | Upgrades click version in CI, as the `ubuntu-latest` image with python 3.8 support has `click~=7` installed by default which is incompatible with `black`. The `pip` dependency resolver doesn't manage to update. | 02-09-2022 15:27:25 | 02-09-2022 15:27:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,578 | closed | Documentation error | ##
Hi, I think there is a tiny issue in the documentation of using pipelines
The current [doc](https://github.com/huggingface/transformers/blob/4a6a35bc6506dd5deb528580184c39e3e778d53b/docs/source/main_classes/pipelines.mdx) says that the KeyDataset class is in the transformers.pipelines.base, however the class is actually in the [pt_utils](https://github.com/huggingface/transformers/blob/19d37c2dd36d73537a6855c56808e524fb584459/src/transformers/pipelines/pt_utils.py)
```python
#The current doc (not working)
from transformers.pipelines.base import KeyDataset
```
```python
#The actual class is in
from transformers.pipelines.pt_utils import KeyDataset
```
Running the current doc's examples won't work:
```
ImportError: cannot import name 'KeyDataset' from 'transformers.pipelines.base' (/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py)
``` | 02-09-2022 14:44:40 | 02-09-2022 14:44:40 | Hi,
Thanks for spotting. Feel free to open a PR to fix this!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,577 | closed | Tokenizers dictionaries not being saved to cache. | Hi,
Sometimes the Hugging Face servers are down. This can break training scripts if you are trying to load a tokenizer, because they fetch the vocabulary through the API every time.
e.g.
`tokenizer = transformers.CLIPTokenizer.from_pretrained("openai/CLIP/ViT-B/32")`
If the servers are down the above line of code will fail:
`requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/openai/CLIP/ViT-B/32`
Why not save the tokenizer vocab to the local model cache the first time this is called? This would reduce unnecessary calls to the server and make training models less sensitive to huggingface server downtime.
| 02-09-2022 12:16:49 | 02-09-2022 12:16:49 | Hello 🙂
Tokenizers get automatically cached when first downloaded, so once you downloaded the tokenizer, you can use `AutoTokenizer.from_pretrained(model_name, local_files_only=True)` and if the tokenizer is present in the cache, you'll get the tokenizer. Otherwise, you'll get `TypeError: __init__() missing 1 required positional argument: 'vocab_file'`.
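A minimal sketch of that pattern (the model name is just an example): download once while online, then force cache-only loading afterwards.

```python
from transformers import AutoTokenizer

AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")  # first call populates the cache

tokenizer = AutoTokenizer.from_pretrained(
    "openai/clip-vit-base-patch32", local_files_only=True  # later calls never hit the Hub
)
```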
If you want to work completely offline, you can run your script with the `TRANSFORMERS_OFFLINE=1` environment variable.<|||||>ok tbh i feel like the cache should be used automatically without needing the flag, it also seems to not have downloaded for me but ill check further |
transformers | 15,576 | closed | I cannot connect to the HF website |

## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 02-09-2022 12:06:18 | 02-09-2022 12:06:18 | Their services are currently down. Check: https://status.huggingface.co/ to see when they are up again.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,575 | closed | Inference using ONNX model exported by transformers.onnx raises [ONNXRuntimeError] INVALID_ARGUMENT : Unexpected input data type | Hello !
I am currently encountering a strange behaviour when using a ONNX model generated by the `transformers.onnx` package on an actual documentation example, which can be found here: https://huggingface.co/docs/transformers/serialization#exporting-a-model-to-onnx
I have attached all relevant info I could think of, but don't hesitate to tell me if you need anything else!
## Environment info
- `transformers` version: 4.16.0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.0+cpu (False)
- ONNX version: 1.10.2
- ONNX Runtime version: 1.10.0
- ONNX Runtime tools version: 1.7.0
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
From the suggested list, I think that @sgugger, @patil-suraj are the correct persons to tag.
## Information
When exporting a model to onnx using the CLI command `python -m transformers.onnx --model=distilbert-base-uncased onnx/`, the export works fine but the snippet included in the doc regarding actual inference results in the following error:
``[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(int32)) , expected: (tensor(int64))``
Here is the snippet that produces the error:
```python
from transformers import AutoTokenizer
from onnxruntime import InferenceSession
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
session = InferenceSession("onnx/model.onnx")
# ONNX Runtime expects NumPy arrays as input
inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
I guess that this is solely a question of tensor format, but since this is an example [from the documentation](https://huggingface.co/docs/transformers/serialization#exporting-a-model-to-onnx) I thought it would be valuable to add this here.
---
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
* [X] N/A
## To reproduce
1. Install `transformers[onnx]` through pip
2. Export the distilbert-base-uncased model to the ONNX format using the following CLI command:
`python -m transformers.onnx --model=distilbert-base-uncased onnx/`
3. Execute the following snippet to perform inference:
```python
from transformers import AutoTokenizer
from onnxruntime import InferenceSession
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
session = InferenceSession("onnx/model.onnx")
# ONNX Runtime expects NumPy arrays as input
inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
3. Witness the following error:
```python
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(int32)) , expected: (tensor(int64))
```
## Expected behavior
Normally, the last line of code above should output the actual last hidden state of the model rather than raising an error.
Thanks for your help! | 02-09-2022 09:59:04 | 02-09-2022 09:59:04 | For the record, I forgot to mention that the issue can be easily fixed by converting inputs to numpy int64 as the error states. I opened this issue since the snippet is a direct example from the documentation and it could be a good idea to modify the snippet there.
Here's a quick and dirty fix:
```python
import numpy as np
from transformers import AutoTokenizer
from onnxruntime import InferenceSession
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
session = InferenceSession("onnx/model.onnx")
# ONNX Runtime expects NumPy arrays as input
inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
inputs = {k: v.astype(np.int64) for k, v in inputs.items()}
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```<|||||>cc @lewtun <|||||>Hi @Khreas thank you for raising this issue and writing a detailed report with a potential solution!
As far as I can tell, this might be a problem with Windows rather than the code snippet itself. I've tested it on macOS and Ubuntu, but cannot reproduce your error - see this [Colab](https://colab.research.google.com/drive/1hGpYj08yK0SteBYLptWwq9e2E1qRwOUf?usp=sharing) for a working example.
In particular, the data type for both the `input_ids` and `attention_mask` returned by the tokenizer is `int64` on these OS':
```python
# Returns (dtype('int64'), dtype('int64'))
inputs.input_ids.dtype, inputs.attention_mask.dtype
```
Could you please check the data types of `input_ids` and `attention_mask` on your system? If these are `int32` by default on Windows, then we'll have to debug the problem at the tokenizer level :)
Edit: I just found this [SO](https://stackoverflow.com/questions/36278590/numpy-array-dtype-is-coming-as-int32-by-default-in-a-windows-10-64-bit-machine) question which seems relevant:
> Default integer type np.int_ is C long:
>
> http://docs.scipy.org/doc/numpy-1.10.1/user/basics.types.html
> But C long is int32 in win64.
>
> https://msdn.microsoft.com/en-us/library/9c3yd98k.aspx
>
> This is kind of a weirdness of the win64 platform.
Are you running on a 64-bit architecture?<|||||>Hi @lewtun! Thanks for the quick reply. :)
I ran the following simple code to assess the dtypes:
```python
import numpy as np
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("Is this real life?")
print(f"Type of input_ids and attention masks: {type(inputs['input_ids'])} {type(inputs['attention_mask'])}")
>>> 'Type of input_ids and attention masks: <class 'list'> <class 'list'>'
print(f"dtype of converted numpy arrays: {np.array(inputs['input_ids']).dtype} {np.array(inputs['attention_mask']).dtype}")
>>> 'dtype of converted numpy arrays: int32 int32'
```
I can confirm that the `dtype` of `input_ids` and `attention_mask` are `int32` on my side, but the strange thing is that the tokenizer seems to return lists instead of numpy arrays to begin with.
As you mentioned, I am currently on a Windows machine with an x64 architecture, so the SO post you linked could explain the issue!<|||||>Hey @Khreas
> but the strange thing is that the tokenizer seems to return lists instead of numpy arrays to begin with.
I think you just need to include the `return_tensors="np"` argument when you use the tokenizer :)
> As you mentioned, I am currently on a Windows machine with an x64 architecture, so the SO post you linked could explain the issue!
Thanks for the information. I think in this case, we can't really do much about the issue as it's a low-level OS quirk with this specific version of Windows 😞. <|||||>Hey @lewtun, I guess I didn't have enough sleep haha, thanks for pointing that out. :)
Since the issue can be resolved by a simple cast in one line of code, this isn't a big problem here. At least this issue will give an explanation for people encountering this situation in the future.
I'll close the issue since nothing can be done about it, feel free to reopen it if needed.
Thanks for investigating the issue and have a nice day! |
transformers | 15,574 | closed | PyTorch Cuda Tensors not supported in DETR model | https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/models/detr/feature_extraction_detr.py#L568
The feature extractor attempts to convert the tensors to PIL images, which is only possible for `t.detach().cpu()` tensors.
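A minimal sketch of the workaround implied above (the feature extractor class and checkpoint name are assumptions for illustration): move the tensors to host memory before calling the feature extractor.

```python
import torch
from transformers import DetrFeatureExtractor

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")

images = [torch.randn(3, 480, 640, device="cuda")]   # stand-in for the CUDA tensors above
images_cpu = [img.detach().cpu() for img in images]  # move to host memory first
inputs = feature_extractor(images=images_cpu, return_tensors="pt")
```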
The origin of the error is `ImageFeatureExtractorMixin.to_pil_image()` | 02-09-2022 09:53:46 | 02-09-2022 09:53:46 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm having same problem here. Did you solve it? |
transformers | 15,573 | closed | Transformers Pipeline Error | When I tried to use pipeline from transformers I got this error. It happened when tensorflow was installed but pytorch was not.
```
from transformers import pipeline
pipe = pipeline(model="lysandre/tiny-vit-random")
```
Error:
```
> pipe = pipeline(model="lysandre/tiny-vit-random")
test_external.py:248:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
..\venv\lib\site-packages\transformers\pipelines\__init__.py:541: in pipeline
framework, model = infer_framework_load_model(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
model = 'lysandre/tiny-vit-random'
config = ViTConfig {
"_name_or_path": "lysandre/tiny-vit-random",
"architectures": [
"ViTForImageClassification"
],
...m_channels": 3,
"num_hidden_layers": 2,
"patch_size": 16,
"qkv_bias": true,
"transformers_version": "4.16.2"
}
model_classes = {'pt': (), 'tf': ()}, task = 'image-classification'
framework = None
model_kwargs = {'_from_pipeline': 'image-classification', 'revision': None, 'use_auth_token': None}
class_tuple = (<class 'transformers.models.vit.modeling_tf_vit.TFViTForImageClassification'>,)
look_pt = False, look_tf = True
classes = [<class 'transformers.models.vit.modeling_tf_vit.TFViTForImageClassification'>]
architecture = 'ViTForImageClassification'
transformers_module = <module 'transformers' from 'F:\\SecondaryDownloads\\git_repos\\gradio\\venv\\lib\\site-packages\\transformers\\__init__.py'>
_class = <class 'transformers.models.vit.modeling_tf_vit.TFViTForImageClassification'>
def infer_framework_load_model(
model,
config: AutoConfig,
model_classes: Optional[Dict[str, Tuple[type]]] = None,
task: Optional[str] = None,
framework: Optional[str] = None,
**model_kwargs
):
"""
Select framework (TensorFlow or PyTorch) to use from the `model` passed. Returns a tuple (framework, model).
If `model` is instantiated, this function will just infer the framework from the model class. Otherwise `model` is
actually a checkpoint name and this method will try to instantiate it using `model_classes`. Since we don't want to
instantiate the model twice, this model is returned for use by the pipeline.
If both frameworks are installed and available for `model`, PyTorch is selected.
Args:
model (`str`, [`PreTrainedModel`] or [`TFPreTrainedModel`]):
The model to infer the framework from. If `str`, a checkpoint name. The model to infer the framewrok from.
config ([`AutoConfig`]):
The config associated with the model to help using the correct class
model_classes (dictionary `str` to `type`, *optional*):
A mapping framework to class.
task (`str`):
The task defining which pipeline will be returned.
model_kwargs:
Additional dictionary of keyword arguments passed along to the model's `from_pretrained(...,
**model_kwargs)` function.
Returns:
`Tuple`: A tuple framework, model.
"""
if not is_tf_available() and not is_torch_available():
raise RuntimeError(
"At least one of TensorFlow 2.0 or PyTorch should be installed. "
"To install TensorFlow 2.0, read the instructions at https://www.tensorflow.org/install/ "
"To install PyTorch, read the instructions at https://pytorch.org/."
)
if isinstance(model, str):
model_kwargs["_from_pipeline"] = task
class_tuple = ()
look_pt = is_torch_available() and framework in {"pt", None}
look_tf = is_tf_available() and framework in {"tf", None}
if model_classes:
if look_pt:
class_tuple = class_tuple + model_classes.get("pt", (AutoModel,))
if look_tf:
class_tuple = class_tuple + model_classes.get("tf", (TFAutoModel,))
if config.architectures:
classes = []
for architecture in config.architectures:
transformers_module = importlib.import_module("transformers")
if look_pt:
_class = getattr(transformers_module, architecture, None)
if _class is not None:
classes.append(_class)
if look_tf:
_class = getattr(transformers_module, f"TF{architecture}", None)
if _class is not None:
classes.append(_class)
class_tuple = class_tuple + tuple(classes)
if len(class_tuple) == 0:
raise ValueError(f"Pipeline cannot infer suitable model classes from {model}")
for model_class in class_tuple:
kwargs = model_kwargs.copy()
if framework == "pt" and model.endswith(".h5"):
kwargs["from_tf"] = True
logger.warning(
"Model might be a TensorFlow model (ending with `.h5`) but TensorFlow is not available. "
"Trying to load the model with PyTorch."
)
elif framework == "tf" and model.endswith(".bin"):
kwargs["from_pt"] = True
logger.warning(
"Model might be a PyTorch model (ending with `.bin`) but PyTorch is not available. "
"Trying to load the model with Tensorflow."
)
try:
model = model_class.from_pretrained(model, **kwargs)
if hasattr(model, "eval"):
model = model.eval()
# Stop loading on the first successful load.
break
except (OSError, ValueError):
continue
if isinstance(model, str):
> raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
E ValueError: Could not load model lysandre/tiny-vit-random with any of the following classes: (<class 'transformers.models.vit.modeling_tf_vit.TFViTForImageClassification'>,).
..\venv\lib\site-packages\transformers\pipelines\base.py:235: ValueError
``` | 02-09-2022 07:34:54 | 02-09-2022 07:34:54 | Hey @FarukOzderim, `lysandre/tiny-vit-random` is a dummy model that should probably be deleted by now. What is the use-case you're trying to achieve with it?
It's currently used in the tests, but it should be removed.<|||||>I also use it in the tests within Gradio, please do not remove it :D
Just wanted to report the occurrence, I have no problem after installing torch.<|||||>I recommend using this one instead, which will be maintained forever: https://huggingface.co/hf-internal-testing/tiny-random-vit
Thanks for the error report!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,572 | closed | logger.warn --> logger.warning | # What does this PR do?
logger.warn --> logger.warning
(There is 1 occurrence in research_projects untouched)
@sgugger @LysandreJik | 02-09-2022 05:51:58 | 02-09-2022 05:51:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,571 | closed | [Flax tests] fix test_model_outputs_equivalence | # What does this PR do?
The `test_model_outputs_equivalence` common test is never run because `recursive_check` call is at the wrong place. This PR calls `recursive_check` in `check_equivalence` and also fixes the `set_nan_tensor_to_zero` by using an equivalent jax function since jax arrays do not support index assignment. | 02-08-2022 22:06:45 | 02-08-2022 22:06:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for fixing it |
transformers | 15,570 | closed | m2m100 model does not support eval / predict in a deepspeed + fp16 environment? | Hello everyone! I am currently fine-tuning an m2m100 model, and I find that with `deepspeed` + `fp16` the loss is computed correctly in train_step, but comes out as `nan` in eval_step and predict_step. The `run_translation.py` script works normally with `mbart` and `t5` models, and also works with `m2m100` when `fp16` is not used. `zero-2` and `zero-3` give the same results. Has anyone encountered this issue? Thank you very much!
##### Version
+ Deepspeed: 0.5.10
+ transformers: 4.16.2
+ pytorch: 1.8
##### Command to repro
```
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero2.json \
--model_name_or_path facebook/m2m100_418M \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--output_dir output_dir --overwrite_output_dir \
--fp16 \
--do_train --do_eval --do_predict \
--max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 \
--num_train_epochs 3 \
--dataset_name wmt16 --dataset_config "ro-en" \
--source_lang en --target_lang ro \
--predict_with_generate --forced_bos_token ro
```
<img width="459" alt="image" src="https://user-images.githubusercontent.com/86561756/153156016-12b48a08-b717-47f5-a500-919f787e604d.png">
| 02-08-2022 21:47:53 | 02-08-2022 21:47:53 | Maybe of interest to @patil-suraj (M2M100) and @stas00 (DeepSpeed)<|||||>I bet that if you remove deepspeed the problem won't go away,
How was m2m100 pre-trained? if it's bf16 mixed precision (on TPUs) then it suffers the same problem as most other bf16-pretrained models (e.g. t5) - i.e. it most likely can't be finetuned/eval'ed under fp16 as it overflows.
I'm collecting info on how the models were trained [here](https://github.com/stas00/ml-ways/blob/master/numbers/detect-model-pretrained-in-bf16-fp16-fp32.ipynb). I don't have `m2m100` so if you can find that info it'd be very helpful.
So if that's the case then you have 2 choices:
1. switch to fp32
2. switch to bf16 if you have access to Ampere GPUs (you'd need pytorch >= 1.9 for that) - you will get about the same performance as fp16.
If these suggestions don't work, first we need to isolate the problem alone, w/o deepspeed and just using bare HF transformers.
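A rough sketch of what option 2 looks like on the `Trainer` side (a minimal example under the assumption that your installed `transformers` exposes the `bf16` training argument; for plain fp32 simply leave both mixed-precision flags off):
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: bf16 mixed precision instead of fp16 (Ampere GPUs, pytorch >= 1.9).
# The `bf16` flag name is assumed from recent releases.
args = Seq2SeqTrainingArguments(
    output_dir="output_dir",
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    predict_with_generate=True,
    bf16=True,  # replaces fp16=True; omit both for full fp32
)
```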
<|||||>@stas00 Thanks for your reply, I am switching to fp32, we can finetune a ```m2m100``` model with deepspeed. <|||||>> How was m2m100 pre-trained?
@stas00 `M2M100` is trained on GPUs with `fp16`. Here is the original model https://github.com/pytorch/fairseq/tree/main/examples/m2m_100 <|||||>Thank you for letting me know it wasn't the bf16-pretrained model, @patil-suraj
I checked that it works w/o deepspeed:
```
PYTHONPATH=src CUDA_VISIBLE_DEVICES=0 python examples/pytorch/translation/run_translation.py --model_name_or_path facebook/m2m100_418M --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --do_eval --do_predict --max_train_samples 500 --max_eval_samples 50 --max_predict_samples 50 --num_train_epochs 3 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro --predict_with_generate --forced_bos_token ro
[...]
train_loss = 0.5012
train_runtime = 0:00:32.22
train_samples = 500
train_samples_per_second = 46.542
train_steps_per_second = 5.864
02/10/2022 09:27:53 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2387] 2022-02-10 09:27:53,558 >> ***** Running Evaluation *****
[INFO|trainer.py:2389] 2022-02-10 09:27:53,559 >> Num examples = 50
[INFO|trainer.py:2392] 2022-02-10 09:27:53,559 >> Batch size = 8
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:05<00:00, 1.23it/s]02/10/2022 09:28:02 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:06<00:00, 1.01it/s]
***** eval metrics *****
eval_loss = 1.5899
predict_loss = 1.7644
```
I reproduced the problem under deepspeed and here is the fix:
```
diff --git a/tests/deepspeed/ds_config_zero2.json b/tests/deepspeed/ds_config_zero2.json
index dec097dd1..a9f9a5805 100644
--- a/tests/deepspeed/ds_config_zero2.json
+++ b/tests/deepspeed/ds_config_zero2.json
@@ -3,7 +3,7 @@
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
- "initial_scale_power": 16,
+ "initial_scale_power": 32,
"hysteresis": 2,
"min_loss_scale": 1
},
```
same for zero3 - I will explain why shortly.
```
train_loss = 0.7379
eval_loss = 1.8916
predict_loss = 2.0059
```
the loss is worse I think because it takes some 10 steps before it figures out the right loss scaling factor and there are very few steps here.<|||||>So what happened:
If you look at the log, you can see it's trying to find the right loss scaling factor - scroll to the right to see the "Attempted loss scale:" numbers.
```
[2022-02-10 09:32:30,702] [INFO] [stage_1_and_2.py:1652:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 65536
1%|▌ | 1/189 [00:00<01:23, 2.26it/s]
[2022-02-10 09:32:31,160] [INFO] [stage_1_and_2.py:1652:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768.0
2%|█▊ | 3/189 [00:02<02:47, 1.11it/s]
[2022-02-10 09:32:33,056] [INFO] [stage_1_and_2.py:1652:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768.0, reducing to 16384.0
2%|██▍ | 4/189 [00:02<02:14, 1.37it/s]
[2022-02-10 09:32:33,518] [INFO] [stage_1_and_2.py:1652:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384.0, reducing to 8192.0
3%|███
[...]
17%|███████████████████▋ | 32/189 [00:16<01:12, 2.18it/s]
[2022-02-10 09:32:47,241] [INFO] [stage_1_and_2.py:1652:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
17%|████████████████████▎ | 33/189 [00:16<01:12, 2.16it/s]
[2022-02-10 09:32:47,718] [INFO] [stage_1_and_2.py:1652:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
18%|████████████████████▊ | 34/189 [00:17<01:12, 2.14it/s]
[2022-02-10 09:32:48,157] [INFO] [stage_1_and_2.py:1652:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1
```
and it fails to do so.
So in this particular case, configuring deepspeed.fp16 settings to start from a higher initial scale power overcomes the problem.
Please let me know if this resolves this issue for you, @lavine-lmu <|||||>it works after increase the ```initial_scale_power ``` from 16 to 32. Thank you very much! @stas00 <|||||>Thank you for confirming that the suggestion worked for you, closing this issue.<|||||>also added this use case to the troubleshooting section here: https://github.com/huggingface/transformers/pull/15585<|||||>@lavine-lmu were you able to achieve good results when fine-tuning M2M100? I'm trying to do that but unfortunately when I try to do that the loss decreases but the performance does not improve (it goes from around 30BLEU points without fine-tuning to 0.1 BLEU points after fine-tuning) |
transformers | 15,569 | closed | Make sure custom configs work with Transformers | # What does this PR do?
This PR makes sure that one can use a custom configuration in Transformers with no trouble in the `save_pretrained`/`from_pretrained` methods. The only requirement is that it must accept kwargs, as the `from_pretrained` method calls the init with lots of them (this will be documented soon in a PR that shows how to write a custom model and config).
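As an illustration, a minimal sketch of such a custom config (the class and attribute names below are made up for the example):
```python
from transformers import PretrainedConfig


class MyCustomConfig(PretrainedConfig):
    model_type = "my-custom-model"  # hypothetical identifier, only for this example

    def __init__(self, hidden_size=64, num_layers=2, **kwargs):
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # `from_pretrained` calls the init with many extra kwargs, so they have to be
        # accepted and forwarded to `PretrainedConfig.__init__`.
        super().__init__(**kwargs)


config = MyCustomConfig(hidden_size=128)
config.save_pretrained("my-custom-config")
reloaded = MyCustomConfig.from_pretrained("my-custom-config")
```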
An alternative would be to make sure that every subclass of `PretrainedConfig` calls the `super().__init__` class with a check like PyTorch, but it would force us to put that calls at the beginning of each init before any attribute is set. This PR opts for a more lenient strategy, and makes sure any config attribute we try to reach is done through a `getattr`. | 02-08-2022 19:45:55 | 02-08-2022 19:45:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,568 | closed | update serving_output for some TF models | # What does this PR do?
The recent merged PRs #15511 and #15530 forgot to update `serving_output` for the changed outputs.
(sorry about this)
This PR fixes it.
@patrickvonplaten | 02-08-2022 19:31:39 | 02-08-2022 19:31:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(@Rocketknight1 ) |
transformers | 15,567 | closed | TF MT5 embeddings resize | # What does this PR do?
Fixes #13839
In MT5, the `get_output_embeddings()` method does not return a layer, returns a `tf.Tensor` with the weights. As such, `self._get_word_embedding_weight(self.get_output_embeddings())` was failing, as it expected a layer. Since the point of `_get_word_embedding_weight` is to get the weights, added a return for the case where we already have the weights. Also added tests to ensure we don't regress. | 02-08-2022 19:27:17 | 02-08-2022 19:27:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,566 | closed | Add Wav2Vec2 Adapter Weights to Flax | # What does this PR do?
Fixes #15476
- Adds an adapter to the Flax Wav2Vec2 model to reduce the time dimension of the extracted feature vectors beyond that of the standard Wav2Vec2 model. The encoder's output hidden states thus have a time context window that is more similar to that of a subword token instead of just a character.
- Shape and values of Flax output logits match those of the PyTorch model.
- Flax model uses all PyTorch model weights, including those of the adapter. Running the script in #15476 is resolved to yield identical Flax and PyTorch results (within 4e-2 threshold). | 02-08-2022 18:48:29 | 02-08-2022 18:48:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Great looks good to me! @patil-suraj, could you also take a quick look? <|||||>For reference, PR is the corrected version of #15521<|||||>Feel free to merge @sanchit-gandhi, the failing test is unrelated |
transformers | 15,565 | closed | Upgrade black to version ~=22.0 | Upgrades the `transformers` library quality tool `black` to use the first stable release of `black`, version 22.0. | 02-08-2022 18:43:01 | 02-08-2022 18:43:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Need to update the `check_copies`. |
transformers | 15,564 | closed | How to fine-tune NLP tasks docs | This PR focuses on adding guides for how-to fine-tune models for NLP tasks. Main additions:
- Change the TOC structure to include a nested section in the how-to guides for fine-tuning for NLP tasks.
- Separate each task in `custom_datasets` into their own pages. This will make it easier for users to find and read about a specific one. Also, when you click on any of the subsections that share the same name (`Preprocess`, `Fine-tune with Trainer API`, and `Fine-tune with TensorFlow`), you are automatically redirected to the sequence classification section even if you wanted to look at the question answering section.
- Add a guide for causal/masked language modeling.
To do:
- [x] Add guides for multiple choice, translation and summarization. | 02-08-2022 18:23:22 | 02-08-2022 18:23:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review!
I've added videos where they're relevant and links to the base fine-tuning tutorials. About the subfolders, I was going to create one for each modality so users can easily sort for the type of guide they're looking for. If you don't think this is too impactful (I guess it does add additional subfolders which can make navigation harder), I'm happy with your suggestion!<|||||>I don't think we need the extra-layer of subfolders for the source doc. We can organize it by tasks in the ToC, but only contributors go look at the source.<|||||>I updated the causal language modeling example to also use `DataCollatorForLanguageModeling` instead of a `DefaultDataCollator` for consistency across both language modeling examples. Let me know if the example usage looks correct to you!<|||||>Thanks for the review, I'll merge this tomorrow if there are no other comments! Maybe we can add these to the doctests in a second step just because there are so many new examples, and it'd be easier to debug if something went wrong? No strong preference either way though :)<|||||>When I run `make fixup` it changes:
```
>>> @dataclass
... class DataCollatorForMultipleChoice:
```
to the example below which is causing the test to fail. Should I just drop the `>>>` and ellipses in this example?
```
>>> @dataclass
>>> class DataCollatorForMultipleChoice:
```<|||||>Yes, ok!
Can you identify which tool does that? If it's one of ours, it'd be nice to fix it; if it's not, it'd be nice to open an issue to the corresponding tool.<|||||>I think it might be this [line](https://github.com/huggingface/transformers/blob/57882177becb85560f1ff931abb1b0b75d67e70d/utils/style_doc.py#L176) in `utils/style_doc.py`:
```
if has_doctest and not is_empty_line(line):
prefix = "... " if line.startswith(" ") or line in [")", "]", "}"] or in_triple_quotes else ">>> "
```
Since my line doesn't start - or is in - any of those options above, it just adds the `>>>`?<|||||>~I guess adding the option for the line to start with `@` would be a good solution then :)~ Ah no that don't do. It would need to check that the preceding line has the `@`.<|||||>Hey @stevhliu, I think something wrong happened with your rebase or merge, do you mind closing this PR and opening one again from the same branch?
There are 359 files changed in this one :) |
transformers | 15,563 | closed | Add codecarbon callback to docs | # What does this PR do?
This just adds the missing `CodeCarbonCallback` to the callbacks section of the docs.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 02-08-2022 17:22:54 | 02-08-2022 17:22:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,562 | closed | TF generate refactor - Greedy Search | # What does this PR do?
This PR is the first step to refactor TF's `generate()` method similar in spirit to https://discuss.huggingface.co/t/big-generate-refactor/1857 .
## Motivation
The main reasons for the refactor are:
- Make `generate` more readable and easier to understand.
- Disentangle different components of `generate`, *e.g.* the model_input preparation, the logits processor creation and application, the sub-generation methods. Each of these components of `generate` should be as independent and extendable as possible. We've seen that such a design greatly helps to foster community contributions to the generate method for PyTorch.
- Make `generate` more generally applicable for future use cases. This mostly concerns multi-modal models, such as speech to text and image to text.
In addition the final goal is to significantly speed up TF's generate method by making it XLA-compatible. This PR should greatly help at making the `generate()` method compatible with XLA by:
- clearly separating the model inputs preparation part and the auto-regressive part. The auto-regressive part is handled in the sub-generation method which should now be much shorter. This is the part which will require some heavy changes to work with XLA.
- Moving out all of the logits processing logic into a `LogitsProcessorList` class. We can now simply test each of those classes separately whether they are XLA-compatible and don't have to make each of them XLA-compatible. E.g. `bad_token_ids` and `ngram_repeat` are probably not at all XLA-compatible.
## Proposed refactor design and steps to achieve it
TF's `generate` can currently handle more or less **4** different types of generation:
- a) greedy search
- b) sampling (GPT-like generation)
- c) beam search (default use case for translation, etc...)
- d) beam sample (exotic case where sampling is combined with `beam_search`)
Currently a) and b) are handled by the sub-method `_generate_no_beam_search` while c) and d) are handled by `_generate_beam_search` .
This is not very readable and understandable. I propose to remove those sub-methods and instead define all four generation methods a), b), c) and d) separately.
I propose to handle the complete generate refactor in multiple steps.
1) Define the new `generate()` design only for the simplest use case being `greedy_search` (a)). Leave all existing code for now to not break any of b), c), d)
2) At this point we can already start changing greedy search to be XLA-compatible.
3) Analogous to 1), we define the sub-method b) and can then fully remove `_generate_no_beam_search`
4) Refactor c) beam search. This is a bit more complex and I'd like to handle this in a second step.
5) Finally we can refactor d) which should be relatively simple after 4) and can then remove the `_generate_beam_search` function.
This first PR lays the foundation for the complete refactor and finishes 1).
In short, it does the following:
- i.) Adds more aggressive `greedy_search` tests for a decoder-only model - GPT2 - and an encoder-decoder model - T5.
- ii.) Define the TF logic for logits processors
- iii.) Define the new `generate()` method which is in a first step called `_generate()` so that it will only be used for a) `greedy_search`. It should be much more readable by adding clear `# 1. ... 2. ` comments for each model input preparation step.
- iv.) Fully refactor `greedy_search`
- v.) Clean-up some duplicated code that was heavily used by generate and related functions (*i.e.* `tf_utils.py`).
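A rough conceptual sketch of the design behind point ii.) above — these are illustrative stand-in classes, not the exact TF classes added in this PR:
```python
import tensorflow as tf


class MinLengthProcessor:
    """Blocks the EOS token until a minimum number of tokens has been generated."""

    def __init__(self, min_length: int, eos_token_id: int):
        self.min_length = min_length
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor) -> tf.Tensor:
        if input_ids.shape[-1] < self.min_length:
            # set the EOS logit to -inf so it cannot be picked yet
            penalty = tf.one_hot(
                self.eos_token_id, scores.shape[-1], on_value=float("-inf"), off_value=0.0
            )
            scores = scores + penalty
        return scores


class ProcessorList(list):
    """Applies each processor in order, so `generate` only calls the list once per step."""

    def __call__(self, input_ids: tf.Tensor, scores: tf.Tensor) -> tf.Tensor:
        for processor in self:
            scores = processor(input_ids, scores)
        return scores
```
Keeping each processor this small is what makes it possible to test them for XLA compatibility one by one.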
## Next steps
=> In a next step, I propose to tackle both 2) and 3) at the same time.
@Rocketknight1 Maybe you could take care of 2) while maybe @gante maybe it would make sense if you start with 3) (should be pretty straight-forward after this PR). IMO, this would also greatly help at making more people in the team knowledgeable in how `generate()` works.
Before we go over to 4), I'm happy to do the `past`/`encoder_outputs` refactor (see all those `TODO(Patrick)` comments in `generate`). We still rely on a very ugly hack that puts `encoder_outputs` into the `past` variable. I didn't want to change this here because it requires changing the `prepare_inputs_for_generation` method of multiple models and IMO the PR would have become too difficult to review.
Thoughts? @Rocketknight1 @gante @LysandreJik @sgugger
| 02-08-2022 17:22:28 | 02-08-2022 17:22:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>All of the following slow tests pass:
```bash
RUN_SLOW=1 pytest -sv \
tests/bart/test_modeling_tf_bart.py \
tests/t5/test_modeling_tf_t5.py \
tests/gpt2/test_modeling_tf_gpt2.py \
tests/vision_encoder_decoder/test_modeling_tf_vision_encoder_decoder.py \
tests/encoder_decoder/test_modeling_tf_encoder_decoder.py \
tests/speech_to_text/test_modeling_tf_speech_to_text.py
```
as well as
```bash
RUN_SLOW=1 pytest -sv tests/test_modeling_tf_rag.py
```
(this one is tested on brutasse as it requires a heavy index).<|||||>Just finished and it looks very cool! I had a few nits but it seems like a great refactor, which will make it easy to optimize sub-methods for performance later.<|||||>Thanks a lot for the review @gante and @Rocketknight1 . Adapted the PR according to your comments and will merge once tests are passing.<|||||>Failing tests are flaky and unrelated. Merging<|||||>@patrickvonplaten
About the following
```
We still rely on a very ugly hack that puts encoder_outputs into the past variable.
I didn't want to change this here because it require changing the prepare_inputs_for_generation
method of multiple models and IMO the PR would have become to difficult to review.
```
I haven't looked at the details, but it is something different from the PyTorch models (and `generate`), right?
During the fix for PT/TF inconsistency, this causes some problem for a strict equivalence testing.
Maybe I can work on this once other inconsistencies are fixed (and by that time, `generate` already has some clean refactoring by you guys!).
<|||||>Hey @ydshieh,
Great initiative! Yes that's indeed an important refactor we should do soon - would you like to look into it? In short we should do the following in TF:
- Always force `return_dict=True`. The we don't need any of these `self._use_cache()` methods anymore (we can just check whether `past_key_values` is present in the output. Also this way the can clearly differentiate between `encoder_outputs` and `past_key_values` for the encoder-decoder case. Would you like to open a PR for it? Happy to help you along the way!<|||||>> Hey @ydshieh,
>
> Great initiative! Yes that's indeed an important refactor we should do soon - would you like to look into it? In short we should do the following in TF:
>
> * Always force `return_dict=True`. The we don't need any of these `self._use_cache()` methods anymore (we can just check whether `past_key_values` is present in the output. Also this way the can clearly differentiate between `encoder_outputs` and `past_key_values` for the encoder-decoder case. Would you like to open a PR for it? Happy to help you along the way!
Sure - after I finish the work of adding testing `past_key_values` for TF encoder models (those can be used for causal lm and can have cross-attention`). Currently, no test for it, and making #15477 undetected for quite some time. |
transformers | 15,561 | closed | [Flax tests/FlaxBert] make from_pretrained test faster | # What does this PR do?
Use flax checkpoint for the `from_pretrained` test and only test this for the base model to speed up the tests. | 02-08-2022 17:12:01 | 02-08-2022 17:12:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! |
transformers | 15,560 | closed | Output ragged tensors from TF predict() | This is a very simple and dumb PR right now, but I think it's a lot less broken than our current implementation! Obviously don't merge it yet, I'm going to poke around to try to find edge cases and see if it resolves our memory leak issues. | 02-08-2022 16:24:38 | 02-08-2022 16:24:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Closing this as we're going to fix this with an upstream PR to Keras instead |
transformers | 15,559 | closed | BART Large generate predictions are wonky | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.2 (issue exists on 4.9.2)
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cpu (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten @sshleifer
## Information
Essentially re-opening [issue 8005](https://github.com/huggingface/transformers/issues/8005), BART-large does not mask fill properly (whereas BART-base has entirely reasonable outputs). The previous fix of setting `force_bos_token_to_be_generated = True` is no longer viable since the option no longer exists in BART config. It also seems like adjust_logits_during_generation (where force_bos_token_to_be_generated was used) is no longer implemented in the BART model.
## To reproduce
Steps to reproduce the behavior:
```
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base", forced_bos_token_id=0)
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
batch = tokenizer("My friends are <mask> but they eat too many carbs.", return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
print(tokenizer.decode(generated_ids[0]))
# Output: </s><s>My friends are healthy, but they eat too many carbs.</s>
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
batch = tokenizer("My friends are <mask> but they eat too many carbs.", return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
print(tokenizer.decode(generated_ids[0]))
# Output: </s>My,, but they eat too many carbs.</s>
```
| 02-08-2022 15:23:50 | 02-08-2022 15:23:50 | I have encountered the same problem. And I have also found that `bart-large` cannot be fine-tuned with a reasonable output.<|||||>@patil-suraj - we had another issue thread about this somewhere no? I can't find it anymore though :-/<|||||>@patrickvonplaten
It might be this one that I linked in the original description: https://github.com/huggingface/transformers/issues/8005<|||||>Found the original issue: https://github.com/huggingface/transformers/issues/9731 . Looking a bit into the commit history here: https://huggingface.co/facebook/bart-large/commits/main it looks like the mask token problem actually only existed for `bart-base` and not for `bart-large` according to @patil-suraj .
@patil-suraj - could you double-check this real quick?<|||||>I can confirm that bart-large generates very strange output.
```python
tokenizer = BartTokenizer.from_pretrained(model_name)
model = FlaxBartForConditionalGeneration.from_pretrained(model_name)
model.params = model.to_bf16(model.params) # convert float16 to bfloat16 (for TPU)
sentences = (
'She waded to the bank and picked up her shoes and stockings.',
'The bank is increasing the amount they lend to small companies.',
)
inputs = tokenizer(sentences, padding=True, return_tensors='jax')
output = model.generate(inputs.input_ids)
print(tokenizer.batch_decode(output.sequences, skip_special_tokens=True, clean_up_tokenization_spaces=False))
```
`facebook/bart-base`:
```
['She waded to the bank and picked up her shoes and stockings.', 'The bank is increasing the amount they lend to small companies.']
```
`facebook/bart-large`:
```
['She.....', 'TheThe']
```<|||||>To me this seems to be unrelated to #9731;
[@patrickvonplaten's previous method](https://github.com/huggingface/transformers/issues/9731#issuecomment-901775955) to check whether the correct mask-token is used, produces a difference of 0 when used with the large model.
So it looks like this is not related to the mask-token, as @ayaka14732's example does not even use masks.
<|||||>I see sorry you're right - I looked too quickly indeed. Will take a deeper look in the coming days.<|||||>Any updates?<|||||>Hey @StephAO,
You are passing `forced_bos_token_id` to the `tokenizer` instead of the model. It should be passed to the model. When running this code:
```python
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
batch = tokenizer("My friends are <mask> but they eat too many carbs.", return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
print(tokenizer.decode(generated_ids[0]))
```
gives sensible outputs:
```
</s><s>My friends are good people, but they eat too many carbs.</s>
```
Where did you see code that `forced_bos_token_id` should be passed to the tokenizer?<|||||>Good catch. I am actually unsure where/if I saw code that passed `forced_bos_token_id` to the tokenizer, it is possible that when I was playing around with the model to try and get it to work, I ended up adding the argument to the wrong spot. That being said, the documentation is unclear in a few places:
- The Mask Filling example in [BartForConditionalGeneration](https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/bart#transformers.BartForConditionalGeneration) does not include this necessary argument
- The [ImplementaionNotes](https://huggingface.co/docs/transformers/v4.16.2/en/model_doc/bart#implementation-notes) mention `force_bos_token_to_be_generated=True` instead of `forced_bos_token_id=0` (which I assume is the old version of this argument?)
- None of the BART models or the PreTrainedModel it inherits specify `forced_bos_token_id` as a possible argument. In fact, the only place I can find it defined is in [GenerationMixin](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/generation_utils.py#L375), and even there it is not clear what this argument does.
Lastly, it is still confusing to me why this is required for bart-large, but not for bart-base. Either way, thank you for the update!<|||||>Thanks for the great feedback! Actually if you would be interested, it would be amazing to open a PR to fix the docs - both the mask filling example and the implementation notes - to fix this :-)
Otherwise I'm happy to open a PR for it as well!<|||||>@patrickvonplaten Thanks for the explanitaton, it helps a lot! Actually, I think it does not only effect mask infilling, but also the fine-tuning procedure (at least on my side). Without the `forced_bos_token_id`, the `bart-large` model cannot be fine-tuned well. Therefore, I recommend to set the default value of `forced_bos_token_id` inside the BART model config file if you do not mind.<|||||>Hmm, the problem is that I'm not sure whether this should be done or not `bart-base` and `bart-large`. E.g. when the model was added the author explicitly mentioned that `forced_bos_token_id` should only be used for `bart-large-cnn` by default - see: https://github.com/huggingface/transformers/blob/818878dc881a32c949844f734af5a8ce25385660/src/transformers/configuration_bart.py#L109
This is also why only `bart-large-cnn` has this config attribute set by default -> see: https://huggingface.co/facebook/bart-large-cnn/blob/main/config.json#L27 and the other don't.
@sshleifer sorry to ping you here - do you remember by any chance if `forced_bos_token_id` is **recommended** to be used when fine-tuning BART? E.g. should one always place BOS after `decoder_start_token_id` for fine-tuning?
Also cc @patil-suraj any ideas?
<|||||>> I'm not sure whether this should be done or not bart-base and bart-large
Sam will have a better answer but IIRC the `forced_bos_token_id` (previously, `force_bos_token_to_be_generated`) was added to be able to reproduce the `bart-large-cnn` results. And in our experiments, we had found that this is only required for the cnn pre-trained model and other checkpoints were not affected by this. Found a related [discussion](https://discuss.huggingface.co/t/bart-lm-odd-beam-search-output/618/13) and [PR](https://github.com/huggingface/transformers/pull/6526).
> do you remember by any chance if `forced_bos_token_id` is recommended to be used when fine-tuning BART? E.g. should one always place BOS after `decoder_start_token_id` for fine-tuning?
In my experiments, it actually doesn't matter. It depends on how the `decoder_input_ids` are prepared, for example, if the `decoder_input_ids` are like this: `[eos, bos, .....]` then BOS should be forced.
But now the issue is, in bart like models the `decoder_input_ids` are prepared by calling `shift_tokens_right` function on the `labels` if `decoder_input_ids` are not passed. This is how it's done in our summarization and translation fine-tuning examples. The `decoder_inputs_ids` are prepared in `DataCollatorForSeq2Seq` by calling `model.prepare_decoder_input_ids_from_labels` which then calls `shift_tokens_right`
https://github.com/huggingface/transformers/blob/05c237ea94e08786abbac6c6185cfdfa262a8c53/src/transformers/data/data_collator.py#L600
https://github.com/huggingface/transformers/blob/05c237ea94e08786abbac6c6185cfdfa262a8c53/src/transformers/models/bart/modeling_bart.py#L1397-L1398
**And this will always add `bos` after `eos`**. See:
```python
from transformers import BartTokenizer
from transformers.models.bart.modeling_bart import shift_tokens_right
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
labels = tokenizer("This is a test", return_tensors="pt").input_ids
decoder_input_ids = shift_tokens_right(labels, tokenizer.pad_token_id, tokenizer.eos_token_id)
decoder_input_ids
# tensor([[ 2, 0, 713, 16, 10, 1296]])
tokenizer.batch_decode(decoder_input_ids)
['</s><s>This is a test']
```
but `forced_bos_token_id` is not set by default. So this might affect generations for models trained using these scripts.<|||||>Thanks for the summary @patil-suraj - that's super helpful!
So as I undertsand it, the BartTokenizer will **always** add the BOS token to the beginning.
E.g.:
```python
from transformers import BartTokenizer
tok = BartTokenizer.from_pretrained("facebook/bart-large")
print(tok.decode(tok("hello").input_ids))
```
```
Out[4]: '<s>hello</s>'
```
This means for Seq2Seq if someone follows any of our official examples (both accelerate and Trainer), this means the labels are
created to be:
```
<s> label text </s>
```
with the decoder input ids then (since they are the labels shifted to the left):
```
<decoder-start-token-id><s> label text </s>
```
Now, we know from Sam's comment [here](https://github.com/huggingface/transformers/pull/6526) that it is **not** necessary to add the BOS token (`<s>`) for successful fine-tuning. One also gets good results when **not** adding the BOS token - *i.e.* it doesn't really make a difference.
**But** the problem now is that while we quietly add BOS to the labels in all of our example scripts for fine-tuning because of BartTokenizer's behavior, we don't "force-add" the BOS token when evaluating the model with `generate` because `forced_bos_token_id` is not set by default in the config. This means that while the model is well-trained when using the examples, the evaluation results are probably not as good as they could be if we would force the BOS token to be generated.
On the other hand, one could also argue that the model should learn to always generate `<BOS>` as the first token and it's not needed. However, we know that results are better if we force `<BOS>` to be generated **if** the model has been trained as explained above.
As a conclusion, @patil-suraj and I think, we should actually add `forced_bos_token_id=0` by default to the pretrained bart models: [bart-large](https://huggingface.co/facebook/bart-large/blob/main/config.json) and [bart-base](https://huggingface.co/facebook/bart-base/blob/main/config.json) . This would be a breaking change for people that use `bart-large` by default for mask-filling with generate, but as seen above it should improve results.
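For reference, a minimal sketch of what that default would amount to on the user side today (the same effect can also be obtained per call with `model.generate(..., forced_bos_token_id=0)`):
```python
from transformers import BartConfig, BartForConditionalGeneration

# Setting the attribute on the config mirrors what the proposed Hub config change would ship.
config = BartConfig.from_pretrained("facebook/bart-large")
config.forced_bos_token_id = 0
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", config=config)
```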
Since those two checkpoints are highly used, keen to hear your opinion @LysandreJik @sgugger here<|||||>Are we 100% sure that the change would only make predictions better in all circumstances?<|||||>> Are we 100% sure that the change would only make predictions better in all circumstances?
If someone fine-tunes with a BOS as the second token (behind `decoder_start_token_id`) - which is done in all example scripts, then yes the change would make predictions better in all circumstances for the fine-tuned model. <|||||>I'm not talking about fine-tuning, I'm talking about users relying on this model in production right now.<|||||>This is a pretrained checkpoint - so I highly doubt anybody uses this model + `generate()` in production. The only use case is to generate a `<mask>` token as shown in the issue description. It doesn't make sense to use this in production (why use expensive generate for a `<mask>` token instead of BERT?). And even for this use case the change works better (however this is hard to test)<|||||>Does this problem can cause a pretrained model from a scratch to show poor loss and accuracy score? I already pre-train a model with TensorFlow Keras and BART and it show me a good logit accuracy (~80%) for base-scale. However, i was struggling for months to use BART large using the same experimental setup. The logit accuracy for BART large never goes beyond ~4%. I did everything including decreasing and increasing batch size, learning rate .. etc but with no luck.<|||||>@salrowili Yeah, I can confirm it will. Compared with `bart-base`, the prediction of `bart-large` seems to be with some randomness, even an experiment which evaluates if it could convergence on a small and toy dataset.<|||||>> @salrowili Yeah, I can confirm it will. Compared with `bart-base`, the prediction of `bart-large` seems to be with some randomness, even an experiment which evaluates if it could convergence on a small and toy dataset.
Did you manage to fix it with forced_bos_token_id=0 solution? <|||||>> > @salrowili Yeah, I can confirm it will. Compared with `bart-base`, the prediction of `bart-large` seems to be with some randomness, even an experiment which evaluates if it could convergence on a small and toy dataset.
>
> Did you manage to fix it with forced_bos_token_id=0 solution?
According to my efforts now, I can observe the following facts:
- With setting `forced_bos_token_id=0`, the model `bart-large` can be fine-tuned to overfit on a small toy dataset, while the `loss` is still abnormal.
- On a real dataset `WikiSQL`, the model `bart-large` shows promising fine-tuning results on some experimental runs, e.g., showing a comparable performance with the fine-tuned model by `fairseq`.
However, some experimental runs will fail when switching the fine-tuning environment (e.g., from single-card to multi-card). And it often occurs with toooo long evaluating time since the model is trying to produce a sequence such as `</s> <s> <s> <s> <s> ...` until it arrives the end of `val_max_target_length`. These failure predictions will all be empty, and these errors may not be solved by the forced_bos_token_id=0 solution.
>`<s>` corresponds to the `bos_token_id`, and `</s>` corresponds to both the `decoder_start_token_id` and `eos_token_id`.
One failure case can be as below, and the `denotation_accuracy` shows a BIG jump among fine-tuning steps:

<|||||>> > > @salrowili Yeah, I can confirm it will. Compared with `bart-base`, the prediction of `bart-large` seems to be with some randomness, even an experiment which evaluates if it could convergence on a small and toy dataset.
> >
> >
> > Did you manage to fix it with forced_bos_token_id=0 solution?
>
> According to my efforts now, I can observe the following facts:
>
> * With setting `forced_bos_token_id=0`, the model `bart-large` can be fine-tuned to overfit on a small toy dataset, while the `loss` is still abnormal.
> * On a real dataset `WikiSQL`, the model `bart-large` shows promising fine-tuning results on some experimental runs, e.g., showing a comparable performance with the fine-tuned model by `fairseq`.
>
> However, some experimental runs will fail when switching the fine-tuning environment (e.g., from single-card to multi-card). And it often occurs with toooo long evaluating time since the model is trying to produce a sequence such as `</s> <s> <s> <s> <s> ...` until it arrives the end of `val_max_target_length`. These failure predictions will all be empty, and these errors may not be solved by the forced_bos_token_id=0 solution.
>
> > `<s>` corresponds to the `bos_token_id`, and `</s>` corresponds to both the `decoder_start_token_id` and `eos_token_id`.
>
> One failure case can be as below, and the `denotation_accuracy` shows a BIG jump among fine-tuning steps:
>
> 
Thank for sharing this interesting findings. I have also conduct a small experiment involving fine-tuning SQuAD with both BART-base and BART-large with Pytorch XLA on TPU-8 unit with Google Colab (attached). its based on this colab https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb . Although i uses Google Colab pro which has more memory 35GB, i think Google Colab free gives free TPU with 25GB so for everyone who is interesting in replicating this experiment can do it , but you may need to reduce the value of "per_device_train_batch_size" to use less memory. per_device_train_batch_size*8= Total Batch size. If you ran into out of memory error reduce the per_device_train_batch_size. if you got SIG error restart the colab and run again all cells that are needed to restore variable and importing packages. At beginning the training will be slow since XLA compilation needs more time especially for bart-large (~10mins). largest batch size is 16 for bart-large (per_device_train_batch_size=2)
I ran both large and base for one epoch. loss score are very close and it seems fine-tuning BART-large is not affected by this issue. I tried to do prediction with BART-large but i got OOM error upon saving the model. However, i needed to add skip_special_tokens=True to tokenizer.decode function to get rid of s and pad tokens in BART-base. I am also still worry about pre-training BART large from scratch because it involve using a lot of resource and uses [mask] token.
[Comparing_BART_base_vs_BART_large_on_TPU.zip](https://github.com/huggingface/transformers/files/8194916/Comparing_BART_base_vs_BART_large_on_TPU.zip)
<|||||>@salrowili Thanks for sharing! I think currently this bug will affect all NLG tasks (e.g., summarization), but not for NLU tasks (e.g., classification and extractive machine reading comprehension). I get some ideas why it becomes so, and would like to share here later when it is confirmed.<|||||>In #8005 it was mentioned that [the forced BOS-token is correct for mask-infilling](https://github.com/huggingface/transformers/issues/8005#issuecomment-723670045), but may be suboptimal for other tasks.
At least for generation tasks the forced BOS-token seems necessary, right? Maybe the solution could be to implement the originally proposed solution of adding the forced BOS-token as `task_specific_params` for all generation tasks, although I'm not sure how these parameters are used.
Maybe alternatively `forced_bos_token_id=-1` could be used as a new default in the configs for all BART-models; then each BART-subclass could set its preferred default, in case `-1` is set. That would be `forced_bos_token_id = config.bos_token_id` for `BartForConditionalGeneration` and `forced_bos_token_id = None` for all others (?) (provided all others benefit from a lack of a forced BOS-token).
Although that might be confusing, as the config's value of `-1` is never actually used.
As a sidenote: The `Implementation notes` for BART in general seem at least partially outdated. `fairseq.encode` is not used by the tokenizer at all. Also prepending a space to any text passed to the tokenizer actually makes generations *worse*, even if the BOS-token is forced:
```python3
>>> tokenizer.batch_decode(model.generate(tokenizer("hello", return_tensors="pt").input_ids))
['</s><s>hello</s>']
>>> tokenizer.batch_decode(model.generate(tokenizer(" hello", return_tensors="pt").input_ids))
['</s><s></s>']
```
@patrickvonplaten I find that note misleading and would advocate for it to be removed/clarified.<|||||>Hi guys, after being confused by this question for a while months, eventually I found the root issue and the fundamental solution to the failure of `bart-large`:
- **The issue (TL;DR)**: the ill-conditioned representation of token `bos_token` (i.e., the token `0`) in `bart-large` pre-trained model.
- **The solution (TL;DR)**: re-initialize (e.g., add some noise) the `bos_token` representation.
### What is the problem
BART (Lewis et al. 2020) is a popular pre-trained encoder-decoder model which is for both Natural Language Understanding (e.g, sequence classificaiton) and Natural Language Generation (e.g., text summarization) tasks.
However, there are two known issues (#8005, #15559) for BART family:
- **Mask Filling**: when performing the mask filling task, BART-base behaves as expected (e.g., `</s><s>My friends are healthy, but they eat too many carbs.</s>`), while BART-Large's output (e.g., `</s>My,, but they eat too many carbs.</s>`) is wonky.
- **Fine-tuning**: when fine-tuning BART with 🤗 transformers for Natural Language Generation tasks, we can readily get a satisfactory fine-tuning performance for BART-base, but we cannot obtain a reasonable performance for BART-Large. The loss during training seems normal (e.g., decreasing with steps), but the loss during evaluation is unexpectedlly high. Most importantly, BART-Large cannot overfit a small toy NLG dataset!
### Fix the issue of Mask Filling
[This answer](https://github.com/huggingface/transformers/issues/8005#issuecomment-723670045) provides a solution that making `config.force_bos_token_to_be_generated` as `True` could encourage BART-Large to behave as expected in the task mask filling.
The corresponding latest solution (for `v4.17.0`) is as:
```python
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)
```
However, such a setting cannot solve the issue encountered in the fine-tuning procedure perfectly. In my premiliary study, the prediction on the small toy task can be perfect by setting `forced_bos_token_id=0`, while not for the loss. The loss also jumps during different evaluation steps, making me very confused. After digging into the model prediction for a while days, I finally found that the most important thing in this steaky bug may be the `bos_token` (i.e., token `<s>` or the `0` token id). There are always two strange phenomena corresponding to `bos_token`:
1. When `decoder_start_token` is fed to the decoder, it is possible that the model does not predict `bos_token`, even after thousands of training steps. It is very strange since the only corresponding output of `decoder_start_token` is `bos_token`! It is because that during data preparation, the BART model always prefixes each output with the `bos_token` (details see [here](https://github.com/huggingface/transformers/issues/15559#issuecomment-1056793478)).
2. There is a chance that the model will generate a sequence such as `</s> <s> <s> <s> <s> ...` (i.e., token id `2 0 0 0 0 ...`) until it arrives the end of maximum target length.
After setting `forced_bos_token_id=0`, it forces the model to predict `bos_token` when `decoder_start_token` is fed to the decoder, and it can fix the first case, but not for the second case!
### Fix the issue of Fine-tuning
As stated above, the most strange part in this bug is that BART-Large does not act the same as BART-base on both mask filling tasks and the fine-tuning procedure. However, after carefully checking and confirming that their configs are the same, I believe there must be something "wrong" with BART-Large's pre-trained model weight.
Then I try to print the norm of each token for BART:
```python
from transformers import BartModel, BartTokenizer
import torch
model = BartModel.from_pretrained("facebook/bart-base")
print(model.shared.weight.norm(dim=1))
# tensor([1.7076, 0.7572, 1.6171, ..., 0.9508, 0.9553, 1.6455], grad_fn=<NormBackward3>)
model = BartModel.from_pretrained("facebook/bart-large")
print(model.shared.weight.norm(dim=1))
# tensor([6.9603, 0.4269, 2.4219, ..., 1.3628, 1.3857, 2.4098], grad_fn=<NormBackward3>)
```
As `bos_token` (token 0) and `eos_token` (token 2) both appear in all training sentences, they should have a similar embedding norm. However, for BART-base the 0-th norm is close to 2-nd norm, but not for BART-Large! For BART-Large, the norm of 0-th norm is as high as 6.96, nearly twice as the 2-nd norm. Therefore, I think the fine-tuning failure is caused by ill-conditioned representation of token `bos_token` in BART-Large, at least for the current optimizer in 🤗 transformers.
Then, I tried a simple solution which only adds some noise to the `bos_token` representation as below:
```python
import torch
# please download the model in your local directory and manually change the weight
state_dict = torch.load("bart-large/pytorch_model.bin")
state_dict["model.shared.weight"][0] = state_dict["model.shared.weight"][0] + torch.randn(1024)
torch.save(state_dict, open("bart-large/pytorch_model.bin", "wb"))
```
### How about the Performance after Perturbing `bos_token` representation
The above solution works! On both toy benchmarks and real datasets, the loss during evaluation becomes normal, and the performance after fine-tuning BART-Large is expected. Most importantly, even without setting `forced_bos_token_id=0`, the output of fine-tuned BART-Large models are satisfactory.
### Take Away
In summary, I recommend you guys to perturb the representation of `bos_token` to obtain great fine-tuning performance of BART-Large on NLG tasks.<|||||>Thanks! I am still curious about:
1. Why does this issue happen with bart-large but not bart-base?
2. Is this issue only related to the Hugging Face model, or affects the model in the original Facebook repository as well?<|||||>Thank you @SivilTaram for this detailed explanation. Wonderful effort.
Could this problem be related to mBART? because mBART has only a large-scale version, not a base scale and I think the first token is designed for language code. see https://github.com/huggingface/transformers/pull/9811<|||||>Thanks a lot for all the work guys! I think at this point we can only add `forced_bos_token_id=0` to the config of the pretrained checkpoints as discussed previously - @patil-suraj and I just did it here:
- https://huggingface.co/facebook/bart-large/commit/030bb1bda8b56e9a918a6f3e764cdfeffa781ccc
- https://huggingface.co/facebook/bart-base/commit/d0af9887e98da87934e6ecaf42e724f1684bb72b
Also maybe we can ask the official authors what they think about your findings.
Gently pinging @ngoyal2707 here<|||||>> Thanks a lot for all the work guys! I think at this point we can only add `forced_bos_token_id=0` to the config of the pretrained checkpoints as discussed previously - @patil-suraj and I just did it here:
>
> * https://huggingface.co/facebook/bart-large/commit/030bb1bda8b56e9a918a6f3e764cdfeffa781ccc
> * https://huggingface.co/facebook/bart-base/commit/d0af9887e98da87934e6ecaf42e724f1684bb72b
>
> Also maybe we can ask the official authors what they think about your findings. Gently pinging @ngoyal2707 here
Yes I agree that my finding on perturbing `bos_token` cannot be a default option for `bart-large` since it will affect a lot of users, maybe until we finally figure out why the perturbing works.<|||||>> Thanks! I am still curious about:
>
> 1. Why does this issue happen with bart-large but not bart-base?
> 2. Is this issue only related to the Hugging Face model, or affects the model in the original Facebook repository as well?
1. This is the most tricky part in this bug. I do not know if the official authors have performed different training strategies on `bart-large` or `bart-base`, but the fact is that the token `bos_token` makes `bart-large` hard to optimize in the context of 🤗 transformers.
2. Good catch! I think it should not be only related to 🤗 . I have also seen many issues [#12237](https://github.com/huggingface/transformers/issues/12237#issuecomment-865135764) complaining about the abnormal results after fine-tuning `bart-large` related models on some NLG datasets. If it is finally a optimization issue, I think we may encounter the same issue in `fairseq` after trying enough random seeds. However, everything in `fairseq` under the default random seed seems okay for me by now.<|||||>> Thank you @SivilTaram for this detailed explanation. Wonderful effort.
>
> Could this problem be related to mBART? because mBART has only a large-scale version, not a base scale and I think the first token is designed for language code. see #9811
Good point @salrowili ! I have no idea now, but I agree that the first token **is originally desgined to serve for mBART instead of BART**.<|||||>what is your loss score on both, not EM? I am curious to know<|||||>This might have been a false alarm, let me triple check<|||||>Yeah false alarm my bad folks.<|||||>Actually right now with the same transformers bart for conditional generation instance, AllenAI's beam search gives me near 100% EM and hugging face's gives me very low EM, with a much lower per token accuracy (~65%). Both with num_beams of 4 and max length of 500, trying to figure out the difference...<|||||>Ok found the cause of the difference, the huggingface `generate` reads a `no_repeat_ngram_size` of `3` from the `bart` config and the AllenAI decoder does not look at it.<|||||>Can also confirm that gradients of `bart-large` are generally **very** high which might lead to unexpected behavior. See: https://discuss.huggingface.co/t/gradients-verification-between-jax-flax-models-and-pytorch/15970<|||||>> Can also confirm that gradients of `bart-large` are generally **very** high which might lead to unexpected behavior. See: https://discuss.huggingface.co/t/gradients-verification-between-jax-flax-models-and-pytorch/15970
Good job! This reinforces my belief that it is `bart-large` itself that is sensitive to optimization, and not a bug in the codebase.<|||||>We just found out that `bart-large` has its weights accidently stored in fp16 on the Hub see: https://github.com/huggingface/transformers/issues/16736
This might be a reason for this behavior here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>There is a similar issue with `led-large`, which has the same weight as `bart-large`. See my comment [here](https://github.com/huggingface/transformers/issues/18190#issuecomment-1216584872).
In short, no matter what the encoder input sequences are, the decoder sequence `[</s>, <s>]` will produce identical (differences are in the range ~1e-6) LM logits (before any finetuning). This explains why we have training trouble, as well as why perturbing `bos_token` makes sense. However it's not clear why this happens.
I am not very convinced by @SivilTaram's reasoning on the norms of `<eos>` and `<bos>`. As if they have larger differences in norm for `bart-large`, it means they have more different embeddings as inputs, but the output LM logits are very close at the end. This looks very strange.<|||||>I have verified the same situation occurs for `bart-large` using HuggingFace's `transformers`.
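For completeness, a sketch of that check with 🤗 `transformers` (an illustrative snippet — the exact script used may differ):
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").eval()

enc = tokenizer("She waded to the bank and picked up her shoes.", return_tensors="pt")
# decoder prefix [</s>, <s>], i.e. [eos, bos]
decoder_input_ids = torch.tensor([[model.config.eos_token_id, model.config.bos_token_id]])

with torch.no_grad():
    logits = model(input_ids=enc["input_ids"], decoder_input_ids=decoder_input_ids).logits

# for bart-large the two positions end up with (nearly) identical LM logits
print((logits[0, 0] - logits[0, 1]).abs().max())
```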
Furthermore, the following code snippet confirms the same for `fairseq`'s `bart-large`.
```
import torch
bart = torch.hub.load('pytorch/fairseq', 'bart.large')
bart.eval() # disable dropout (or leave in train mode to finetune)
model = bart.model
#print(model.decoder.embed_tokens.weight[0][:16])
#print(model.decoder.embed_tokens.weight[2][:16])
for _ in range(20):
# random encoder input sequences with random length
seq_len = torch.randint(low=8, high=64, size=(1,))[0]
src_tokens = torch.randint(low=0, high=50265, size=(1, seq_len), dtype=torch.int32)
src_tokens = torch.cat([torch.tensor([[0]], dtype=torch.int32), src_tokens, torch.tensor([[2]], dtype=torch.int32)], dim=-1)
src_lengths = seq_len + 2
prev_output_tokens = torch.tensor([[2, 0]], dtype=torch.int32)
o = model(
src_tokens=src_tokens,
src_lengths=src_lengths,
prev_output_tokens=prev_output_tokens,
)
print(o[0].shape)
print(o[0][0, :2, :16])
```
The results:
```
tensor([[11.3236, -0.9138, 6.5245, -0.3914, 9.7808, 2.2912, 8.1962, 1.9227,
4.7425, 3.1021, 2.6246, 3.7791, 9.5079, 2.6686, 2.1119, 2.0344],
[11.3236, -0.9138, 6.5245, -0.3914, 9.7808, 2.2912, 8.1962, 1.9227,
4.7425, 3.1021, 2.6246, 3.7791, 9.5079, 2.6686, 2.1119, 2.0344]],
grad_fn=<SliceBackward0>)
torch.Size([1, 2, 50265])
tensor([[10.7945, -1.0372, 7.0184, -0.3236, 10.4561, 2.6929, 8.5290, 3.1237,
5.1213, 3.5141, 2.8864, 3.9884, 10.1654, 3.6481, 1.8212, 2.2971],
[10.7945, -1.0372, 7.0184, -0.3236, 10.4561, 2.6929, 8.5290, 3.1237,
5.1213, 3.5141, 2.8864, 3.9884, 10.1654, 3.6481, 1.8212, 2.2971]],
grad_fn=<SliceBackward0>)
torch.Size([1, 2, 50265])
tensor([[ 9.5601, -1.0508, 6.5913, -1.1130, 10.1133, 2.0899, 8.3389, 2.9828,
4.7401, 3.2962, 1.6522, 3.3447, 10.1464, 3.2324, 2.4950, 1.9132],
[ 9.5601, -1.0508, 6.5913, -1.1130, 10.1133, 2.0899, 8.3389, 2.9828,
4.7401, 3.2962, 1.6522, 3.3447, 10.1464, 3.2324, 2.4950, 1.9132]],
grad_fn=<SliceBackward0>)
torch.Size([1, 2, 50265])
tensor([[13.9500, -1.1926, 8.6385, -1.3860, 10.3129, 2.6660, 9.1016, 3.1353,
5.2579, 3.4807, 2.5914, 3.9124, 11.1577, 3.0453, 1.7675, 2.6812],
[13.9500, -1.1926, 8.6385, -1.3860, 10.3129, 2.6660, 9.1016, 3.1353,
5.2579, 3.4807, 2.5914, 3.9124, 11.1577, 3.0453, 1.7675, 2.6812]],
grad_fn=<SliceBackward0>)
torch.Size([1, 2, 50265])
tensor([[16.2144, -0.7714, 9.2828, 0.7236, 11.1654, 1.8226, 9.0874, 2.7226,
5.5879, 3.4060, 1.7666, 4.0211, 11.2787, 2.9147, 1.6007, 1.7492],
[16.2144, -0.7714, 9.2828, 0.7236, 11.1654, 1.8226, 9.0874, 2.7226,
5.5879, 3.4060, 1.7666, 4.0211, 11.2787, 2.9147, 1.6007, 1.7492]],
grad_fn=<SliceBackward0>)
```<|||||>For the record: `bart-large` seems to have learned to predict the first token after `<s>` in the encoder input sequence for both of the first two decoder tokens `[</s>, <s>]`. I provide a script to confirm this at the end.
Here are some outputs:
```bash
example idx: 1
max diff in lm logits: 1.621246337890625e-05
--------------------
predicted token ids: [1640, 1640, 13989, 212, 4038]
predicted tokens: ['(', '(', 'ĠMLS', 'th', 'Ġanniversary']
document tokens: ['<s>', '(', 'CNN', ')', 'On', 'Ġthe', 'Ġ6', 'th', 'Ġof', 'ĠApril', 'Ġ1996', ',', 'ĠSan', 'ĠJose', 'ĠClash', 'Ġand']
========================================
example idx: 10
max diff in lm logits: 7.033348083496094e-06
--------------------
predicted token ids: [11770, 11770, 16, 6308, 5678]
predicted tokens: ['March', 'March', 'Ġis', 'Ġcontains', 'Ġlinks']
document tokens: ['<s>', 'March', 'Ġ10', ',', 'Ġ2015', 'Ġ.', 'ĠWe', "'re", 'Ġtruly', 'Ġinternational', 'Ġin', 'Ġscope', 'Ġon', 'ĠTuesday', '.', 'ĠWe']
========================================
example idx: 20
max diff in lm logits: 7.62939453125e-06
--------------------
predicted token ids: [41650, 41650, 11, 1429, 224]
predicted tokens: ['Tok', 'Tok', 'Ġin', 'ĠJapan', 'Ġsay']
document tokens: ['<s>', 'Tok', 'yo', 'Ġ(', 'CNN', ')', 'Police', 'Ġin', 'ĠJapan', 'Ġsay', 'Ġthey', 'Ġhave', 'Ġarrested', 'Ġa', 'Ġ40', '-']
========================================
example idx: 24
max diff in lm logits: 9.5367431640625e-06
--------------------
predicted token ids: [23122, 23122, 52, 9, 10]
predicted tokens: ['London', 'London', 'Ġwe', 'Ġof', 'Ġa']
document tokens: ['<s>', 'London', 'Ġ(', 'CNN', ')', 'A', 'Ġphoto', 'Ġof', 'Ġa', 'Ġwe', 'asel', 'Ġh', 'itching', 'Ġa', 'Ġsurprise', 'Ġlift']
========================================
example idx: 36
max diff in lm logits: 8.106231689453125e-06
--------------------
predicted token ids: [4030, 4030, 2238, 18, 7396]
predicted tokens: ['New', 'New', 'ĠKorean', "'s", 'izes']
document tokens: ['<s>', 'New', 'ĠDelhi', ',', 'ĠIndia', 'Ġ(', 'CNN', ')', 'The', 'ĠNorth', 'ĠKorean', 'Ġambassador', 'Ġin', 'ĠBangladesh', 'Ġissued', 'Ġan']
```
```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import datasets
summarization_name_mapping = {
"cnn_dailymail": ("article", "highlights"),
"xsum": ("document", "summary"),
}
def get_dataset(dataset_name, dataset_config, tokenizer, n_samples):
max_source_length = 1024
max_target_length = 128
padding = True
ignore_pad_token_for_loss = True
padding = "max_length"
prefix = ""
max_train_samples = n_samples
max_eval_samples = n_samples
preprocessing_num_workers = 8
raw_datasets = datasets.load_dataset(dataset_name, dataset_config)
text_column, summary_column = summarization_name_mapping[dataset_name]
def foo(x):
if x == tokenizer.cls_token_id:
return 1
elif x == tokenizer.pad_token_id:
return -1
else:
return 0
def preprocess_function(examples):
# remove pairs where at least one record is None
inputs, targets = [], []
for i in range(len(examples[text_column])):
if examples[text_column][i] and examples[summary_column][i]:
inputs.append(examples[text_column][i])
targets.append(examples[summary_column][i])
inputs = [prefix + inp for inp in inputs]
model_inputs = tokenizer(inputs, max_length=max_source_length, padding=padding, truncation=True)
# Tokenize targets with the `text_target` keyword argument
labels = tokenizer(text_target=targets, max_length=max_target_length, padding=padding, truncation=True)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length" and ignore_pad_token_for_loss:
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
if tokenizer.__class__.__name__.startswith("LED"):
model_inputs["global_attention_mask"] = [[foo(y) for y in x] for x in model_inputs["input_ids"]]
return model_inputs
train_dataset = raw_datasets["train"]
eval_dataset = raw_datasets["validation"]
train_dataset = train_dataset.select(range(max_train_samples))
eval_dataset = eval_dataset.select(range(max_eval_samples))
train_dataset = train_dataset.map(
preprocess_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=[text_column, summary_column, 'id'],
desc="Running tokenizer on train dataset",
)
eval_dataset = eval_dataset.map(
preprocess_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=[text_column, summary_column, 'id'],
desc="Running tokenizer on validation dataset",
)
return train_dataset, eval_dataset
def check_model(train_dataset, eval_dataset, model, tokenizer, n_samples, text_column, summary_column):
for idx, eval_example in enumerate(eval_dataset):
input_ids = eval_example["input_ids"]
decoder_input_ids = model.prepare_decoder_input_ids_from_labels(labels=torch.tensor([eval_example["labels"]], dtype=torch.int32))
decoder_input_ids = decoder_input_ids.numpy().tolist()
eval_example["decoder_input_ids"] = decoder_input_ids[0] # remove batch dim
eval_example.pop("labels")
decoder_input_ids = eval_example.pop("decoder_input_ids")
eval_example["decoder_input_ids"] = [2, 0] + decoder_input_ids[2:5]
for k in eval_example:
eval_example[k] = torch.tensor([eval_example[k]], dtype=torch.int32)
output = model(**eval_example)
print(f"example idx: {idx}")
print(f'max diff in lm logits: {np.amax(np.abs((output.logits[0, 0] - output.logits[0, 1]).detach().to("cpu").numpy()))}')
print(f"-" * 20)
pred_ids = torch.argmax(output.logits, dim=-1).detach().to("cpu").numpy().tolist()[0]
print(f'predicted token ids: {pred_ids}')
pred_tokens = tokenizer.convert_ids_to_tokens(pred_ids)
print(f'predicted tokens: {pred_tokens}')
document_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(f'document tokens: {document_tokens[:16]}')
print(f"=" * 40)
def run(checkpoint_name, dataset_name, dataset_config=None, n_samples=100):
text_column, summary_column = summarization_name_mapping[dataset_name]
tokenizer = AutoTokenizer.from_pretrained(checkpoint_name)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_name)
train_dataset, eval_dataset = get_dataset(dataset_name, dataset_config=dataset_config, tokenizer=tokenizer, n_samples=n_samples)
check_model(train_dataset, eval_dataset, model, tokenizer, n_samples, text_column, summary_column)
run("facebook/bart-large", "cnn_dailymail", "3.0.0", n_samples=100)
#run("facebook/bart-large", "xsum", None, n_samples=10)
#run("allenai/led-large-16384", "cnn_dailymail", "3.0.0", n_samples=10)
#run("allenai/led-large-16384", "xsum", None, n_samples=10)
```<|||||>I just saw this thread - sorry for all the pain here! The problem was caused by an unfortunate config bug in the original BART-large training run, which caused decoder sequences to start with an extra `</s>` token.
I assume that got fixed in BART-base, which is why it's behaving differently.<|||||>> I just saw this thread - sorry for all the pain here! The problem was caused by an unfortunate config bug in the original BART-large training run, which caused decoder sequences to start with an extra </ s> token
>
> I assume that got fixed in BART-base, which is why it's behaving differently.
Thank you so much for clarifying it, @mikelewis0 - much appreciated!<|||||>> I just saw this thread - sorry for all the pain here! The problem was caused by an unfortunate config bug in the original BART-large training run, which caused decoder sequences to start with an extra </ s> token
>
> I assume that got fixed in BART-base, which is why it's behaving differently.
I think you misinterpreted the cause. The extra `</s>` token is intended and it is also used in bart-base. |
transformers | 15,558 | closed | [GPTJ] fix docs | # What does this PR do?
Fixes #15546 | 02-08-2022 14:25:10 | 02-08-2022 14:25:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,557 | closed | unable to pass through create vocabulary_from_data when using a custom dataset loaded from disk which is a result of concatenating multiple datasets | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.2+cu113
- Tensorflow version (GPU?): not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @anton-l
Models:
- Wav2Vec2
Library:
- Speech
## Information
Model I am using: facebook/wav2vec2-xls-r-300m
The problem arises when using:
* [ ] the official example script: https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
The tasks I am working on is:
* [ ] dataset: using the load_from_disk() method to load a locally preprocessed and saved dataset. I downloaded and preprocessed Marathi data from three different Hugging Face dataset locations: (openslr, openslr64), (mozillafoundation/common_voice_8_0, mr), (indic_voice, mr).
Finally, the indic_voice dataset was preprocessed into the standard format of common_voice and openslr, the three datasets were combined using the datasets.concatenate_datasets() method, and the resulting dataset was saved to disk (a rough sketch of this step is shown below).
The same dataset is then loaded in the official run_speech_recognition_ctc.py file.
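A rough sketch of the concatenation step (dataset identifiers and paths here are illustrative, not the exact ones I used):
```python
from datasets import load_dataset, concatenate_datasets, load_from_disk

# load the individual Marathi datasets (identifiers are illustrative)
slr = load_dataset("openslr", "SLR64", split="train")
cv = load_dataset("mozilla-foundation/common_voice_8_0", "mr", split="train", use_auth_token=True)

# ... rename / cast columns so that every dataset shares the same schema ...

combined = concatenate_datasets([slr, cv])
combined.save_to_disk("./marathi_combined")

# later, instead of calling load_dataset(...) inside run_speech_recognition_ctc.py:
raw_dataset = load_from_disk("./marathi_combined")
```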
## To reproduce
Steps to reproduce the behavior:
Download multiple Marathi datasets, concatenate them and save them to disk, then load the dataset in the official training script.
I have been running the official script `run_speech_recognition_ctc.py` as provided in
[https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition)
```bash
python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="mr" \
--output_dir="./wav2vec2-common_voice-mr" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--length_column_name="input_length" \
--save_steps="400" \
--eval_steps="100" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_encoder \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--push_to_hub \
--do_train --do_eval
```
## Error Message
```
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-05,
"layerdrop": 0.1,
"mask_feature_length": 10,
"mask_feature_min_masks": 0,
"mask_feature_prob": 0.0,
"mask_time_length": 10,
"mask_time_min_masks": 2,
"mask_time_prob": 0.075,
"model_type": "wav2vec2",
"num_adapter_layers": 3,
"num_attention_heads": 16,
"num_codevector_groups": 2,
"num_codevectors_per_group": 320,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 128,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"num_negatives": 100,
"output_hidden_size": 1024,
"pad_token_id": 0,
"proj_codevector_dim": 768,
"tdnn_dilation": [
1,
2,
3,
1,
1
],
"tdnn_dim": [
512,
512,
512,
512,
1500
],
"tdnn_kernel": [
5,
3,
3,
1,
1
],
"torch_dtype": "float32",
"transformers_version": "4.17.0.dev0",
"use_weighted_layer_sum": false,
"vocab_size": 32,
"xvector_output_dim": 512
}
0%| | 0/1 [00:00<?, ?ba/s]
```
The run stays stuck at this point and gradually consumes all the RAM (I have 64 GB). Once all the system RAM is used up, the process is killed.
After the script is killed, there is no vocab.json file in the output_dir.
[pre-processing and training files.zip](https://github.com/huggingface/transformers/files/8024270/pre-processing.and.training.files.zip)
See the attached zip file: it contains a Jupyter notebook used to concatenate multiple datasets and save them to disk, and the .py files contain the code for loading the dataset from disk and continuing with the training process.
 | 02-08-2022 13:50:26 | 02-08-2022 13:50:26 | This sounds like it's a problem with the `datasets` library. @StephennFernandes could you try to reproduce the bug just by using the Datasets library? It would be great if you could provide a small code snippet that reproduces the bug easily :-)<|||||>Hey @patrickvonplaten, I tried to pass almost the same datasets through the create_vocabulary_from_data function in a Jupyter notebook and it processed them in a second, but in the training script it overloads the RAM and crashes.
I also tried uninstalling and reinstalling transformers and datasets from source.<|||||>Hey @patrickvonplaten, here's something really weird that I've discovered:
when I preprocess the three datasets (slr, commonvoice and indic_voice), I put them through a lot of .map calls,
e.g. combined_dataset = combined_dataset.map(...). After this, when the processed dataset is finally passed through the create_vocabulary_from_data() function, it is RAM-heavy and slows the entire process down.
I tried loading the indic_voice dataset on its own, which is almost 90% of the entire dataset for this language. When I tried passing its ['sentence'] column through the vocabulary-creation function, it worked absolutely fine (it processed everything in under 3 seconds),
but the combined dataset blows up the RAM.
I have a hunch that sending the dataset back and forth through .map is what slows the process down; maybe it has something to do with how datasets stores and processes data internally.<|||||>Hey @StephennFernandes - this sounds like a question for https://github.com/huggingface/datasets actually. Could you maybe try to ask there? Also cc'ing @lhoestq and @polinaeterna<|||||>The extraction of all the characters of each text is done completely in memory:
https://github.com/huggingface/transformers/blob/2de99e6c43403d5988ae20cfc0af063797d69c29/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L320
Removing `keep_in_memory=True` should fix this<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,556 | closed | Unable to ignore pad tokens when using `decoder_input_ids` and `decoder_attention_mask` with `BartForConditionalGeneration` | ## Environment info
- `transformers` version: 4.16.2
- Platform: Linux-3.10.0-1160.45.1.el7.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.5
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, but it's not necessary
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): `BartForConditionalGeneration`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Script: https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/3fb1b593f234052be264cb531c0c56502277770e#file-mask_filling-py
Output: https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/3fb1b593f234052be264cb531c0c56502277770e#file-output-txt
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Mask filling as in the official code snippet: https://huggingface.co/docs/transformers/model_doc/bart#mask-filling
My goal is very simple: I want to generate the exact same output as the official code snippet when manually specifying `decoder_input_ids` where `decoder_input_ids` contains some pad tokens, i.e. pad tokens should be ignored. How can I achieve this goal?
Side note: my actual task is not to replicate the default output with pad tokens in `decoder_input_ids` just for the sake of using pad tokens, but it does require resolving this issue.
First, I generate the output without manually specifying `decoder_input_ids`, using `model.generate(**kwargs)`. By default, `use_cache=True` is used in order to speed up decoding. However, when `decoder_input_ids` is manually specified, `use_cache=False` is required. I use `use_cache=False` for all generations.
For more information, read the explanation in `BART_INPUTS_DOCSTRING`:
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/models/bart/modeling_bart.py#L590-L603
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/models/bart/modeling_bart.py#L634-L648
For all outputs, I also print the beam search score of the generated sequence which allows me to check if two outputs have the exact same score.
Default output:
```
DEFAULT OUTPUT WITHOUT MANUAL DECODER INPUT
-0.4835893213748932
</s><s>UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria</s>
```
Outputs with manually specified `decoder_input_ids`:
```
decoder_input_ids: [[2]]
-0.4835893213748932
</s><s>UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[2 0]]
-0.4835893213748932
</s><s>UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria</s>
```
As expected, when `decoder_input_ids=[[2]]` or `decoder_input_ids=[[2,0]]` the default output is exactly replicated, given that when `decoder_input_ids` is not manually specified `BartForConditionalGeneration` always starts `decoder_input_ids` with `model.config.decoder_start_token_id=2` and then forces `model.config.bos_token_id=0` to be generated due to `BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)`.
Outputs when `decoder_input_ids` contains pad tokens where `model.config.pad_token_id=1`:
```
decoder_input_ids: [[2 1 0]]
-0.852648913860321
</s><pad><s>UN Chief Says</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[1 2 0]]
-0.7636396884918213
<pad></s><s> UN</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[2 0 1]]
-0.5718380212783813
</s><s><pad>.UN Chief Says There Is No Place for Human Rights in Syria</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[2 0 1 1]]
-0.6295203566551208
</s><s><pad><pad> in Syria.UN Chief Says There Is No Alternative to Assad</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[2 0 1 1 1]]
-0.49761223793029785
</s><s><pad><pad><pad>.UN Chief Says There Is No Alternative to Assad in Syria</s>
```
I wasn't sure where exactly to place the pad tokens, so I tried all three possibilities: left, center, right. Placing the pad token on the left or in the center yields short outputs, while placing it on the right yields the best output, although it still differs from the default and starts with a strange dot. Having multiple pad tokens also changes the output.
According to the docstring in `BART_INPUTS_DOCSTRING`, this behaviour is not expected and pad tokens should be ignored by default without having to manually specify `decoder_attention_mask`:
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/models/bart/modeling_bart.py#L604-L610
Given that pad tokens are not ignored by default, I manually specify `decoder_attention_mask` such that pad tokens are ignored. However, by default `decoder_attention_mask` is not used when using `model.generate(**kwargs)`. Hence, I create the class `CustomBart(BartForConditionalGeneration, GenerationMixin)` where I exactly copy the two functions below and add some code to enable `decoder_attention_mask` to be passed along when using it in `model.generate(**kwargs)`. The adjustments are clearly marked. Note that I use `CustomBart` for all generations since whenever `decoder_attention_mask` is not specified, it generates the exact same output as `BartForConditionalGeneration`.
https://github.com/huggingface/transformers/blob/0fe17f375a4f0fdd9aea260d0645ccfd4896e958/src/transformers/models/bart/modeling_bart.py#L1366
Adjustment: https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/3fb1b593f234052be264cb531c0c56502277770e#file-mask_filling-py-L32-L36
https://github.com/huggingface/transformers/blob/0fe17f375a4f0fdd9aea260d0645ccfd4896e958/src/transformers/generation_utils.py#L559
Adjustment: https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/3fb1b593f234052be264cb531c0c56502277770e#file-mask_filling-py-L72-L78
Output when manually specifying `decoder_attention_mask` such that it ignores pad tokens in `decoder_input_ids`:
```
decoder_input_ids: [[2 0 1]]
decoder_attention_mask: [[1 1 0]]
Traceback (most recent call last):
File "mask_filling.py", line 179, in <module>
print_results(kwargs)
File "mask_filling.py", line 93, in print_results
outputs = model.generate(**kwargs)
File "venv/lib64/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "transformers/src/transformers/generation_utils.py", line 1234, in generate
return self.beam_search(
File "transformers/src/transformers/generation_utils.py", line 1974, in beam_search
outputs = self(
File "venv/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "transformers/src/transformers/models/bart/modeling_bart.py", line 1326, in forward
outputs = self.model(
File "venv/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "transformers/src/transformers/models/bart/modeling_bart.py", line 1216, in forward
decoder_outputs = self.decoder(
File "venv/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "transformers/src/transformers/models/bart/modeling_bart.py", line 1007, in forward
attention_mask = self._prepare_decoder_attention_mask(
File "transformers/src/transformers/models/bart/modeling_bart.py", line 898, in _prepare_decoder_attention_mask
expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 3
```
Both `decoder_input_ids` and `decoder_attention_mask` should have shape `(batch_size, target_sequence_length)` according to `BART_INPUTS_DOCSTRING`, as can be seen above. However, this produces a `RuntimeError`.
## To reproduce
Steps to reproduce the behavior:
1. Run my script using `transformers` version 4.16.2: https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/3fb1b593f234052be264cb531c0c56502277770e#file-mask_filling-py
2. Compare your output with my output: https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/3fb1b593f234052be264cb531c0c56502277770e#file-output-txt
## Expected behavior
I expect one of the following, ideally both:
1. Pad tokens should be ignored by default when manually specifying `decoder_input_ids`.
2. One should be able to use `decoder_attention_mask` without getting a `RuntimeError`.
As a consequence, the generated output should be the same as the default when manually specifying `decoder_input_ids` with additional pad tokens or if necessary, additionally specifying `decoder_attention_mask` to ignore pad tokens. | 02-08-2022 08:49:58 | 02-08-2022 08:49:58 | This post in the Hugging Face forum is related to my issue, albeit for another model:
https://discuss.huggingface.co/t/is-t5-expected-to-ignore-padding-tokens-in-decoder-input-ids-when-decoder-attention-mask-is-not-provided/10271<|||||>I've identified the issue. During beam search, there is a while loop which decodes the next tokens one step at a time. At each step `input_ids` (these are the `decoder_input_ids`) get extended with the next tokens found via beam search. However, `transformers` version 4.16.2 does not extend the `decoder_attention_mask` after each decoding step when it's manually specified in `model_kwargs` in `beam_search`:
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/generation_utils.py#L1794-L1809
Hence the `RuntimeError` described above appears.
More specifically, here the `input_ids` get extendend:
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/generation_utils.py#L2039
In the next loop, the error occurs because `decoder_attention_mask` has not been extended:
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/generation_utils.py#L1972-L1979
I could fix the issue, by adjusting the function `prepare_inputs_for_generation` such that at each step `decoder_attention_mask` gets extended if needed:
https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/30fe77c0651ac949ee61be7f46113262bbecce45#file-mask_filling-py-L11-L58<|||||>Even though I can now use `decoder_attention_mask` to ignore pad tokens, surprisingly, depending on where the pad token is placed (left, center, right), a different output gets generated with the best result (according to the beam score) when the pad token is placed in the center, i.e. after the decoder start token and before the BOS token. However, the default output could still not be exactly replicated. Moreover, the output also changes with the padding size which should not be the case in my opinion.
```
DEFAULT OUTPUT WITHOUT MANUAL DECODER INPUT
-0.4835893213748932
</s><s>UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[2 0 1]]
decoder_attention_mask: [[1 1 0]]
-0.5901848673820496
</s><s><pad>.UN Chief Says There Is No Plan to Stop ISIS in Syria</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[2 1 0]]
decoder_attention_mask: [[1 0 1]]
-0.36968478560447693
</s><pad><s>UN Chief Says There Is No Alternative to Assad in Syria</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[2 1 1 1 0]]
decoder_attention_mask: [[1 0 0 0 1]]
-0.3912160396575928
</s><pad><pad><pad><s>UN Chief Says There Is No Plan B for Peace in Syria</s>
----------------------------------------------------------------------------------------------------
decoder_input_ids: [[1 2 0]]
decoder_attention_mask: [[0 1 1]]
-1.161877155303955
<pad></s><s>''',,',''"'""''</s>
```<|||||>Instead of updating `decoder_attention_mask` in the function `prepare_inputs_for_generation`, it should be done in the function `_update_model_kwargs_for_generation`:
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/generation_utils.py#L2041-L2043
Here is the adjusted `_update_model_kwargs_for_generation` function:
https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/1163b4affe94df16bc088c7f8f16ff900bcd82a7#file-mask_filling-py-L75-L84
<|||||>Hey @pullelys,
Sorry I'm a bit lost here. Could you maybe copy-paste a reproducible code snippet here that we can run in a Python shell to spot the error? :-)<|||||>Sure, here are the steps to reproduce the behavior:
1. Run my script using `transformers` version 4.16.2: https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/3fb1b593f234052be264cb531c0c56502277770e#file-mask_filling-py
2. Compare your output with my output: https://gist.github.com/pullelys/43ebb9d2785d858d17b4e2e61464dc48/3fb1b593f234052be264cb531c0c56502277770e#file-output-txt<|||||>Hey @pullelys,
Sorry it'll take me too much time to dive into this. This is a very long script and includes a custom model definition. We cannot guarantee that custom models built on top of Transformers work as expected.
Is there also a problem with the official `BartForConditionalGeneration(...)` model? If yes, could you try to put a **minimal**, **reproducible** code-snippet below and explain what exactly the error is? This would help us a great deal to better understand the issue here.
Thanks a lot!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,555 | closed | [Bug Fix] Beam search example in docs fails & a fix (integrating `max_length` in `BeamScorer.finalize()`) | # [Bug Fix] Exact beam search example in docs fails & a fix (regarding `max_length` in `BeamScorer.finalize()`)
Fixes a bug found while developing #15416
Specifically mentioned in number three of https://github.com/huggingface/transformers/pull/15416#issuecomment-1028550095
## Example in master documentation fails
All example code in the documentation should not throw an error when copied & pasted & run directly.
However, the following example found [here in the huggingface documentation](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/model#transformers.generation_utils.GenerationMixin.beam_search) fails due to what seems like a recent change with internal workings of `max_length`.
```python
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
encoder_input_str = "translate English to German: How old are you?"
encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
# lets run beam search using 3 beams
num_beams = 3
# define decoder start token ids
input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
input_ids = input_ids * model.config.decoder_start_token_id
# add encoder_outputs to model keyword arguments
model_kwargs = {
"encoder_outputs": model.get_encoder()(
encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
)
}
# instantiate beam scorer
beam_scorer = BeamSearchScorer(
batch_size=1,
num_beams=num_beams,
device=model.device,
)
# instantiate logits processors
logits_processor = LogitsProcessorList(
[
MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
]
)
outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
self.assertListEqual(outputs, ["Wie alt bist du?"])
```
Running this with:
```
pytest tests/test_generation_utils.py::GenerationIntegrationTests::test_beam_search_example_integration
```
leads to the following error:
```
# select the best hypotheses
sent_lengths = input_ids.new(batch_size * self.num_beam_hyps_to_keep)
best = []
best_scores = torch.zeros(batch_size * self.num_beam_hyps_to_keep, device=self.device, dtype=torch.float32)
# retrieve best hypotheses
for i, beam_hyp in enumerate(self._beam_hyps):
sorted_hyps = sorted(beam_hyp.beams, key=lambda x: x[0])
for j in range(self.num_beam_hyps_to_keep):
best_hyp_tuple = sorted_hyps.pop()
best_score = best_hyp_tuple[0]
best_hyp = best_hyp_tuple[1]
sent_lengths[self.num_beam_hyps_to_keep * i + j] = len(best_hyp)
# append to lists
best.append(best_hyp)
best_scores[i * self.num_beam_hyps_to_keep + j] = best_score
# prepare for adding eos
> sent_max_len = min(sent_lengths.max().item() + 1, max_length)
E TypeError: '<' not supported between instances of 'NoneType' and 'int'
src/transformers/generation_beam_search.py:333: TypeError
```
To ensure that this issue is completely resolved, part of this PR includes a test function that directly copies the example in the documentation.
## The Bug Fix
The suggested testing code `test_beam_search_example_integration(self)` is **taken directly from the current huggingface documentation** for `GenerationMixin.beam_search`, and it fails because `max_length` is `None` unless otherwise specified, which leads to an error saying that `'<'` is not supported between `NoneType` and `int`.
[From `BeamSearchScorer.finalize()`](https://github.com/huggingface/transformers/blob/master/src/transformers/generation_beam_search.py#L333),
```python
sent_max_len = min(sent_lengths.max().item() + 1, max_length) # ERROR 1
decoded: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len)
if sent_lengths.min().item() != sent_lengths.max().item():
assert pad_token_id is not None, "`pad_token_id` has to be defined"
decoded.fill_(pad_token_id)
for i, hypo in enumerate(best):
decoded[i, : sent_lengths[i]] = hypo
if sent_lengths[i] < max_length: # ERROR 2
decoded[i, sent_lengths[i]] = eos_token_id
```
So I changed that to:
```python
sent_lengths_max = sent_lengths.max().item() + 1
sent_max_len = min(sent_lengths_max, max_length) if max_length is not None else sent_lengths_max
decoded: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len)
# shorter batches are padded if needed
if sent_lengths.min().item() != sent_lengths.max().item():
assert pad_token_id is not None, "`pad_token_id` has to be defined"
decoded.fill_(pad_token_id)
# fill with hypotheses and eos_token_id if the latter fits in
for i, hypo in enumerate(best):
decoded[i, : sent_lengths[i]] = hypo
if sent_lengths[i] < sent_max_len:
decoded[i, sent_lengths[i]] = eos_token_id
```
Which now allows the test to pass.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| 02-08-2022 08:29:57 | 02-08-2022 08:29:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15555). All of your documentation changes will be reflected on that endpoint.<|||||>@patrickvonplaten done!<|||||>Pinging @patrickvonplaten here as well for merge for approved changes |
transformers | 15,554 | closed | feat(flax): allow encoder_outputs in generate | # What does this PR do?
Allow `encoder_outputs` in flax `model.generate`
Fixes #15161
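A rough sketch of the intended usage (model checkpoint and exact call pattern are illustrative):
```python
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: How old are you?", return_tensors="np")

# run the encoder once ...
encoder_outputs = model.encode(**inputs)
# ... and reuse it across generate() calls instead of re-encoding every time
outputs = model.generate(inputs.input_ids, encoder_outputs=encoder_outputs, max_length=32)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True))
```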
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @patil-suraj
| 02-08-2022 03:47:26 | 02-08-2022 03:47:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>It seems to work fine.
Not a big speedup though as sampling iteratively is probably what takes most time. |
transformers | 15,553 | closed | DataCollatorWithPadding that pads attention_mask | ## EDIT: `DataCollatorWithPadding` already has the requested feature
# 🚀 Feature request
A `DataCollatorWithPadding` class that can pad `attention_mask` with any tokenizer class compatible with 🤗Transformers.
## Motivation
It is common to have training examples with both `input_ids` and `attention_mask`. For batches of size larger than 1, you need to pad the examples to form congruent tensors. However, [DataCollatorWithPadding](https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorWithPadding) pads `input_ids` but not `attention_mask` and there is apparently no way to instruct it to do so.
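Concretely, the kind of usage I have in mind looks like this (the checkpoint name is just an example):
```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer=tokenizer)

features = [tokenizer("a short example"), tokenizer("a somewhat longer example with more tokens")]
batch = collator(features)
# both fields should come out padded to the longest example in the batch
print(batch["input_ids"].shape, batch["attention_mask"].shape)
```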
## Your contribution
The method `__call__` from `DataCollatorWithPadding` ([code](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/data/data_collator.py#L246)) is a wrapper for `self.tokenizer.pad`. For tokenizers of the class [PreTrainedTokenizerBase](https://huggingface.co/docs/transformers/internal/tokenization_utils#transformers.PreTrainedTokenizerBase), the `pad` method ([code](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/tokenization_utils_base.py#L2683)) has an argument `return_attention_mask` that cannot be forwarded from the `__call__` method of DataCollatorWithPadding.
Perhaps an easy way to implement the issue without breaking everything is to add to DataCollatorWithPadding an attribute `pad_attention_mask` (maybe set to `False` for compatibility with the current code) and pass `return_attention_mask=self.pad_attention_mask` in the call to `self.tokenizer.pad`.
The current docstring of the `pad` method from PreTrainedTokenizerBase is quite confusing (see [here](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/tokenization_utils_base.py#L2740)). It refers to the `return_outputs` attribute, which is not defined anywhere and is not used anywhere in the code. It is perhaps an obsolete feature and I suppose that the issue arose when it was removed.
I could submit a PR, but I suppose that this is best brought to the attention of @sgugger. If not, sorry for bothering you Sylvain.
| 02-08-2022 00:08:42 | 02-08-2022 00:08:42 | Could you show us the code that doesn't run as you expect? Because `DataCollatorWithPadding` definitely pads the `input_ids`, `attention_mask` and `token_type_ids`.<|||||>Thanks for attending this so fast, I really appreciate it. Well, this is one of those embarrassing moments... when I crafted a minimal example to reproduce the issue, it turned out that I could not. And trying the code that did not run yesterday... actually ran fine. Not sure what happened but this was definitely on my side. I'll close the issue with this comment and edit the initial post to clarify that there was in reality no issue.
One thing still, I could not find anything about the `return_outputs` attribute that appears in the documentation of `pad` ([here](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/tokenization_utils_base.py#L2740)). Does it refer to a real thing?
<|||||>I think that attribute existed long ago but I don't find it anywhere indeed. The default is determined by whether `"attention_mask"` is in the tokenizer `model_input_names` attribute.
Do you want to make a PR to fix the doc?<|||||>Sure! I'd be happy to make something useful out of this issue. I will submit a PR shortly.<|||||>@sgugger, while gathering information to edit the doc in a meaningful way, I came across puzzling features for the `DataCollatorForTokenClassification`. It's a different issue, but since I am about to update the doc I can maybe take care of this as well.
Below is a minimum example of my typical pipeline. I crafted this example on DNA data in order to have a tiny alphabet.
``` python
# Train the tokenizer (usually in a first file).
import tokenizers
tokenizer = tokenizers.Tokenizer(tokenizers.models.WordPiece(unk_token="[UNK]"))
trainer = tokenizers.trainers.WordPieceTrainer(
vocab_size=16, special_tokens=["[PAD]", "[CLS]", "[SEP]", "[UNK]", "[MASK]"]
)
tokenizer.post_processor = tokenizers.processors.TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[
("[CLS]", 1),
("[SEP]", 2),
],
)
tokenizer.train(files=["./text_for_tokenizer.txt"], trainer=trainer)
tokenizer.save("tokenizer_model.json")
# Train the Transformer (usually in a second file).
import transformers
tokenizer = transformers.PreTrainedTokenizerFast(
tokenizer_file="tokenizer_model.json",
bos_token="[CLS]", eos_token="[SEP]", unk_token="[UNK]", sep_token="[SEP]",
pad_token="[PAD]", cls_token="[CLS]", mask_token="[MASK]")
def tokenize_function(examples):
return tokenizer(
[" ".join(x.upper()) for x in examples["seq"]],
return_token_type_ids=False,
return_attention_mask=False,
return_special_tokens_mask=True,
truncation=True,
max_length=36
)
from datasets import load_dataset
raw_dataset = load_dataset('json', data_files={"train": ["./training_data.json"]})
texts = raw_dataset.map(tokenize_function, batched=True)
texts = texts.remove_columns(["seq"])
#texts.set_format("torch") # <==== This line breaks everything!!
config = transformers.BertConfig()
model = transformers.BertForTokenClassification(config)
#data_collator = transformers.DataCollatorForTokenClassification(tokenizer)
class dbgCollator(transformers.DataCollatorForTokenClassification):
def torch_call(self, features):
output = super().torch_call(features)
print(output)
return output
data_collator = dbgCollator(tokenizer)
training_args = transformers.TrainingArguments(
output_dir = ".",
num_train_epochs = 1,
per_device_train_batch_size = 2,
)
trainer = transformers.Trainer(
model = model,
args = training_args,
data_collator = data_collator,
train_dataset = texts["train"],
)
trainer.train()
```
There are two data files. The first is `text_for_the_tokenizer.txt`
```
G A T C
```
The second is `training_data.json`
```
{"seq": "A T A T A T", "labels": [0, 1, 0, 1, 0, 1]}
{"seq": "G C T C T", "labels": [0, 1, 1, 1, 1]}
```
With this you should be able to run the code and observe the following (I have replaced `DataCollatorForTokenClassification` by a debug version that prints its output):
1. After tokenization, special tokens have been added to the input ids.
2. The labels were not shifted, even though `special_tokens_mask` was available to the collator.
3. The script runs fine and the user does not get notified that this happened.
So how is the user supposed to deal with this? Do they have to manually add the label padding value at the time they tokenize? Also, shouldn't there be a warning that labels have been assigned to special tokens?
4. If you uncomment the line `texts.set_format("torch")`, there seems to be no way to make the collator work. It suggests padding and truncating, which makes it really hard to discover that this line causes the error (I discovered it by accident).
Do you have an idea of what the problem is and how to document it properly?<|||||>Your `tokenize` function should also align the labels: a word can be split in several tokens and so you have to duplicate the labels for that word, so the `input_ids` and `labels` are the same length. See the section on [token classification](https://huggingface.co/course/chapter7/2?fw=pt) in the course.
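Roughly, the usual pattern is something like this (a sketch; column names are illustrative, and `-100` marks positions the loss ignores):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # any fast tokenizer

def tokenize_and_align_labels(examples):
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    aligned_labels = []
    for i, labels in enumerate(examples["labels"]):
        word_ids = tokenized.word_ids(batch_index=i)
        # special tokens get -100 (ignored by the loss); sub-word pieces reuse their word's label
        aligned_labels.append([-100 if w is None else labels[w] for w in word_ids])
    tokenized["labels"] = aligned_labels
    return tokenized
```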
The data collator is just there to pad the input IDs and labels once this has been done.<|||||>Thanks @sgugger, it's very clear. I wish I had discovered the tutorial earlier, it's very well made.
The docstring of `DataCollatorForTokenClassification` is succinct and matter-of-fact but maybe I can add the info somewhere that the labels and the tokens are assumed to be aligned.
What I find confusing is that the collator seems to assume that labels and tokens are aligned (and does not trigger a warning) even if they have different lengths. If the `input_ids` are longer than the `labels` it works fine, but if the `labels` are longer then it seems to crash. There is probably a common case where the `input_ids` are longer, but I cannot figure out why this should make sense. Would you know why this is the case?
And what about point 4 above? If you want I can look into what causes the crash when the format is set to `"torch"`. I am not sure I will fully get it but at least I can try.<|||||>We could issue a warning if we detect the labels and input IDs don't have the same length indeed.<|||||>Wow, this is information that I definitely needed :)
So if you want to pass along other entries from tokenized data, say word-level masks, you can add them to the tokenizer like this:
`tokenizer.model_input_names.append("prediction_mask")`
Then the datacollator will take care of padding. |
transformers | 15,552 | closed | Improved documentation/blog post around `generate()` | Hi there.
I have seen a bit of confusion in the community around the beam search procedure and the returned classes, particularly the `scores` attribute. This seemed to be the motivation behind the recent [PR](https://github.com/huggingface/transformers/pull/14654) that aims to improve generation to make things easier for the user.
I am finding however that I still have a lot of questions that aren’t answered by the current documentation. I was thinking an updated blog post about `generate()` (as mused about [here](https://github.com/huggingface/transformers/pull/14654#issuecomment-1020503207)) would certainly be very useful to myself and to other users.
Here are my main questions about the process that I’m hoping could be covered:
* what exactly are `scores`, and how do they differ from log probabilities? What was changed in the pull request about the way they are calculated?
* how are `scores` different from the output of the new function `model.compute_transition_beam_scores()`?
* how do I interpret the new returned `beam_indicies`?
* what exactly are `sequence_scores`? What's the motivation behind their calculation (e.g. why is `length_penalty` involved)?
In addition, I have some questions about behaviours I have noticed:
* when I sum up the transition scores from `model.compute_transition_beam_scores()`, I find that the ordering of the returned sequences isn't the same as that of `sequence_scores`, which means that the most probable sequences aren't those being returned first - why is this?
* like some other people I am using gradients of the generate function by removing the `no_grad` decorator from the generate function. I notice when I do this I can get gradients when calling `model.compute_transition_beam_scores()` but not with the `sequence_scores` output - why is this?
Finally I would like to thank @patrickvonplaten for all the hard work he has put into the beam search process, and to give my apologies if this post comes off a little entitled.
| 02-07-2022 23:40:55 | 02-07-2022 23:40:55 | Hey @puzzler10,
That's great feedback - thanks a lot :-) It helps me a lot to find good content for an updated version of the `generate()` blog post and some documentation. I hope to be able to finish such a blog post in ~1 month!
Maybe we can use this issue as a place to collect feedback on what people would like to understand better with regards to `generate()` - I'll post this on Twitter if you don't mind.
To answer some of your questions:
> When I sum up the transition scores from model.compute_transition_beam_scores(), I find that the ordering of the returned sequences isn’t the same as that of sequence_scores , which means that the most probable sequences aren’t those being returned first - why is this?
This sounds like a bug to me then or a bad design. Let me look into it.
> Like some other people I am using gradients of the generate function by removing the no_grad decorator from the generate function. I notice when I do this I can get gradients when calling model.compute_transition_beam_scores() but not with the sequence_scores output - why is this?
I've never used `generate()` to compute gradients - do lots of people use this now?
<|||||>Thanks for looking into this!
> Maybe we can use this issue as some kind of feedback what people would like to understand better with regards to generate() - I'll post this on Twitter if you don't mind.
No worries from me.
> I've never used generate() to compute gradients - do lots of people use this now?
It's something I've seen a bit through github issues and forum posts (e.g. [1](https://github.com/huggingface/transformers/issues/6951), [2](https://github.com/huggingface/transformers/issues/3720)). My particular use case is that I use the gradients to fine-tune a transformer to maximise some particular reward function via a policy gradient algorithm. To get the gradients on I use this work-around to create a modified version of `generate`:
```
from undecorated import undecorated
from types import MethodType
generate_with_grad = undecorated(model.generate)
model.generate_with_grad = MethodType(generate_with_grad, model)
```
which seems to work well for me.
<|||||>@patrickvonplaten are there any news on this or the blogpost? I'm trying to use the scores and also find myself with the same questions about the differences between the scores `model.compute_transition_beam_scores()`, `sequence_scores` and `scores`<|||||>Got caught up in another project :-/ But I'm thinking about how to structure the new blogpost<|||||>Hey! I wouldn't mind helping out planning out or kick-starting the blogpost but I do have some questions regarding the internals and how the compute_transition_probabilities relate to beam sequence scores!<|||||>Cool! It'd already be very helpful if you could drop your questions here - I'll try to answer them in the blog post then :-)<|||||>Linking some more feedback here https://github.com/huggingface/transformers/issues/10012#issuecomment-1060219687<|||||>> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
>
> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
unstale<|||||>> undecorated
It seems not to work now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sadly I don't think I'll find the time in the near future to write another blog post here. I'll add a tutorial to the docs, but probably won't manage to rewrite the whole blog post. Linking the best resource there is so far: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Thanks for looking into this!
>
> > Maybe we can use this issue as some kind of feedback what people would like to understand better with regards to generate() - I'll post this on Twitter if you don't mind.
>
> No worries from me.
>
> > I've never used generate() to compute gradients - do lots of people use this now?
>
> It's something I've seen a bit through github issues and forum posts (e.g. [1](https://github.com/huggingface/transformers/issues/6951), [2](https://github.com/huggingface/transformers/issues/3720)). My particular use case is that I use the gradients to fine-tune a transformer to maximise some particular reward function via a policy gradient algorithm. To get the gradients on I use this work-around to create a modified version of `generate`:
>
> ```
> from undecorated import undecorated
> from types import MethodType
>
> generate_with_grad = undecorated(model.generate)
> model.generate_with_grad = MethodType(generate_with_grad, model)
> ```
>
> which seems to work well for me.
Hi Tom @puzzler10, thanks for the discussion, it helped me a lot! I have a follow-up question: when trying to optimize a reward function via policy gradient, one can run into issues where gradients cannot be backpropagated due to in-place operations, as in [this thread](https://github.com/huggingface/transformers/issues/13954). Do you have any elegant solutions or tips for implementing policy gradients with the Hugging Face code base? |
transformers | 15,551 | closed | [trainer docs] document how to select specific gpus | This PR addresses a user question of how to control which GPUs are used with the HF Trainer.
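For context, the usual mechanism for this is the `CUDA_VISIBLE_DEVICES` environment variable (e.g. `CUDA_VISIBLE_DEVICES=0,2 python train.py`, with an illustrative script name); the snippet below is only a sketch of the same idea and not taken from the documentation added in this PR.
```
import os

# Expose only GPUs 0 and 2 to this process before torch initializes CUDA;
# the Trainer will then see them as cuda:0 and cuda:1.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"
```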
@sgugger, @LysandreJik | 02-07-2022 22:08:05 | 02-07-2022 22:08:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,550 | open | DeepMind Retro | # 🌟 New model addition
Basically a retrieval-augmented model like RAG, but without expensive end-to-end retriever training.
## Open source status
* [ ] the model implementation is available: Not available
* [ ] the model weights are available: Not available
* [ ] who are the authors: DeepMind (paper: https://arxiv.org/pdf/2112.04426.pdf)
| 02-07-2022 20:46:59 | 02-07-2022 20:46:59 | Hi,
No weights are available at the moment. I'd suggest to take a look at this repo: https://github.com/lucidrains/RETRO-pytorch<|||||>Yeah, I saw it. Anyways I am thinking of combining Retro with RAG model. The results seem very promising also the model is very practical as recent work suggests end-to-end retriever training is very expensive and impractical.
<|||||>I would love to get some help :)<|||||>> I would love to get some help :)
Hi! I am interested in this implementation and I can help for it!<|||||>@halixness do you think we should modify the RAG implementation ? <|||||>@shamanez @halixness @NielsRogge I would like to participate in the implementation, please let me know<|||||>Hi All,
I think we can use the RAG pipeline. But we need to implement the reader and its loss function.
We might be able to use [this](https://github.com/lucidrains/RETRO-pytorch).
any suggestions? <|||||>> I would love to get some help :)
I'd like to help as well!<|||||>yeah this model seems cool. feel like it's getting closer to true open source LLMs. anyone aware if there are similar open-source efforts as bloom to reproduce and publish this?<|||||>Hi @shamanez, very I'm interested in RETRO. How far along are you in an implementation?<|||||>I did some digging and apparently Nvidia is picking up this idea again. I found it via an [article](https://www.marktechpost.com/2023/04/22/this-ai-paper-from-nvidia-provides-the-recipe-to-reproduce-retro-up-to-9-5b-parameters-while-retrieving-a-text-corpus-with-330b-tokens/) from April '23. It links to their [GitHub repo](https://github.com/NVIDIA/Megatron-LM/tree/main/tools/retro) and the corresponding [paper](https://arxiv.org/pdf/2304.06762.pdf) by Nvidia. In this paper they describe how they reproduce RETRO including the generation of the pretraining corpus.
However, in the paper they mention that the implementation and the weights are not open-sourced, but just being able to regenerate the pretraining corpus would get us one step ahead. |
transformers | 15,549 | closed | PoC for a ProcessorMixin class | # What does this PR do?
This PR refactors some common saving/loading functionality of `Processor`s behind a `ProcessorMixin` class, which implements the `save_pretrained` and `from_pretrained` methods.
It's designed to work with main attributes (by default `feature_extractor` and `tokenizer`, but that list can be changed or extended in subclasses) that are then required by the init and on which `save_pretrained`/`from_pretrained` is called when saving or loading the processor.
It also supports having several classes for a given tokenizer (fast/not fast), so that feature is handled automatically. I showed how this works by refactoring two processors (I did not add the fast tokenizer to CLIP because there are some problems with it, but adding it is as easy as changing the `tokenizer_class`).
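To make the design a little more concrete, here is a heavily condensed, illustrative sketch of the idea (not the actual implementation; it skips the tuple-of-tokenizer-classes handling, validation and kwargs plumbing mentioned above):
```
import transformers


class ProcessorMixin:
    # Main attributes every processor is built from; subclasses can override or extend this.
    attributes = ["feature_extractor", "tokenizer"]

    def __init__(self, *args):
        # One positional argument per entry in `attributes`.
        for name, arg in zip(self.attributes, args):
            setattr(self, name, arg)

    def save_pretrained(self, save_directory):
        # Saving is simply delegated to each wrapped object.
        for name in self.attributes:
            getattr(self, name).save_pretrained(save_directory)

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
        # Each attribute is reloaded with its declared class, then the processor is rebuilt.
        args = []
        for name in cls.attributes:
            class_name = getattr(cls, f"{name}_class")
            attribute_class = getattr(transformers, class_name)
            args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
        return cls(*args)
```
A concrete processor then only needs to declare, for example, `feature_extractor_class` and `tokenizer_class` (or a slow/fast pair for the latter).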
I will refactor the other processors if the design suits everyone. | 02-07-2022 20:28:47 | 02-07-2022 20:28:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,548 | closed | Fine-tune blenderbot 2.0 to retrieve from my own documents instead of the internet | I have a database containing many documents, and I am trying to build a chatbot that replies to users from those documents.
Can I use this [approach (ONLY GOLD DOCS)](https://github.com/facebookresearch/ParlAI/tree/main/projects/blenderbot2/agents)? | 02-07-2022 20:26:01 | 02-07-2022 20:26:01 | Hi @BayanIssa! It would be nice if you could use the [forum](https://discuss.huggingface.co/) for such general questions; we use issues for bug reports and feature requests.
Also blenderbot 2.0 will soon be included in Transformers :) #12807<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,547 | closed | Model card reports wrong evaluation metrics when load_best_model_at_end=True | ## Environment info
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-1060-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
- Trainer: @sgugger
## Information
Model I am using: distilroberta-base
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Train a model using `Trainer()`, set `load_best_model_at_end=True` and add `EarlyStoppingCallback`
2. Use `trainer.push_to_hub()` to publish the model
Here's the full training config I was using:
```
import numpy as np
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

# output_dir, seed, clf (the model), ds_train and ds_valid are defined earlier in my script.

def simple_accuracy(preds, labels):
    return (preds == labels).mean()

def compute_metrics(p):
    preds = np.argmax(p.predictions, axis=1)
    return {"acc": simple_accuracy(preds, p.label_ids)}

# patience is used by save_total_limit below, so it has to be defined first.
patience = 5

training_args = TrainingArguments(
    output_dir=output_dir,
    do_train=True,
    do_eval=True,
    fp16=True,
    weight_decay=0.01,
    num_train_epochs=20,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=1,
    learning_rate=5e-5,
    warmup_steps=16,
    logging_steps=16,
    evaluation_strategy='epoch',
    save_strategy='epoch',
    save_total_limit=patience,
    load_best_model_at_end=True,
    metric_for_best_model='acc',
    greater_is_better=True,
    seed=seed,
    report_to='none'
)

trainer = Trainer(
    model=clf,
    args=training_args,
    train_dataset=ds_train,
    eval_dataset=ds_valid,
    compute_metrics=compute_metrics,
)

trainer.add_callback(EarlyStoppingCallback(patience, 1e-3))
```
Training progress:

However, in the model card the reported accuracy comes from the last checkpoint rather than from the best checkpoint:
> This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base)[](https://huggingface.co/valurank/distilroberta-offensive#model-description) on an unknown dataset. It achieves the following results on the evaluation set:
>
> Loss: 0.4526
> Acc: 0.8975
| 02-07-2022 17:11:58 | 02-07-2022 17:11:58 | The model card always reports the last metrics, there is no easy way to change that. In general, you should always do a last manual evaluation with `trainer.evaluate()` to have the correct info being populated in the model card, like we do in all our example scripts.<|||||>@sgugger thanks for the quick response. Good tip, I'll make sure to add a call to `evaluate`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
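For reference, a minimal sketch of the suggested workaround (run a final manual evaluation after training so the reloaded best model's metrics are the most recent ones when the card is generated; `trainer` is the one from the reproduction above):
```
trainer.train()

# With load_best_model_at_end=True the best checkpoint has been reloaded at this
# point, so a final evaluation records its metrics as the latest ones.
metrics = trainer.evaluate()

trainer.push_to_hub()
```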