repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 12,526 | closed | Really long training time on bert fine-tuning for classification | Hi, I am training a dataset with 700,000 samples. Basically they are just text with a binary label. What I am doing is
```python
model = transformers.BertModel.from_pretrained("uncased_bert")
outputs = model(input_ids=ids, attention_mask=mask, token_type_ids=token_type_ids)
loss = loss_fun(outputs, targets)
```
So for each piece of text, I used encode_plus(text) to get ids, mask, etc.
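For reference, a minimal sketch of the same flow with batched tokenization (the checkpoint name, the sequence-classification head and the dummy batch below are illustrative placeholders, not the exact setup above):
```python
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2).cuda()

texts = ["some example text"] * 32                 # one batch of raw strings
targets = torch.zeros(32, dtype=torch.long).cuda()  # binary labels

# tokenize the whole batch at once instead of calling encode_plus per text
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
batch = {k: v.cuda() for k, v in batch.items()}

outputs = model(**batch, labels=targets)  # the loss is computed internally
outputs.loss.backward()
```
Batching the tokenizer call and using the built-in classification head is usually noticeably faster than calling encode_plus once per text.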
I am using a batch size of 32 and a learning rate of 3e-5 (smaller is better according to some research), and I find this is taking really long: a few days, if I calculate correctly. I also found that with 2 GPUs each of them uses about 6 GB of memory, while with 1 GPU it takes 11 GB. Is there anything that I can do to speed this up? Thanks! | 07-06-2021 01:32:30 | 07-06-2021 01:32:30 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
transformers | 12,525 | closed | [debugging utils] minor doc improvements | This PR includes various minor doc improvements.
@sgugger | 07-06-2021 00:03:50 | 07-06-2021 00:03:50 | |
transformers | 12,524 | closed | [doc] DP/PP/TP/etc parallelism | This PR graduates the original notes from https://github.com/huggingface/transformers/issues/9766 combined with notes from
@anton-l https://github.com/huggingface/transformers/issues/10321#issuecomment-783543530, and adds some extra material, into a separate doc.
Fixes: https://github.com/huggingface/transformers/issues/9766
| 07-06-2021 00:00:56 | 07-06-2021 00:00:56 | |
transformers | 12,523 | closed | How much GPU memory is required for DistilBERT? | Looked through issues and searched on Google and can't seem to find an answer to this question.
Batch size is 1.
Thanks, | 07-05-2021 22:52:46 | 07-05-2021 22:52:46 | https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit#gid=0 |
transformers | 12,522 | closed | Keep getting an OOM when doing an evaluation | ## Environment info
- `transformers` version: 4.8.2
- Platform: Linux-5.8.0-55-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes cuda:0 only one GPU
- Using distributed or parallel set-up in script?: No
### Who can help
No idea.
Models:
- albert, bert
Library:
- trainer: @sgugger
## Information
The problem arises when evaluating a model. I train using a batch size of 64 and 80% of my data, but when I run an evaluation (600,000 data points) it ALWAYS gives an OOM at 94%, even with a batch size of one, and the memory allocation IS ALWAYS 3.75 GB no matter the batch size. I've deleted the .cache many times and still get the problem.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: which is a sequence classification of postal address (max 30 tokens).
## To reproduce
My script is based on the [token classification with trainer](https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py).
| 07-05-2021 22:45:44 | 07-05-2021 22:45:44 | Found [this](https://discuss.huggingface.co/t/cuda-out-of-memory-during-evaluation-but-training-is-fine/1783/2) explanation that fixed my problem.
i.e. Hugging Face keeps all the predictions on the GPU during evaluation. |
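The usual mitigation, for reference, is to periodically move accumulated predictions off the GPU during evaluation. A hedged sketch (the argument values are illustrative, not the exact configuration from this issue):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    per_device_eval_batch_size=64,
    eval_accumulation_steps=20,  # ship accumulated predictions to the CPU every 20 eval steps
)
```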
transformers | 12,521 | closed | Improve documentation of pooler_output in ModelOutput | # What does this PR do?
Improves the docstring for pooler_output in modeling_outputs.py – making it clearer, and opening its availability to a more generic use case than just the BERT family of models.
**Motivation**:
I was writing a `cls_pooler` for a sentence embeddings usage, and initially thought this is the CLS token output from the last layer – which is not the case, that would just be `last_hidden_state[0]`
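For reference, a small sketch of the difference (the checkpoint name is illustrative):
```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

cls_hidden = out.last_hidden_state[:, 0]  # the [CLS] token from the last layer
pooled = out.pooler_output                # that same vector passed through the pooler's Linear + Tanh
```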
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger | 07-05-2021 21:35:43 | 07-05-2021 21:35:43 | Thanks, you just need to run `make style` on your branch to fix the styling issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Thanks, you just need to run `make style` on your branch to fix the styling issue.
Oh I missed this message @sgugger – just noticed.
Created a fresh PR – https://github.com/huggingface/transformers/pull/13228 |
transformers | 12,520 | closed | [Wav2Vec2] Flax - Adapt wav2vec2 script | # What does this PR do?
Corrects the Wav2Vec2 script a bit
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-05-2021 20:12:33 | 07-05-2021 20:12:33 | |
transformers | 12,519 | closed | [Flax] Fix hybrid clip | # What does this PR do?
This PR fixes an issue with saving and loading `HybridCLIPConfig` that was causing failures when loading the model.
The config was expecting the `text_config_dict` and `vision_config_dict` arguments, but in the `to_dict` it was saved as `text_config` and `vision_config`. So the next time when `config` was loaded using `from_pretrained` the `model_type` key was missing from `text_config_dict` and `vision_config_dict` as we pop that in the init.
This PR renames `text_config_dict` and `vision_config_dict` to `text_config` and `vision_config` respectively. This is slightly breaking as we won't be able to load previously saved configs. A simple workaround for that is to re-initialize the config using
`from_text_vision_configs` method and resave them.
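A minimal sketch of that workaround, assuming the research-project modules are on the path, the original encoders were `roberta-base` and CLIP-ViT, and the checkpoint path is hypothetical:
```python
from transformers import CLIPVisionConfig, RobertaConfig
from configuration_hybrid_clip import HybridCLIPConfig
from modeling_hybrid_clip import FlaxHybridCLIP

# rebuild the config from the original text/vision configs, then resave the checkpoint
text_config = RobertaConfig.from_pretrained("roberta-base")
vision_config = CLIPVisionConfig.from_pretrained("openai/clip-vit-base-patch32")
config = HybridCLIPConfig.from_text_vision_configs(text_config, vision_config)

model = FlaxHybridCLIP.from_pretrained("path/to/old_checkpoint", config=config)
model.save_pretrained("path/to/old_checkpoint")  # resaves with the fixed config keys
```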
This PR also adds some usage examples for `FlaxHybridCLIP`.
Fixes #12513
(I should really add tests for this) | 07-05-2021 19:00:11 | 07-05-2021 19:00:11 | |
transformers | 12,518 | closed | Can hidden states be passed instead of input_ids or inputs_embeds in Transformers OpenAI GPT2? | I am working on an encoder decoder model which uses a fine tuned RoBERTa as the encoder and GPT2 as the decoder. Before passing the encoder context to the decoder, I am mixing it with some context from a different domain. This mixing module is a simple NN. Hence, I now want to pass these transformed hidden states to the GPT2 decoder to do decoding, and I will train the decoder and the mixer only, not the encoder. How can I pass these transformed hidden states to the GPT2 decoder instead of the `input_ids` or `inputs_embeds`? The shape of my transformed hidden states is `(n_layers, batch_size, 1, hidden_size)` and I am currently using `batch_size=1`. Any help will be appreciated. | 07-05-2021 18:27:00 | 07-05-2021 18:27:00 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>Thank you! I have asked my [question](https://discuss.huggingface.co/t/can-hidden-states-be-passed-instead-of-input-ids-or-inputs-embeds-in-transformers-openai-gpt2/8073) on the forum.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,517 | closed | MLM training fails with no validation file(same as #12406 for pytorch now) | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
## Information
Model I am using (Bert, XLNet ...): distilbert-base-cased
The problem arises when using: the official example scripts: (give details below)
The tasks I am working on is: MLM finetuning
## To reproduce
Steps to reproduce the behavior:
1. Just run the tensorflow examples
2. python3 ./transformers/examples/pytorch/language-modeling/run_mlm.py\
--model_name_or_path distilbert-base-cased \
--output_dir ./g \
--train_file "customdata.txt" \
3. The model fails with error message that no validation file is there.
## Expected behavior
it should use the validation split percentage parameter to divide the training set into training and eval samples. | 07-05-2021 17:37:34 | 07-05-2021 17:37:34 | @sgugger @patil-suraj ..
Checks done.. |
transformers | 12,516 | closed | [Flax] Fix another bug in logging steps | # What does this PR do?
Follow-up PR from #12515
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-05-2021 17:34:45 | 07-05-2021 17:34:45 | |
transformers | 12,515 | closed | [Flax] Correct logging steps flax | # What does this PR do?
Fixes a bug that was unnoticed in #12514
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-05-2021 17:20:36 | 07-05-2021 17:20:36 | |
transformers | 12,514 | closed | [Flax] Correct flax training scripts | # What does this PR do?
This PR adapts all language modeling scripts to keep `train_metrics` small enough to avoid OOM errors.
Would be awesome if @marcvanzee could take a look as well :-)
**Note**: By default `logging_steps` is set to 500.
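For reference, the pattern that keeps `train_metrics` small looks roughly like this (a hedged sketch with simplified names and signatures, not the exact diff; `train_loader` and `logging_steps` are assumed to exist as in the training scripts):
```python
train_metrics = []
for step, batch in enumerate(train_loader):
    state, train_metric = p_train_step(state, batch)
    train_metrics.append(train_metric)

    if (step + 1) % logging_steps == 0:
        # flush the accumulated metrics to TensorBoard and reset the list,
        # instead of holding a whole epoch of device arrays in memory
        write_metric(summary_writer, train_metrics, step)
        train_metrics = []
```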
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-05-2021 16:35:53 | 07-05-2021 16:35:53 | |
transformers | 12,513 | closed | Loading FlaxHybridCLIP trained model | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
Models:
- FlaxHybridCLIP
## Information
I am not sure about how to load a trained FlaxHybridCLIP model from a folder. We trained using [this](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects/hybrid_clip).
I tried: `FlaxHybridCLIP.from_text_vision_pretrained(PATH_TRAINED_MODEL, PATH_TRAINED_MODEL)`, but I got the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vinid/transformers/examples/research_projects/jax-projects/hybrid_clip/modeling_hybrid_clip.py", line 333, in from_text_vision_pretrained
text_config = AutoConfig.from_pretrained(text_model_name_or_path)
File "/home/raphaelp/transformers/src/transformers/models/auto/configuration_auto.py", line 452, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'hybrid-clip'
```
The folder (PATH_TRAINED_MODEL) contains the two following files:
- `config.json`
- `flax_model.msgpack`
Thank you :) :)
| 07-05-2021 14:49:41 | 07-05-2021 14:49:41 | Sorry, totally forgot to tag you @patrickvonplaten, @patil-suraj :)
Thanks :) <|||||>Hi @vinid
To load the trained model, or if you saved the `FlaxHybridCLIP` model using the `save_pretrained` method, you can directly use the `FlaxHybridCLIP.from_pretrained` method. Let me know if that works.<|||||>Thanks for your reply! :)
I am getting a new error now, seems like it is looking for something in the configs
```
>>> FlaxHybridCLIP.from_pretrained(PATH_TRAINED_MODEL)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/giuseppe/transformers/src/transformers/modeling_flax_utils.py", line 265, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
AttributeError: 'NoneType' object has no attribute 'from_pretrained'
```
<|||||>Aah, I see. That's actually a bug, I will work on a fix.
As a workaround could you try loading the config and then pass it to the `from_pretrained` ?
```python
config = HybridCLIPConfig.from_pretrained("...")
model = FlaxHybridCLIP.from_pretrained("...", config=config)
```<|||||>I think this works. Or at least it doesn't throw an error:
```python
import json

with open(path_to_config, 'r') as f:
    config_dict = json.load(f)
config_dict['vision_config']['model_type'] = 'clip'
config = HybridCLIPConfig(text_config_dict=config_dict['text_config'], vision_config_dict=config_dict['vision_config'])
model = FlaxHybridCLIP.from_pretrained(path_to_msgpack, config=config)
```<|||||>Fix is here #12519<|||||>This same issue is present for FlaxEncoderDecoderModel. Would it be possible for use the same fix?<|||||>Hi @timothybrooks
Could you open a new issue for `FlaxEncoderDecoderModel` and post the stack trace and code to reproduce there? Thanks! |
transformers | 12,512 | closed | Remove tf.roll wherever not needed | It was used in shift_right.
After this change the TF code is more similar to the PyTorch implementations.
Also, TF graphs are optimized (one node less).
# What does this PR do?
This change optimizes TF graphs and code without modifying the math.
Using roll is not necessary (probably most accelerators implement it as Slice + Concat).
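A hedged sketch of the idea (not the exact diff): `shift_right` can prepend the decoder start token with a slice + concat instead of `tf.roll`:
```python
import tensorflow as tf

def shift_tokens_right(input_ids: tf.Tensor, decoder_start_token_id: int) -> tf.Tensor:
    # a column of start tokens with the same dtype as input_ids
    start_tokens = tf.ones_like(input_ids[:, :1]) * decoder_start_token_id
    # drop the last token and prepend the start token; no tf.roll needed
    return tf.concat([start_tokens, input_ids[:, :-1]], axis=-1)
```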
## Who can review?
@patrickvonplaten, @LysandreJik | 07-05-2021 14:49:22 | 07-05-2021 14:49:22 | @szutenberg Thanks for this PR! The changes look good - is there anything else you want to edit, or are you happy for us to merge it as is?<|||||>Hi @Rocketknight1 ,
Thanks for the review. The change is ready for merge as it is.
Could you have a look at: https://github.com/huggingface/transformers/pull/12332 ? It resolves issues with bf16. We can fix fp16 in a separate PR (I didn't have time to debug it but gave some hints in comments and I think that starting from debugging the inference graph would be a good idea). |
transformers | 12,511 | closed | Add a warning for broken ProphetNet fine-tuning | This PR adds a warning for the broken ProphetNet fine-tuning (#9804). | 07-05-2021 13:27:27 | 07-05-2021 13:27:27 | |
transformers | 12,510 | closed | Fix order of state and input in Flax Quickstart README | # What does this PR do?
Correct the order of state and input for `flax.linen.apply()` in Flax Quickstart README
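For context, `flax.linen` expects the variables/state first and the model input second, e.g. (illustrative module and shapes):
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

model = nn.Dense(features=4)
x = jnp.ones((1, 8))

params = model.init(jax.random.PRNGKey(0), x)  # initialize the state
out = model.apply(params, x)                   # apply(state, input): state comes first
```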
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@patrickvonplaten @patil-suraj | 07-05-2021 13:07:30 | 07-05-2021 13:07:30 | Note: if the above correction is valid, then we'll need to update [this image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/lm_flax_inference.png) as well |
transformers | 12,509 | closed | `transformers-cli env` doesn't work out-of-the-box on v3-8 TPU | **Running `transformers-cli env` returns the following stack trace**
```bash
(gpt-2-german) christopher@t1v-n-97918dcd-w-0:~$ transformers-cli env
Traceback (most recent call last):
File "/home/christopher/gpt-2-german/bin/transformers-cli", line 33, in <module>
sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')())
File "/home/christopher/gpt-2-german/bin/transformers-cli", line 25, in importlib_load_entry_point
return next(matches).load()
File "/usr/lib/python3.8/importlib/metadata.py", line 77, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/christopher/transformers/src/transformers/commands/transformers_cli.py", line 23, in <module>
from .run import RunCommand
File "/home/christopher/transformers/src/transformers/commands/run.py", line 17, in <module>
from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline
File "/home/christopher/transformers/src/transformers/pipelines/__init__.py", line 26, in <module>
from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor
File "/home/christopher/transformers/src/transformers/models/auto/feature_extraction_auto.py", line 20, in <module>
from transformers import DeiTFeatureExtractor, Speech2TextFeatureExtractor, ViTFeatureExtractor
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/home/christopher/transformers/src/transformers/__init__.py", line 3060, in __getattr__
return super().__getattr__(name)
File "/home/christopher/transformers/src/transformers/file_utils.py", line 1890, in __getattr__
value = getattr(module, name)
File "/home/christopher/transformers/src/transformers/file_utils.py", line 1889, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/christopher/transformers/src/transformers/models/speech_to_text/__init__.py", line 82, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/christopher/transformers/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py", line 23, in <module>
import torchaudio.compliance.kaldi as ta_kaldi
File "/home/christopher/gpt-2-german/lib/python3.8/site-packages/torchaudio/__init__.py", line 13, in <module>
from torchaudio.backend import (
File "/home/christopher/gpt-2-german/lib/python3.8/site-packages/torchaudio/backend/__init__.py", line 2, in <module>
from . import utils
File "/home/christopher/gpt-2-german/lib/python3.8/site-packages/torchaudio/backend/utils.py", line 7, in <module>
from . import (
File "/home/christopher/gpt-2-german/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py", line 11, in <module>
import soundfile
File "/home/christopher/gpt-2-german/lib/python3.8/site-packages/soundfile.py", line 142, in <module>
raise OSError('sndfile library not found')
OSError: sndfile library not found
```
## Environment info
- `transformers` version: '4.9.0.dev0' (editable pip install from `git` source)
- Platform: TPU v3-8 (Operating System: Ubuntu 20.04.2 LTS, Kernel: Linux 5.4.0-1043-gcp, Architecture: x86-64)
- Python version: Python 3.8.10 [GCC 9.4.0] on linux
- PyTorch version (GPU?): '1.9.0+cu102' (extras['all'])
- Tensorflow version (GPU?): '2.5.0' (extras['all'])
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten and @patil-suraj
## Information
Model I am using: roBERTa
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the `env` subcommand on a v3-8 TPU
## Expected behavior
The environment information to be printed out
| 07-05-2021 13:03:53 | 07-05-2021 13:03:53 | Thank you for reporting this!
@LysandreJik do we need all `dev` dependencies for `transformers-cli` ?
The above error is caused by missing `sndfile`, which is required for `torchaudio` or `soundfile`.
As a workaround, you could install `sndfile`:
```
sudo apt-get install libsndfile1
```
<|||||>Not 100% sure, but I think the issue comes from having `torchaudio` installed but not `sndfile`.
It's not required to have all the `dev` dependencies for `transformers-cli`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,508 | closed | Our model works well locally! But when we upload it, it doesn't work and shows irrelevant performance! | We tried every possible way. We followed the uploading steps perfectly. We have deleted and re-uploaded several times. It performs perfectly locally! When we download it after uploading, it shows this mysterious problem. | 07-05-2021 12:29:11 | 07-05-2021 12:29:11 | Do you have a link to your model?<|||||>
Yes ! can you tell something about tokenizer ?
|
transformers | 12,506 | closed | [Flax] Non working model when exporting to Huggingface | I have trained a RoBERTa base Norwegian according to instructions given at https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling#masked-language-modeling.
The final mlm accuracy is 0.63, indicating a working model.
I am trying to load the model and export it to PyTorch (or TF) to use the inference widget on Hugging Face.
The following code runs without errors:
```python
from transformers import AutoTokenizer, RobertaForMaskedLM

model = RobertaForMaskedLM.from_pretrained('model_dir', from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('model_dir')
model.save_pretrained('.')
tokenizer.save_pretrained('.')
```
Example widget here: https://huggingface.co/pere/norwegian-roberta-base?text=Dette+er+en+%3Cmask%3E.
The outputs make absolutely no sense. What is the correct way of exporting a Flax model (with and without the MLM head)? | 07-05-2021 10:36:28 | 07-05-2021 10:36:28 | cc @patrickvonplaten @patil-suraj <|||||>Hi @peregilk, did you save the flax RoBERTa model using the `FlaxRobertaModel` class or the `FlaxRobertaForMaskedLM` class?
I tried loading the flax model with
```python
fx_model = FlaxRobertaForMaskedLM.from_pretrained("pere/norwegian-roberta-base")
```
and it gives this warning
```
Some weights of FlaxRobertaForMaskedLM were not initialized from the model checkpoint at pere/norwegian-roberta-base and are newly initialized: {('lm_head', 'dense', 'kernel'), ('lm_head', 'layer_norm', 'bias'), ('lm_head', 'layer_norm', 'scale'), ('lm_head', 'bias'), ('lm_head', 'dense', 'bias')}
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
which means the model was not saved with `FlaxRobertaForMaskedLM`, so the `lm_head` is missing; in that case it is randomly initialized, which might explain the above behavior.
Please make sure you use the `*ForMaskedLM` class to save, load MLM models. Thanks!<|||||>Thanks for the feedback @patil-suraj. I think you are absolutely right that the uploaded model does not contain the LM head. The issue is then probably related to the loading.
I have tried both these alternatives:
```
model = RobertaForMaskedLM.from_pretrained('model_dir', from_flax=True)
model = FlaxRobertaForMaskedLM.from_pretrained('model_dir')
```
Both giving roughly the same "random" result. I actually notice errors from both these methods when loading. Slightly different, but both pointing at the lm head not being loaded correctly:
```
- This IS NOT expected if you are initializing RobertaForMaskedLM from a Flax model that you expect to be exactly identical (e.g. initializing a BertForSequenceClassification model from a FlaxBertForSequenceClassification model).
```
```
Some weights of FlaxRobertaForMaskedLM were not initialized from the model checkpoint at . and are newly initialized: {('lm_head', 'layer_norm', 'scale'), ('lm_head', 'bias'), ('lm_head', 'dense', 'kernel'), ('lm_head', 'layer_norm', 'bias'), ('lm_head', 'dense', 'bias')}
```
This is the version that is currently pushed, and you are probably right the head is not included.
How can I load the Flax model with the LM head?
<|||||>Related: How do I check that `flax_model.msgpack` actually is saved with the LM head?<|||||>@peregilk - it seems like the uploaded flax model weights are not correct. When running:
```python
from transformers import FlaxRobertaForMaskedLM
model = FlaxRobertaForMaskedLM.from_pretrained("pere/norwegian-roberta-base")
```
The lm weights are missing. <|||||>I think this commit: https://huggingface.co/pere/norwegian-roberta-base/commit/163f3993e02f514ce29901020a7ec73958fb415b was where they were deleted. Can you revert this change and instead use the Flax model weights from this commit: https://huggingface.co/pere/norwegian-roberta-base/commit/506fb71065742c28f1efe8ccb9cae48ddcc562ad .
If you convert the flax weights of this commit: https://huggingface.co/pere/norwegian-roberta-base/commit/506fb71065742c28f1efe8ccb9cae48ddcc562ad to a `RobertaForMaskedLM` pytorch model, the widget should work correctly<|||||>@patrickvonplaten. Awesome! Works perfectly! I must have overwritten this in my first attempts. Did not realise that loading and saving FlaxRobertaModel removes the LM head. But it makes perfectly sense that it does.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,505 | closed | Error while running run_clm_flax.py training script | Ran the run_clm_flax script with the following parameters:
```bash
./run_clm_flax.py \
--output_dir="${MODEL_DIR}" \
--model_type="gpt2" \
--config_name="${MODEL_DIR}" \
--tokenizer_name="${MODEL_DIR}" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_sv" \
--do_train --do_eval \
--block_size="512" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="64" \
--learning_rate="5e-3" --warmup_steps="1000" \
--adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \
--overwrite_output_dir \
--num_train_epochs="20" \
--push_to_hub
```
The script ran for one epoch, around 7 hours, before crashing:
```
Traceback (most recent call last):
File "./run_clm_flax.py", line 625, in <module>
main()
File "./run_clm_flax.py", line 572, in main
state, train_metric = p_train_step(state, batch)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 183, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/_src/api.py", line 1647, in f_pmapped
out = pxla.xla_pmap(
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/core.py", line 1620, in bind
return call_bind(self, fun, *args, **params)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/core.py", line 1551, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/core.py", line 1623, in process
return trace.process_map(self, fun, tracers, params)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/core.py", line 606, in process_call
return primitive.impl(f, *tracers, **params)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 637, in xla_pmap_impl
return compiled_fun(*args)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1152, in execute_replicated
out_bufs = compiled.execute_sharded_on_local_devices(input_bufs)
jax._src.traceback_util.UnfilteredStackTrace: RuntimeError: Resource exhausted: Attempting to allocate 121.12M. That was not possible. There are 127.34M free. Due to fragmentation, the largest contiguous region of free memory is 120.12M.; (0x0x0_HBM0): while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well).
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "./run_clm_flax.py", line 625, in <module>
main()
File "./run_clm_flax.py", line 572, in main
state, train_metric = p_train_step(state, batch)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1152, in execute_replicated
out_bufs = compiled.execute_sharded_on_local_devices(input_bufs)
RuntimeError: Resource exhausted: Attempting to allocate 121.12M. That was not possible. There are 127.34M free. Due to fragmentation, the largest contiguous region of free memory is 120.12M.; (0x0x0_HBM0): while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well).
```
Here is the training script file.
#!/usr/bin/env python
# coding=utf-8
# Copyright 2021 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Pre-training/Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) on a text file or a dataset.
Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
https://huggingface.co/models?filter=causal-lm
"""
# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments.
import logging
import math
import os
import sys
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable, Optional
import datasets
from datasets import Dataset, load_dataset
from tqdm import tqdm
import jax
import jax.numpy as jnp
import optax
import transformers
from flax import jax_utils, traverse_util
from flax.jax_utils import unreplicate
from flax.training import train_state
from flax.training.common_utils import get_metrics, onehot, shard, shard_prng_key
from transformers import (
CONFIG_MAPPING,
FLAX_MODEL_FOR_CAUSAL_LM_MAPPING,
AutoConfig,
AutoTokenizer,
FlaxAutoModelForCausalLM,
HfArgumentParser,
TrainingArguments,
is_tensorboard_available,
)
from transformers.testing_utils import CaptureLogger
logger = logging.getLogger(__name__)
# Cache the result
has_tensorboard = is_tensorboard_available()
if has_tensorboard:
try:
from flax.metrics.tensorboard import SummaryWriter
except ImportError as ie:
has_tensorboard = False
print(f"Unable to display metrics through TensorBoard because some package are not installed: {ie}")
else:
print(
"Unable to display metrics through TensorBoard because the package is not installed: "
"Please run pip install tensorboard to enable."
)
MODEL_CONFIG_CLASSES = list(FLAX_MODEL_FOR_CAUSAL_LM_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
"""
model_name_or_path: Optional[str] = field(
default=None,
metadata={
"help": "The model checkpoint for weights initialization."
"Don't set if you want to train a model from scratch."
},
)
model_type: Optional[str] = field(
default=None,
metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
dtype: Optional[str] = field(
default="float32",
metadata={
"help": "Floating-point format in which the model weights should be initialized and trained. Choose one of `[float32, float16, bfloat16]`."
},
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
dataset_name: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
validation_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
validation_split_percentage: Optional[int] = field(
default=5,
metadata={
"help": "The percentage of the train set used as validation set in case there's no validation split"
},
)
block_size: Optional[int] = field(
default=None,
metadata={
"help": "Optional input sequence length after tokenization. "
"The training dataset will be truncated in block of this size for training. "
"Default to the model max input length for single sentence inputs (take into account special tokens)."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
def __post_init__(self):
if self.dataset_name is None and self.train_file is None and self.validation_file is None:
raise ValueError("Need either a dataset name or a training/validation file.")
else:
if self.train_file is not None:
extension = self.train_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file."
if self.validation_file is not None:
extension = self.validation_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file."
class TrainState(train_state.TrainState):
dropout_rng: jnp.ndarray
def replicate(self):
return jax_utils.replicate(self).replace(dropout_rng=shard_prng_key(self.dropout_rng))
def data_loader(rng: jax.random.PRNGKey, dataset: Dataset, batch_size: int, shuffle: bool = False):
"""
Returns batches of size `batch_size` from truncated `dataset`, sharded over all local devices.
Shuffle batches if `shuffle` is `True`.
"""
steps_per_epoch = len(dataset) // batch_size
if shuffle:
batch_idx = jax.random.permutation(rng, len(dataset))
else:
batch_idx = jnp.arange(len(dataset))
batch_idx = batch_idx[: steps_per_epoch * batch_size] # Skip incomplete batch.
batch_idx = batch_idx.reshape((steps_per_epoch, batch_size))
for idx in batch_idx:
batch = dataset[idx]
batch = {k: jnp.array(v) for k, v in batch.items()}
batch = shard(batch)
yield batch
def write_metric(summary_writer, train_metrics, eval_metrics, train_time, step):
summary_writer.scalar("train_time", train_time, step)
train_metrics = get_metrics(train_metrics)
for key, vals in train_metrics.items():
tag = f"train_{key}"
for i, val in enumerate(vals):
summary_writer.scalar(tag, val, step - len(vals) + i + 1)
for metric_name, value in eval_metrics.items():
summary_writer.scalar(f"eval_{metric_name}", value, step)
def create_learning_rate_fn(
train_ds_size: int, train_batch_size: int, num_train_epochs: int, num_warmup_steps: int, learning_rate: float
) -> Callable[[int], jnp.array]:
"""Returns a linear warmup, linear_decay learning rate function."""
steps_per_epoch = train_ds_size // train_batch_size
num_train_steps = steps_per_epoch * num_train_epochs
warmup_fn = optax.linear_schedule(init_value=0.0, end_value=learning_rate, transition_steps=num_warmup_steps)
decay_fn = optax.linear_schedule(
init_value=learning_rate, end_value=0, transition_steps=num_train_steps - num_warmup_steps
)
schedule_fn = optax.join_schedules(schedules=[warmup_fn, decay_fn], boundaries=[num_warmup_steps])
return schedule_fn
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty."
"Use --overwrite_output_dir to overcome."
)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
# Setup logging, we only want one process per machine to log things on the screen.
logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR)
if jax.process_index() == 0:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# Set the verbosity to info of the Transformers logger (on main process only):
logger.info(f"Training/evaluation parameters {training_args}")
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
# In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if data_args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
dataset = load_dataset(
data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir, keep_in_memory=False
)
if "validation" not in dataset.keys():
dataset["validation"] = load_dataset(
data_args.dataset_name,
data_args.dataset_config_name,
split=f"train[:{data_args.validation_split_percentage}%]",
cache_dir=model_args.cache_dir,
)
dataset["train"] = load_dataset(
data_args.dataset_name,
data_args.dataset_config_name,
split=f"train[{data_args.validation_split_percentage}%:]",
cache_dir=model_args.cache_dir,
)
else:
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
if data_args.validation_file is not None:
data_files["validation"] = data_args.validation_file
extension = data_args.train_file.split(".")[-1]
if extension == "txt":
extension = "text"
dataset = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if model_args.config_name:
config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir)
elif model_args.model_name_or_path:
config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
else:
config = CONFIG_MAPPING[model_args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer
)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer
)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
if model_args.model_name_or_path:
model = FlaxAutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path, config=config, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype)
)
else:
model = FlaxAutoModelForCausalLM.from_config(
config, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype)
)
# Preprocessing the datasets.
# First we tokenize all the texts.
if training_args.do_train:
column_names = dataset["train"].column_names
else:
column_names = dataset["validation"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]
# since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function
tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base")
def tokenize_function(examples):
with CaptureLogger(tok_logger) as cl:
output = tokenizer(examples[text_column_name])
# clm input could be much much longer than block_size
if "Token indices sequence length is longer than the" in cl.out:
tok_logger.warning(
"^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits before being passed to the model."
)
return output
tokenized_datasets = dataset.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
if data_args.block_size is None:
block_size = tokenizer.model_max_length
if block_size > config.max_position_embeddings:
logger.warning(
f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
"Picking 1024 instead. You can change that default value by passing --block_size xxx."
)
block_size = 1024
else:
if data_args.block_size > tokenizer.model_max_length:
logger.warning(
f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model"
f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
)
block_size = min(data_args.block_size, tokenizer.model_max_length)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
# Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder
# for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=data_args.preprocessing_num_workers,
load_from_cache_file=not data_args.overwrite_cache,
)
if training_args.do_train:
if "train" not in tokenized_datasets:
raise ValueError("--do_train requires a train dataset")
train_dataset = lm_datasets["train"]
if data_args.max_train_samples is not None:
train_dataset = train_dataset.select(range(data_args.max_train_samples))
if training_args.do_eval:
if "validation" not in tokenized_datasets:
raise ValueError("--do_eval requires a validation dataset")
eval_dataset = lm_datasets["validation"]
if data_args.max_eval_samples is not None:
eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
# Enable tensorboard only on the master node
if has_tensorboard and jax.process_index() == 0:
summary_writer = SummaryWriter(log_dir=Path(training_args.output_dir))
# Initialize our training
rng = jax.random.PRNGKey(training_args.seed)
rng, dropout_rng = jax.random.split(rng)
# Store some constant
num_epochs = int(training_args.num_train_epochs)
train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count()
eval_batch_size = int(training_args.per_device_eval_batch_size) * jax.device_count()
steps_per_epoch = len(train_dataset) // train_batch_size
total_train_steps = steps_per_epoch * num_epochs
# Create learning rate schedule
linear_decay_lr_schedule_fn = create_learning_rate_fn(
len(train_dataset),
train_batch_size,
training_args.num_train_epochs,
training_args.warmup_steps,
training_args.learning_rate,
)
# We use Optax's "masking" functionality to not apply weight decay
# to bias and LayerNorm scale parameters. decay_mask_fn returns a
# mask boolean with the same structure as the parameters.
# The mask is True for parameters that should be decayed.
# Note that this mask is specifically adapted for FlaxGPT2.
# For other models, one should correct the layer norm parameter naming
# accordingly.
def decay_mask_fn(params):
flat_params = traverse_util.flatten_dict(params)
flat_mask = {
path: (path[-1] != "bias" and path[-2:] not in [("ln_1", "scale"), ("ln_2", "scale"), ("ln_f", "scale")])
for path in flat_params
}
return traverse_util.unflatten_dict(flat_mask)
# create adam optimizer
adamw = optax.adamw(
learning_rate=linear_decay_lr_schedule_fn,
b1=training_args.adam_beta1,
b2=training_args.adam_beta2,
eps=training_args.adam_epsilon,
weight_decay=training_args.weight_decay,
mask=decay_mask_fn,
)
# Setup train state
state = TrainState.create(apply_fn=model.__call__, params=model.params, tx=adamw, dropout_rng=dropout_rng)
def loss_fn(logits, labels):
shift_logits = logits[..., :-1, :]
shift_labels = labels[..., 1:]
loss = optax.softmax_cross_entropy(shift_logits, onehot(shift_labels, shift_logits.shape[-1]))
return loss.mean()
# Define gradient update step fn
def train_step(state, batch):
dropout_rng, new_dropout_rng = jax.random.split(state.dropout_rng)
def compute_loss(params):
labels = batch.pop("labels")
logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0]
loss = loss_fn(logits, labels)
return loss
grad_fn = jax.value_and_grad(compute_loss)
loss, grad = grad_fn(state.params)
grad = jax.lax.pmean(grad, "batch")
new_state = state.apply_gradients(grads=grad, dropout_rng=new_dropout_rng)
metrics = {"loss": loss, "learning_rate": linear_decay_lr_schedule_fn(state.step)}
metrics = jax.lax.pmean(metrics, axis_name="batch")
return new_state, metrics
# Define eval fn
def eval_step(params, batch):
labels = batch.pop("labels")
logits = model(**batch, params=params, train=False)[0]
loss = loss_fn(logits, labels)
# summarize metrics
metrics = {"loss": loss}
metrics = jax.lax.pmean(metrics, axis_name="batch")
return metrics
# Create parallel version of the train and eval step
p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,))
p_eval_step = jax.pmap(eval_step, "batch")
# Replicate the train state on each device
state = state.replicate()
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {num_epochs}")
logger.info(f" Instantaneous batch size per device = {training_args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel & distributed) = {train_batch_size}")
logger.info(f" Total optimization steps = {total_train_steps}")
train_time = 0
epochs = tqdm(range(num_epochs), desc=f"Epoch ... (1/{num_epochs})", position=0)
for epoch in epochs:
# ======================== Training ================================
train_start = time.time()
# Create sampling rng
rng, input_rng = jax.random.split(rng)
train_metrics = []
# Generate an epoch by shuffling sampling indices from the train dataset
train_loader = data_loader(input_rng, train_dataset, train_batch_size, shuffle=True)
steps_per_epoch = len(train_dataset) // train_batch_size
# train
for _ in tqdm(range(steps_per_epoch), desc="Training...", position=1, leave=False):
batch = next(train_loader)
state, train_metric = p_train_step(state, batch)
train_metrics.append(train_metric)
train_time += time.time() - train_start
train_metric = unreplicate(train_metric)
epochs.write(
f"Epoch... ({epoch + 1}/{num_epochs} | Loss: {train_metric['loss']}, Learning Rate: {train_metric['learning_rate']})"
)
# ======================== Evaluating ==============================
eval_metrics = []
eval_loader = data_loader(input_rng, eval_dataset, eval_batch_size)
eval_steps = len(eval_dataset) // eval_batch_size
for _ in tqdm(range(eval_steps), desc="Evaluating...", position=2, leave=False):
# Model forward
batch = next(eval_loader)
metrics = p_eval_step(state.params, batch)
eval_metrics.append(metrics)
# normalize eval metrics
eval_metrics = get_metrics(eval_metrics)
eval_metrics = jax.tree_map(jnp.mean, eval_metrics)
try:
eval_metrics["perplexity"] = math.exp(eval_metrics["loss"])
except OverflowError:
eval_metrics["perplexity"] = float("inf")
# Print metrics and update progress bar
desc = f"Epoch... ({epoch + 1}/{num_epochs} | Eval Loss: {eval_metrics['loss']} | Eval Perplexity: {eval_metrics['perplexity']})"
epochs.write(desc)
epochs.desc = desc
# Save metrics
if has_tensorboard and jax.process_index() == 0:
cur_step = epoch * (len(train_dataset) // train_batch_size)
write_metric(summary_writer, train_metrics, eval_metrics, train_time, cur_step)
# save checkpoint after each epoch and push checkpoint to the hub
if jax.process_index() == 0:
params = jax.device_get(unreplicate(state.params))
model.save_pretrained(
training_args.output_dir,
params=params,
push_to_hub=training_args.push_to_hub,
commit_message=f"Saving weights and logs of epoch {epoch+1}",
)
if __name__ == "__main__":
main()
| 07-05-2021 09:50:26 | 07-05-2021 09:50:26 | Copied from comment in flax.
@jheek pointed out that the fragmentation/OOM is caused by storing training metrics (on device) in the training_metrics list. If the number of steps per epoch is really large (which is the case bigger datasets) then we should periodically fetch the metrics to memory or do logging periodically rather than at the end of epoch so the device metrics won't get accumulated in training_metrics. (cf huggingface/transformers#12023 (comment))
If I understand correctly, the training metrics are filling up the disk drive on the device and eventually the disk is filled up?
<|||||>Indeed, the training metrics are all concatenated to an array and written at the end of the epoch. However, before they are written these metrics are of type `DeviceArray`, which means they are actually stored on the device (TPU). These devices don't have too much memory, so they will OOM if your dataset is too large.
I think a nice solution is to do something similar to what we do in our Imagenet example, where we have an additional flag `log_every_steps`, which will `device_get` the metrics after X steps and write them to disk: https://github.com/google/flax/blob/master/examples/imagenet/train.py#L345.<|||||>I made a quick-fix that simply skips writing the metrics. This is obviously not a solution, but I was just wondering if this script will be able to train or if the metrics are needed for training?
```
def write_metric(summary_writer, train_metrics, eval_metrics, train_time, step):
print("ignoring writing metric")
# summary_writer.scalar("train_time", train_time, step)
# train_metrics = get_metrics(train_metrics)
# for key, vals in train_metrics.items():
# tag = f"train_{key}"
# for i, val in enumerate(vals):
# summary_writer.scalar(tag, val, step - len(vals) + i + 1)
# for metric_name, value in eval_metrics.items():
# summary_writer.scalar(f"eval_{metric_name}", value, step)
``` |
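For completeness, here is a minimal sketch of the periodic `device_get` idea suggested above (in the spirit of the `log_every_steps` flag from the Flax ImageNet example). It reuses `p_train_step`, `train_loader`, `steps_per_epoch` and `summary_writer` from the script above; the value 100 and the logged tag are only illustrative, not existing script arguments:

```python
# Sketch only: fetch metrics to the host every `log_every_steps` steps so the
# per-step ShardedDeviceArrays are not kept on the TPU for the whole epoch.
log_every_steps = 100  # illustrative value
train_metrics = []
for step in range(steps_per_epoch):
    batch = next(train_loader)
    state, train_metric = p_train_step(state, batch)
    train_metrics.append(train_metric)
    if (step + 1) % log_every_steps == 0:
        # jax.device_get copies the accumulated metrics to host (numpy) memory,
        # which lets the device buffers be freed.
        host_metrics = jax.device_get(train_metrics)
        for i, metric in enumerate(host_metrics):
            summary_writer.scalar("train_loss", metric["loss"][0], step - len(host_metrics) + i + 1)
        train_metrics = []
```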
transformers | 12,504 | closed | Creating Flax VisualBert based on Flax Bert | I am using the [VisualBert](https://github.com/huggingface/transformers/blob/master/src/transformers/models/visual_bert/modeling_visual_bert.py) model and the [FlaxBert](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_flax_bert.py) model to create a model similar to VisualBert in Flax (which will use ViT instead of Detectron, hence the name).
Here are the embeddings:
```python
class FlaxViTBertEmbeddings(nn.Module):
"""Construct the embeddings from word, position and token_type embeddings."""
config: BertConfig
dtype: jnp.dtype = jnp.float32 # the dtype of the computation
def setup(self):
self.word_embeddings = nn.Embed(
self.config.vocab_size,
self.config.hidden_size,
embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range),
dtype=self.dtype,
)
self.position_embeddings = nn.Embed(
self.config.max_position_embeddings,
self.config.hidden_size,
embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range),
dtype=self.dtype,
)
self.token_type_embeddings = nn.Embed(
self.config.type_vocab_size,
self.config.hidden_size,
embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range),
dtype=self.dtype,
)
self.visual_projection = nn.Dense(self.config.hidden_size, dtype=self.dtype, kernel_init=jax.nn.initializers.normal(self.config.initializer_range, self.dtype))
self.visual_position_embeddings = nn.Embed(
self.config.max_position_embeddings,
self.config.hidden_size,
embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range),
dtype=self.dtype,
)
self.visual_token_type_embeddings = nn.Embed(
self.config.type_vocab_size,
self.config.hidden_size,
embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range),
dtype=self.dtype,
)
self.LayerNorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype)
self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob)
def __call__(self, input_ids, token_type_ids, position_ids, visual_inputs_embeds, visual_token_type_ids, visual_position_ids, deterministic: bool = True):
# Embed
inputs_embeds = self.word_embeddings(input_ids.astype("i4"))
position_embeds = self.position_embeddings(position_ids.astype("i4"))
token_type_embeddings = self.token_type_embeddings(token_type_ids.astype("i4"))
# Sum all embeddings
word_embeddings = inputs_embeds + token_type_embeddings + position_embeds
# Visual Embed
visual_inputs_embeds = self.visual_projection(visual_inputs_embeds)
visual_token_type_embeddings = self.visual_token_type_embeddings(visual_token_type_ids.astype("i4"))
visual_position_embeds = self.visual_position_embeddings(visual_position_ids.astype("i4"))
# Sum all visual embeddings
visual_embeddings = visual_inputs_embeds + visual_token_type_embeddings + visual_position_embeds
# Concat
hidden_states = jnp.concatenate((word_embeddings, visual_embeddings),axis=1)
# Layer Norm
hidden_states = self.LayerNorm(hidden_states)
hidden_states = self.dropout(hidden_states, deterministic=deterministic)
return hidden_states
```
These embeddings work fine. I generate parameters using a random key and then apply those parameters.
Then, I create the model like so:
```python
class FlaxViTBertModule(nn.Module):
config: BertConfig
dtype: jnp.dtype = jnp.float32 # the dtype of the computation
add_pooling_layer: bool = True
def setup(self):
self.embeddings = FlaxViTBertEmbeddings(self.config, dtype=self.dtype)
self.encoder = FlaxBertEncoder(self.config, dtype=self.dtype)
self.pooler = FlaxBertPooler(self.config, dtype=self.dtype)
def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple, visual_input_shape) -> FrozenDict:
# init input tensors
input_ids = jnp.zeros(input_shape, dtype="i4")
token_type_ids = jnp.zeros_like(input_ids)
position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape)
attention_mask = jnp.ones_like(input_ids)
visual_inputs_embeds = jnp.random(visual_input_shape),
visual_attention_mask = jnp.ones(visual_input_shape[:-1])
visual_token_type_ids = jnp.ones(visual_input_shape[:-1])
visual_position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(visual_input_shape).shape[-2]), visual_input_shape[:-1])
params_rng, dropout_rng = jax.random.split(rng)
rngs = {"params": params_rng, "dropout": dropout_rng}
return self.module.init(rngs, input_ids, attention_mask, token_type_ids, position_ids, visual_inputs_embeds,
visual_attention_mask,
visual_token_type_ids,
visual_position_ids, return_dict=False)[
"params"
]
def __call__(
self,
input_ids,
attention_mask,
token_type_ids,
position_ids,
visual_inputs_embeds,
visual_attention_mask,
visual_token_type_ids,
visual_position_ids,
deterministic: bool = True,
output_attentions: bool = False,
output_hidden_states: bool = False,
return_dict: bool = True,
):
hidden_states = self.embeddings(
input_ids, token_type_ids, position_ids, visual_input_embeds, visual_token_type_ids, visual_position_ids, deterministic=deterministic
)
outputs = self.encoder(
hidden_states,
attention_mask,
deterministic=deterministic,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = outputs[0]
pooled = self.pooler(hidden_states) if self.add_pooling_layer else None
if not return_dict:
# if pooled is None, don't return it
if pooled is None:
return (hidden_states,) + outputs[1:]
return (hidden_states, pooled) + outputs[1:]
return FlaxBaseModelOutputWithPooling(
last_hidden_state=hidden_states,
pooler_output=pooled,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
class FlaxViTBertModel(FlaxBertPreTrainedModel):
module_class = FlaxViTBertModule
```
When I try this:
```python
flax_model = FlaxViTBertModel.from_pretrained('bert-base-uncased')
```
I get the following error:
```python
TypeError: __call__() missing 4 required positional arguments: 'visual_inputs_embeds', 'visual_attention_mask', 'visual_token_type_ids', and 'visual_position_ids'
```
I believe the issue is because `FlaxBertPreTrainedModel`
only takes in `input_shape`. But `FlaxBertPreTrainedModel` in turn calls `FlaxPreTrainedModel`'s `__init__()`, which again only takes `input_shape`.
What would be an elegant way to deal with this? How do I create a few randomly initialized weights, with the rest initialized from the pre-trained checkpoint?
EDIT:
I am aware there will be a shape mismatch in the model. I will fix it when it comes to that. | 07-05-2021 09:17:49 | 07-05-2021 09:17:49 | I tried another way (by modifying the pre-trained class):
``` python
class FlaxViTBertPreTrainedModel(FlaxPreTrainedModel):
"""
An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
models.
"""
config_class = BertConfig
base_model_prefix = "vitbert"
module_class: nn.Module = None
def __init__(
self, config: BertConfig, input_shape: Tuple = ((1, 1),(1,1,1)), seed: int = 0, dtype: jnp.dtype = jnp.float32, **kwargs
):
module = self.module_class(config=config, dtype=dtype, **kwargs)
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict:
# init input tensors
textual_input_shape = input_shape[0]
input_ids = jnp.zeros(textual_input_shape, dtype="i4")
token_type_ids = jnp.zeros_like(input_ids)
position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), textual_input_shape)
attention_mask = jnp.ones_like(input_ids)
visual_input_shape = input_shape[1]
visual_inputs_embeds = jax.random.normal(visual_input_shape),
visual_attention_mask = jnp.ones(visual_input_shape[:-1])
visual_token_type_ids = jnp.ones(visual_input_shape[:-1])
visual_position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(visual_input_shape).shape[-2]), visual_input_shape[:-1])
params_rng, dropout_rng = jax.random.split(rng)
rngs = {"params": params_rng, "dropout": dropout_rng}
return self.module.init(rngs, input_ids, attention_mask, token_type_ids, position_ids, visual_inputs_embeds,
visual_attention_mask,
visual_token_type_ids,
visual_position_ids, return_dict=False)[
"params"
]
def __call__(
self,
input_ids,
attention_mask=None,
token_type_ids=None,
position_ids=None,
visual_inputs_embeds=None,
visual_attention_mask=None,
visual_token_type_ids=None,
visual_position_ids=None,
params: dict = None,
dropout_rng: jax.random.PRNGKey = None,
train: bool = False,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
):
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
return_dict = return_dict if return_dict is not None else self.config.return_dict
# init input tensors if not passed
if token_type_ids is None:
token_type_ids = jnp.zeros_like(input_ids)
if position_ids is None:
position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape)
if attention_mask is None:
attention_mask = jnp.ones_like(input_ids)
if visual_token_type_ids is None:
visual_token_type_ids = jnp.ones(visual_inputs_embeds.shape[:-1])
if visual_position_ids is None:
visual_position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(visual_input_embeds).shape[-2]), visual_inputs_embeds.shape[:-1])
if visual_attention_mask is None:
visual_attention_mask = jnp.ones(visual_inputs_embeds.shape[:-1])
# Handle any PRNG if needed
rngs = {}
if dropout_rng is not None:
rngs["dropout"] = dropout_rng
return self.module.apply(
{"params": params or self.params},
jnp.array(input_ids, dtype="i4"),
jnp.array(attention_mask, dtype="i4"),
jnp.array(token_type_ids, dtype="i4"),
jnp.array(position_ids, dtype="i4"),
jnp.array(visual_inputs_embeds, dtype=jnp.float32),
jnp.array(visual_attention_mask, dtype="i4"),
jnp.array(visual_token_type_ids, dtype="i4"),
jnp.array(visual_position_ids, dtype="i4"),
not train,
output_attentions,
output_hidden_states,
return_dict,
rngs=rngs,
)
class FlaxViTBertModule(nn.Module):
config: BertConfig
dtype: jnp.dtype = jnp.float32 # the dtype of the computation
add_pooling_layer: bool = True
def setup(self):
self.embeddings = FlaxViTBertEmbeddings(self.config, dtype=self.dtype)
self.encoder = FlaxBertEncoder(self.config, dtype=self.dtype)
self.pooler = FlaxBertPooler(self.config, dtype=self.dtype)
def __call__(
self,
input_ids,
attention_mask,
token_type_ids,
position_ids,
visual_inputs_embeds,
visual_attention_mask,
visual_token_type_ids,
visual_position_ids,
deterministic: bool = True,
output_attentions: bool = False,
output_hidden_states: bool = False,
return_dict: bool = True,
):
hidden_states = self.embeddings(
input_ids, token_type_ids, position_ids, visual_input_embeds, visual_token_type_ids, visual_position_ids, deterministic=deterministic
)
combined_attention_mask = jnp.concatenate((attention_mask, visual_attention_mask), axis=1)
outputs = self.encoder(
hidden_states,
attention_mask,
deterministic=deterministic,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
hidden_states = outputs[0]
pooled = self.pooler(hidden_states) if self.add_pooling_layer else None
if not return_dict:
# if pooled is None, don't return it
if pooled is None:
return (hidden_states,) + outputs[1:]
return (hidden_states, pooled) + outputs[1:]
return FlaxBaseModelOutputWithPooling(
last_hidden_state=hidden_states,
pooler_output=pooled,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
class FlaxViTBertModel(FlaxViTBertPreTrainedModel):
module_class = FlaxViTBertModule
```
Now I get the following issue when trying:
```python
flax_model = FlaxViTBertModel.from_pretrained('bert-base-multilingual-uncased')
```
```python
TypeError: _random_bits got invalid prng key.
```
Any idea why this happens?
<|||||>Nevermind, I forgot to pass the random key to `normal` method. I will update here when I am successful with the model.<|||||>I have updated the code based on the Hybrid CLIP example. But use `FlaxViTModule` inside `FlaxViTBertEmbeddings`. Now I get the following error:
```python
AssertionError: A state dict must only have string keys.
```
Notebook:
https://colab.research.google.com/drive/1mNzt4NRBpibJ_7U73Rj3sDPAkcMRPXTd?usp=sharing |
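For readers hitting the same `_random_bits got invalid prng key` error, a minimal illustration of the fix the author describes above; the seed and shape below are only placeholders:

```python
import jax

rng = jax.random.PRNGKey(0)        # placeholder seed
visual_input_shape = (1, 10, 512)  # placeholder shape

# jax.random.normal needs an explicit PRNG key as its first argument
# (jax.numpy has no `random` callable):
visual_inputs_embeds = jax.random.normal(rng, visual_input_shape)
```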
transformers | 12,503 | closed | model.generate does not work when using a FlaxGPTNeoForCausalLM model in PT (flax-community-event) | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patil-suraj
## Models
FlaxGPTNeoForCausalLM, GPTNeoForCausalLM
## Information
I have finetuned a FlaxGPTNeoForCausalLM model on the provided TPU and I'm trying to translate it to PT and generate text, but I'm unable to make it work. These are the steps I followed:
```
model = GPTNeoForCausalLM.from_pretrained('gptneo-125M-finetuned', from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-125M', use_fast=True)
text = 'A house with three bedrooms'
input_ids = tokenizer(text)
model.generate(input_ids, do_sample=True, top_p=0.84, top_k=100, max_length=100)
```
and the stack trace:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/transformers/src/transformers/tokenization_utils_base.py in __getattr__(self, item)
241 try:
--> 242 return self.data[item]
243 except KeyError:
KeyError: 'new_ones'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
/tmp/ipykernel_387004/1910535609.py in <module>
----> 1 model.generate(input_ids, do_sample=True, top_p=0.84, top_k=100, max_length=300)
~/neo/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
26 def decorate_context(*args, **kwargs):
27 with self.__class__():
---> 28 return func(*args, **kwargs)
29 return cast(F, decorate_context)
30
~/transformers/src/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
906 if model_kwargs.get("attention_mask", None) is None:
907 # init `attention_mask` depending on `pad_token_id`
--> 908 model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
909 input_ids, pad_token_id, eos_token_id
910 )
~/transformers/src/transformers/generation_utils.py in _prepare_attention_mask_for_generation(self, input_ids, pad_token_id, eos_token_id)
402 if is_pad_token_in_inputs_ids and is_pad_token_not_equal_to_eos_token_id:
403 return input_ids.ne(pad_token_id).long()
--> 404 return input_ids.new_ones(input_ids.shape, dtype=torch.long)
405
406 def _prepare_encoder_decoder_kwargs_for_generation(
~/transformers/src/transformers/tokenization_utils_base.py in __getattr__(self, item)
242 return self.data[item]
243 except KeyError:
--> 244 raise AttributeError
245
246 def __getstate__(self):
AttributeError:
```
As always, being new to all this, I'm fairly certain I missed something obvious :) But in the case I didn't, I thought I'd share and see what you all think.
Thanks!
| 07-05-2021 09:14:25 | 07-05-2021 09:14:25 | Can you try:
```python
from transformers import GPTNeoForCausalLM, AutoTokenizer
model = GPTNeoForCausalLM.from_pretrained('gptneo-125M-finetuned', from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-125M', use_fast=True)
text = 'A house with three bedrooms'
input_ids = tokenizer(text, return_tensors="pt")
model.generate(input_ids, do_sample=True, top_p=0.84, top_k=100, max_length=20)
```<|||||>Also it would be great if you could upload `gptneo-125M-finetuned` to the hub so that we can better debug the error :-)<|||||>Thanks @patrickvonplaten trying now! And also figuring out how to upload heh.<|||||>Hi @TheodoreGalanos From your code snippet it seems the issue that the tokenizer is not returning pytorch tensors
```python
text = 'A house with three bedrooms'
input_ids = tokenizer(text)
```
This returns a dict-like `BatchEncoding` object with keys `input_ids` and `attention_mask`, which are lists.
To get tensors, one should pass the `return_tensors` argument and set it to `pt`, so it'll return PyTorch tensors.
So the attribute error is caused by passing the `BatchEncoding` object instead of tensors.
This should fix it.
```python
inputs = tokenizer(text, return_tensors="pt")
model.generate(**inputs, ....)
```
<|||||>Thank you both! Apologies for the newbie mistake, this solves it indeed!<|||||>Maybe I'm misreading this conversation, but I seem to be having the same issue and the above patch @patil-suraj suggests doesn't solve it.
```python
from transformers import GPTNeoForCausalLM, AutoTokenizer
model = GPTNeoForCausalLM.from_pretrained('EleutherAI/gpt-neo-125M')
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neo-125M', use_fast=True)
text = 'This is a test'
input_ids = tokenizer(text, return_tensors="pt")
model.generate(input_ids, do_sample=True, top_p=0.84, top_k=100, max_length=20)
```
Gives me the same error @TheodoreGalanos reported.<|||||>Hi @StellaAthena
```python
input_ids = tokenizer(text, return_tensors="pt")
model.generate(input_ids, do_sample=True, top_p=0.84, top_k=100, max_length=20)
```
tokenizer returns a `dict` like object `BatchEncoding`, so here `input_ids` is not a `tensor` but a `BatchEncoding`. And `generate` expects the first argument `input_ids` to be a `tensor`.
So here, we could get the `input_ids` using the `input_ids` attribute on the `BatchEncoding` object
```python3
input_ids = tokenizer(text, return_tensors="pt").input_ids
model.generate(input_ids, do_sample=True, top_p=0.84, top_k=100, max_length=20)
```
or as it's a `dict` like object, we could also pass it as kwargs
```python3
inputs = tokenizer(text, return_tensors="pt")
model.generate(**inputs, do_sample=True, top_p=0.84, top_k=100, max_length=20)
``` |
transformers | 12,502 | closed | jax hybrid_clip error (numpy is not a valid JAX type) | I tried to run the script ([run_hybrid_clip.py](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects/hybrid_clip))
After a few steps I got this error:
```
it/s/home/acul/env/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:699: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
tensor = as_tensor(value)
Epoch ... (1/40): 0%| | 0/40 [02:43<?, ?it/s]
Traceback (most recent call last): | 39/462 [02:43<06:48, 1.03it/s]
File "run_hybrid_clip.py", line 556, in <module>
main()
File "run_hybrid_clip.py", line 502, in main
state, train_metric = p_train_step(state, batch)
File "/home/acul/env/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 183, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/acul/env/lib/python3.8/site-packages/jax/_src/api.py", line 1623, in f_pmapped
for arg in args: _check_arg(arg)
File "/home/acul/env/lib/python3.8/site-packages/jax/_src/api.py", line 2296, in _check_arg
raise TypeError(f"Argument '{arg}' of type {type(arg)} is not a valid JAX type.")
jax._src.traceback_util.UnfilteredStackTrace: TypeError: Argument '[[list([1, 1, 1, 1,
.
.
.
.
list([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])]]' of type <class 'numpy.ndarray'> is not a valid JAX type.
```
I use [bert](https://huggingface.co/indobenchmark/indobert-base-p1) and CLIP as the text and vision models.
The command that I use:
```
python run_hybrid_clip.py \
--output_dir ${MODEL_DIR} \
--text_model_name_or_path="indobenchmark/indobert-base-p1" \
--vision_model_name_or_path="openai/clip-vit-base-patch32" \
--tokenizer_name="indobenchmark/indobert-base-p1" \
--train_file="coco_dataset/train_dataset.json" \
--validation_file="coco_dataset/validation_dataset.json" \
--do_train --do_eval \
--num_train_epochs="40" --max_seq_length 96 \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="64" \
--learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
--overwrite_output_dir \
--preprocessing_num_workers 32 \
--push_to_hub
```
I also use the same dataset as the example.
pinging @patil-suraj
Any idea how to fix this? | 07-05-2021 08:37:48 | 07-05-2021 08:37:48 | Looking into it.<|||||>I just tried it, and couldn't reproduce the issue. Could you please verify again?
Also, note that in COCO the captions are in English so this won't work for Indonesian. You should use a dataset that has captions in Indonesian. The [WIT](https://github.com/google-research-datasets/wit) dataset might help.<|||||>fyi i translated coco dataset to indonesian using [EasyNmt](https://github.com/UKPLab/EasyNMT)
It seems the problem is either my translated COCO dataset or the jax version in my TPU VM,
because when I run the code on my local machine (with "jax[cuda111]") using the same dataset, the training runs smoothly.
Will try to explore more to confirm the issue!<|||||>This problem occurs when a sequence longer than `max_seq_length` appears, since the default tokenization function does not truncate the sequences.
Try increasing the `max_seq_len` (preferably to a multiple of 64, in order to improve TPU performance) or adding the truncation in the tokenization function |
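As an illustration of the second suggestion, a sketch of a tokenization call with truncation and padding enabled; the function name and `max_seq_length=96` (matching the command above) are assumptions, and the actual hook in `run_hybrid_clip.py` may look different:

```python
def tokenize_captions(captions, tokenizer, max_seq_length=96):
    # Truncate/pad every caption to a fixed length so each batch has a
    # static shape, avoiding the ragged object arrays rejected by pmap.
    return tokenizer(
        captions,
        max_length=max_seq_length,
        padding="max_length",
        truncation=True,
        return_tensors="np",
    )
```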
transformers | 12,501 | closed | Add `Repository` import to the FLAX example script | Add `Repository` import to the FLAX example script | 07-05-2021 07:50:47 | 07-05-2021 07:50:47 | |
transformers | 12,500 | closed | text-classification example broken; version issue / inconsistency issues | ## Environment info
- `transformers` version: 4.8.2 (latest on master)
- Platform: Ubuntu 20.04
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Didn't get that far
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger (because they have a commit that mentions to the 4.9dev release; and cross-checked that they are listed under the examples)
@patil-suraj (because they are listed under the examples)
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The text-classification example crashes because it wants version 4.9.dev0 or more recent, but the latest on master is 4.8.2. I tried commenting this line out and running with 4.8.2, but there are arguments used that I'm guessing were introduced in 4.9 that 4.8.2 does not understand. In other words, the example crashes immediately.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Any of the tasks. I tried the example mrpc task in the text-classification readme.
## To reproduce
Steps to reproduce the behavior:
Copied from the readme / what is currently on master in terms of the code, run:
```
export TASK_NAME=mrpc
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/
```
## Expected behavior
Crashes because 4.8.2 is the latest in master and at least 4.9 is expected; when forcing to run with 4.8.2, crashes due to unrecognized arguments.
| 07-05-2021 04:29:25 | 07-05-2021 04:29:25 | Hello! Installing from `master` should yield the appropriate v4.9.0dev0 version, as this is the current version: https://github.com/huggingface/transformers/blob/master/src/transformers/__init__.py#L25
Did you install it from Pypi? `pip install transformers`
or from source? `pip install git+https://github.com/huggingface/transformers`
The latter should be done to install from source.
If you're looking for v4.8.2 examples, you can find them by going to that tag version: https://github.com/huggingface/transformers/tree/v4.8.2/examples<|||||>Thank you so much for the quick response. I suppose the issue was installing via conda instead of pip. I will give this a try and get back to you. Thanks again!<|||||>Thank you! Seems to be working. I am going to double check a few things tomorrow before closing. Thanks again! |
transformers | 12,499 | closed | `max_steps` would not override `num_train_epochs` when training with IterableDataset | ## Environment info
- `transformers` version: 4.8.2
- Platform: Linux-5.4.0-1047-azure-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: No
### Who can help
Library/trainer & Documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...): N/A
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
In [the documentation of Trainer](https://huggingface.co/transformers/_modules/transformers/trainer.html), it says that the `max_steps` argument in `transformers.TrainingArguments`:
> If set to a positive number, the total number of training steps to perform. Overrides `num_train_epochs`.
However, the override is not true when the dataset is an `IterableDataset`. In
<https://github.com/huggingface/transformers/blob/2df63282e010ac518683252d8ddba21e58d2faf3/src/transformers/trainer.py#L1091-L1111>
when the dataset is not an instance of `collections.abc.Sized` (i.e., it does not implement `__len__()`), the `num_train_epochs` is independent of `max_steps`.
And the default value of `num_train_epochs` is set to 3.0:
<https://github.com/huggingface/transformers/blob/2df63282e010ac518683252d8ddba21e58d2faf3/src/transformers/training_args.py#L410>
This brings unexpected behavior when training models with an iterable dataset and `num_train_epochs` not set (the model is only trained for 3 epochs). I hope this could be clarified in the documentation.
## To reproduce
Steps to reproduce the behavior:
1. Use an `IterableDataset` as the train set. We assume that it has 1024 items and use a batch size of 128.
2. Set `max_steps` to 80 (= 1024 / 128 * 10) and do not set `num_train_epochs` when instancing `TrainingArguments`, and then set trainer.
3. `trainer.train()`
MWE:
```python
from transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification
import torch
class Dataset(torch.utils.data.IterableDataset):
def __init__(self):
super(Dataset).__init__()
def __iter__(self):
for i in range(1024):
yield {
'labels': [1],
'input_ids': [100, 200, 300, 400]
}
def main():
epochs = 10
batch = 128
data_line_count = 1024
steps = int(data_line_count / batch * epochs)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")
dataset = Dataset()
training_args = TrainingArguments(
output_dir='/tmp', # output directory
max_steps=steps,
per_device_train_batch_size=batch, # batch size per device during training
logging_dir='./logs', # directory for storing logs
)
trainer = Trainer(
args=training_args,
model=model,
train_dataset=dataset,
)
trainer.train()
if __name__ == "__main__":
main()
```
## Expected behavior
Expected that this model is trained for 10 epochs (80 steps), but it actually been trained for 3 epochs (24 steps). | 07-05-2021 03:36:23 | 07-05-2021 03:36:23 | Thanks for reporting! Should be fixed by the PR mentioned above.<|||||>Hi, @sgugger I took a look at the issue you mentioned above and saw that [this PR](https://github.com/huggingface/transformers/pull/14413) disables the ability to loop over the iterable dataset in an infinite fashion (for example, how it is done in the [CodeParrot example](https://github.com/huggingface/transformers/blob/master/examples/research_projects/codeparrot/scripts/codeparrot_training.py)). I am not able to have my iterable dataset "restart" when it is empty. Was that intended? <|||||>To loop several times on the iterable dataset, you have to use `num_epochs` instead of `max_steps`.<|||||>Hmm ok, however, if I set `max_steps=-1` I get the error:
```
ValueError: train_dataset does not implement __len__, max_steps has to be specified
```
But if I set max steps, then the value of `num_train_epochs` gets ignored and the training stops when the dataset has run out of samples.
Is there a way to set the length inside of the dataset but still load in streaming mode?<|||||>You can just implement the length of the dataset if you have it.<|||||>Oh ok, would it be easiest to way to force add the length to the dataset? Seems from looking at the code that it would potentially be easiest to subclass the `IterableDataset` and add in the length. But it seems somewhat unclear to me still if that is better than just using a torch iterable dataset. <|||||>
> Hmm ok, however, if I set `max_steps=-1` I get the error:
>
> ```
> ValueError: train_dataset does not implement __len__, max_steps has to be specified
> ```
>
> But if I set max steps, then the value of `num_train_epochs` gets ignored and the training stops when the dataset has run out of samples.
>
> Is there a way to set the length inside of the dataset but still load in streaming mode?
Hi @gabeorlanski, did you manage to solve this problem? If you resolved it with `IterableDataset`, could you give me a sample?
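For anyone landing here, a minimal sketch of the `__len__` suggestion above; the wrapped generator and the example count are placeholders:

```python
import torch

class SizedIterableDataset(torch.utils.data.IterableDataset):
    """Streaming dataset that also reports its (known) length, so the Trainer
    can derive epochs from `num_train_epochs` instead of requiring `max_steps`."""

    def __init__(self, generator_fn, num_examples):
        self.generator_fn = generator_fn  # callable returning a fresh iterator
        self.num_examples = num_examples  # assumed to be known in advance

    def __iter__(self):
        return iter(self.generator_fn())

    def __len__(self):
        return self.num_examples
```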
transformers | 12,498 | closed | [Flax] Fix wav2vec2 pretrain arguments | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-05-2021 03:01:18 | 07-05-2021 03:01:18 | Great catch! Thanks :-) |
transformers | 12,497 | closed | implementing tflxmertmodel integration test | What does this PR do?
This PR implements an integration test for TFLxmertModel as requested in [#9954](https://github.com/huggingface/transformers/issues/9954).
@LysandreJik | 07-04-2021 21:50:19 | 07-04-2021 21:50:19 | I believe your version of the code quality tools isn't up to date - can you try running the following at the root of your clone: `pip install -e .[quality] -U` to update your tools?<|||||>Hi @LysandreJik Yep It was my black version |
transformers | 12,496 | closed | Feature Request: Flax Encoder-Decoder Model | # 🚀 Feature request
I am trying to combine ViT with BART/mBART in Flax and it would be great to have something similar to the [EncoderDecoderModel](https://github.com/huggingface/transformers/blob/master/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py) in Flax.
I am willing to look into it but it will probably take some time (I am not familiar with Flax).
Please let me know if anything can be done regarding this.
Thanks,
Gunjan | 07-04-2021 18:46:49 | 07-04-2021 18:46:49 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,495 | closed | Flax MLM example script has PyTorch dependency | **Script in question:** [run_mlm_flax.py](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_mlm_flax.py)
```bash
Traceback (most recent call last):
File "./run_mlm_flax.py", line 319, in <module>
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
File "/home/christopher/transformers/src/transformers/file_utils.py", line 1641, in wrapper
raise ImportError(f"Method `{func.__name__}` requires PyTorch.")
ImportError: Method `device` requires PyTorch.
```
## Environment info
- `transformers` version: '4.9.0.dev0' (editable pip install from `git` source)
- Platform: TPU v3-8 (Operating System: Ubuntu 20.04.2 LTS, Kernel: Linux 5.4.0-1043-gcp, Architecture: x86-64)
- Python version: Python 3.8.10 [GCC 9.4.0] on linux
- PyTorch version (GPU?): '1.9.0+cu102' (extras['all'])
- Tensorflow version (GPU?): '2.5.0' (extras['all'])
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten and @patil-suraj
## Information
Model I am using: roBERTa
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set up a Flax-based Huggingface environment (`$ pip install -e ".[flax]"`) according to step 4 [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#tpu-vm)
2. Follow [the tutorial](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-use-the-hub-for-collaboration) on training roBERTa on Alemannic OSCAR.
## Expected behavior
The script should run with no error. The absence of PyTorch should have no bearing on a Flax script. | 07-04-2021 11:22:34 | 07-04-2021 11:22:34 | |
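As a hedged workaround sketch (not necessarily the fix that was adopted): since `TrainingArguments.device` is backed by PyTorch, a Flax-only environment can log the JAX device information directly instead, with `logger` being the script's existing logger:

```python
import jax

logger.info(
    f"Process index: {jax.process_index()}, "
    f"local device count: {jax.local_device_count()}, "
    f"total device count: {jax.device_count()}"
)
```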
transformers | 12,494 | closed | Add padding for decoder inputs for flax t5 example | # What does this PR do?
Improve TPU training performance by adding padding for decoder inputs in flax t5 training example.
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| 07-04-2021 06:29:16 | 07-04-2021 06:29:16 | Thanks for the measuring the speed-up! Seems like it's always a good idea to pad to powers of 2. The label length had a fixed input length previously (118 I think), but it seems that one should always pad to multiples of 2 then.
Note that with this fix however we need to make sure that no loss is computed on the added padded tokens which in this current form it is. So we need to add a label_mask etc... -> I'll do a more through review on Monday :-) <|||||>You are right, labels should be masked here https://github.com/huggingface/transformers/blob/a76eebfc806b863bf1eb721ba8c49ef9c2f5049f/examples/flax/language-modeling/run_t5_mlm_flax.py#L657
I will update it later.
BTW, the choice of 128 was following the suggestion here: https://cloud.google.com/tpu/docs/troubleshooting#memory-usage<|||||>@patrickvonplaten
Will this cause memory leak? The `train_metric` is `ShardedDeviceArray`, and it's only de-referenced at the end of an epoch.
https://github.com/huggingface/transformers/blob/a76eebfc806b863bf1eb721ba8c49ef9c2f5049f/examples/flax/language-modeling/run_t5_mlm_flax.py#L720-L721<|||||>> @patrickvonplaten
> Will this cause memory leak? The `train_metric` is `ShardedDeviceArray`, and it's only de-referenced at the end of an epoch.
> https://github.com/huggingface/transformers/blob/a76eebfc806b863bf1eb721ba8c49ef9c2f5049f/examples/flax/language-modeling/run_t5_mlm_flax.py#L720-L721
That's a very good point - I'm currently working on updating all those scripts :-)<|||||>> > @patrickvonplaten
> > Will this cause memory leak? The `train_metric` is `ShardedDeviceArray`, and it's only de-referenced at the end of an epoch.
> > https://github.com/huggingface/transformers/blob/a76eebfc806b863bf1eb721ba8c49ef9c2f5049f/examples/flax/language-modeling/run_t5_mlm_flax.py#L720-L721
>
> That's a very good point - I'm currently working on updating all those scripts :-)
One thing to note, convert the metrics to numpy at the end of each step sounds like it would block the process. Did not specifically test it, and I don't know how to move data from tpu to cpu without blocking.
Edit: I found this: https://github.com/google/jax/issues/2851
I think this would work:
```
cpus = jax.devices("cpu")
firsts = jax.tree_map(lambda x: x[0], train_metric) # take the first shard, might need to modify the code later
# train_metric = jax.device_get(firsts) # this will convert to ordinary numpy array, probably blocking
train_metric = jax.device_put(firsts, cpus[0]) # this will convert to jax device array on cpu, probably non-blocking
```<|||||>> > > @patrickvonplaten
> > > Will this cause memory leak? The `train_metric` is `ShardedDeviceArray`, and it's only de-referenced at the end of an epoch.
> > > https://github.com/huggingface/transformers/blob/a76eebfc806b863bf1eb721ba8c49ef9c2f5049f/examples/flax/language-modeling/run_t5_mlm_flax.py#L720-L721
> >
> >
> > That's a very good point - I'm currently working on updating all those scripts :-)
>
> One thing to note, convert the metrics to numpy at the end of each step sounds like it would block the process. Did not specifically test it, and I don't know how to move data from tpu to cpu without blocking.
>
> Edit: I found this: [google/jax#2851](https://github.com/google/jax/issues/2851)
>
> I think this would work:
>
> ```
> cpus = jax.devices("cpu")
>
> firsts = jax.tree_map(lambda x: x[0], train_metric) # take the first shard, might need to modify the code later
> # train_metric = jax.device_get(firsts) # this will convert to ordinary numpy array, probably blocking
> train_metric = jax.device_put(firsts, cpus[0]) # this will convert to jax device array on cpu, probably non-blocking
> ```
Did you notice a big speed up by doing so? :-)<|||||>Also cc @patil-suraj here.
This looks pretty interesting! Might make the scripts safer and faster<|||||>I didn't specifically test it, but I think it shouldn't be an issue if you use a dataloader that supports pre-fetching, like tfds or a PyTorch dataloader. And there won't be much of a difference if the CPU time is low.
But this is how I do it anyway:
https://github.com/cccntu/fnet-generation/blob/42cf70ebba26940afc9fbdb21b6576653cb3827c/scripts/run_summarization_flax.py#L811-L832<|||||>I played a bit around with `to_cpu_device` and it causes a ~5-10% slow-down which I think is kind of expected given that jax makes use of asynchronus futures. If we force the metrics to be computed after every training step, we can't make good use of device arrays being computed asynchronously. See: https://jax.readthedocs.io/en/latest/async_dispatch.html#async-dispatch<|||||>However, this change seems to lead to a huge speed-up: https://github.com/huggingface/transformers/pull/13012<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
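Returning to the label-masking point raised earlier in this thread, a rough sketch of excluding padded label positions from the loss; the names (`jnp`, `optax`, `onehot`, the pad id) follow the T5 example script, and this is illustrative rather than the final implementation:

```python
def loss_fn(logits, labels, pad_token_id):
    # 1.0 for real label tokens, 0.0 for padding added only for TPU-friendly shapes
    label_mask = jnp.where(labels != pad_token_id, 1.0, 0.0)
    loss = optax.softmax_cross_entropy(logits, onehot(labels, logits.shape[-1]))
    # average only over non-padded positions
    return (loss * label_mask).sum() / label_mask.sum()
```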
transformers | 12,493 | closed | FlaxGPTNeo | # What does this PR do?
This PR adds the Flax version of GPTNeo. For local attention, it uses the fix proposed by @finetuneanon in #11630.
Thanks a lot, @finetuneanon for proposing the solution, it's especially important in JAX/Flax where we can't have dynamic shapes.
Official GPTNeo flax checkpoints are up on the hub and slow tests are passing. | 07-04-2021 05:28:26 | 07-04-2021 05:28:26 | |
transformers | 12,492 | closed | Bug in MLM script not parsing arguments properly + lack of warning on incorrect args | ## (Continuation of #12438)
### Who can help
- tokenizers: @LysandreJik
- datasets: @lhoestq
## Information
Model I am using (Bert, XLNet ...): `BigBird`
## Issue
This is a small issue regarding solving #12438. From some digging, it seems that setting the `maximum sequence length` flag might be able to solve this issue, probably because it falls back on the tokenizer length which is arbitrarily long.
However, despite passing the `--max_seq_length="16000"` flag in the marked issue, it doesn't appear from the logging:-
```py
INFO:run_mlm:Training/evaluation parameters TrainingArguments(
_n_gpu=0,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.98,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=10,
eval_steps=50,
evaluation_strategy=IntervalStrategy.STEPS,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
greater_is_better=True,
group_by_length=False,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.0003,
length_column_name=length,
load_best_model_at_end=True,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=./logs,
logging_first_step=False,
logging_steps=50,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=validation,
mp_parameters=,
no_cuda=False,
num_train_epochs=5.0,
output_dir=./results,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=1,
per_device_train_batch_size=1,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=results,
push_to_hub_organization=None,
push_to_hub_token=None,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=./results,
save_steps=500,
save_strategy=IntervalStrategy.EPOCH,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tpu_metrics_debug=False,
tpu_num_cores=8,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=1000,
weight_decay=0.01,
)
```
Apparently, it seems that the `max_seq_len` argument doesn't exist at all in the script. Since the sequence length is something that I expect should be controlled by the script flag, I think these 2 issues should be remedied.
1. - [x] Missing `max_seq_length` flag for `run_MLM` script to override any existing defaults that may interfere and cause unwanted bugs
2. - [ ] Warning/Error upon passing incorrect formatted arguments which would be very helpful if the user indeed has passed an incorrect argument
I may be incorrectly asumming this issue, since there exists this arg in the `DataTrainingArguments`
https://github.com/huggingface/transformers/blob/a76eebfc806b863bf1eb721ba8c49ef9c2f5049f/examples/pytorch/language-modeling/run_mlm.py#L145-L151
Which is mysterious as to why its default is `None`, and why it doesn't register at all.
Perhaps @sgugger might be able to shed light on this? | 07-03-2021 23:06:58 | 07-03-2021 23:06:58 | Ah, great catch. I recall getting this error when the tokenizer had no max length set, and `datasets` would then crash as unable to return enough values. Could you try this out for me: set the `max_seq_length` value to something low, like 512 or 256. Does it still crash then?
If not, it's possible that your dataset isn't big enough to generate 16k tokens, and `datasets` then crashes with an index out of bounds error (which should be clearer).
Otherwise, I'd be happy to try and see what's going on; if you can open a reproducer as you offer, that would be great. Thanks!<|||||>solution at #12438 |
transformers | 12,491 | closed | [examples/flax] clip style image-text training example | # What does this PR do?
This PR adds an example script to train CLIP style vision-text dual encoder models using pre-trained vision and text encoder.
It supports `CLIP` and `ViT` as the image encoder and `BERT` and `RoBERTa` (or any other Flax text encoder) as the text encoder.
This example uses`torchvision` and torch `Dataloader` for faster pre-processing and data loading. This is important as data loading and processing can become a bottleneck on TPU. This will be revised later to use `datasets`. | 07-03-2021 15:04:03 | 07-03-2021 15:04:03 | And think you might have to run `make style` once<|||||>Merging!<|||||>hi @patil-suraj i tried to run your script(run_hybrid_clip.py)
After a few steps I got this error:
```
it/s/home/acul/env/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:699: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
tensor = as_tensor(value)
Epoch ... (1/40): 0%| | 0/40 [02:43<?, ?it/s]
Traceback (most recent call last): | 39/462 [02:43<06:48, 1.03it/s]
File "run_hybrid_clip.py", line 556, in <module>
main()
File "run_hybrid_clip.py", line 502, in main
state, train_metric = p_train_step(state, batch)
File "/home/acul/env/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 183, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/acul/env/lib/python3.8/site-packages/jax/_src/api.py", line 1623, in f_pmapped
for arg in args: _check_arg(arg)
File "/home/acul/env/lib/python3.8/site-packages/jax/_src/api.py", line 2296, in _check_arg
raise TypeError(f"Argument '{arg}' of type {type(arg)} is not a valid JAX type.")
jax._src.traceback_util.UnfilteredStackTrace: TypeError: Argument '[[list([1, 1, 1, 1,
.
.
.
.
list([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])]]' of type <class 'numpy.ndarray'> is not a valid JAX type.
```
I use [bert](https://huggingface.co/indobenchmark/indobert-base-p1) as the text model and CLIP as the vision model.
Any idea how to fix this?<|||||>Thank you for reporting!
Could you please open an issue and post the command that you used?<|||||>sure
opening here:
https://github.com/huggingface/transformers/issues/12502 |
transformers | 12,490 | closed | non-identical position_ids in an input batch | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds the possibility of defining different `position_ids` for each input example within a batch.
This is useful when we wish to pad some tokens in the input while simultaneously removing them to improve efficiency.
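A minimal sketch of the intended usage (illustrative only, not taken from the PR); `position_ids` is already an accepted argument, the point here is that each row of the batch can carry its own ids:
```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer(["a short sentence", "another, slightly longer sentence"],
                   padding=True, return_tensors="pt")
seq_len = inputs["input_ids"].shape[1]

# one row of position ids per example; in practice the rows would differ,
# e.g. to skip over positions of tokens that were padded out
position_ids = torch.stack([torch.arange(seq_len), torch.arange(seq_len)])
outputs = model(**inputs, position_ids=position_ids)
```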
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 07-03-2021 09:46:04 | 07-03-2021 09:46:04 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,489 | closed | Get core dump after import FlaxRobertaModel | ## Environment info
I tried to run `transformers-cli env`, but I also get this error message:
2021-07-03 05:37:33.144628: F external/org_tensorflow/tensorflow/core/tpu/tpu_executor_init_fns.inc:110] TpuTransferManager_ReadDynamicShapes not available in this library.
Aborted (core dumped)
- `transformers` version: 4.9.0.dev0
- Platform: TPU vm from the Jax/Flax event
- Python version: 3.8
- PyTorch version (GPU?): not installed or pytorch 1.8.1, the same issue
- Tensorflow version (GPU?): not installed or tensorflow 2.4.1, the same issue
- Using GPU in script?:
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
I followed the instructions at https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries on a TPU VM for the JAX/Flax event. But after installing transformers and datasets, I get the following error message when I import FlaxRobertaModel:
```
$ python
Python 3.8.10 (default, Jun 4 2021, 15:09:15)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import FlaxRobertaModel
2021-07-03 05:55:17.729321: F external/org_tensorflow/tensorflow/core/tpu/tpu_executor_init_fns.inc:110] TpuTransferManager_ReadDynamicShapes not available in this library.
Aborted (core dumped)
```
or run the transformers-cli:
```
$ transformers-cli env
WARNING:tensorflow:From /home/cahya/transformers/src/transformers/commands/env.py:50: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2021-07-03 05:56:23.495538: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-07-03 05:56:23.555453: F external/org_tensorflow/tensorflow/core/tpu/tpu_executor_init_fns.inc:110] TpuTransferManager_ReadDynamicShapes not available in this library.
Aborted (core dumped)
```
The problem arises when using:
* [ x] the official example scripts: (see above)
The tasks I am working on is:
* [ ] JAX/Flax project
* [ ]
## To reproduce
see above
## Expected behavior
There should be no core dump | 07-03-2021 05:59:10 | 07-03-2021 05:59:10 | It seems that the jax version 0.2.16, installed automatically during the transformers installation, causes this issue. I could fix it by reinstalling jax with the following command:
`pip install "jax[tpu]>=0.2.16" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html`
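A quick sanity check after the reinstall (hedged; it just confirms the TPU-enabled jaxlib is picked up):
```python
import jax

print(jax.devices())  # should list TpuDevice entries instead of aborting with a core dump
```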
|
transformers | 12,488 | closed | Fine-tuning t5-large: greedy predictions don't match teacher-forcing results | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.0
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
Tagged contributors: @patrickvonplaten, @patil-suraj
## Information
Model I am using t5-large.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x] my own task or dataset: (give details below)
## Context
I am fine-tuning the T5-large pretrained model with my own training script on a custom task (learning to generate a simplified version of the parse tree for each sentence in the SST dataset).
Everything works well when using a normal model(inputs) call to get logits (teacher-forcing inference). When I try to use `generate()` to do greedy inference (predicting a sequence without using the labels), the results I get are **very different** from the teacher-forcing inference (for the same training examples). See below for a test case and output.
## To reproduce
Steps to reproduce the behavior:
1. after model has been fine-tuned for about 100 steps, take a sample of your training data and compare the results from
model(input_ids=src, decoder_input_ids=trg) to model.generate(input_ids=src)
Here is a **test case** I added to my eval loop, that reproduces the problem:
```
x = "You 'll probably love it ."
y = "( * ( ( ( * * ) ( * * ) ) * ) )"
t5 = self.model.model
src = self.tokenizer.tokenizer(x)["input_ids"]
trg = self.tokenizer.tokenizer(y)["input_ids"]
src = torch.tensor(src).to(torch.long).to(t5.device).unsqueeze(0)
trg = torch.tensor(trg).to(torch.long).to(t5.device).unsqueeze(0)
# get TF results
logits = t5(input_ids=src, decoder_input_ids=trg)["logits"]
tf_preds = torch.argmax(logits, dim=-1)[0]
# get GREEDY results
gi_preds = t5.generate(input_ids=src)[0]
tf_y = self.arr_to_str(tf_preds, " ", True)
gi_y = self.arr_to_str(gi_preds, " ", True)
print("x: ", x)
print("y: ", y)
print("tf_y: ", tf_y)
print("gi_y: ", gi_y)
```
## Test Case output
```
x: You 'll probably love it .
y: ( * ( ( ( * * ) ( * * ) ) * ) )
tf_y: ▁( ▁( ▁* ▁* ▁* ▁( ▁ ) ▁ ▁* ▁( ▁ ) ▁ ) ▁ ▁ ) ▁ ) ▁ </s>
gi_y: <extra_id_0> . <extra_id_1> . <extra_id_2> . <extra_id_3> . ▁You ▁ ' ll ▁probably ▁love ▁it ▁ . <extra_id_4> . <extra_id_5> ll ▁probably ▁love ▁it ▁ . ▁You ▁ ' ll ▁probably ▁love ▁it ▁ . ▁You ▁ ' ll ▁probably ▁love ▁it ▁ . <extra_id_6> . <extra_id_7> ll ▁probably ▁love ▁it ▁ . <extra_id_8> ▁ . <extra_id_9> ▁ . ▁ . <extra_id_10> ▁You ▁ ' ll ▁probably ▁love ▁it ▁ . <extra_id_11> ▁You ▁ ' ll ▁probably ▁love ▁it ▁ .
<extra_id_14> ▁ . <extra_id_15> ▁ . ▁ . ▁ . <extra_id_16> ▁ . ▁ . ▁ . ▁ . ▁ . <extra_id_26> . ▁ . ▁ . ▁ . <extra_id_27> . ▁ </s>
```
## Expected behavior
The results should match exactly, but instead, the greedy/generate output seems to come from a completely different context. Is generate() the wrong way to get a greedy prediction?
| 07-02-2021 20:34:47 | 07-02-2021 20:34:47 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @rfernand2 - could you please upload the fine-tuned checkpoint to the hub and provide a fully reproducible code snippet? :-)
It would be great if we could just copy paste the code to reproduce the error <|||||>Hi @patrickvonplaten - I'll work on that (may take a day or 2 given other ongoing work). Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I happened to encounter something similar; in my case, the decoder had very few tokens (3), and the decoder learned to predict the token given by the teacher because that was the trivial solution and worked most of the time. When the model needed to make a prediction based on its own predictions it just went haywire and regressed to a single label because that's what it had learned. |
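For readers comparing the two decoding modes, a hedged sketch reusing `t5`, `src` and `trg` from the snippet above: teacher-forced argmax at position t is conditioned on the *gold* prefix, while `generate()` conditions on the model's own previous predictions, so the two outputs only coincide while the greedy decode happens to reproduce the gold prefix.
```python
import torch

with torch.no_grad():
    # shift right and prepend the decoder start token (pad for T5) so position t predicts target token t
    start = torch.full((trg.shape[0], 1), t5.config.decoder_start_token_id, device=trg.device, dtype=torch.long)
    decoder_input_ids = torch.cat([start, trg[:, :-1]], dim=-1)
    tf_preds = t5(input_ids=src, decoder_input_ids=decoder_input_ids).logits.argmax(-1)
    greedy_preds = t5.generate(input_ids=src, max_length=trg.shape[1] + 1, num_beams=1, do_sample=False)
```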
transformers | 12,487 | closed | Fix Padded Batch Error 12282 | This fixes the padded batch [issue](https://github.com/huggingface/transformers/issues/12282). The error was generated due to the maximum sequence length of the attention mask not matching the padded sequence length of the hidden_states. `np.allclose` now passes with a 1e-2 absolute tolerance.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 07-02-2021 19:00:25 | 07-02-2021 19:00:25 | The `1e-2` absolute tolerance still bothers me, but I'm not 100% sure where the variation is coming from. One thing I thought about was if the padding could have a subtle effect when passed through the `feature_extractor`. |
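One hedged way to probe that hypothesis (assumes a Wav2Vec2-style feature extractor; the checkpoint name and tolerance are illustrative):
```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

fe = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
long_clip = np.random.randn(16000).astype(np.float32)
short_clip = np.random.randn(12000).astype(np.float32)

alone = fe(short_clip, sampling_rate=16000, return_tensors="np").input_values[0]
padded = fe([long_clip, short_clip], sampling_rate=16000, padding=True,
            return_tensors="np").input_values[1][: len(short_clip)]

# if zero-mean/unit-variance normalization is computed over the padded array,
# these will differ, which could explain part of the required tolerance
print(np.allclose(alone, padded, atol=1e-2), np.abs(alone - padded).max())
```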
transformers | 12,486 | closed | GPT2: discrepancy between `inputs_embeds` and `input_ids` when the input sentence has length = 1 | Calling GPT2 with `inputs_embeds` vs calling it with `input_ids` leads to different outcomes when the input sentence has length 1. See the script to reproduce the issue.
## Environment info
- `transformers` version: tried 4.2.1 and 4.8.2
- Platform: MacOS and Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.9.0
- Using GPU in script?: tried GPU and CPU
- Using distributed or parallel set-up in script?: Nope
### Who can help
@patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT2
## To reproduce
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_name = 'gpt2'
print(" ==> Loading the models . . . ")
model = GPT2LMHeadModel.from_pretrained(model_name, output_hidden_states=True)
model.to(device)
model.eval()
tokenizer = GPT2Tokenizer.from_pretrained(model_name)

batch_size = 5
prefix_length = 1
for sentence_len in range(10, 0, -1):
    sentence_str = " ".join(["good"] * sentence_len)
    input_ids = tokenizer([sentence_str] * batch_size, return_tensors='pt')['input_ids'].to(device)
    inputs_embeds = model.transformer.wte(input_ids).squeeze()
    out1 = model(inputs_embeds=inputs_embeds)
    out2 = model(input_ids=input_ids)
    diff = torch.abs(torch.mean(out1.logits - out2.logits))
    print(f" - sentence length: {sentence_len} <-> {diff}")
```
which gives me the following:
```
- sentence length: 10 <-> 0.0
- sentence length: 9 <-> 0.0
- sentence length: 8 <-> 0.0
- sentence length: 7 <-> 0.0
- sentence length: 6 <-> 0.0
- sentence length: 5 <-> 0.0
- sentence length: 4 <-> 0.0
- sentence length: 3 <-> 0.0
- sentence length: 2 <-> 0.0
- sentence length: 1 <-> 30.124570846557617
```
The last line is the surprise: when the input sentence has length == 1, `inputs_embeds` and `input_ids` lead to different outcomes.
FYI @qkaren | 07-02-2021 17:44:50 | 07-02-2021 17:44:50 | Hello! I think the issue here lies in this line: `inputs_embeds = model.transformer.wte(input_ids).squeeze()`. You're calling `squeeze` on the inputs_embeds, which are of dimension `(5, 1, 768)`, which correspond to `(batch_size, sequence_length, hidden_size)`. Unfortunately, you're squeezing out the sequence length dimension (which is always required), which the model needs in order to construct the attention mask.
Remove the `.squeeze` or specify which dimension it should squeeze and you should be fine. |
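In other words, something like this keeps the batch working for every sentence length (one possible fix, shown for clarity):
```python
inputs_embeds = model.transformer.wte(input_ids)  # keep shape (batch_size, sequence_length, hidden_size)
out1 = model(inputs_embeds=inputs_embeds)
```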
transformers | 12,485 | closed | Tensorboard error while running mlm_flax TPU example script on TPU | ## Environment info
Following the https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-setup-tpu-vm
## Script
```bash
./run_mlm_flax.py \
--output_dir="./" \
--model_type="roberta" \
--config_name="./" \
--tokenizer_name="./" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_als" \
--max_seq_length="128" \
--per_device_train_batch_size="4" \
--per_device_eval_batch_size="4" \
--learning_rate="3e-4" \
--warmup_steps="1000" \
--overwrite_output_dir \
--num_train_epochs="8" \
--push_to_hub
```
## Error
File "./run_mlm_flax.py", line 66
print(f"Unable to display metrics through TensorBoard because some package are not installed: {ie}")
^
## Information
TensorBoard is already installed (via `pip install tensorboard`).
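A hedged way to check that the interpreter actually running the script sees the same packages (a caret under an f-string usually points at a different/older Python being picked up):
```python
import sys
print(sys.executable, sys.version)

import tensorboard
print(tensorboard.__version__)
```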
| 07-02-2021 15:41:05 | 07-02-2021 15:41:05 | Can you send me your tpu_name & name of virtual environment in a personal Slack message? I'll have a look<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,484 | closed | checkpoints are not saved after implementing a custom loss | I subclassed the Trainer class to implement the supervised contrastive loss function (code found via a Google search). The code runs, but it does not save the model checkpoints.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import Trainer, EarlyStoppingCallback


class SupervisedContrastiveLoss(nn.Module):
    def __init__(self, temperature, device):
        """
        Implementation of the loss described in the paper Supervised Contrastive Learning :
        https://arxiv.org/abs/2004.11362
        :param temperature: int
        """
        super(SupervisedContrastiveLoss, self).__init__()
        self.temperature = temperature
        self.device = device

    def forward(self, projections, targets):
        """
        :param projections: torch.Tensor, shape [batch_size, projection_dim]
        :param targets: torch.Tensor, shape [batch_size]
        :return: torch.Tensor, scalar
        """
        projections = F.normalize(projections, p=2, dim=1)
        dot_product_tempered = torch.mm(projections, projections.T) / self.temperature
        # Minus max for numerical stability with exponential. Same done in cross entropy. Epsilon added to avoid log(0)
        exp_dot_tempered = (
            torch.exp(dot_product_tempered - torch.max(dot_product_tempered, dim=1, keepdim=True)[0]) + 1e-5
        )
        mask_similar_class = (targets.unsqueeze(1).repeat(1, targets.shape[0]) == targets).to(self.device)
        mask_anchor_out = (1 - torch.eye(exp_dot_tempered.shape[0])).to(self.device)
        mask_combined = mask_similar_class * mask_anchor_out
        cardinality_per_samples = torch.sum(mask_combined, dim=1)
        log_prob = -torch.log(exp_dot_tempered / (torch.sum(exp_dot_tempered * mask_anchor_out, dim=1, keepdim=True)))
        supervised_contrastive_loss_per_sample = torch.sum(log_prob * mask_combined, dim=1) / cardinality_per_samples
        supervised_contrastive_loss = torch.mean(supervised_contrastive_loss_per_sample)
        return supervised_contrastive_loss


class SCLTrainer(Trainer):
    def __init__(self, temperature, loss_weight, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.add_callback(EarlyStoppingCallback())
        self.temperature = temperature
        self.loss_weight = loss_weight
        self.device = self.args.device
        print("SCL loss_weight: ", self.loss_weight)
        print("SCL temperature: ", self.temperature)

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        feature_vectors = outputs.hidden_states[-1][:, 0, :]
        logits = outputs["logits"] if isinstance(outputs, dict) else outputs[0]
        # print("feature_vectors.shape: ", feature_vectors.shape)
        # print("logits.shape: ", logits.shape)
        self.ce_loss = nn.CrossEntropyLoss().to(self.device)
        self.scl_loss = SupervisedContrastiveLoss(self.temperature, self.device).to(self.device)
        loss = (1 - self.loss_weight) * self.ce_loss(logits, labels) + self.loss_weight * self.scl_loss(feature_vectors, labels)
        return (loss, outputs) if return_outputs else loss
```
can you please let me know where is the problem? | 07-02-2021 15:26:31 | 07-02-2021 15:26:31 | Hello! Could you provide the command you ran to run your script? If it's not an official example script, do you have the code you used handy? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
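For readers hitting the same symptom: checkpoint saving in `Trainer` is driven by `TrainingArguments` rather than by `compute_loss`, and `EarlyStoppingCallback` additionally expects evaluation and best-model tracking to be configured. A minimal sketch (values are illustrative; `model` and the datasets are assumed to be defined elsewhere):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",      # EarlyStoppingCallback needs periodic evaluation
    save_strategy="epoch",            # or "steps" together with save_steps=...
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
)
trainer = SCLTrainer(temperature=0.3, loss_weight=0.9, model=model, args=args,
                     train_dataset=train_dataset, eval_dataset=eval_dataset)
```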
transformers | 12,483 | closed | Text Classification on GLUE on TPU using Jax/Flax : BigBird | The notebook for Text Classification on GLUE tasks, provided [here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification_flax.ipynb) can do with some updates:
1. It is missing ``from flax import traverse_util``, which can be added.
2. Replace ``gradient_transformation`` with ``adamw(1e-7)`` in ``TrainState.create()``.
3. Probably a note to ``pip install sentencepiece``, since models such as ``RoBERTa`` that are available in Flax at this point use it.
Also, I couldn't get the notebook to run for ``google/bigbird-roberta-base`` with ``batch_size=1`` and the default task (``cola``).
I got the following error:
```
Shapes of inputs: (8, 1, 128) (8, 1, 128) (8, 1)
---------------------------------------------------------------------------
UnfilteredStackTrace Traceback (most recent call last)
<ipython-input-32-cb9e92a30675> in <module>()
7 print(batch['input_ids'].shape, batch['attention_mask'].shape, batch['labels'].shape)
----> 8 state, train_metrics, dropout_rngs = parallel_train_step(state, batch, dropout_rngs)
9 progress_bar_train.update(1)
67 frames
/usr/local/lib/python3.7/dist-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
142 try:
--> 143 return fun(*args, **kwargs)
144 except Exception as e:
/usr/local/lib/python3.7/dist-packages/jax/_src/api.py in f_pmapped(*args, **kwargs)
1658 name=flat_fun.__name__, donated_invars=tuple(donated_invars),
-> 1659 global_arg_shapes=tuple(global_arg_shapes_flat))
1660 return tree_unflatten(out_tree(), out)
/usr/local/lib/python3.7/dist-packages/jax/core.py in bind(self, fun, *args, **params)
1623 assert len(params['in_axes']) == len(args)
-> 1624 return call_bind(self, fun, *args, **params)
1625
/usr/local/lib/python3.7/dist-packages/jax/core.py in call_bind(primitive, fun, *args, **params)
1555 with maybe_new_sublevel(top_trace):
-> 1556 outs = primitive.process(top_trace, fun, tracers, params)
1557 return map(full_lower, apply_todos(env_trace_todo(), outs))
/usr/local/lib/python3.7/dist-packages/jax/core.py in process(self, trace, fun, tracers, params)
1626 def process(self, trace, fun, tracers, params):
-> 1627 return trace.process_map(self, fun, tracers, params)
1628
/usr/local/lib/python3.7/dist-packages/jax/core.py in process_call(self, primitive, f, tracers, params)
608 def process_call(self, primitive, f, tracers, params):
--> 609 return primitive.impl(f, *tracers, **params)
610 process_map = process_call
/usr/local/lib/python3.7/dist-packages/jax/interpreters/pxla.py in xla_pmap_impl(fun, backend, axis_name, axis_size, global_axis_size, devices, name, in_axes, out_axes_thunk, donated_invars, global_arg_shapes, *args)
622 donated_invars, global_arg_shapes,
--> 623 *abstract_args)
624 # Don't re-abstractify args unless logging is enabled for performance.
/usr/local/lib/python3.7/dist-packages/jax/linear_util.py in memoized_fun(fun, *args)
261 else:
--> 262 ans = call(fun, *args)
263 cache[key] = (ans, fun.stores)
/usr/local/lib/python3.7/dist-packages/jax/interpreters/pxla.py in parallel_callable(fun, backend_name, axis_name, axis_size, global_axis_size, devices, name, in_axes, out_axes_thunk, donated_invars, global_arg_shapes, *avals)
698 with core.extend_axis_env(axis_name, global_axis_size, None): # type: ignore
--> 699 jaxpr, out_sharded_avals, consts = pe.trace_to_jaxpr_final(fun, global_sharded_avals, transform_name="pmap")
700 jaxpr = xla.apply_outfeed_rewriter(jaxpr)
/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr_final(fun, in_avals, transform_name)
1208 main.jaxpr_stack = () # type: ignore
-> 1209 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(fun, main, in_avals)
1210 del fun, main
/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py in trace_to_subjaxpr_dynamic(fun, main, in_avals)
1187 in_tracers = map(trace.new_arg, in_avals)
-> 1188 ans = fun.call_wrapped(*in_tracers)
1189 out_tracers = map(trace.full_raise, ans)
/usr/local/lib/python3.7/dist-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
165 try:
--> 166 ans = self.f(*args, **dict(self.params, **kwargs))
167 except:
<ipython-input-24-f522714f3451> in train_step(state, batch, dropout_rng)
10 grad_function = jax.value_and_grad(loss_function)
---> 11 loss, grad = grad_function(state.params)
12 grad = jax.lax.pmean(grad, "batch")
/usr/local/lib/python3.7/dist-packages/jax/_src/traceback_util.py in reraise_with_filtered_traceback(*args, **kwargs)
142 try:
--> 143 return fun(*args, **kwargs)
144 except Exception as e:
/usr/local/lib/python3.7/dist-packages/jax/_src/api.py in value_and_grad_f(*args, **kwargs)
886 if not has_aux:
--> 887 ans, vjp_py = _vjp(f_partial, *dyn_args)
888 else:
/usr/local/lib/python3.7/dist-packages/jax/_src/api.py in _vjp(fun, has_aux, *primals)
1965 flat_fun, out_tree = flatten_fun_nokwargs(fun, in_tree)
-> 1966 out_primal, out_vjp = ad.vjp(flat_fun, primals_flat)
1967 out_tree = out_tree()
/usr/local/lib/python3.7/dist-packages/jax/interpreters/ad.py in vjp(traceable, primals, has_aux)
113 if not has_aux:
--> 114 out_primals, pvals, jaxpr, consts = linearize(traceable, *primals)
115 else:
/usr/local/lib/python3.7/dist-packages/jax/interpreters/ad.py in linearize(traceable, *primals, **kwargs)
100 jvpfun_flat, out_tree = flatten_fun(jvpfun, in_tree)
--> 101 jaxpr, out_pvals, consts = pe.trace_to_jaxpr(jvpfun_flat, in_pvals)
102 out_primals_pvals, out_tangents_pvals = tree_unflatten(out_tree(), out_pvals)
/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py in trace_to_jaxpr(fun, pvals, instantiate)
497 fun = trace_to_subjaxpr(fun, main, instantiate)
--> 498 jaxpr, (out_pvals, consts, env) = fun.call_wrapped(pvals)
499 assert not env
/usr/local/lib/python3.7/dist-packages/jax/linear_util.py in call_wrapped(self, *args, **kwargs)
165 try:
--> 166 ans = self.f(*args, **dict(self.params, **kwargs))
167 except:
<ipython-input-24-f522714f3451> in loss_function(params)
5 def loss_function(params):
----> 6 logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0]
7 loss = state.loss_function(logits, targets)
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in __call__(self, input_ids, attention_mask, token_type_ids, position_ids, params, dropout_rng, train, output_attentions, output_hidden_states, return_dict)
1420 return_dict,
-> 1421 rngs=rngs,
1422 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in apply(self, variables, rngs, method, mutable, capture_intermediates, *args, **kwargs)
938 mutable=mutable, capture_intermediates=capture_intermediates
--> 939 )(variables, *args, **kwargs, rngs=rngs)
940
/usr/local/lib/python3.7/dist-packages/flax/core/scope.py in wrapper(variables, rngs, *args, **kwargs)
686 with bind(variables, rngs=rngs, mutable=mutable).temporary() as root:
--> 687 y = fn(root, *args, **kwargs)
688 if mutable is not False:
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in scope_fn(scope, *args, **kwargs)
1177 try:
-> 1178 return fn(module.clone(parent=scope), *args, **kwargs)
1179 finally:
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in __call__(self, input_ids, attention_mask, token_type_ids, position_ids, deterministic, output_attentions, output_hidden_states, return_dict)
1697 output_hidden_states=output_hidden_states,
-> 1698 return_dict=return_dict,
1699 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in __call__(self, input_ids, attention_mask, token_type_ids, position_ids, deterministic, output_attentions, output_hidden_states, return_dict)
1458 output_hidden_states=output_hidden_states,
-> 1459 return_dict=return_dict,
1460 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in __call__(self, hidden_states, attention_mask, deterministic, output_attentions, output_hidden_states, return_dict)
1265 output_hidden_states=output_hidden_states,
-> 1266 return_dict=return_dict,
1267 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in __call__(self, hidden_states, attention_mask, deterministic, output_attentions, output_hidden_states, return_dict)
1221 layer_outputs = layer(
-> 1222 hidden_states, attention_mask, deterministic=deterministic, output_attentions=output_attentions
1223 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in __call__(self, hidden_states, attention_mask, deterministic, output_attentions)
1179 attention_outputs = self.attention(
-> 1180 hidden_states, attention_mask, deterministic=deterministic, output_attentions=output_attentions
1181 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in __call__(self, hidden_states, attention_mask, deterministic, output_attentions)
1113 attn_outputs = self.self(
-> 1114 hidden_states, attention_mask, deterministic=deterministic, output_attentions=output_attentions
1115 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in __call__(self, hidden_states, attention_mask, deterministic, output_attentions)
360 plan_num_rand_blocks=None,
--> 361 output_attentions=output_attentions,
362 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in bigbird_block_sparse_attention(self, query_layer, key_layer, value_layer, band_mask, from_mask, to_mask, from_blocked_mask, to_blocked_mask, n_heads, head_size, plan_from_length, plan_num_rand_blocks, output_attentions)
476 plan_from_length=plan_from_length,
--> 477 plan_num_rand_blocks=plan_num_rand_blocks,
478 )
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_flax_big_bird.py in _bigbird_block_rand_mask_with_head(self, from_seq_length, to_seq_length, from_block_size, to_block_size, num_heads, plan_from_length, plan_num_rand_blocks, window_block_left, window_block_right, global_block_top, global_block_bottom, global_block_left, global_block_right)
1004 global_block_left=global_block_left,
-> 1005 global_block_right=global_block_right,
1006 )
UnfilteredStackTrace: ValueError: could not broadcast input array from shape (0) into shape (3)
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
```
### Who can help
@patrickvonplaten @vasudevgupta7 @patil-suraj
| 07-02-2021 15:22:27 | 07-02-2021 15:22:27 | I think it is probably not valid to use ``block_sparse`` attention, with ``num_random_blocks=3`` and ``block_size=64`` with a sequence length of just ``128``(which is the length for ``cola``).
The notebook works fine upon changing the ``attention_type`` to ``original_full``, as it should be for such short sequence length. Sorry, my bad.
Probably can raise a better error though; when trying to run the model with invalid combination of smaller sequence lengths, with the default ``num_random_blocks`` and ``block_size``.
Shall I open a pr? For the notebook or better error message, in case you guys don't have time for one?<|||||>Thanks for the proposed changes and the very in-detail issue! Feel free to open a PR :-)
Regarding the Big Bird bug, maybe @vasudevgupta7 has an idea? :-)<|||||>Hey @Jeevesh8, yes, block-sparse attention is not supported for this configuration. Your sequence length must be greater than (5 + 2*num_random_blocks) * block_size.
Also it’s recommended to use original_full (instead of block sparse) till 1024 seqlen.
You can find other constraints in the end of this blog post: https://huggingface.co/blog/big-bird
Let me know if you get into any other troubles. Would be happy to help. <|||||>Thank you @vasudevgupta7 shall I add error messages corresponding to the conditions in your implementation?
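A hedged sketch of what such a guard could look like (attribute names follow `BigBirdConfig`; the exact message is illustrative):
```python
def check_block_sparse(config, seq_len):
    # constraint quoted above: seq_len must exceed (5 + 2 * num_random_blocks) * block_size
    required = (5 + 2 * config.num_random_blocks) * config.block_size
    if config.attention_type == "block_sparse" and seq_len <= required:
        raise ValueError(
            f"block_sparse attention needs a sequence length > {required} (got {seq_len}); "
            f"use attention_type='original_full' or reduce block_size/num_random_blocks."
        )
```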
Also, @patrickvonplaten there seems to be another possible bug in the Big Bird Tokenizer:
```python
!pip install transformers sentencepiece
from transformers import BigBirdTokenizer
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
print(tokenizer.decode(tokenizer.eos_token_id), tokenizer.decode(tokenizer.bos_token_id))
```
The above code produce ``<s> </s>`` in output. Is it supposed to be so? I was expecting it to be ``</s> <s>`` . Are the pre-trained weights provided on hub, corresponding to this or the opposite way? Or they don't use any of those tokens? And just use ``[CLS]`` and ``[SEP]`` ?<|||||>@Jeevesh8 feel free to create PR for adding error messages as @patrickvonplaten also approved on those in his comment above.
Regarding tokenizer, you are right. Tokenizer code looks absolutely fine, but tokenizer_config (in Hub) is wrong (this happened because tokenizer_config was not updated after the PR got reviewed earlier). I will fix BigBird tokenizer_config from Hub. @patrickvonplaten please approve that once.
Also, i think we are relying on [SEP], [CLS] mainly.<|||||>Sorry to trouble again @vasudevgupta7 , but why is it hard-coded [here](https://github.com/huggingface/transformers/blob/0085e712ddf80fa5cd5f355498fe7f13b839eafa/src/transformers/models/big_bird/modeling_flax_big_bird.py#L460) that ``last_idx`` be 1024 ? Shouldn't we be randomly choosing 3 blocks from all over the sequence ?
Also, is ``[CLS]`` token attending to every other token in the implementation? I am trying to find it through code base, but let me know, if you get the time to reply here. Also, wanted to know if I add multiple ``[CLS]`` tokens, will they all attend globally?<|||||>Regarding `last_idx`, this is how original bigbird attention has been implemented ([see this](https://github.com/google-research/bigbird/blob/db06498ec8804c6438111938d8654b66ddaccd5d/bigbird/core/attention.py#L743)). Not sure, why bigbird authors kept 1024. We kept same number to keep HF implementation similar to original.
Only first & last blocks (i.e. first 64 & last 64 tokens by default) are global & choosing of global tokens is independent of CLS token. Now, since `[CLS]` token lies in first block (since usually added as 1st token), it will be global and will attend all other tokens.<|||||>Thank you for the quick reply @vasudevgupta7 .
I have opened an issue in there original repo [here](https://github.com/google-research/bigbird/issues/18) . It will probably shed some light on the topic of the 1024.
Also, what little I gathered from the code, if the last 64 tokens has mostly ``<pad>``, then they won't be attending globally. Am I right on this?
I think that 1024 may be there to avoid this attention to and from pads.. although not sure.<|||||>Yeah, last block will not get attended if last 64 tokens are `<pad>`. To mitigate that issue, you can probably increase `block_size` so that overall tokens to attend increases or remains same.<|||||>Or probably use dynamic batching and padding with similar sequence length elements batched together.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,482 | closed | Sources of randomness for Longformer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.1
- Platform: Ubuntu 20.04.2 LTS
- Python version: 3.7.10
- PyTorch version: 1.7.1
- Using GPU in script?: Yes
### Who can help
@patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Results when using `Longformer` for sequence classification are not consistent across runs after setting a random seed.
## To reproduce
I include below a simple script derived from [this](https://huggingface.co/transformers/custom_datasets.html#seq-imdb) example to reproduce the behavior.
When using `bert-base-uncased`, the generated results (and loss values) will be *exactly* the same across runs. However, when using `allenai/longformer-base-4096` (just swap the comment for lines starting with `MODEL_NAME`), results (and loss values) will vary across runs. In this example, results happen to be very similar because of the simplicity of the problem, but I have experienced higher variability in cases with longer training schedules and larger and more 'complex' datasets. Still, I think this example suffices to illustrate the issue.
P.S. Using the commented `set_seed` function does not help.
```
import torch
from datasets import load_dataset
from sklearn.model_selection import train_test_split
from transformers import Trainer, TrainingArguments, set_seed
from transformers import LongformerTokenizerFast, LongformerForSequenceClassification
from transformers import BertTokenizerFast, BertForSequenceClassification
import os
import random
import numpy as np
from typing import Optional
MODEL_NAME = 'bert-base-uncased'
# MODEL_NAME = 'allenai/longformer-base-4096'
SEED = 42
class IMDbDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
# def set_seed(seed: Optional[int]):
# """ Set all seeds to make results reproducible (deterministic mode).
# When seed is None, disables deterministic mode. """
# if seed is not None:
# torch.manual_seed(seed)
# torch.cuda.manual_seed_all(seed)
# torch.backends.cudnn.deterministic = True
# torch.backends.cudnn.benchmark = False
# np.random.seed(seed)
# random.seed(seed)
# os.environ['PYTHONHASHSEED'] = str(seed)
def main():
set_seed(SEED)
# Load IMDb
train = load_dataset("imdb", split="train")[:50]
test = load_dataset("imdb", split="test")[:10]
train_texts = train['text']
train_labels= train['label']
test_texts = test['text']
test_labels= test['label']
# Split train into train and val
train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2,
random_state=SEED)
# Load tokenizer
if MODEL_NAME == 'bert-base-uncased':
tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
elif MODEL_NAME == 'allenai/longformer-base-4096':
tokenizer = LongformerTokenizerFast.from_pretrained(MODEL_NAME)
else:
raise ValueError
# Generate encodings
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
# Create datasets
train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)
# Training
training_args = TrainingArguments(
output_dir='../tutorial_results', # output directory
num_train_epochs=5, # total number of training epochs
per_device_train_batch_size=1, # batch size per device during training
per_device_eval_batch_size=1, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='../tutorial_logs', # directory for storing logs
logging_steps=10,
seed=SEED
)
if MODEL_NAME == 'bert-base-uncased':
model = BertForSequenceClassification.from_pretrained(MODEL_NAME)
elif MODEL_NAME == 'allenai/longformer-base-4096':
model = LongformerForSequenceClassification.from_pretrained(MODEL_NAME)
else:
raise ValueError
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
trainer.train()
# Test set
test_results = trainer.predict(test_dataset)
print(test_results)
if __name__ == '__main__':
main()
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would expect that, having set a random seed, results would be the same across runs for longformers too. Is there any source of randomness for longformers not 'covered' by the `set_seed` function?
Thanks!
| 07-02-2021 14:40:11 | 07-02-2021 14:40:11 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I am facing the same issues with reproducibility for the Longformer model for sequence classification.
I posted my example in the [Huggingface discussion forum here](https://discuss.huggingface.co/t/how-can-i-enforce-reproducibility-for-longformer/8862).
Setting the seeds as recommended [here](https://huggingface.co/transformers/testing.html#getting-reproducible-results) produces the exact same training loss in multiple training iterations (each time starting the finetuning from scratch) when using the `roberta-base` model, but not with `allenai/longformer-base-4096`.
<|||||>Hmm, that's interesting! Also pinging the original author @ibeltagy here.
I don't see any weird functionality that's used in Longformer maybe except `torch.Tensor.stride(...)` that is in Longformer but not in other models like RoBERTa. Sadly I won't have the time to dive deep into this reproducible problem, but a good start to find the bug would be to verify that the same random model weights are loaded before training and then working with `print(...)` to see after what layer results start to differ
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@abhishekkrthakur do you know where the randomness comes from?<|||||>I have recently run into this issue. One of the things I've noticed is if you drop the max length down to 512 I start to get reproducible behaviour - not sure if that helps at all<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@abhishekkrthakur @patrickvonplaten
I'm facing same issue. Any update on this behaviour ?
**Environment info**
transformers Version: 4.11.0
Platform: Amazon Linux AMI 2018.03
torch Version: 1.9.0
Number GPU: 1
more information, when activating torch.use_deterministic_algorithms(True) during training.
roberta-base and bert-base-uncased works fine. However with Longformer I get this error:
Loading features from cached file ./training-data/processed-data/text/toy/cached_train_longformer_512
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Validation sanity check: 0it [00:00, ?it/s]Loading features from cached file ./training-data/processed-data/text/toy/cached_dev_longformer_512
Validation sanity check: 0%| | 0/2 [00:00<?, ?it/s]inputs={'input_ids': tensor([[ 0, 7202, 3063, ..., 1, 1, 1],
[ 0, 1301, 1723, ..., 44828, 15555, 2],
[ 0, 27201, 1000, ..., 3706, 6, 2],
...,
[ 0, 41188, 2444, ..., 1, 1, 1],
[ 0, 3384, 591, ..., 1, 1, 1],
[ 0, 495, 2492, ..., 700, 28607, 2]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 1, 1, 1]], device='cuda:0'), 'labels': tensor([1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1,
0, 0, 1, 0, 1, 0, 1, 1], device='cuda:0')}
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/venv-transformer/bin/nlpsaf-transformer-train", line 33, in <module>
sys.exit(load_entry_point('nlpsaf-transformer', 'console_scripts', 'nlpsaf-transformer-train')())
File "/home/ec2-user/SageMaker/nlpsaf-transformers/nlpsaf_transformer/run_safety_signal.py", line 31, in main
_ = train(args)
File "/home/ec2-user/SageMaker/nlpsaf-transformers/nlpsaf_transformer/run_safety_signal.py", line 17, in train
generic_train(model, args)
File "/home/ec2-user/SageMaker/nlpsaf-transformers/nlpsaf_transformer/model.py", line 462, in generic_train
trainer.fit(model)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 552, in fit
self._run(model)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 922, in _run
self._dispatch()
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 990, in _dispatch
self.accelerator.start_training(self)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1000, in run_stage
return self._run_train()
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1035, in _run_train
self._run_sanity_check(self.lightning_module)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1122, in _run_sanity_check
self._evaluation_loop.run()
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
dl_outputs = self.epoch_loop.run(
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 111, in advance
output = self.evaluation_step(batch, batch_idx, dataloader_idx)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 158, in evaluation_step
output = self.trainer.accelerator.validation_step(step_kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 211, in validation_step
return self.training_type_plugin.validation_step(*step_kwargs.values())
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 392, in validation_step
return self.model(*args, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 799, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 93, in forward
output = self.module.validation_step(*inputs, **kwargs)
File "/home/ec2-user/SageMaker/nlpsaf-transformers/nlpsaf_transformer/model.py", line 345, in validation_step
outputs = self(**inputs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/SageMaker/nlpsaf-transformers/nlpsaf_transformer/model.py", line 179, in forward
return self.model(**inputs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 1858, in forward
outputs = self.longformer(
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 1677, in forward
encoder_outputs = self.encoder(
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 1280, in forward
layer_outputs = layer_module(
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 1205, in forward
self_attn_outputs = self.attention(
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 1141, in forward
self_outputs = self.self(
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ec2-user/anaconda3/envs/venv-transformer/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 708, in forward
attn_probs[is_index_global_attn_nonzero] = 0
RuntimeError: linearIndex.numel()*sliceSize*nElemBefore == value.numel()INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/cuda/Indexing.cu":253, please report a bug to PyTorch. number of flattened indices did not match number of elements in the value tensor1973761<|||||>Facing the same issue with [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384) via the [run_summarization.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py) script. Both `seed` and `data_seed` are set, and there are no randomly initialized weights:
```
[INFO|modeling_utils.py:3032] 2023-03-30 10:44:26,097 >> All model checkpoint weights were used when initializing LEDForConditionalGeneration.
[INFO|modeling_utils.py:3040] 2023-03-30 10:44:26,098 >> All the weights of LEDForConditionalGeneration were initialized from the model checkpoint at allenai/led-large-16384.
```
The same script with another model (e.g. `flan-t5-large`) is perfectly reproducible across runs with the same seed. I don't have a sense of where the remaining sources of randomness would be for Longformer/LED. |
transformers | 12,481 | closed | How to construct a pretrained model by myself using TensorFlow 2 + Keras? | # 🚀 Feature request
## Motivation
## Your contribution
| 07-02-2021 10:34:06 | 07-02-2021 10:34:06 | In case you want to pre-train a model on your own data in Tensorflow, please take a look at the [Tensorflow examples](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/language-modeling).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,480 | closed | Fix TAPAS test uncovered by #12446 | #12446 cleaned up a typo in a test name that uncovered a failure in the TAPAS test.
The failure in the TAPAS test is the following: `test_save_load_fast_init_from_base`. It adds a mock initialization. However, TAPAS has `nn.Parameter` attributes which are not impacted by that initialization, and end up having random values when loaded from the base model.
Updating this configuration parameter ensures that these values get initialized to 0.
| 07-02-2021 08:22:08 | 07-02-2021 08:22:08 | Thanks for solving this! |
transformers | 12,479 | closed | AttributeError when using custom IterableDataset with set_epoch method | ## Environment info
- `transformers` version: 4.8.2
- Platform: Linux-5.4.0-77-generic-x86_64-with-glibc2.31
- Python version: 3.9.4
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: kind of
### Who can help
@sgugger since it concerns trainer and/or documentation.
## Information
Model and task do not matter here.
I am using a custom `IterableDataset` for evaluation which I want to use for distributed training at some point. My development machine is CPU-only, so I am not using distributed training there. Following the [trainer documentation](https://huggingface.co/transformers/main_classes/trainer.html) I added a `set_epoch()` to this `IterableDataset`.
With transformers 4.8.2 and pytorch 1.9.0, evaluation reproducibly fails with an `AttributeError`.
## To reproduce
Steps to reproduce the behavior:
1. Run this minimum working example on CPU:
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer,\
TrainingArguments, IntervalStrategy
from torch.utils.data.dataset import IterableDataset
raw_datasets = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(2))
class CustomDataset(IterableDataset):
def __iter__(self):
yield from small_eval_dataset
def __len__(self):
return 2
def set_epoch(self, epoch: int):
pass
custom_eval = CustomDataset()
training_args = TrainingArguments(output_dir="test_trainer",
evaluation_strategy=IntervalStrategy.STEPS,
logging_steps=1,
per_device_train_batch_size=1,
per_device_eval_batch_size=1)
trainer = Trainer(model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=custom_eval)
trainer.train()
```
2. Stacktrace:
```
Traceback (most recent call last):
File "/home/mbugert/.../problem.py", line 42, in <module>
trainer.train()
File "/home/mbugert/.local/bin/pyenv/versions/huggingface-bug-3.9.4/lib/python3.9/site-packages/transformers/trainer.py", line 1325, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/mbugert/.local/bin/pyenv/versions/huggingface-bug-3.9.4/lib/python3.9/site-packages/transformers/trainer.py", line 1426, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/home/mbugert/.local/bin/pyenv/versions/huggingface-bug-3.9.4/lib/python3.9/site-packages/transformers/trainer.py", line 2024, in evaluate
output = eval_loop(
File "/home/mbugert/.local/bin/pyenv/versions/huggingface-bug-3.9.4/lib/python3.9/site-packages/transformers/trainer.py", line 2245, in evaluation_loop
num_samples = eval_dataset.num_examples
File "/home/mbugert/.local/bin/pyenv/versions/huggingface-bug-3.9.4/lib/python3.9/site-packages/torch/utils/data/dataset.py", line 163, in __getattr__
raise AttributeError
AttributeError
```
## Expected behavior
Evaluation succeeds.
## Possible cause
I looked into the problem a bit myself but got stuck:
* The same code runs fine with transformers 4.6.1 and pytorch 1.8.1, so it's likely that the issue is related to the `Dataset` typing changes introduced in pytorch 1.9.0.
* The problem arises here: https://github.com/huggingface/transformers/blob/e52288a140489b486a7a436f8d25d7dea66d0a62/src/transformers/trainer.py#L2242-L2247
`isinstance(eval_dataset, IterableDatasetShard)` returns `True` despite the fact that training isn't distributed and `eval_dataset` is of type `CustomDataset`.
* Debugging revealed that the `isinstance` call leads to `typing._ProtocolMeta.__instancecheck__`, where a structural runtime typecheck is performed. It turns out `True` because `CustomDataset` has all the attributes and methods which `IterableDatasetShard` has. Hence, if one comments out `set_epoch` in `CustomDataset` (or adds a `foo()` method to `IterableDatasetShard`), the code runs through. A tiny standalone illustration of this behavior is sketched below.
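For example (the class names here are made up for illustration; this is not the actual torch/transformers code):
```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Shardable(Protocol):
    def __iter__(self): ...
    def __len__(self): ...
    def set_epoch(self, epoch: int): ...

class MyDataset:  # does not inherit from Shardable at all
    def __iter__(self):
        yield from range(2)

    def __len__(self):
        return 2

    def set_epoch(self, epoch: int):
        pass

# True purely because the method names match (structural check),
# which is the same kind of surprise as the IterableDatasetShard case above.
print(isinstance(MyDataset(), Shardable))  # True
```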
At this point I wasn't sure anymore if this is a bug in transformers, a bug in pytorch or just user error because I didn't understand how `set_epoch` is supposed to be defined. Please let me know what you think. :smile: | 07-02-2021 07:11:37 | 07-02-2021 07:11:37 | Let us know if the PR above didn't solve your problem!<|||||>It works, thanks for fixing it swiftly. :+1: |
transformers | 12,478 | closed | Mismatch between tokenizer and model in pipeline | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-4.18.0-25-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Nope
- Using distributed or parallel set-up in script?: Nope
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): Any models with `pipeline`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
```python
In [14]: p = pipeline("sentiment-analysis", tokenizer='cardiffnlp/twitter-roberta-base-sentiment') # only tokenizer is provided
In [15]: p.tokenizer # roberta as provided
Out[15]: PreTrainedTokenizerFast(name_or_path='cardiffnlp/twitter-roberta-base-sentiment', vocab_size=50265, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=False)})
In [21]: p.model.config.model_type # falls back to hard coded default model
Out[21]: 'distilbert'
In [22]: p("What a lovely day") # does not work
Out[22]: [{'label': 'NEGATIVE', 'score': 0.8819105625152588}]
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The tokenizer and model should be compatible regardless of how arguments to pipeline are given.
I think the if statements in the [`pipeline` function](https://github.com/huggingface/transformers/blob/2d1d92181a0f739b6817a74401c51862d28bb409/src/transformers/pipelines/__init__.py#L265) should be something like below to handle all the cases.
```python
if model is not None and tokenizer is not None:
# case 1: use default
elif model is None and tokenizer is None:
# case 2: model should follow tokenizer
elif model is not None and tokenizer is None:
# case 3: tokenizer should follow model
elif model is None and tokenizer is not None:
# case 4: maybe assert if the two are compatible?
```
In addition, although the [current code](https://github.com/huggingface/transformers/blob/2d1d92181a0f739b6817a74401c51862d28bb409/src/transformers/pipelines/__init__.py#L426) complains that we cannot infer tokenizer when model is given as a `PreTrainedModel` (one scenario under case 3), I think it is possible through `AutoTokenizer.from_pretrained(model.config._name_or_path)` as `_name_or_path` is `'bert-base-cased', for example.
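A rough sketch of what I mean (assuming the model was loaded from a hub identifier that also hosts a tokenizer; `_name_or_path` is a private attribute, so this is only an illustration):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment")
# _name_or_path remembers the identifier the model was loaded from,
# so the matching tokenizer can usually be resolved from it:
tokenizer = AutoTokenizer.from_pretrained(model.config._name_or_path)

p = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(p("What a lovely day"))
```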
Let me know how you think! I would be happy to submit a PR if consensus is reached :) | 07-02-2021 05:56:44 | 07-02-2021 05:56:44 | It's not possible to just mix and match tokenizers with models. The `sentiment-analysis` pipeline uses the `DistilBertForSequenceClassification` model by default. However, you are providing it with a RoBERTa-based tokenizer. DistilBERT does make use of WordPiece tokenization, whereas RoBERTa-like models make use of a BPE (Byte-Pair Encoding) tokenizer.
What's a good use case of only providing a tokenizer to a pipeline, but not a model?<|||||>Thanks @NielsRogge for a quick reply.
I totally agree with you that no one would want to mix and match, and it is not even possible in many cases as you pointed out.
That's why I suggested having the model follow the compatible tokenizer, or vice versa.
Or at least, I think raising an error or printing a warning message would be nice for users who could only provide tokenizer by mistake, since for now it leads to a unwanted results without any warning. (On a personal note, a warning message from transformers library was a huge time saver for me :) )<|||||>I think we could indeed raise an error/better warning if only a tokenizer is provided. When the model is provided, the tokenizer is selected automatically from that ID, which I agree is a bit weird as it doesn't work the other way.
I think erroring out when the `tokenizer` is especially specified but not the model would be nice to prevent unseen errors from happening. Is there a use-case I'm not seeing @Narsil?<|||||>I don't think so, even then, specifying forcibly model + tokenizer should work (and crash later if not compatible) as it becomes a user responsability IMO.
So raising error when specifying tokenizer and NOT model seems the best here.<|||||>I see that many agree to raise an error. So maybe add something like below at [the beginning](https://github.com/huggingface/transformers/blob/2d1d92181a0f739b6817a74401c51862d28bb409/src/transformers/pipelines/__init__.py#L382) of the pipeline function?
```python
if model is None and tokenizer is not None:
raise Exception(
"Impossible to instantiate a pipeline with tokenizer specified but not the model."
"Please provide a PreTrainedModel class or a path/identifier to a pretrained model when providing tokenizer."
)
```
Plus, how about allowing selecting tokenizer from the model that is provided as PreTrainedModel, which is [not supported for the moment](https://github.com/huggingface/transformers/blob/2d1d92181a0f739b6817a74401c51862d28bb409/src/transformers/pipelines/__init__.py#L426)? Perhaps with something like this?
```python
# else:
# # Impossible to guess what is the right tokenizer here
# raise Exception(
# "Impossible to guess which tokenizer to use. "
# "Please provide a PreTrainedTokenizer class or a path/identifier to a pretrained tokenizer."
# )
elif isinstance(model, PreTrainedModel):
tokenizer = model.config._name_or_path
```
(I found that `model.config._name_or_path` is set [inside `PreTrainedModel.from_pretrained`](https://github.com/huggingface/transformers/blob/2d1d92181a0f739b6817a74401c51862d28bb409/src/transformers/modeling_utils.py#L1269), which I believe every model passes through)
<|||||>I agree with your first proposal! Do you want to open a PR?
For your second proposal, how would that differ from the line above:
```
if isinstance(model_name, str):
tokenizer = model_name
```
?<|||||>checking `model_name` if it's str does not cover the following case where a model is passed as a `PreTrainedModel`.
```python
In [4]: model = AutoModel.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment")
In [5]: p = pipeline(model=model, task="sentiment-analysis")
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-7-57dfcbcb4261> in <module>
----> 1 p = pipeline(model=model, task="sentiment-analysis")
~/anaconda3/lib/python3.8/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs)
437 else:
438 # Impossible to guess what is the right tokenizer here
--> 439 raise Exception(
440 "Impossible to guess which tokenizer to use. "
441 "Please provide a PreTrainedTokenizer class or a path/identifier to a pretrained tokenizer."
Exception: Impossible to guess which tokenizer to use. Please provide a PreTrainedTokenizer class or a path/identifier to a pretrained tokenizer.
```
I think the second proposal makes the pipeline guess the correct tokenizer when `model` is provided as `PreTrainedModel` as well.
I will open a PR as soon as we settle on the second proposal :)<|||||>For your second proposal, I actually think the current error is pretty good and point users to a valid solution.
Automatically deriving the tokenizer from model.config._name_or_path seems a bit "too much magic" to me :
- Either, you just want magic and pass strings, or you want to be in full control and pass real objects.
- I don't think we can rely on `_name_or_path` to be the actual desired valid path for the tokenizer. The actual object might have been modified before the call, making the path obsolete (you can also have a directory without any tokenizer, imaging AutoModel.from_pretrained(....), model.save_pretrained('mydirectory'))
- The fact that the variable is marked as "private" leads me to think that pipeline shouldn't rely on it.
Just a personal opinion. In any case the failing error should be as descriptive as possible.
Because we now have audio and image pipelines, the feature_extractor also needs the same logic regardless of the chosen solution IMO.
<|||||>You point makes sense to me -- too much magic can complicate the issue.
I opened a PR #12548 that covers the first proposal. I tried to be as descriptive as possible, please take a look :) |
transformers | 12,477 | closed | [Deepspeed] adapt multiple models, add zero_to_fp32 tests | Massive fixing and testing of different models under Deepspeed, followed up by a recovery of full fp32 weights and various other related tweaks.
zero_to_fp32 isn't currently tested on the Deepspeed side (as I didn't write any tests there), so it is tested here, and while at it I'm starting to include more and more models instead of waiting for users to discover that this or that doesn't work.
This PR:
- [x] adds multiple model tests which include a short training and zero_to_fp32.py fp32 weights recovery from the shell
- [x] docs update to the new `zero_to_fp32.py` syntax and new API
- [x] adds a test for `load_state_dict_from_zero_checkpoint` (added in deepspeed)
- [x] fixes multiple models to work with deepspeed
- [x] removed some deepspeed cruft from `modeling_wav2vec2.py` as it's no longer needed
TODO:
Dependencies to resolve:
- [x] https://github.com/microsoft/DeepSpeed/pull/1181
- [x] https://github.com/microsoft/DeepSpeed/pull/1202 - which will also fix https://github.com/huggingface/transformers/issues/12403 marian has the same issue (the test is already here).
- [x] new deepspeed release after the above PRs are merged (probably 0.4.3)
Fixes: https://github.com/huggingface/transformers/issues/12403
@sgugger, @LysandreJik
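(For context, the end-user flow that the `load_state_dict_from_zero_checkpoint` test exercises looks roughly like this; it's only a sketch, the exact API is defined by the deepspeed PRs listed above, and the checkpoint path is hypothetical:)
```python
from transformers import AutoModelForSeq2SeqLM
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
checkpoint_dir = "output_dir/checkpoint-100"  # hypothetical dir written by a ZeRO run
# consolidate the sharded ZeRO checkpoint back into full fp32 weights in one call
model = load_state_dict_from_zero_checkpoint(model, checkpoint_dir)
```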
| 07-02-2021 04:54:53 | 07-02-2021 04:54:53 | |
transformers | 12,476 | closed | How to fine-tune a model with my custom tokenizer? | 1. I want to train a tokenizer with my own datasets.
2. I do not have enough data or resources to train a pretrained model from scratch.
pipeline:
1. train a tokenizer like [tokenizer_training](https://github.com/huggingface/notebooks/blob/master/examples/tokenizer_training.ipynb)
2. do something with current pretrained model, get a new pretrained model
3. fine-tune the pretrained model with my domain datasets, get my domain pretrained model
4. fine-tune the domain pretrained model for my downstream task.
and what should I do in the 2nd step? | 07-02-2021 03:02:02 | 07-02-2021 03:02:02 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,475 | closed | BartForConditionalGeneration: `decoder_input_ids` should not be computed if `decoder_inputs_embeds` is set | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten, @patil-suraj
## Information
- Model I am using: Bart. The specific class is BartForConditionalGeneration.
- Summary: Cannot provide `decoder_inputs_embeds` to `BartForConditionalGeneration.forward` if `labels` are provided.
In `BartForConditionalGeneration.forward`, there is the following code on line 1285 of modelling_bart.py in the current version (4.8.2):
```python
if labels is not None:
    if decoder_input_ids is None:
        decoder_input_ids = shift_tokens_right(
            labels, self.config.pad_token_id, self.config.decoder_start_token_id
        )
```
So, if `labels` are provided, `decoder_input_ids` are set to the `labels` shifted to the right. This is problematic: if `decoder_inputs_embeds` is also set, the call to `self.model`, which eventually gets to `BartDecoder.forward`, will raise an error. In particular, the snippet that does this in `BartDecoder.forward` is (line 967 of modeling_bart.py in version 4.8.2):
```python
if input_ids is not None and inputs_embeds is not None:
    raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
```
The fix is quite simple, similar to what is there already in `BartModel.forward` (line 1152 in version 4.8.2). Mainly, we should not compute `decoder_input_ids` if `decoder_inputs_embeds` is provided. That is, the first snippet above in `BartForConditionalGeneration.forward` should be:
```python
if labels is not None:
    if decoder_input_ids is None and decoder_inputs_embeds is None:  # <- this line changed
        decoder_input_ids = shift_tokens_right(
            labels, self.config.pad_token_id, self.config.decoder_start_token_id
        )
```
## To reproduce
Steps to reproduce the behavior:
```python
import torch
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
input_ids = torch.tensor([[1, 2, 3]])
labels = torch.tensor([[1, 2, 3]])
decoder_input_ids = torch.tensor([[1, 2, 3]])
decoder_inputs_embeds = model.get_input_embeddings()(decoder_input_ids)
model(input_ids, decoder_inputs_embeds=decoder_inputs_embeds, labels=labels)
```
## Expected behavior
`decoder_input_ids` should not be computed if `decoder_inputs_embeds` and `labels` are provided.
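In the meantime, a possible workaround (just a sketch on my side, assuming `decoder_inputs_embeds` already corresponds to the shifted target sequence): skip `labels` and compute the loss outside `forward`:
```python
import torch
from torch.nn import CrossEntropyLoss
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
input_ids = torch.tensor([[1, 2, 3]])
labels = torch.tensor([[1, 2, 3]])
decoder_inputs_embeds = model.get_input_embeddings()(torch.tensor([[1, 2, 3]]))

outputs = model(input_ids, decoder_inputs_embeds=decoder_inputs_embeds)  # no labels passed
loss = CrossEntropyLoss()(outputs.logits.view(-1, model.config.vocab_size), labels.view(-1))
```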
| 07-01-2021 22:06:34 | 07-01-2021 22:06:34 | same problem here, my solution is removing argument `labels`, but I have to calculate loss outside the forward function.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,474 | closed | [Pegasus][tokenizer] pegasus tokenizer doesn't have any BOS token? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0.dev0
- Platform: Ubuntu
- Python version: 3.8
- PyTorch version (GPU?): 1.7 (yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
@patrickvonplaten, @patil-suraj, @LysandreJik
## Information
Model I am using (PEGASUS ...):
This is not specifically a bug in transformers, but I'm wondering if I can comment and ask my question in this section as there was no other section I could relate this to.
I'm trying to use Pegasus's encoder for extractive summarization purposes, so that the system can identify top sentences. To do so, I want to implement an approach like BERTSUM, where [CLS] tokens are added to the beginning of each sentence.
Talking of pegasus, I noticed there's no so-called BOS (similar to CLS head in BERT) token which denotes the start of each sentence (below) –although it's available in BART's tokenizer.
https://github.com/huggingface/transformers/blob/master/src/transformers/models/pegasus/tokenization_pegasus.py#L102
```
...
def __init__(
self,
vocab_file,
pad_token="<pad>",
eos_token="</s>",
unk_token="<unk>",
mask_token="<mask_2>",
mask_token_sent="<mask_1>",
additional_special_tokens=None,
offset=103, # entries 2 - 104 are only used for pretraining
sp_model_kwargs: Optional[Dict[str, Any]] = None,
**kwargs
) -> None:
...
```
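The signature above does expose `additional_special_tokens`; for concreteness, here is a minimal sketch of how I imagine registering a sentence-marker token (the `<sent>` name is just a placeholder of mine, not anything Pegasus defines):
```python
from transformers import PegasusTokenizer, PegasusModel

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-large")
model = PegasusModel.from_pretrained("google/pegasus-large")

# register a marker to prepend to every sentence, then grow the embedding matrix
tokenizer.add_special_tokens({"additional_special_tokens": ["<sent>"]})
model.resize_token_embeddings(len(tokenizer))

text = "<sent> first sentence of the document. <sent> second sentence of the document."
inputs = tokenizer(text, return_tensors="pt")
```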
Given `additional_special_tokens`, is that approach possible, and is a token added this way expected to learn a representation of the tokens (i.e., the sentence) that follow it? Thanks! | 07-01-2021 19:39:37 | 07-01-2021 19:39:37 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,473 | closed | Release `utils/style_doc.py` as a python package | Hi,
I like the functionality of your [`utils/style_doc.py`](https://github.com/huggingface/transformers/blob/master/utils/style_doc.py).
Did you ever think about open sourcing it as a python package?
I think other projects might also want to use it.
| 07-01-2021 19:37:30 | 07-01-2021 19:37:30 | @sgugger <|||||>It's very much linked to the way we write documentation in Transformers so it would need a bit of work to become a standalone package that can be used. I have no time to do it personally, but if someone wants to start building it, I'm happy to review things!<|||||>> It's very much linked to the way we write documentation in Transformers so it would need a bit of work to become a standalone package that can be used. I have no time to do it personally, but if someone wants to start building it, I'm happy to review things!
@sgugger ok - cool! So if I created such a standalone package that you and others could use - you would:
- help with review
- switch transformers to that implementation if it works and if I add you as maintainers
Would you do that?<|||||>Definitely!<|||||>> Definitely!
That is nice! Thanks.
> It's very much linked to the way we write documentation in Transformers
@sgugger The docstring formatter looks very "normal". Do you mean the rst formatter when you say "very much linked"?
<|||||>I meant there could be plenty of edge cases I never thought of since we write the Transformers documentation a certain way, sorry if I was unclear. Everything in that file is pretty standard otherwise.<|||||>Hey @sgugger and @LysandreJik
We created the new repository and added you with the "write" role: https://github.com/telekom/style-doc
We can invite more people from HF if you want.
The license remains Apache 2.0, with HF keeping the copyright and me (us) added.
Credits are given in the README file.
I hope everything looks good to you.
A review is very welcome. :+1:
Unfortunately, I also found the first small problem right away: https://github.com/telekom/style-doc/issues/10
More README text and a `CONTRIBUTING.md` will be added later.<|||||>Everything is updated now and ready for a first release: https://github.com/telekom/style-doc<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>So since the project is out at pypi now: Do you (HF) want to migrate to that tool?
You have commit rights on the project and can maintain it together with me.
First bug reports (and improvements) are already coming.
@sgugger @LysandreJik
I can provide a PR if wanted.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,472 | closed | DistilBERT - Operation reordering for compatibility with TensorRT | This change avoids an `expand_as` operation on a boolean tensor. Improves compatibility with TensorRT 8.0.
@LysandreJik
@mfuntowicz
| 07-01-2021 19:07:11 | 07-01-2021 19:07:11 | Many thanks @novatig 🙏🏻 💪🏻
As part of Infinity, this PR enables exporting DistilBERT based models to TensorRT plans for efficient inference as explained by @novatig <|||||>Test failure unrelated and fixed on `master`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,471 | closed | Rework notebooks and move them to the Notebooks repo | This PR updates the notebooks table with:
- the documentation notebooks
- the new notebooks to train a tokenizer and a LM from scratch
It also moves the notebooks present in this repo to https://github.com/huggingface/notebooks so that they are all in the same place. | 07-01-2021 17:54:06 | 07-01-2021 17:54:06 | |
transformers | 12,470 | closed | [Flax] Dataset streaming example | # What does this PR do?
This PR adds an example of how to use dataset streaming.
@lhoestq - I would be super happy if you could take a look! I've left some comments that should help understand my thought process. I'd be very happy about some Feedback :-)
The scripts runs as expected at the moment - after your feedback, I'll make it nicer (add better defaults) & add the eval coed as well :-) | 07-01-2021 17:53:09 | 07-01-2021 17:53:09 | The training speed of this script compared to `run_mlm_flax` is sadly reduced by ~50%. @lhoestq - I think the "batchify" function that is called before every training step is probably the reason for this. |
transformers | 12,469 | closed | NER example for Tensorflow | 07-01-2021 17:16:38 | 07-01-2021 17:16:38 | ||
transformers | 12,468 | closed | Add guide on how to build demos for the Flax sprint | Add a guide on how to build a demo for the Flax sprint. This guide will be also updated early next week, but I would like to submit this version on Friday.
Note: this will be extended more once the evaluation is a bit more defined and with the launch of S. | 07-01-2021 16:30:55 | 07-01-2021 16:30:55 | Some TODO ideas (for a later iteration)
* Add small guide on Streamlit
* Add small guide on Gradio
* Add small guide on how to use S (or a video)
* Explain why the importance of a demo<|||||>I'll merge this today. I'll do a follow up PR on Monday extending the demo section. |
transformers | 12,467 | closed | Import check_inits handling of duplicate definitions. | # Improves the handling of duplicate definitions for `utils/check_inits.py`
Currently, if there are duplicate definitions of import structure or type hints, the model may or may not fail depending on whether the import_structure and TYPE_HINTS are duplicated in the same way. If it fails, it will show the wrong error message.
This ensures this test always fails and displays the appropriate error message.
This is particularly useful if things go wrong while merging master into a model PR after a long time.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you write any new necessary tests? -> N/A.
## Who can review?
@sgugger since he first wrote the tool | 07-01-2021 16:22:26 | 07-01-2021 16:22:26 | |
transformers | 12,466 | closed | fixed typo in flax-projects readme | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 07-01-2021 14:39:21 | 07-01-2021 14:39:21 | |
transformers | 12,465 | closed | Added talk details | final confirmed changes to DeepMind talk | 07-01-2021 14:05:22 | 07-01-2021 14:05:22 | |
transformers | 12,464 | closed | Fix training_args.py barrier for torch_xla | torch_xla currently has its own synchronization primitives, so use
xm.rendezvous(tag) instead. | 07-01-2021 14:03:01 | 07-01-2021 14:03:01 | |
transformers | 12,463 | closed | Add TPU README | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds instructions on how to access a TPU.
@patil-suraj - can you check whether it works as expected? Can you follow the steps of the PR? I've sent you two mails to @huggingface.co
@avital - could you also review? :-)
| 07-01-2021 13:30:35 | 07-01-2021 13:30:35 | LGTM!
One thing -- In the email you sent to others, or in this README, or both we should say that "please only SSH into machines where you're collaborating on a project. If there are any concerns about access, we may look at the audit logs to see who accessed which TPUs" or something like that. |
transformers | 12,462 | closed | Skip ProphetNet test | cc @patrickvonplaten | 07-01-2021 13:30:34 | 07-01-2021 13:30:34 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,461 | closed | Fixing bug with param count without embeddings | The model param number count utility (`model.num_parameters`) had an `exclude_embeddings` flag that did not actually exclude embedding counts. This PR fixes this bug so that the flag behaves properly. | 07-01-2021 13:29:25 | 07-01-2021 13:29:25 | |
transformers | 12,460 | open | Extractive summarization pipeline | # 🚀 Feature request
An extractive summarization pipeline similar to the one for abstractive summarization.
A central place for researchers to upload new models for others to use, without having to run the code from various git repo's.
Currently, extractive summarization is the only safe choice for producing textual summaries in practice. Therefore, it seems relevant for Hugging Face to include a pipeline for this task.
This has previously been brought up here: https://github.com/huggingface/transformers/issues/4332, but the issue remains closed which is unfortunate, as I think it would be a great feature.
## Motivation
The current abstractive summarization pipeline is certainly very useful, and a great feature for all working on NLG tasks.
However, given the significant problems with factual consistency in abstractive summaries (see, for example: https://arxiv.org/abs/2104.13346, https://arxiv.org/abs/2104.14839), abstractive summaries are still very risky to use in practice, as even state-of-the-art models are riddled with factual errors.
Any thoughts on this? :)
| 07-01-2021 13:02:14 | 07-01-2021 13:02:14 | I'd be down to work on this!<|||||>If we have models on the Hub that are trained to perform this, then it would be fun to have support for it.
WDYT @Narsil @patil-suraj ?<|||||>Seems like a good idea to me.
Since it's performing the same task from a user's perspective and models can only do 1 type of summary, I think we should aim to keep a single pipeline + `task` for this and decide which one to use based on `AutoModelForXXX` class.
In the end, users then don't need to know the difference between the two; only the model's performance will be the judge, and they don't have to understand the lower-level difference. They can also switch from one to the other with extremely low effort.
We already have an example for doing this in the `AutomaticSpeechRecognitionPipeline`.<|||||>@LysandreJik I looked at the hub and also the existing model definitions and I couldn't find much related to extractive summarization. Before we have this pipeline, don't we need things like `AutoModelForExtractiveSummarization`?
@Narsil I understand that we can rely on `AutoModelForXXX` for selecting the right model type for a given pretrained model. But can we have different preprocessor and post processor in the pipeline based on some sort of identifier in the pretrained model? Extractive and abstractive summarization has completely different preprocessor and post processor.<|||||>It's the same for `ASRPipeline`. the `if` can be located both in `preprocessing` and `postprocessing` and in `forward` without any issues.
There are actually ways we could imagine splitting things into subclasses, but that makes hacking slightly harder (when users send their own custom model) because that's a new layer they have to figure out (which subclass applies to me, and how does it work?).
That's why it's not done atm. But if the code becomes a very silly list of `if` in every method, then we could definitely revisit that choice.
The best way forward imo is to have a starting implementation with actual code so we can discuss further (maybe some preprocessing can actually be shared, or at least argument names aligned, and so on).
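For what it's worth, a very rough sketch of what the dispatch could key on (the helper and the architecture sets here are hypothetical; only `AutoConfig` and its `architectures` field are real):
```python
from transformers import AutoConfig

ABSTRACTIVE_ARCHS = {"BartForConditionalGeneration", "PegasusForConditionalGeneration", "T5ForConditionalGeneration"}
EXTRACTIVE_ARCHS = {"BertForTokenClassification"}  # e.g. a sentence-tagging head, purely illustrative

def summarization_mode(model_name: str) -> str:
    archs = set(AutoConfig.from_pretrained(model_name).architectures or [])
    if archs & ABSTRACTIVE_ARCHS:
        return "abstractive"
    if archs & EXTRACTIVE_ARCHS:
        return "extractive"
    return "unknown"

print(summarization_mode("facebook/bart-large-cnn"))  # abstractive
```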
|
transformers | 12,459 | closed | Fix to keep up with the changes made in fairseq's RobertaModel | # What does this PR do?
- This PR fixes broken model conversion script.
- The current script seems to be outdated with respect to the latest release and the main branch of fairseq.
- `RobertaModel.from_pretrained` now returns RobertaHubInterface, which seems to be different from the assumption of the current implementation.
- cf. https://github.com/pytorch/fairseq/blob/v0.10.2/fairseq/models/roberta/model.py#L256
- This PR also makes this script use the positional encoding size from the model, not the hardcoded value.
Current implementation causes errors like:
```
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py", line 177, in <module>
args.roberta_checkpoint_path, args.pytorch_dump_folder_path, args.classification_head
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py", line 59, in convert_roberta_checkpoint_to_pytorch
hidden_size=roberta.args.encoder_embed_dim,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1131, in __getattr__
type(self).__name__, name))
AttributeError: 'RobertaHubInterface' object has no attribute 'args'
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- There is no mention of this script in the documentation.
- [ ] Did you write any new necessary tests?
- There is no test of this script in the repository.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Ping to the contributors of this script:
- @LysandreJik
- @sgugger
| 07-01-2021 12:57:49 | 07-01-2021 12:57:49 | I just ran the script as it is currently with the lastest pypi available fairseq and it converted the model successfully:
```
torch.Size([1, 11, 50265]) torch.Size([1, 11, 50265])
max_absolute_diff = 0.0
Do both models output the same tensors? 🔥
Saving model to here
Configuration saved in here/config.json
Model weights saved in here/pytorch_model.bin
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,458 | closed | [Wav2Vec2, Hubert] Fix ctc loss test | # What does this PR do?
This PR fixes the `ctc_loss` test. The problem is that the PyTorch ctc_loss implementation https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html seems to use too many approximations to ensure deterministic behavior for a test. So in this PR I make the test much less aggressive by simply checking that the loss can be computed.
We've trained 250 models during the Wav2Vec2 sprint using this function so I think we can be quite certain that it works as expected.
Not an ideal solution, but I don't really see how to make this "sum == mean" test really work since it's so flaky | 07-01-2021 11:35:54 | 07-01-2021 11:35:54 | |
transformers | 12,457 | closed | gpt2 causal mask to skip learning context input? (beginner question) | Suppose we use gpt2 to train a text generation with context or translation with input sequence.
[google translate<sep> google Übersetzer].
Can I ask how we can change the causal mask so that training only covers the token generation after the context input (\<en-de\>)? Like,
[google translate\<en-de\>] to [google]
[google translate\<en-de\> google] to [Übersetzer]
instead of
[google] to [google translate]
I saw we currently add the causal mask here.
```
causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length].bool()
```
https://github.com/huggingface/transformers/blob/b655f16d4eb405d286821bdb68b46b8a989f9c04/src/transformers/models/gpt2/modeling_gpt2.py#L186
Note using the default DataCollatorForLanguageModeling can work for the above task. I am thinking how we can let the training focus on text generation after the context input.
Can we use the special mask tokens to achieve it? I am a bit doubtful on it. Since it is used for mlm from the code
https://github.com/huggingface/transformers/blob/b655f16d4eb405d286821bdb68b46b8a989f9c04/src/transformers/data/data_collator.py#L350
```
# If special token mask has been preprocessed, pop it from the dict.
special_tokens_mask = batch.pop("special_tokens_mask", None)
if self.mlm:
batch["input_ids"], batch["labels"] = self.mask_tokens(
batch["input_ids"], special_tokens_mask=special_tokens_mask
)
else:
labels = batch["input_ids"].clone()
if self.tokenizer.pad_token_id is not None:
labels[labels == self.tokenizer.pad_token_id] = -100
batch["labels"] = labels
return batch
```
Or should we mask all context input tokens (before \<en-de\>) as -100 in the labels so that they would be ignored in the loss function below?
```
# Shift so that tokens < n predict n
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
loss_fct = CrossEntropyLoss()
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```
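For reference, a minimal sketch of what I mean by masking the context (my own illustration; `sep_token_id` would be the id of the `<en-de>` token after adding it to the tokenizer, and the ids below are made up):
```python
import torch

def mask_context_labels(input_ids: torch.Tensor, sep_token_id: int) -> torch.Tensor:
    """Copy input_ids into labels, but set everything up to and including
    the separator to -100 so only target-side tokens contribute to the loss."""
    labels = input_ids.clone()
    for row, ids in enumerate(input_ids):
        sep_positions = (ids == sep_token_id).nonzero(as_tuple=True)[0]
        if len(sep_positions) > 0:
            labels[row, : sep_positions[0] + 1] = -100
    return labels

batch = torch.tensor([[11, 12, 13, 50257, 21, 22, 23]])  # 50257 standing in for <en-de>
print(mask_context_labels(batch, sep_token_id=50257))
# tensor([[-100, -100, -100, -100,   21,   22,   23]])
```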
Thanks a lot. cc @patrickvonplaten | 07-01-2021 11:32:24 | 07-01-2021 11:32:24 | |
transformers | 12,456 | closed | convert_graph_to_onnx.py failing to run on Wav2Vec2 models | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## Information
Model I am using (Bert, XLNet ...): facebook/wav2vec2-large-960h
The problem arises when using:
* [X ] the official example scripts: (give details below)
I am trying the next command:
python transformers/src/transformers/convert_graph_to_onnx.py --framework pt --quantize --model facebook/wav2vec2-large-960h wav2vec2_convert.onnx
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
run:
python transformers/src/transformers/convert_graph_to_onnx.py --framework pt --quantize --model facebook/wav2vec2-large-960h wav2vec2_convert.onnx
I get the next error:
python transformers/src/transformers/convert_graph_to_onnx.py --framework pt --quantize --model facebook/wav2vec2-large-960h wav2vec2_convert.onnx
/home/ptuser/anaconda3/envs/gong_env/lib/python3.6/site-packages/torchaudio/backend/utils.py:54: UserWarning: "sox" backend is being deprecated. The default backend will be changed to "sox_io" backend in 0.8.0 and "sox" backend will be removed in 0.9.0. Please migrate to "sox_io" backend. Please refer to https://github.com/pytorch/audio/issues/903 for the detail.
'"sox" backend is being deprecated. '
====== Converting model to ONNX ======
ONNX opset version set to: 11
Loading pipeline (model: facebook/wav2vec2-large-960h, tokenizer: facebook/wav2vec2-large-960h)
Some weights of the model checkpoint at facebook/wav2vec2-large-960h were not used when initializing Wav2Vec2Model: ['lm_head.bias', 'lm_head.weight']
- This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at facebook/wav2vec2-large-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Error while converting the model: __init__() got an unexpected keyword argument 'feature_extractor'
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 07-01-2021 10:48:30 | 07-01-2021 10:48:30 | Hi, Wav2Vec2 is not supported by the ONNX converter. Please check out the following PR which offers the possibility to convert existing models to ONNX by defining an architecture: https://github.com/huggingface/transformers/pull/11786. You can see the docs (WIP) for that PR [here](https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html).<|||||>Will be happy if Wav2Vec2 will be supported in the future...<|||||>Think someone solved it here: https://github.com/huggingface/transformers/issues/10004<|||||>Thanks Patrick |
transformers | 12,455 | closed | Setting global tokens in BigBirdModel | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.8.0-55-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@vasudevgupta7 , @patrickvonplaten, @patil-suraj
## Information
Model I am using: BigBirdModel
Hi, I cannot find the way to set the global tokens for BigBirdModel as in BigBird-ITC. I hope I only missed something. The official paper says (Appendix E.2, paragraph "Natural Questions") "For BIGBIRD-ITC, we make the first 128 tokens as global.".
For Longformer, for example, setting global tokens to attend to is possible via the `global_attention_mask` model parameter. However, I found no option to enforce global attention on certain tokens for BigBird. This is crucial for e.g. all QA tasks.
Is the explicit setting of global tokens supported?
| 07-01-2021 10:34:22 | 07-01-2021 10:34:22 | So after closely inspecting the code I ran into following comment in method `bigbird_block_sparse_attention`.
There I found
```
# BigBird block-sparse attention as suggested in paper
# ITC:
# global tokens: 2 x block_size
# window tokens: 3 x block_size
# random tokens: num_rand_tokens x block_size
# Note:
# 1) Currently, ETC is not supported.
# 2) Window size is fixed to 3 blocks & it can be changed only by
# changing `block_size`.
# 3) Number of global blocks are fixed (2 blocks here) & global tokens can be
# controlled only by `block_size`.
```
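So with ITC the only lever is `block_size`; a rough sketch of what that implies in practice (values purely illustrative):
```python
from transformers import BigBirdConfig, BigBirdModel

# in ITC mode, global tokens = 2 * block_size, so a larger block_size is currently
# the only way to get more global tokens (here 2 * 128 = 256 of them)
config = BigBirdConfig(attention_type="block_sparse", block_size=128)
model = BigBirdModel(config)
```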
I suppose that answers my 'issue'. |
transformers | 12,454 | closed | (WIP) Add FNet with flax template | # What does this PR do?
This is a demo for #12441
Fixes #12411
| 07-01-2021 10:08:23 | 07-01-2021 10:08:23 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,453 | closed | Update CANINE test | # What does this PR do?
Tiny fix in the tests of CANINE, namely replace nielsr by the Google namespace.
@LysandreJik | 07-01-2021 07:34:58 | 07-01-2021 07:34:58 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale |
transformers | 12,452 | closed | Comment fast GPU TF tests | cc @stas00 @sgugger | 07-01-2021 07:19:05 | 07-01-2021 07:19:05 | |
transformers | 12,451 | closed | Wiki content part : XLM-RoBERTa, "xlm-roberta-base" | In the wiki page for XLM-R
https://huggingface.co/transformers/model_doc/xlmroberta.html
the model name should be 'xlm-roberta-base' instead of 'roberta-base'
@sgugger | 07-01-2021 06:58:36 | 07-01-2021 06:58:36 | Yes, that's because those models are direct subclasses of Roberta, so they have the same documentation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,450 | closed | Finetuned model generates incomprehensible text when used while in memory but works fine when loaded via saved checkpoints. | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.0-1051-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @LysandreJik @stas00
Models:
- gpt2-M, L, XL
Model I am using (Bert, XLNet ...): - GPT2-M, L, XL
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] Autoregressive text generation (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Set the training arguments via `TrainingArguments`
2. Load the pretrained model via `GPT2LMHeadModel.from_pretrained('gpt2')`
3. Initialize the `Trainer` with 'TrainingArguments`
4. run `Trainer.train()`
5. After training has been completed, use the in-memory model to generate texts.
# Concerned snippets
```python
self.training_args = TrainingArguments(
"gpt2_model", deepspeed = self.ds_config,
do_train = True,
per_device_train_batch_size = 2,
num_train_epochs = 10,
logging_strategy = 'epoch',
save_strategy = 'epoch',
fp16=True
)
self.trainer = Trainer(
model=self.model,
args=self.training_args,
train_dataset=self.dataset["train"],
)
self.trainer.train()
def generate(self, prompt):
ids = self.tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
max_length = ids.shape[1] + 400
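        # note (see the suggestion in the comments below): switching the model to evaluation
        # mode first, e.g. `self.trainer.model.eval()`, may be what is missing here (assumption)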
gen_tokens = self.trainer.model.generate(
ids,
do_sample=True,
min_length=max_length,
max_length=max_length,
temperature=0.9,
use_cache=True,
)
gen_text = self.tokenizer.batch_decode(gen_tokens)[0]
return gen_text
```
## Expected behavior
The in-memory model obtained after training should generate text on the dataset it was finetuned on, but it generates incomprehensible text.
# Generated Output

| 07-01-2021 06:00:08 | 07-01-2021 06:00:08 | Pinging @sgugger as using the `Trainer`<|||||>I think you forgot to put the model in evaluation mode maybe?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,449 | closed | Pass `model_kwargs` when loading a model in `pipeline()` | # What does this PR do?
Fixes #12448
When working on this, I considered adding a test, but none of the pipeline tests even use the `model_kwargs` parameter. I could add a test to one of the `test_pipelines_*.py` files to ensure that a local cache is used as a way of testing, but I'm not sure if that is the best option.
The parameter is indirectly used in `test_onnx.py`, but there is no verification that the parameter actually works properly. (Given that those tests pass without it working, maybe those tests might not need to use it, anyway.)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
pipelines: @LysandreJik | 07-01-2021 02:31:32 | 07-01-2021 02:31:32 | I've written a test to ensure the `model_kwargs` parameter is being properly passed when loading a model.<|||||>Sorry about the failing style checks! I thought I ran them, but I guess not. They're hopefully good now. |
transformers | 12,448 | closed | Instantiating a model from `pipeline()` ignores `model_kwargs` parameter | ## Environment info
- `transformers` version: 4.8.1
- Platform: Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
This should be a one-line fix, so I will be submitting a PR shortly.
## Information
Model I am using: `gpt2` (not model-specific issue, though)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Ensure the model cache is already populated with the correct model by running the following code:
```python
from transformers import AutoModelForCausalLM
_ = AutoModelForCausalLM.from_pretrained("gpt2", cache_dir="model_cache")
```
2. Put the following code in `test.py`:
```python
from transformers import pipeline
_ = pipeline("text-generation", model="gpt2", model_kwargs={"cache_dir": "model_cache"})
```
3. Run `time TRANSFORMERS_OFFLINE=1 python test.py` to force the cache to be hit
4. See that the following exception is returned:
```console
Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False.
Traceback (most recent call last):
File "test.py", line 3, in <module>
_ = pipeline("text-generation", model="gpt2", model_kwargs={"cache_dir": "model_cache"})
File "venv/lib/python3.7/site-packages/transformers/pipelines/__init__.py", line 409, in pipeline
model, model_classes=model_classes, config=config, framework=framework, revision=revision, task=task
File "venv/lib/python3.7/site-packages/transformers/pipelines/base.py", line 136, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "venv/lib/python3.7/site-packages/transformers/utils/dummy_tf_objects.py", line 991, in from_pretrained
requires_backends(cls, ["tf"])
File "venv/lib/python3.7/site-packages/transformers/file_utils.py", line 612, in requires_backends
raise ImportError("".join([BACKENDS_MAPPING[backend][1].format(name) for backend in backends]))
ImportError:
TFGPT2LMHeadModel requires the TensorFlow library but it was not found in your environment. Checkout the instructions on the
installation page: https://www.tensorflow.org/install and follow the ones that match your environment.
```
(I edited the stack trace to remove the parts of the path outside the virtual environment.)
## Expected behavior
There should be no output because the model should be loaded from the cache without issues. | 07-01-2021 02:30:54 | 07-01-2021 02:30:54 | You could do it like so:
```py
from transformers import AutoModelForCausalLM, pipeline, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2", cache_dir="model_cache")
tokenizer = AutoTokenizer.from_pretrained("gpt2", cache_dir="model_cache")
_ = pipeline("text-generation", model=model, tokenizer=tokenizer)
```<|||||>While that's a good temporary workaround (I'm currently using a different one), I was hoping for a longer term solution so [`pipeline()`](https://huggingface.co/transformers/v4.8.0/main_classes/pipelines.html#transformers.pipeline) works as the docs say:
> **model_kwargs** – Additional dictionary of keyword arguments passed along to the model’s `from_pretrained(..., **model_kwargs)` function.
`model_kwargs` actually used to work properly, at least when the `framework` parameter was set, but #12025 broke it. #12449 should fix it, although it doesn't address the issue that #12025 broke this behavior without any tests failing.<|||||>Great, thank you for bringing me up to speed! Indeed, I hadn't realized this was a regression - your PR looks great, let's discuss over there. |
transformers | 12,447 | closed | [Flax community event] How to use hub during training | # What does this PR do?
This PR adds a section on how to use the hub during training. I will talk about this in more detail during my talk tomorrow. | 06-30-2021 18:43:34 | 06-30-2021 18:43:34 | nice writeup @patrickvonplaten |
transformers | 12,446 | closed | [roberta] fix lm_head.decoder.weight ignore_key handling | This PR fixes https://github.com/huggingface/transformers/issues/12426 where `RobertaForMaskedLM` and `RobertaForCausalLM` do:
```
lm_head.decoder.weight = embeddings.word_embeddings.weight
```
and thus `lm_head.decoder.weight` shouldn't be saved or expected to be loaded unless `config.tie_word_embeddings` is `False`.
So this PR:
- adds `_keys_to_ignore_on_save = [r"lm_head.decoder.weight", r"lm_head.decoder.bias"]` - note also added `lm_head.decoder.bias`
- adds `lm_head.decoder.weight` to `_keys_to_ignore_on_load_missing`
- adds a test. For now "private" but may become common down the road if many other models need this feature.
- tweaks `test_save_load_keys_to_ignore_on_save` to be more debug-friendly and not dump the whole `state_dict` - ouch.
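For illustration, a minimal sketch of what the class-level attributes look like after this change (pre-existing entries in `_keys_to_ignore_on_load_missing` stay as they are; this is not the exact diff):
```python
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel

class RobertaForMaskedLM(RobertaPreTrainedModel):
    # these parameters are tied (see above), so they need neither be saved nor expected at load time
    _keys_to_ignore_on_save = [r"lm_head.decoder.weight", r"lm_head.decoder.bias"]
    _keys_to_ignore_on_load_missing = [r"lm_head.decoder.weight", r"lm_head.decoder.bias"]
```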
Fixes: https://github.com/huggingface/transformers/issues/12426
As discussed in the issue, other models may have the same problem. So let's use this model as the "model" (template) for the fix, and then ask the community to help identify and fix other models.
@LysandreJik
| 06-30-2021 18:34:37 | 06-30-2021 18:34:37 | |
transformers | 12,445 | closed | Deit | # 🚀 Feature request
The DeiT model from torchhub is scriptable; however, the pretrained version in the HF repo, "facebook/deit-base-distilled-patch16-224", is the distilled one, which is not scriptable.
This can be tried out using the code below
```
import torch
model = torch.hub.load('facebookresearch/deit:main', 'deit_base_patch16_224', pretrained=True)
ts_model = torch.jit.script(model)
```
vs
```
from transformers import DeiTFeatureExtractor, DeiTForImageClassificationWithTeacher
import torch

feature_extractor = DeiTFeatureExtractor.from_pretrained('facebook/deit-base-distilled-patch16-224')
model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-base-distilled-patch16-224')
ts = torch.jit.script(model)
```
## Motivation
Torchscripted models can boost inference speed, and it would benefit the community to have vision models that are scriptable.
| 06-30-2021 18:11:51 | 06-30-2021 18:11:51 | Are you interested in working on this?
Basically, to make DeiT scriptable, you need to set `test_torchscript` of the model test as seen [here](https://github.com/huggingface/transformers/blob/0d1f67e651220bffef1441fa7589620e426ba958/tests/test_modeling_deit.py#L160) to `True`, then run the tests (running `pytest tests/test_modeling_deit.py` from the root of the repo), and then fix the modeling file (`modeling_deit.py`) to make the failing tests pass.
We could then do the same for `modeling_vit.py`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,444 | closed | Cannot load model saved with AutoModelForMaskedLM.from_pretrained if state_dict = True | Hello,
It seems that setting the `state_dict=True` flag in a model's `.save_pretrained` method breaks the load process for `AutoModelForMaskedLM.from_pretrained`
The following code works
```
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained('distilroberta-base')
model.save_pretrained(
save_directory = './deleteme',
save_config = True,
#state_dict = True,
push_to_hub = False,
)
model = AutoModelForMaskedLM.from_pretrained('./deleteme')
```
However, the following code does not
```
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained('distilroberta-base')
model.save_pretrained(
save_directory = './deleteme',
save_config = True,
state_dict = True,
push_to_hub = False,
)
model = AutoModelForMaskedLM.from_pretrained('./deleteme')
```
with error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-17-3ab36ee4e5d3> in <module>
8 push_to_hub = False,
9 )
---> 10 model = AutoModelForMaskedLM.from_pretrained('./deleteme')
~/anaconda3/envs/pytorch/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
393 if type(config) in cls._model_mapping.keys():
394 model_class = _get_model_class(config, cls._model_mapping)
--> 395 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
396 raise ValueError(
397 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
~/anaconda3/envs/pytorch/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1217 )
1218
-> 1219 model, missing_keys, unexpected_keys, error_msgs = cls._load_state_dict_into_model(
1220 model, state_dict, pretrained_model_name_or_path, _fast_init=_fast_init
1221 )
~/anaconda3/envs/pytorch/lib/python3.8/site-packages/transformers/modeling_utils.py in _load_state_dict_into_model(cls, model, state_dict, pretrained_model_name_or_path, _fast_init)
1243 old_keys = []
1244 new_keys = []
-> 1245 for key in state_dict.keys():
1246 new_key = None
1247 if "gamma" in key:
AttributeError: 'bool' object has no attribute 'keys'
```
Environment:
```
>>> transformers.__version__
'4.8.2'
>>> torch.__version__
'1.8.0+cu111'
```
Fixes tried (but same error):
- tried with transformers 4.8.1
- tried with absolute paths (rather than relative)
- tried also saving configuration with:
```
from transformers import AutoConfig
config = AutoConfig.from_pretrained('distilroberta-base')
config.save_pretrained('./deleteme')
``` | 06-30-2021 16:44:49 | 06-30-2021 16:44:49 | Hello! You can find the documentation regarding `from_pretrained` here: https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained
Namely, regarding the state dict:
```
state_dict (Dict[str, torch.Tensor], optional) –
A state dictionary to use instead of a state dictionary loaded from saved weights file.
This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case
though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
```
It accepts a state dict, not a boolean.
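For reference, a minimal sketch of how the `state_dict` argument is meant to be used (or simply omit it to save the model's own weights):
```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
# pass an actual state dict (here just the model's own weights), not a boolean
model.save_pretrained("./deleteme", state_dict=model.state_dict())
reloaded = AutoModelForMaskedLM.from_pretrained("./deleteme")
```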
What are you trying to do by passing it `state_dict=True`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,443 | open | [Wav2Vec2] Better names for internal classes | Wav2Vec2's classes have names that are too long, *e.g.*: FlaxWav2Vec2EncoderLayerStableLayerNormCollection.
We should make those names easier (reminder for myself @patrickvonplaten to do this) | 06-30-2021 16:28:39 | 06-30-2021 16:28:39 | This also concerns a badly chosen duplicate of `Wav2Vec2FeatureExtractor` as part of `modeling_wav2vec2.py` since it's also the "FeatureExtractor" in `feature_extraction_wav2Vec2.py` -> should be corrected as well.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,442 | closed | Add to talks section | Added DeepMind details, switched time slots Ben Wang and DeepMind | 06-30-2021 14:40:10 | 06-30-2021 14:40:10 | |
transformers | 12,441 | closed | Add template for adding flax models | # What does this PR do?
Fixes #12440
## From Patrick
@LysandreJik @sgugger Previously both the PT and TF encoder-decoder templates tests weren't run because the test name didn't match the "*template*" regex - I've fixed that here as pointed out in the comment below.
@cccntu added both FlaxBERT and FlaxBART templates and we've added two tests to make sure those work as expected.
The PR should be good for review now :-) | 06-30-2021 13:13:55 | 06-30-2021 13:13:55 | It's mostly done. I can successfully create a new model with flax. See #12454
TODO:
* Finish modeling_flax for encoder-decoder architecture
* Tests are mapped from tf to np, need fixes.
* Refactor cookie-cutter workflow, make flax blend in with pytorch, tensorflow.
* ??<|||||>I think only the test for template is failing now, can someone take a quick look? Thanks!
@patil-suraj @patrickvonplaten @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe I should rebase and open another PR?<|||||>Sorry for being so late on this one! I'll take a look tomorrow!<|||||>yaaaay - finally all the tests are green! @cccntu - amazing work. I've done a lot of small fixes that are quite time-consuming and tedious to make everything pass, but the main work was all done by you - thanks a mille!<|||||>Thank you very much @patrickvonplaten! 🤗
This is my biggest open source contribution so far, so I really appreciate your help fixing so many things and writing the test! (I merely copied the tf version and did some simple substitution.)
I am curious how you run the template tests. It was confusing to run them locally because it would overwrite the tracked files and leave unused files, making the work tree messy. I guess `commit -> test -> clean, reset --hard` would work, but I always fear that I would accidentally delete the wrong thing. |
transformers | 12,440 | closed | cookiecutter template for adding flax model | # 🚀 Feature request
Add cookiecutter template for adding flax model.
## Motivation
There is no cookiecutter template for adding flax model.
## Your contribution
I am trying to add a flax model (#12411), and I think it's a good opportunity to create a template at the same time. Can I work on this? Any suggestions?
@patil-suraj @patrickvonplaten
| 06-30-2021 12:42:41 | 06-30-2021 12:42:41 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,439 | open | Expand text-generation pipeline support for other causal models e.g., BigBirdForCausalLM | # 🚀 Feature request
I tried using the text generation pipeline (TextGenerationPipeline) with BigBirdForCausalLM, but it seems like the pipeline currently only supports a limited number of models. Is there a reason for this? Is there a workaround short of implementing the pipeline myself? Thank you.
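A possible workaround sketch, bypassing the pipeline and calling `generate` on the model directly (untested; the checkpoint and generation settings are only illustrative):
```python
from transformers import AutoTokenizer, BigBirdForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = BigBirdForCausalLM.from_pretrained("google/bigbird-roberta-base", is_decoder=True)

inputs = tokenizer("Some prompt text", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```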
| 06-30-2021 12:31:45 | 06-30-2021 12:31:45 | |
transformers | 12,438 | closed | IndexError: index out of bound, MLM+XLA (pre-training) | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.13
- JaxLib version: 0.1.66
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False (Only `TPU` cores)
### Who can help
Not sure who might be the most appropriate person
## Information
Model I am using (Bert, XLNet ...): `BigBird` (MLM)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
This is an error with the `MLM` script (PyTorch) when attempting to pre-train BigBird on TPUs over XLA. The dataset in question is a custom dataset, and the model config and tokenizer have been initialized appropriately.
This is a continuation of [this unanswered](https://discuss.huggingface.co/t/indexerror-index-out-of-bounds/2859) Forum post that faces the same error.
Command used to run the script:-
```py
%%bash
python xla_spawn.py --num_cores=8 ./run_mlm.py --output_dir="./results" \
--model_type="big_bird" \
--config_name="./config" \
--tokenizer_name="./tokenizer" \
--train_file="./dataset.txt" \
--validation_file="./val.txt" \
--line_by_line="True" \
--max_seq_length="16000" \
--weight_decay="0.01" \
--per_device_train_batch_size="1" \
--per_device_eval_batch_size="1" \
--learning_rate="3e-4" \
--tpu_num_cores='8' \
--warmup_steps="1000" \
--overwrite_output_dir \
--pad_to_max_length \
--num_train_epochs="5" \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--do_train \
--do_eval \
--logging_steps="50" \
--evaluation_strategy="steps" \
--eval_accumulation_steps='10' \
--report_to="tensorboard" \
--logging_dir='./logs' \
--save_strategy="epoch" \
--load_best_model_at_end='True' \
--metric_for_best_model='validation' \
--preprocessing_num_workers='15'
```
I am facing two errors to be precise,
```py
Exception in device=TPU:0: Default process group has not been initialized, please make sure to call init_process_group.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1006, in main_process_first
yield
File "/content/run_mlm.py", line 393, in main
desc="Running tokenizer on dataset line_by_line",
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in map
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in <dictcomp>
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1664, in map
for rank in range(num_proc)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1664, in <listcomp>
for rank in range(num_proc)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2664, in shard
writer_batch_size=writer_batch_size,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 186, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2254, in select
return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2170, in _new_dataset_with_indices
fingerprint=fingerprint,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 297, in __init__
self._indices.column(0)[0].type
File "pyarrow/table.pxi", line 162, in pyarrow.lib.ChunkedArray.__getitem__
File "pyarrow/array.pxi", line 549, in pyarrow.lib._normalize_index
IndexError: index out of bounds
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/run_mlm.py", line 529, in _mp_fn
main()
File "/content/run_mlm.py", line 393, in main
desc="Running tokenizer on dataset line_by_line",
File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1011, in main_process_first
torch.distributed.barrier()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 2523, in barrier
default_pg = _get_default_group()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 358, in _get_default_group
raise RuntimeError("Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
```
I haven't modified the script to call `init_process_group` yet, focusing on the earlier error of index out of bounds. Clearly, the problem is arising from my own dataset - which was working before, however. Interestingly, we get it when it's in the tokenizing stage.
At some point, when constructing the arrow dataset, it's failing. I have no idea about Apache Arrow, so I can't debug further :sweat_smile:
As for the dataset to use, a few simple lines of code with random numbers would be more than enough to reproduce the dataset.
```py
!touch dataset.txt
import random
f = open('./dataset.txt', 'w')
for lines in range(50):
f.write(' '.join(m for m in [str(random.randint(0, 40000)) for i in range(16000)]) + '\n') #16000 words/(numbers) in one line, with random numbers from 0-40000 only.
f.close()
```
Can anyone give me some guidance on where I should start investigating the error, and some possible leads as to its origin?
Any ideas how I can solve it?
| 06-30-2021 12:22:31 | 06-30-2021 12:22:31 | Maybe @lhoestq has an idea for the error in `datasets`<|||||>@lhoestq Any possible leads as to who can solve this bug?<|||||>This is the full traceback BTW, If it may help things going. I am also willing to create a reproducible Colab if you guys want:-
````js
06/28/2021 17:23:13 - WARNING - run_mlm - Process rank: -1, device: xla:0, n_gpu: 0distributed training: False, 16-bits training: False
06/28/2021 17:23:13 - WARNING - datasets.builder - Using custom data configuration default-e8bc7b301aa1b353
06/28/2021 17:23:13 - WARNING - datasets.builder - Reusing dataset text (/root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)
Downloading and preparing dataset text/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5...
Dataset text downloaded and prepared to /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5. Subsequent calls will reuse this data.
WARNING:root:TPU has started up successfully with version pytorch-1.9
WARNING:root:TPU has started up successfully with version pytorch-1.9
WARNING:run_mlm:Process rank: -1, device: xla:1, n_gpu: 0distributed training: False, 16-bits training: False
INFO:run_mlm:Training/evaluation parameters TrainingArguments(
_n_gpu=0,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.98,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=10,
eval_steps=50,
evaluation_strategy=IntervalStrategy.STEPS,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
greater_is_better=True,
group_by_length=False,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.0003,
length_column_name=length,
load_best_model_at_end=True,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=./logs,
logging_first_step=False,
logging_steps=50,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=validation,
mp_parameters=,
no_cuda=False,
num_train_epochs=5.0,
output_dir=./results,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=1,
per_device_train_batch_size=1,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=results,
push_to_hub_organization=None,
push_to_hub_token=None,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=./results,
save_steps=500,
save_strategy=IntervalStrategy.EPOCH,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tpu_metrics_debug=False,
tpu_num_cores=8,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=1000,
weight_decay=0.01,
)
WARNING:datasets.builder:Using custom data configuration default-e8bc7b301aa1b353
INFO:datasets.utils.filelock:Lock 139795201622480 acquired on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock
INFO:datasets.utils.filelock:Lock 139795201622480 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock
INFO:datasets.utils.filelock:Lock 139795201622864 acquired on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock
INFO:datasets.builder:Generating dataset text (/root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)
100%|██████████| 2/2 [00:00<00:00, 2330.17it/s]
INFO:datasets.utils.download_manager:Downloading took 0.0 min
INFO:datasets.utils.download_manager:Checksum Computation took 0.0 min
100%|██████████| 2/2 [00:00<00:00, 920.91it/s]
INFO:datasets.utils.info_utils:Unable to verify checksums.
INFO:datasets.builder:Generating split train
INFO:datasets.arrow_writer:Done writing 8 examples in 172 bytes /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete/text-train.arrow.
INFO:datasets.builder:Generating split validation
INFO:datasets.arrow_writer:Done writing 8 examples in 172 bytes /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete/text-validation.arrow.
INFO:datasets.utils.info_utils:Unable to verify splits sizes.
INFO:datasets.utils.filelock:Lock 139795201625808 acquired on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete.lock
INFO:datasets.utils.filelock:Lock 139795201625808 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.incomplete.lock
INFO:datasets.utils.filelock:Lock 139795201622864 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_text_default-e8bc7b301aa1b353_0.0.0_e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5.lock
INFO:datasets.builder:Constructing Dataset for split train, validation, from /root/.cache/huggingface/datasets/text/default-e8bc7b301aa1b353/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5
100%|██████████| 2/2 [00:00<00:00, 458.74it/s]
[INFO|configuration_utils.py:528] 2021-06-28 17:23:13,619 >> loading configuration file ./config/config.json
[INFO|configuration_utils.py:566] 2021-06-28 17:23:13,619 >> Model config BigBirdConfig {
"architectures": [
"BigBirdForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"attention_type": "block_sparse",
"block_size": 64,
"bos_token_id": 1,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu_new",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 16000,
"model_type": "big_bird",
"num_attention_heads": 4,
"num_hidden_layers": 4,
"num_random_blocks": 3,
"pad_token_id": 0,
"rescale_embeddings": false,
"sep_token_id": 66,
"transformers_version": "4.9.0.dev0",
"type_vocab_size": 2,
"use_bias": true,
"use_cache": true,
"vocab_size": 40000
}
[INFO|tokenization_utils_base.py:1651] 2021-06-28 17:23:13,620 >> Didn't find file ./tokenizer/spiece.model. We won't load it.
[INFO|tokenization_utils_base.py:1651] 2021-06-28 17:23:13,620 >> Didn't find file ./tokenizer/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file None
[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file ./tokenizer/tokenizer.json
[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file None
[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file ./tokenizer/special_tokens_map.json
[INFO|tokenization_utils_base.py:1715] 2021-06-28 17:23:13,620 >> loading file ./tokenizer/tokenizer_config.json
INFO:run_mlm:Training new model from scratch
Exception in device=TPU:6: Default process group has not been initialized, please make sure to call init_process_group.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/run_mlm.py", line 529, in _mp_fn
main()
File "/content/run_mlm.py", line 386, in main
with training_args.main_process_first(desc="dataset map tokenization"):
File "/usr/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1005, in main_process_first
torch.distributed.barrier()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 2523, in barrier
default_pg = _get_default_group()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 358, in _get_default_group
raise RuntimeError("Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
INFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .
INFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .
INFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .
INFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .
INFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .
INFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .
INFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .
INFO:datasets.arrow_writer:Done writing 1 indices in 8 bytes .
INFO:datasets.arrow_writer:Done writing 0 indices in 0 bytes .
Exception in device=TPU:0: Default process group has not been initialized, please make sure to call init_process_group.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1006, in main_process_first
yield
File "/content/run_mlm.py", line 393, in main
desc="Running tokenizer on dataset line_by_line",
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in map
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in <dictcomp>
for k, dataset in self.items()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1664, in map
for rank in range(num_proc)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1664, in <listcomp>
for rank in range(num_proc)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2664, in shard
writer_batch_size=writer_batch_size,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 186, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2254, in select
return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2170, in _new_dataset_with_indices
fingerprint=fingerprint,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 297, in __init__
self._indices.column(0)[0].type
File "pyarrow/table.pxi", line 162, in pyarrow.lib.ChunkedArray.__getitem__
File "pyarrow/array.pxi", line 549, in pyarrow.lib._normalize_index
IndexError: index out of bounds
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/run_mlm.py", line 529, in _mp_fn
main()
File "/content/run_mlm.py", line 393, in main
desc="Running tokenizer on dataset line_by_line",
File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1011, in main_process_first
torch.distributed.barrier()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 2523, in barrier
default_pg = _get_default_group()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 358, in _get_default_group
raise RuntimeError("Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
Traceback (most recent call last):
File "xla_spawn.py", line 85, in <module>
main()
File "xla_spawn.py", line 81, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 394, in spawn
start_method=start_method)
File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 144, in join
exit_code=exitcode
torch.multiprocessing.spawn.ProcessExitedException: process 6 terminated with exit code 17
```<|||||>Hi !
This might be because `num_proc` is set to a value higher than the size of the dataset (so you end up with an empty dataset in one process).
This has recently been solved by this PR https://github.com/huggingface/datasets/pull/2566. There will be a new release of `datasets` today to make this fix available. In the meantime, you can try using a bigger dataset or reduce the number of data processing workers.<|||||>Hmmm...my dataset is about 25k sequences, which I cut down to 15k to save memory :thinking: so the `num_proc` shouldn't pose any issue. Right now, following up on your suggestion I ve set it to the default.
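To make that failure mode concrete, a toy sketch with `datasets` (nothing here is specific to the MLM script; the numbers are only illustrative):
```python
from datasets import Dataset

tiny = Dataset.from_dict({"text": ["a", "b", "c"]})  # only 3 rows
# with num_proc larger than the number of rows, at least one worker receives an empty shard,
# which is the situation that used to trigger the "index out of bounds" error
tiny.map(lambda example: example, num_proc=8)
```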
Anyways, following up with the suggestion made by @LysandreJik, it seems that there might be some inconsistency while creating the dataset - putting it at a `max_length` of `512` and a few other flags for grad accumulation seems that it can train properly.
> Could you try this out for me: set the max_seq_length value to something low, like 512 or 256. Does it still crash then?
For such lower values, it definitely doesn't crash which means you might be right. I would look to double-check my dataset generation process, but it still irks me why I can't see `max_seq_length` in the accepted TrainingArguments. Also, even if there aren't enough tokens to generate the require `16k` limit, why doesn't `pad_to_max_length` flag act here in this case, and pad till the max length?<|||||>In such case, should I crop long sequences and pad smaller sequences manually - or is this supposed to be done automatically by the dataset processing part of the script?<|||||>> it still irks me why I can't see max_seq_length in the accepted TrainingArguments.
`max_seq_length` isn't a `TrainingArguments`, it's a `DataTrainingArguments`. The difference is that the former is used by the `Trainer`, while the latter is only used by the script to do pre/post-processing, and is [not passed to the `Trainer`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py#L470).
> why doesn't pad_to_max_length flag act here in this case, and pad till the max length?
I'm thinking the issue is happening earlier than the `pad_to_max_length` flax is consumed. I can reproduce with the following:
```bash
echo "This is a random sentence" > small_file.txt
python ~/transformers/examples/pytorch/language-modeling/run_mlm.py \
--output_dir=output_dir \
--model_name_or_path=google/bigbird-roberta-base \
--train_file=small_file.txt \
--do_train
```
The error comes from the dataset map that is calling the [`group_text`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py#L414-L426) method. This method tries to put all of the tokenized examples in the `result` dictionary, but drops the small remainder. As we don't have enough data to complete a single sequence, then this method returns an empty result:
```
{'attention_mask': [], 'input_ids': [], 'special_tokens_mask': []}
```
@sgugger can chime in if my approach is wrong, but the following modifications to the `group_texts` method seems to do the trick:
```diff
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
- total_length = (total_length // max_seq_length) * max_seq_length
+ truncated_total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
- result = {
- k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
- for k, t in concatenated_examples.items()
- }
+ if total_length == 0:
+ result = {
+ k: [t[i : i + max_seq_length] for i in range(0, truncated_total_length, max_seq_length)]
+ for k, t in concatenated_examples.items()
+ }
+ else:
+ result = {
+ k: [t[i: i + max_seq_length] for i in range(0, total_length, max_seq_length)]
+ for k, t in concatenated_examples.items()
+ }
return result
``` <|||||>That clears up a lot of things @LysandreJik! Thanx a ton :rocket: :cake: :1st_place_medal:
~~Just a minor peek, When running the scripts it apparently doesn't log anything to the Colab's cell output. tried using different logging levels and setting to defaults to no avail~~ (no bother, simply bash piped it to a file to save time and `tail -f` to view updates to file in real-time)<|||||>I don't understand your diff @LysandreJik . If `total_length==0` then `truncated_total_length` is also 0. I think you meant something more like this maybe?
```diff
- total_length = (total_length // max_seq_length) * max_seq_length
+ if total_length >= max_seq_length:
+ total_length = (total_length // max_seq_length) * max_seq_length
```<|||||>Ah I think I did a typo when copying the code, my local code has the following:
`if truncated_total_length != 0:` instead of `if total_length == 0:`.
This way, if the truncated total length is equal to 0 (like in this case), then it will use the `total_length` (which is of 7) to create the example.
If the truncated total length is not 0, then it will use this value to create the example; which was the case before.
Feel free to modify as you wish so that it's clearer for you!
<|||||>Yes, then it's equivalent to my suggestion. Thanks!<|||||>@LysandreJik I may be misunderstanding how argument parsing works, but for flags like `evaluation_strategy`, it doesn't seem that the script parses it at all? I have a logging problem (https://discuss.huggingface.co/t/no-step-wise-logging-for-xla-mlm-scripts-in-colab-jupyter/8134) which seems to ignore the arguments/fails to override them. I am getting log of loss only at the start of epoch (`0.19`) somewhere again (`epoch-1.89`) and never again, when set for 5 epochs.
This seems strange, nor can I judge my models as to how they are performing. any ideas? |
transformers | 12,437 | closed | Add test for a WordLevel tokenizer model | # What does this PR do?
In this PR I propose to add a test for the feature developed in PR #12361. Unless I'm mistaken, no language model tested currently uses a tokenizer that would use the WordLevel model.
The tokenizer created for this test is hosted here: [https://hf.co/robot-test/dummy-tokenizer-wordlevel](https://huggingface.co/robot-test/dummy-tokenizer-wordlevel)
| 06-30-2021 11:26:15 | 06-30-2021 11:26:15 | |
transformers | 12,436 | open | Add DEBERTA-base model for usage in EncoderDecoderModel. | # 🚀 Feature request
Add DEBERTA-base model as an option for creating an EncoderDecoderModel.
## Motivation
Currently only BERT and RoBERTa models can be transformed into a Seq2Seq model via the EncoderDecoder class, and for those of us developing DeBERTa models from scratch it would be wonderful to be able to generate a Seq2Seq model from them. Also, the DeBERTa-base model works much better than BERT and RoBERTa.
## Your contribution
| 06-30-2021 11:18:45 | 06-30-2021 11:18:45 | Great idea! Do you want to take a stab at it?<|||||>Hi @alexvaca0,
This is an interesting feature,
But I was curious: DeBERTa, BERT, and RoBERTa are encoder-based models, so there is no decoder part, right? I checked their model classes and I could not find a Decoder / EncoderDecoder class!
Can you please give more insight into it? <|||||>That's right, there's no decoder in those models, but there is a class in Transformers, EncoderDecoderModel, that enables to create encoder-decoder architectures from encoder-only architectures :)
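For context, a minimal sketch of that class with BERT, which already supports acting as a decoder (the goal here would be to make the same call work with DeBERTa checkpoints):
```python
from transformers import EncoderDecoderModel

# ties a pretrained encoder and a pretrained decoder (cross-attention is added to the decoder) together
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```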
Perfect, let me have a look at it and see if I can code that adaptation @LysandreJik<|||||>Great! If you run into any blockers, feel free to ping us. If you want to add the possibility for DeBERTa to be a decoder, you'll probably need to add the cross attention layers.
cc @patrickvonplaten and @patil-suraj which have extensive experience with enc-dec models.<|||||>Hey, is this feature being worked on by someone? If not then I can pick it up! @LysandreJik <|||||>Would be great if you could pick it up @manish-p-gupta :-) <|||||>Great!. Any specific things I should go through before taking it up? I'm familiar with the Code of conduct and contributing guidelines. I'll also open a draft PR to carry on the discussions there. Let me know if you think I need to look at anything else. @patrickvonplaten <|||||>@ArthurZucker has been working with DeBERTa models recently and can likely help and give advice!<|||||>Yes! Feel free to ping me for an early review if you have any doubts |
transformers | 12,435 | closed | Using huggingface Pipeline in industry | Hi,
I was wondering if there is anything to be aware of when using your sentiment-analysis pipeline for industry projects at work. Are there any limitations to what I can or cannot do?
Thank you for your always amazing service.
Lasse | 06-30-2021 11:16:36 | 06-30-2021 11:16:36 | The pipeline is a simple wrapper over the model and tokenizer. When using a pipeline in production you should be aware of:
- The pipeline is doing simple pre and post-processing reflecting the model's behavior. If you want to understand how the pipeline behaves, you should understand how the model behaves as the pipeline isn't doing anything fancy (true for sentiment analysis, less true for token classification or QA which have very specific post-processing).
- Pipelines are very simple to use, but they remain an abstraction over the model and tokenizer. If you're looking for performance and you have a very specific use-case, you will get on par or better performance when using the model and tokenizer directly.
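To make that last point concrete, a sketch of the direct route for sentiment analysis (the checkpoint below is assumed to be the pipeline's default SST-2 model):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I really enjoy using this library.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
label = model.config.id2label[probs.argmax(dim=-1).item()]
```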
Does that answer your question?<|||||>I think my question was more about whether the pipeline is free to use in industry, and/or whether there are any security issues with using it in an automated work process for industry projects? Not so much about the bias of the model.
Thanks in advance
Lasse<|||||>For industry usage including security guidance etc I would recommend getting in touch with our Expert Acceleration Program at https://huggingface.co/support
Cheers<|||||>Perfect, I will try and do that. Thanks for the help
|
transformers | 12,434 | closed | TPU not initialized when running official `run_mlm_flax.py` example. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@avital @marcvanzee
## Information
I am setting up a new TPU VM according to the [Cloud TPU VM JAX quickstart](https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm) and following the installation steps as described here: https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries to install `flax`, `jax`, `transformers`, and `datasets`.
Then, when running a simple example using the [`run_mlm_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_mlm_flax.py) script, I'm encountering an error/warning:
```
INFO:absl:Starting the local TPU driver.
INFO:absl:Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
INFO:absl:Unable to initialize backend 'gpu': Not found: Could not find registered platform with name: "cuda". Available platform names are: TPU Interpreter Host
```
=> I am now unsure whether the code actually runs on TPU or instead on CPU.
## To reproduce
The problem can be easily reproduced by:
1. sshing into a TPU, *e.g.* `patrick-test` (Flax, JAX, & Transformers should already be installed)
If one goes into `patrick-test` the libraries are already installed - on an "newly" created TPU VM, one can follow [these](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries) steps to install the relevant libraries.
2. Going to home folder
```
cd ~/
```
3. creating a new dir:
```
mkdir test && cd test
```
4. cloning a dummy repo into it
```
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/patrickvonplaten/norwegian-roberta-als
```
4. Linking the `run_mlm_flax.py` script
```
ln -s $(realpath ~/transformers/examples/flax/language-modeling/run_mlm_flax.py) ./
```
5. Running the following command (which should show the above warning/error again):
```
./run_mlm_flax.py \
--output_dir="norwegian-roberta-als" \
--model_type="roberta" \
--config_name="norwegian-roberta-als" \
--tokenizer_name="norwegian-roberta-als" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_als" \
--max_seq_length="128" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="8" \
--learning_rate="3e-4" \
--overwrite_output_dir \
--num_train_epochs="3"
```
=>
You should see a console print that says:
```
[10:15:48] - INFO - absl - Starting the local TPU driver.
[10:15:48] - INFO - absl - Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
[10:15:48] - INFO - absl - Unable to initialize backend 'gpu': Not found: Could not find registered platform with name: "cuda". Available platform names are: TPU Host Interpreter
```
## Expected behavior
I think this warning / error should not be displayed and the TPU should be correctly configured.
| 06-30-2021 10:19:15 | 06-30-2021 10:19:15 | Given that when running this command I'm getting ~30 training iterations per second, I'm assuming though that it's just a warning and not an error.<|||||>I am getting the same errors here. Also noting that the parameters from TrainingArguments are written at the start of the run_mlm_script, I see a few weird settings, like:
```
no_cuda=False,
tpu_num_cores=None,
```
I am not getting a stable report on iterations per second, so it is hard to see if this is going well. Progress is still at 0%. Occasionally, I also get error messages like this written to the screen roughly every minute:
```
tcmalloc: large alloc 25737314304 bytes == 0x770512000 @ 0x7ff6c92be680 0x7ff6c92df824 0x7ff6c92dfb8a 0x7ff49fbb6417 0x7ff49a9c43d0 0x7ff49a9d1ef4 0x7ff49a9d4e77 0x7ff49a9261dd 0x7ff49a6a0563 0x7ff49a68e460 0x5f5b29 0x5f66f6 0x50ad17 0x570296 0x56951a 0x5f60b3 0x5f6b6b 0x664e8d 0x5f556e 0x56ca9e 0x56951a 0x5f60b3 0x5f54e7 0x56ca9e 0x5f5ed6 0x56b3fe 0x5f5ed6 0x56b3fe 0x56951a 0x5f60b3 0x5f54e7
```
It might also be mentioned that I initially got "Missing XLA configuration". I had to manually set the environment variable `export XRT_TPU_CONFIG="localservice;0;localhost:51011"` for the script to run. I am not sure if this really does the right thing. Maybe the TPU driver also needs to be specified?
<|||||>...and... I see this warning, which does not look good:
`[10:28:02] - WARNING - absl - No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)`<|||||>If of interest, these are my notes on installing this. They are mainly based on Patrick's tutorial:
https://github.com/NBAiLab/notram/blob/master/guides/flax.md<|||||>The simplest way to see if you're running on TPU is to call `jax.devices()`. E.g. you may see:
```
[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0),
TpuDevice(id=1, process_index=0, coords=(0,0,0), core_on_chip=1),
TpuDevice(id=2, process_index=0, coords=(1,0,0), core_on_chip=0),
TpuDevice(id=3, process_index=0, coords=(1,0,0), core_on_chip=1),
TpuDevice(id=4, process_index=0, coords=(0,1,0), core_on_chip=0),
TpuDevice(id=5, process_index=0, coords=(0,1,0), core_on_chip=1),
TpuDevice(id=6, process_index=0, coords=(1,1,0), core_on_chip=0),
TpuDevice(id=7, process_index=0, coords=(1,1,0), core_on_chip=1)]
```<|||||>@avital. "import jax;jax.devices()" gives me exactly the same response.
I am also able to make simple calculations on the TPU. The problem seems to be related only to the run_mlm_flax.py script.<|||||>@avital. I also found this log file. It seems to think that the TPU is busy.
```
E0630 13:24:28.491443 20778 kernel_dma_mapper.cc:88] Error setting number simples with FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.491648 20778 tensor_node.cc:436] [0000:00:04.0 PE0 C0 MC-1 TN0] Failed to set number of simple DMA addresses: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.491660 20778 driver.cc:806] [0000:00:04.0 PE0 C0 MC-1] tensor node 0 open failed: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.491678 20778 driver.cc:194] [0000:00:04.0 PE0 C0 MC-1] Device has failed. Status:FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.492048 20777 kernel_dma_mapper.cc:88] Error setting number simples with FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.492093 20777 tensor_node.cc:436] [0000:00:05.0 PE0 C1 MC-1 TN0] Failed to set number of simple DMA addresses: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.492100 20777 driver.cc:806] [0000:00:05.0 PE0 C1 MC-1] tensor node 0 open failed: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.492110 20777 driver.cc:194] [0000:00:05.0 PE0 C1 MC-1] Device has failed. Status:FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.492996 20778 driver.cc:165] [0000:00:04.0 PE0 C0 MC-1] Transitioned to State::FAILED, dumping core
E0630 13:24:28.493215 20777 driver.cc:165] [0000:00:05.0 PE0 C1 MC-1] Transitioned to State::FAILED, dumping core
E0630 13:24:28.494112 20786 kernel_dma_mapper.cc:88] Error setting number simples with FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
E0630 13:24:28.494225 20786 tensor_node.cc:436] [0000:00:07.0 PE0 C3 MC-1 TN0] Failed to set number of simple DMA addresses: FAILED_PRECONDITION: ioctl failed [type.googleapis.com/util.ErrorSpacePayload='util::PosixErrorSpace::Device or resource busy']
```
<|||||>I am now able to get this to run, and think I understand why this is happening.
The initial error I am seeing is: "RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:273 : Missing XLA configuration"
I have been getting around this error by setting `export XRT_TPU_CONFIG="localservice;0;localhost:51011"`. This is probably the right way of doing it on a torch system, but it also leads to torch stealing the TPU from JAX and leaves run_mlm_flax.py training on the CPU.
However, it seems there is a function in transformers/training_args.py called "is_torch_tpu_available". If this returns True, it also asks for the XRT_TPU_CONFIG. I am really not sure why it returns True on my system, but it might be because the VM I am using has other preinstalled software.
Lots of ways of fixing this of course. You guys probably know the best way. <|||||>I've seen the same `INFO` message from `absl`:

But training seems to work (batch size 128 as from the mlm flax documentation)<|||||>Closing this issue now as it is expected<|||||>I am having the exact same issue described here. I see that @peregilk found a work around, but he/she hasn't shared what it was.
Could you describe how you overcame this issue? @peregilk <|||||>@erensezener I think a lot has changed in the code here since this was written. I am linking to my internal notes above. I have repeated that one several times, and know it gets a working system up and running.
Just a wild guess: Have you tried setting ```export USE_TORCH=False```<|||||>> Just a wild guess: Have you tried setting `export USE_TORCH=False`
This solves the issue indeed! Thank you, you saved me many more hours of debugging :)
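For anyone landing here, a minimal way to double-check the workaround is something like the sketch below. It assumes a TPU VM with JAX installed; the key point is that `USE_TORCH=False` has to be set before `transformers` is imported, so the torch/XLA path never claims the TPU. This just mirrors the `export USE_TORCH=False` fix above, done from Python instead of the shell.
```python
import os

# Must be set before importing transformers, so the torch/XLA backend
# is never initialized and cannot claim the TPU.
os.environ["USE_TORCH"] = "False"

import jax
import transformers  # noqa: F401  (imported only to show it no longer grabs the TPU)

devices = jax.devices()
print(devices)

# On a healthy TPU VM this should list TpuDevice entries, not CPU devices.
assert devices[0].platform == "tpu", f"JAX is running on {devices[0].platform}, not TPU"
```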
|
transformers | 12,433 | closed | Added to talks section | Added one more confirmed speaker, zoom links and gcal event links | 06-30-2021 09:42:01 | 06-30-2021 09:42:01 | |
transformers | 12,432 | closed | fix typo in mt5 configuration docstring | default vocab_size value is written incorrectly in docstring. this pr updates the mt5 configuration docstring. | 06-30-2021 09:37:30 | 06-30-2021 09:37:30 | Thank you! |
transformers | 12,431 | closed | how to continue pre-train custom data | 1.problem
I want to pretrain on my own data and also change the number of encoder layers.
Can you offer an interface that loads our custom data (e.g. JSON or another format) for
pretraining, with the LM masking, NSP, and PE handled inside the interface?
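For reference, much of this is already possible with the existing building blocks. The sketch below is one hedged way to continue masked-LM pretraining on a local JSON-lines file (the file name `my_corpus.jsonl` and the `text` field are assumptions for illustration); NSP would additionally need a BERT-style pretraining head and a sentence-pair dataset, which is not shown here.
```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed input: a JSON-lines file where each line looks like {"text": "..."}
raw = load_dataset("json", data_files={"train": "my_corpus.jsonl"})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")  # continue from released weights

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking (the "LM masking" part); NSP is not covered by this collator.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="continued-pretraining",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```
Changing the number of layers is also possible by passing a modified `AutoConfig` (e.g. a smaller `num_hidden_layers`), but any layer that does not match the released checkpoint would then start from random initialization.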
| 06-30-2021 08:51:08 | 06-30-2021 08:51:08 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,430 | closed | Distilling zero-shot classification: Assertion `srcIndex < srcSelectDimSize` failed. | ## Environment info
tokenizers 0.10.2
torch 1.8.1+cu111
transformers 4.5.1
datasets 1.6.1
IPython 7.19.0
jupyter_client 6.1.7
jupyter_core 4.6.3
jupyterlab 2.2.6
notebook 6.1.4
Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.17763-SP0
## Issue
I'm trying to train a distill_classifier on a custom dataset with 40.000 rows of text, 10 labels.
```
!python ./distill_classifier.py \
--overwrite_output_dir \
--data_file email.txt \
--class_names_file class_names.txt \
--hypothesis_template "This text is about {}." \
--student_name_or_path distilbert-base-uncased \
--output_dir ./distilbert-base-uncased-student-large
```
At first I got an error regarding max_length, so I changed the script to pass `truncation=True` to the tokenizer, which then resulted in another error. The strange thing is that I'm able to train it on a small custom dataset (~200 rows of text, 10 labels), but with the bigger data it does not work.
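For context, the tokenizer change was along these lines — a sketch only, since the exact call site inside `distill_classifier.py` may differ, and the variable names here are hypothetical:
```python
# Hypothetical excerpt of the teacher-tokenization step in distill_classifier.py:
# the important part is capping inputs at the teacher model's maximum length.
encodings = tokenizer(
    premises,                      # the raw texts
    hypotheses,                    # e.g. "This text is about {label}."
    padding=True,
    truncation=True,               # added: drop tokens beyond max_length
    max_length=tokenizer.model_max_length,
    return_tensors="pt",
)
```
The output of the failing run is below: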
```
[INFO|configuration_utils.py:491] 2021-06-30 10:00:54,363 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at C:\Users\d/.cache\huggingface\transformers\fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:527] 2021-06-30 10:00:54,364 >> Model config RobertaConfig {
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
06/30/2021 10:00:53 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 3distributed training: False, 16-bits training: False
06/30/2021 10:00:53 - INFO - __main__ - Training/evaluation parameters DistillTrainingArguments(output_dir='./distilbert-base-uncased-notino-student-large', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<IntervalStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=128, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs\\Jun30_10-00-53_dcvmdwhanl03', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=0, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./distilbert-base-uncased-notino-student-large', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name='length', report_to=['tensorboard', 'wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, mp_parameters='')
06/30/2021 10:00:53 - INFO - __main__ - Generating predictions from zero-shot teacher model
06/30/2021 10:03:10 - INFO - __main__ - Initializing student model
06/30/2021 10:03:58 - INFO - __main__ - Training student model on teacher predictions
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.5.1",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|modeling_utils.py:1052] 2021-06-30 10:00:54,790 >> loading weights file https://huggingface.co/roberta-large-mnli/resolve/main/pytorch_model.bin from cache at C:\Users\o/.cache\huggingface\transformers\63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0
[WARNING|modeling_utils.py:1159] 2021-06-30 10:01:01,810 >> Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[INFO|modeling_utils.py:1176] 2021-06-30 10:01:01,810 >> All the weights of RobertaForSequenceClassification were initialized from the model checkpoint at roberta-large-mnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForSequenceClassification for predictions without further training.
[INFO|configuration_utils.py:491] 2021-06-30 10:01:04,545 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at C:\Users\o/.cache\huggingface\transformers\fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb
[INFO|configuration_utils.py:527] 2021-06-30 10:01:04,546 >> Model config RobertaConfig {
"_num_labels": 3,
"architectures": [
"RobertaForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"initializer_range": 0.02,
"intermediate_size": 4096,
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
},
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.5.1",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 50265
}
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/vocab.json from cache at C:\Users\o/.cache\huggingface\transformers\64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/merges.txt from cache at C:\Users\o/.cache\huggingface\transformers\425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer.json from cache at C:\Users\o/.cache\huggingface\transformers\d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:01:07,087 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer_config.json from cache at None
0%| | 0/147 [00:00<?, ?it/s]
...
100%|##########| 147/147 [02:03<00:00, 1.19it/s]
[INFO|configuration_utils.py:491] 2021-06-30 10:03:10,735 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at C:\Users\o/.cache\huggingface\transformers\23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361
[INFO|configuration_utils.py:527] 2021-06-30 10:03:10,736 >> Model config DistilBertConfig {
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8",
"9": "LABEL_9",
"10": "LABEL_10"
},
"initializer_range": 0.02,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_10": 10,
"LABEL_2": 2,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8,
"LABEL_9": 9
},
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.5.1",
"vocab_size": 30522
}
[INFO|modeling_utils.py:1052] 2021-06-30 10:03:11,160 >> loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at C:\Users\o/.cache\huggingface\transformers\9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a
[WARNING|modeling_utils.py:1159] 2021-06-30 10:03:12,327 >> Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias']
- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:1170] 2021-06-30 10:03:12,328 >> Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[INFO|configuration_utils.py:491] 2021-06-30 10:03:12,750 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at C:\Users\o/.cache\huggingface\transformers\23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361
[INFO|configuration_utils.py:527] 2021-06-30 10:03:12,750 >> Model config DistilBertConfig {
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"transformers_version": "4.5.1",
"vocab_size": 30522
}
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at C:\Users\o/.cache\huggingface\transformers\0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at C:\Users\o/.cache\huggingface\transformers\75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1707] 2021-06-30 10:03:14,877 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at C:\Users\o/.cache\huggingface\transformers\8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
0%| | 0/1282 [00:00<?, ?ex/s][WARNING|tokenization_utils_base.py:3136] 2021-06-30 10:03:14,926 >> Token indices sequence length is longer than the specified maximum sequence length for this model (527 > 512). Running this sequence through the model will result in indexing errors
20%|#9 | 250/1282 [00:00<00:00, 2475.26ex/s]
38%|###7 | 484/1282 [00:00<00:00, 2433.06ex/s]
57%|#####6 | 729/1282 [00:00<00:00, 2438.11ex/s]
78%|#######7 | 999/1282 [00:00<00:00, 2504.20ex/s]
94%|#########4| 1206/1282 [00:00<00:00, 2355.94ex/s]
100%|##########| 1282/1282 [00:00<00:00, 2387.33ex/s]
[INFO|trainer.py:490] 2021-06-30 10:03:58,139 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text.
[INFO|trainer.py:1013] 2021-06-30 10:03:58,433 >> ***** Running training *****
[INFO|trainer.py:1014] 2021-06-30 10:03:58,439 >> Num examples = 1282
[INFO|trainer.py:1015] 2021-06-30 10:03:58,444 >> Num Epochs = 1
[INFO|trainer.py:1016] 2021-06-30 10:03:58,449 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1017] 2021-06-30 10:03:58,455 >> Total train batch size (w. parallel, distributed & accumulation) = 96
[INFO|trainer.py:1018] 2021-06-30 10:03:58,461 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1019] 2021-06-30 10:03:58,466 >> Total optimization steps = 14
[INFO|integrations.py:586] 2021-06-30 10:03:59,225 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: dbs700 (use `wandb login --relogin` to force relogin)
wandb: wandb version 0.10.33 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.10.30
wandb: Syncing run ./distilbert-base-uncased-notino-student-large
wandb: View project at https://wandb.ai/dbs700/huggingface
wandb: View run at https://wandb.ai/dbs700/huggingface/runs/3aegl7qi
wandb: Run data is saved locally in C:\Users\o\wandb\run-20210630_100419-3aegl7qi
wandb: Run `wandb offline` to turn off syncing.
0%| | 0/14 [00:00<?, ?it/s]C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [251,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [251,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [251,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [251,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
```
## Who can help
@LysandreJik
@sgugger
@joeddav | 06-30-2021 08:37:19 | 06-30-2021 08:37:19 | Found a solution by reducing the input size of the text, and limiting it by the model's max_position_embeddings. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,429 | closed | Add default bos_token and eos_token for tokenizer of deberta_v2 | # What does this PR do?
Add default bos_token and eos_token for tokenizer of deberta_v2
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@n1t0, @LysandreJik
| 06-30-2021 07:52:35 | 06-30-2021 07:52:35 | |
transformers | 12,428 | closed | [DeBerta V2] The vocab size of DeBerta V2 is incorrect | ### Who can help
@LysandreJik
## Information
I am using DeBERTa V2. The documentation and the tokenizer's vocab show that its vocab size is 128K, but the vocab size in its [configuration](https://huggingface.co/microsoft/deberta-v2-xlarge/resolve/main/config.json) is 128100, which is 100 larger.

This makes the embedding table larger than the tokenizer vocabulary, so the embedding does not work as I expected.
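A quick way to see the mismatch (a sketch — the printed numbers are the ones reported above, not something I'm asserting beyond this report):
```python
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("microsoft/deberta-v2-xlarge")
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")

print(config.vocab_size)   # 128100 per the config linked above
print(len(tokenizer))      # the tokenizer's actual vocabulary size (~128K)

# The difference is the number of ids reserved in the embedding table
# beyond what the tokenizer actually produces.
print(config.vocab_size - len(tokenizer))
```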
## Expected behavior
Maybe we should modify the vocab size in the configuration?
| 06-30-2021 07:01:08 | 06-30-2021 07:01:08 | Maybe @BigBird01 can chime in :)<|||||>@LysandreJik I emailed the author. He leaves some space to add customized tokens for downstream tasks, so this is by design.<|||||>v2/v3: 128000 + 100 (special tokens, unused tokens, e.g. reservation for PostOCR customized tokens) ---> 128100 in total
https://huggingface.co/transformers/v4.9.2/model_doc/deberta_v2.html |
transformers | 12,427 | closed | [DeepSpeed] Convert from fp16 to fp32 issue zero_to_fp32.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0
- Platform: Linux-4.4.0-1128-aws-x86_64-with-debian-stretch-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: using deepspeed v 0.4.0
### Who can help
@stas00 thanks for opening new doors by showing how to train large transformer models.
## Information
Model I am using (Bert, XLNet ...): t5-3b
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I trained a t5-3b model on a single host with A100-40GB GPUs using DeepSpeed, following tips from https://github.com/huggingface/transformers/issues/9996 and the Hugging Face docs. After a checkpoint was saved, I used the provided `zero_to_fp32.py`.
Steps to reproduce the behavior:
1. Using deepspeed **zero2 config** to train a **t5-3b** model.
2. Tried converting the deepspeed saved fp16 checkpoint (checkpoint-60000) to fp32
3. I went into the checkpoint-60000 dir and ran the provided command `python zero_to_fp32.py global_step60001 pytorch_model_fp32.bin`, as described in the DeepSpeed documentation.
4. However, I get the crash shown below.
```python
python zero_to_fp32.py global_step60001 pytorch_model_fp32.bin
Processing zero checkpoint 'global_step60001'
Detected checkpoint of type zero stage 2, world_size: 8
Traceback (most recent call last):
File "zero_to_fp32.py", line 170, in <module>
convert_zero_chkpt_to_fp32_consolid_state_dict(args.checkpoint_dir, args.output_file)
File "zero_to_fp32.py", line 128, in convert_zero_chkpt_to_fp32_consolid_state_dict
unpartitioned_numel).view(shape)
RuntimeError: start (2499259392) + length (16777216) exceeds dimension size (2499738752).
```
Contents of the `global_step60001` folder:
```
total 34G
-rw-rw-r-- 1 ubuntu ubuntu 5.4G Jun 28 06:15 mp_rank_00_model_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_4_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_2_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_7_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_3_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_6_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_0_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 28 06:15 zero_pp_rank_1_mp_rank_00_optim_states.pt
```
Oddly, I see that one file is not present ("rank_5"); I am assuming each GPU saves its own optimizer state. I have not modified any code that would cause this issue. Please help!
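To make it easier to spot which rank is missing, the checkpoint folder can be scanned with a small helper like this (a sketch — the file-name pattern is the ZeRO-2 one from the listing above, and the world size of 8 is assumed from this run):
```python
import re
from pathlib import Path

def missing_optim_ranks(checkpoint_dir, world_size=8):
    """Return the ranks whose ZeRO-2 optimizer-state file is absent."""
    pattern = re.compile(r"zero_pp_rank_(\d+)_mp_rank_00_optim_states\.pt")
    found = set()
    for path in Path(checkpoint_dir).iterdir():
        match = pattern.fullmatch(path.name)
        if match:
            found.add(int(match.group(1)))
    return sorted(set(range(world_size)) - found)

print(missing_optim_ranks("checkpoint-60000/global_step60001"))  # e.g. [5]
```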
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Since I was running the provided `zero_to_fp32.py` script, I would expect it to create the `fp32` model binary, which does not happen because of the cryptic crash.
Happy to provide more information.
| 06-30-2021 06:47:50 | 06-30-2021 06:47:50 | Thank you for this excellent report, @srikar2097
This is fascinating!
Where is `zero_pp_rank_5_mp_rank_00_optim_states.pt`? I presume this was an 8-gpu run, so we are missing one rank.
Could you re-run again and see if it was a fluke? Clearly one process failed to save its optimizer states.
<|||||>@stas00 When I looked at what got saved in the previous check-point (previous to what I shared in my original report), this is what I see:
```
ll ./checkpoint-50000/global_step50001/
total 34G
-rw-rw-r-- 1 ubuntu ubuntu 5.4G Jun 27 01:27 mp_rank_00_model_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_3_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_1_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_7_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_4_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_6_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_0_mp_rank_00_optim_states.pt
-rw-rw-r-- 1 ubuntu ubuntu 4.0G Jun 27 01:27 zero_pp_rank_2_mp_rank_00_optim_states.pt
```
As you can see, here the file not present is "rank_5". I went back to earlier checkpoints and `zero_pp_rank_5_mp_rank_00_optim_states.pt` is consistently missing. I did monitor GPU usage during the training and nothing struck me as odd.
Let me report back with how the re-run goes.<|||||>First please file an issue with Deepspeed, since it's Deepspeed that saves these files. https://github.com/microsoft/DeepSpeed/issues
Are you running on all 8 gpus? can you validate in nvidia-smi? It should be the case since it reports:
```
Detected checkpoint of type zero stage 2, world_size: 8
```
Then I'd do debug logging (obviously switching to much more frequent save_steps, so it only takes one minute to log). I'd either:
* change this log into print so it's printed on each gpu
https://github.com/microsoft/DeepSpeed/blob/a029239812e15cf35334514449ed3127b915780a/deepspeed/runtime/engine.py#L1989
* or if you use `transformers` master and the HF trainer you can set ` --log_level info --log_level_replica info` and it'll log this info on each gpu w/o you needing to touch the deepspeed code.
For example with the above setting, on 2 gpus I get:
```
[2021-06-30 12:43:06,647] [INFO] [engine.py:1990:_save_zero_checkpoint] zero checkpoint saved output_dir/checkpoint-2/global_step2/zero_pp_rank_1_mp_rank_00_optim_states.pt
[2021-06-30 12:43:07,320] [INFO] [engine.py:1990:_save_zero_checkpoint] zero checkpoint saved output_dir/checkpoint-2/global_step2/zero_pp_rank_0_mp_rank_00_optim_states.pt
```
So we know both files got saved.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,426 | closed | [roberta] lm_head.decoder save/load needs fixing | ```
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
print('decoder.weight' in dict(model.lm_head.named_parameters()).keys())
print('dense.weight' in dict(model.lm_head.named_parameters()).keys())
print('lm_head.decoder.weight' in dict(model.named_parameters()).keys())
print('lm_head.dense.weight' in dict(model.named_parameters()).keys())
```
gives:
```
True
True
False
True
```
So if we query `lm_head` we can see `lm_head.decoder.weight`, however it's not visible to the whole model via `parameters()` (named or not).
The problem comes from `tie_weights`:
```
output_embeddings = self.get_output_embeddings()
if output_embeddings is not None and self.config.tie_word_embeddings:
self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())
```
which essentially does:
```
lm_head.decoder.weight = embeddings.word_embeddings.weight
```
which disconnects it from the module list.
This is inconsistent behavior.
But the main issue is that `lm_head.decoder.weight` is saved by `save_pretrained` and is then expected to be there on `torch.load`, yet since it's tied, frameworks like DeepSpeed won't save it.
So if `config.tie_word_embeddings` is `True`, it shouldn't save that key and shouldn't expect to load it either.
Please correct me if I have missed something obvious.
@LysandreJik, @sgugger
| 06-30-2021 05:09:43 | 06-30-2021 05:09:43 | Indeed, the key should be added to the keys to ignore on save, and expected missing keys.
Thanks for your investigation!<|||||>What if someone sets `config.tie_word_embeddings=False`? Should the save/expected keys be dynamically adjusted in `__init__`?
or do it the other way around - have `lm_head.decoder.weight` on the list and instead remove it in `__init__` if `config.tie_word_embeddings=False` as the former is the most likely behavior.<|||||>Yes, I think the second proposition makes a lot of sense! Great catch.
We'll need to check, but I wouldn't be surprised if other models have their lm head on the list - and nothing set in the `__init__` to prevent the save/expected keys from interacting with that layer. If so, it's not a high priority issue as no one has brought it up - but it is an issue nonetheless that potentially affects several models. |
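For illustration, the shape of the proposed fix would be roughly the following — a sketch only, not the actual patch; it assumes the usual `_keys_to_ignore_on_save` / `_keys_to_ignore_on_load_missing` class attributes and the second option discussed above (clearing the lists in `__init__` when the embeddings are not tied):
```python
from transformers.models.roberta.modeling_roberta import (
    RobertaLMHead,
    RobertaModel,
    RobertaPreTrainedModel,
)

class RobertaForMaskedLMSketch(RobertaPreTrainedModel):
    # lm_head.decoder.* is tied to the input embeddings, so don't persist it
    # and don't complain when it's absent from a checkpoint.
    _keys_to_ignore_on_save = [r"lm_head.decoder.weight", r"lm_head.decoder.bias"]
    _keys_to_ignore_on_load_missing = [r"lm_head.decoder.weight", r"lm_head.decoder.bias"]

    def __init__(self, config):
        super().__init__(config)
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.lm_head = RobertaLMHead(config)

        if not config.tie_word_embeddings:
            # decoder weights are independent here, so save/load them normally
            self._keys_to_ignore_on_save = []
            self._keys_to_ignore_on_load_missing = []

        self.init_weights()
```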