repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 9,811 | closed | adapt mbart and generate for Mbart50 | # What does this PR do?
This PR adapts `MBartForConditionalGeneration` and `generate` for mbart-50 models.
There are two main differences between mbart-50 and existing mbart-cc25 models
1. For mbart-50, both the source and target text begin with the `<language_token>`, whereas in mbart-cc25 the `<language_token>` is used as a suffix token.
2. The `decoder_input_ids` begin with `[eos] [tgt_lang_token] ...`, so for generation we need to use `eos` as the `decoder_start_token_id` and force the `tgt_lang_token` to be the first generated token (illustrated below).
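For illustration, the two layouts described above look like this (a sketch of the token order only, not actual tokenizer output; `X` stands for the tokenized text):
```python
# mbart-cc25: the language token is used as a suffix
#   encoded text:      X [eos] [lang_code]
#
# mbart-50: the language token is used as a prefix
#   encoded text:      [lang_code] X [eos]
#   decoder_input_ids: [eos] [tgt_lang_code] ...   # eos is the decoder_start_token_id
```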
This PR
1. adds `MBart50Tokenizer` which encodes the text as described above, IMO adding a new tokenizer makes sense as it'll make it explicit that mbart-50 encodes the text differently.
2. introduces two new `generate` arguments and `LogitsProcessor`
- `forced_bos_token_id` and `forced_eos_token_id`, to force a specific start and end token. This is particularly useful for many-to-many and one-to-many translation models, since we can pass different language tokens as `forced_bos_token_id` to `generate`.
- `ForcedBosTokenLogitsProcessor` and `ForcedEosTokenLogitsProcessor`
3. Removes the `adjust_logits_during_generation` method from all models (except `Marian`) and handles that use case with the newly introduced logits processors.
4. Removes the `force_bos_token_to_be_generated` argument from `BartConfig`.
For `Marian` we still need to keep the `adjust_logits_during_generation` method to force the model not to generate the pad token. Adding the pad token to `bad_words_ids` does not resolve this issue; the score of `pad_token_id` needs to be set to `-inf` before calling `log_softmax`.
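To make the mechanism concrete, here is a minimal sketch of how such a forced-BOS processor can work (illustrative only, not the exact implementation added in this PR): at the first decoding step every score except the forced token's is set to `-inf`, so beam search and sampling can only pick that token. The forced-EOS variant is analogous, triggering at the last position before `max_length`.
```python
import torch

class ForcedBosTokenSketch:
    """Illustrative sketch of the idea, not the processor added in this PR."""

    def __init__(self, bos_token_id: int):
        self.bos_token_id = bos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # Only the decoder start token has been generated so far -> force the next token.
        if input_ids.shape[-1] == 1:
            scores[:] = -float("inf")
            scores[:, self.bos_token_id] = 0.0
        return scores
```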
Below is an example of mbart-50 model using `forced_bos_token_id`
```python
from transformers import MBartForConditionalGeneration, MBart50Tokenizer
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-50-large-many-to-many")
tokenizer = MBart50Tokenizer.from_pretrained("facebook/mbart-50-large-many-to-many")
# translate Hindi to French
encoded_hi = tokenizer.prepare_seq2seq_batch(src_texts=article_hi, src_lang="hi_IN", return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire dans la Syrie."
# translate Arabic to English
encoded_ar = tokenizer.prepare_seq2seq_batch(src_texts=article_ar, src_lang="ar_AR", return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."
```
TODOs:
- [x] Make sure all generation related slow integration tests pass for affected models
- [x] BART
- [x] mBART (one test is failing, but it's failing on master as well, so not related to this PR)
- [x] Blender
- [x] FSMT
- [x] Marian
- [x] Pegasus
- [x] Generation integration test
- [x] add tests for `ForcedBosTokenLogitsProcessor` and `ForcedEosTokenLogitsProcessor`
- [x] document mBART-50
- [x] Add model cards ([all mbart-50 models](https://huggingface.co/models?filter=mbart-50))
- [x] add the forced params to `facebook/bart-large-cnn`'s config on the hub
- [ ] notebook explaining how to use the one-to-many and many-to-many translation models
Fixes #7060 | 01-26-2021 18:29:45 | 01-26-2021 18:29:45 | I like the idea of using `LogitsProcessor`; my only concern is that now, each time the user wants to use a different `max_length`, they would need to pass `forced_pos_id_pairs={max_length: eos_token_id}`.
Also, IMO `prefix_token` sounds more intuitive than `forced_pos_id_pairs`. So I think we could add a `ForceTokenProcessor` and keep both the `prefix_token` and `forced_pos_id_pairs` arguments.
And if `prefix_token` is passed or `config.force_bos_token_to_be_generated` is `True`, we set
```python
forced_pos_id_pairs = {2: prefix_token or bos_token_id, max_length: eos_token_id}
```
This would avoid a breaking change.
<|||||>Putting our discussion with @patil-suraj down here for everyone to see. @patil-suraj brought up a good point that `forced_pos_id_pairs` is not super user-friendly, might be hard to read, and people usually never force "in the middle" tokens to be generated. So I, too, like the proposed approach of making two LogitsProcessors better => we should therefore make
a `ForcedBosTokenLogitsProcessor` that takes a `token` as input and always forces the first token to be generated and a `ForcedEosTokenLogitsProcessor` that also takes a `token` as input and forces this token to be generated at `max_length`.
As discussed, we should delete all `adjust_logits` functionality and also get rid of Bart's `config.force_eos_to_be_generated` parameter while keeping full backwards compatibility.<|||||>Thanks a lot for making this work @patil-suraj!
Will review the PR now. I saw that you changed the config for `facebook/bart-large-cnn` online, which is nice, but you should not delete `force_bos_token_to_be_generated` from the config, or it won't be backwards compatible (previous transformers versions still need to use this param). Also, it seems like you accidentally uploaded a `.ipynb_checkpoints` folder.<|||||>> We have to make sure that all slow tests for Bart, MBart, Pegasus, Marian, Blenderbot, BlenderbotSmall, FSMT, RAG pass for both PT and TF. I think some of the RAGTokenGeneration could have been broken here
All slow tests are passing for PT; I will run the RAG tests now. Also, there is no `force_bos_token_id_to_be_generated` parameter in `RagConfig`; it's in the `generator` config if the `generator` is BART, and `BartConfig` already handles this.
> Not sure whether we want to do inheritance for the MBart50Tokenizer
No strong opinion here, I will remove the inheritance.
Regarding the `prepare_seq2seq_batch` method:
These checkpoints are mostly intended for multilingual fine-tuning/translation; in this case, it's actually nice to be able to pass the lang_id directly for encoding rather than setting `src_lang` and `tgt_lang` on the tokenizer each time we have a new language pair.
If we have a fixed source and target language, then `prepare_seq2seq_batch` definitely doesn't make much sense. |
transformers | 9,810 | closed | Can I use a smaller base model than allenai/led-base-16384 for LED? | Hello, I'm trying to fine tune my own Longformer Encoder Decoder following this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=jpUr9QeebZ-n). However, I was wondering if there was a way to consider a base model like
`allenai/longformer-base-4096`
instead of
`led-base-16384`?
When I try doing
```python
led = AutoModelForSeq2SeqLM.from_pretrained(
"allenai/longformer-base-4096",
config="roberta-base",
gradient_checkpointing=True,
use_cache=False,
)
```
it gives me
```
led = AutoModelForSeq2SeqLM.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/modeling_auto.py", line 1221, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.longformer.configuration_longformer.LongformerConfig'> for this kind of AutoModel: AutoModelForSeq2SeqLM.
Model type should be one of LEDConfig, BlenderbotSmallConfig, MT5Config, T5Config, PegasusConfig, MarianConfig, MBartConfig, BlenderbotConfig, BartConfig, FSMTConfig, EncoderDecoderConfig, XLMProphetNetConfig, ProphetNetConfig.
```
My `encoder_max_length` is only 2048, since I'm not planning on feeding in 8k-word transcripts to summarize but rather 2k-word transcripts. So using the smaller base model would work great. I'm basically trying to replicate the fine-tuning for this:
https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16
^but I would want to use that model more as a checkpoint to fine-tune further on domain-specific data. Hence I'm trying to create:
1.) a model fine-tuned on CNN/DailyMail data (basically replicating the [longformer2roberta](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16) model, but a version that we can fine-tune further)
2.) using part 1 as a checkpoint, fine-tune that on domain-specific transcript + summary data
@patrickvonplaten or others in the community, I'd greatly appreciate any advice on this | 01-26-2021 17:55:53 | 01-26-2021 17:55:53 | I'd suggest still using `allenai/led-base-16384` and just padding the input to a maximum length of only `2048`. Also, you could think about reducing the model config's `config.attention_window` to something like 512 or 256 to make your model more efficient for `2048`-token inputs<|||||>E.g. in this notebook: https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing, I just pad every input to a max length of `8192` and use `allenai/led-base-16384`, which works very well!<|||||>Sounds great, I'll look into setting `led.config.attention_window=512` instead of 1024 and `max_input_length=2048` for the encoder. Thank you for your feedback!
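For reference, a minimal sketch of the suggestion above (everything here is illustrative; only the checkpoint name and the 2048/512 numbers come from the thread):
```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "allenai/led-base-16384"

# Shrink the local attention window before instantiating the model
# (the checkpoint ships with a larger window per encoder layer).
config = AutoConfig.from_pretrained(checkpoint)
config.attention_window = [512] * config.encoder_layers

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, config=config)

# Pad/truncate inputs to 2048 tokens only (a multiple of the attention window).
batch = tokenizer(
    ["a 2k-word transcript ..."],
    padding="max_length",
    max_length=2048,
    truncation=True,
    return_tensors="pt",
)
```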
transformers | 9,809 | closed | Fix fine-tuning translation scripts | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
In the [seq2seq README.md](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#new-script) there are some errors. This PR fixes typos that cause the following problem:
```
Traceback (most recent call last):
File "transformers/examples/seq2seq/run_seq2seq.py", line 536, in <module>
main()
File "transformers/examples/seq2seq/run_seq2seq.py", line 419, in main
load_from_cache_file=not data_args.overwrite_cache,
File ".../lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1240, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File ".../lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1211, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "transformers/examples/seq2seq/run_seq2seq.py", line 388, in preprocess_function
inputs = [ex[source_lang] for ex in examples["translation"]]
File "transformers/examples/seq2seq/run_seq2seq.py", line 388, in <listcomp>
inputs = [ex[source_lang] for ex in examples["translation"]]
KeyError: 'en-XX'
```
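For context, a tiny hypothetical reproduction of the failing lookup: the README's language arguments have to match the keys of each example's `translation` dict (the dictionary keys below are assumptions; the real keys depend on the dataset).
```python
# Hypothetical data illustrating the KeyError above; real keys depend on the dataset.
examples = {"translation": [{"en": "Hello", "ro": "Salut"}]}

source_lang = "en-XX"  # value taken from the (incorrect) README command
try:
    inputs = [ex[source_lang] for ex in examples["translation"]]
except KeyError as err:
    print(err)  # 'en-XX' -- this toy dict is keyed by "en"/"ro" instead
```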
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
<!--
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? -->
## Who can review?
@sgugger, @patil-suraj
| 01-26-2021 16:23:14 | 01-26-2021 16:23:14 | |
transformers | 9,808 | closed | Adding a test to prevent late failure in the Table question answering pipeline. | # What does this PR do?
- If the table is empty, the line that contains `answer[0]` will fail.
- This PR adds a check to guard that `answer[0]` access.
- It also adds an early check for the presence of `table` and `query` to prevent late failures and give a better error message (see the sketch after this list).
- Adds a few tests to make sure these errors are correctly raised.
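A minimal sketch of the kind of early validation described above (the helper name is hypothetical; this is not the code added by the PR):
```python
def validate_table_qa_inputs(table, query):
    # Hypothetical helper illustrating the early checks; not the pipeline's actual code.
    if table is None or len(table) == 0:
        raise ValueError("table is empty")
    if query is None or len(query) == 0:
        raise ValueError("query is empty")
```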
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-26-2021 15:41:46 | 01-26-2021 15:41:46 | |
transformers | 9,807 | closed | Partial local tokenizer load | This PR aims to allow partial loading of a cached tokenizer.
Fixes #9147 which explains the issue in a lot of detail.
Currently, if we download a tokenizer from the hub using the `from_pretrained` method:
```py
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("google/bert_uncased_L-2_H-128_A-2")
```
It caches the files to be reused later. Reloading the tokenizer while specifying `local_files_only=True`
```py
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("google/bert_uncased_L-2_H-128_A-2", local_files_only=True)
```
results in a failure as it tries to fetch all of the tokenizer files, even those that are not necessary. It currently fails with a hard error.
This PR changes that error to an info log, and prints a single log containing all the files that were not loaded. I put it as an `info` and not as a `warning` or an `error`, because the situation where this is an actual issue is imo very rare; it is a real issue only when the initial `from_pretrained` managed to obtain only some of the necessary files, i.e., when the download was interrupted.
Running the last snippet results in the following warning:
```
Can't load following files from cache: ['added_tokens_file', 'special_tokens_map_file', 'tokenizer_config_file', 'tokenizer_file'] and cannot check if these files are necessary for the tokenizer to operate.
``` | 01-26-2021 15:03:09 | 01-26-2021 15:03:09 | |
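A rough sketch of the loading pattern described in the PR above (the helper names are hypothetical; this is not the actual `tokenization_utils_base` code):
```python
import logging

logger = logging.getLogger(__name__)

def resolve_tokenizer_files(vocab_files, resolve_from_cache):
    # `resolve_from_cache` is a hypothetical callable returning a local path,
    # or None when the file cannot be found (e.g. with local_files_only=True).
    resolved, unresolved = {}, []
    for file_id, file_path in vocab_files.items():
        local_path = resolve_from_cache(file_path)
        if local_path is None:
            unresolved.append(file_id)
        else:
            resolved[file_id] = local_path
    if unresolved:
        logger.info(
            f"Can't load following files from cache: {unresolved} and cannot check "
            "if these files are necessary for the tokenizer to operate."
        )
    return resolved
```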
transformers | 9,806 | closed | Add a test for TF mixed precision | # What does this PR do?
This PR adds a test to check whether our TF models are float16 compliant or not. It also helps me detect which ones have to be fixed. | 01-26-2021 14:26:28 | 01-26-2021 14:26:28 | I don't think the tests that fail are related to this PR.
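As a side note, a minimal sketch of what a float16-compliance smoke test can look like (assuming TF >= 2.4 for the non-experimental mixed-precision API; the `Sequential` model is just a stand-in, whereas the PR's test exercises the library's TF models):
```python
import tensorflow as tf

# Run the forward pass under the mixed_float16 policy and check the output dtype.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([tf.keras.layers.Dense(8), tf.keras.layers.Dense(2)])
outputs = model(tf.random.uniform((4, 16)))

assert outputs.dtype == tf.float16  # compute dtype is float16, variables stay float32
```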
transformers | 9,805 | closed | Commit the last step on world_process_zero in WandbCallback | # What does this PR do?
With DDP, only commit the last step on the first process (`is_world_process_zero == True`), to avoid calling `wandb.log()` without a prior call to `wandb.init()` (the latter is only called on the first process).
Fixes (at least partially) #9623
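A minimal sketch of the guard (illustrative only; the actual change lives in `WandbCallback` in `integrations.py`):
```python
import wandb

def commit_last_step(state):
    # `commit_last_step` is a hypothetical helper; only the first process ever
    # called wandb.init(), so only it is allowed to call wandb.log().
    if state.is_world_process_zero:
        wandb.log({})  # commit whatever is pending for the last step
```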
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@borisdayma @sgugger
| 01-26-2021 14:10:43 | 01-26-2021 14:10:43 | Would it make sense to move [those 2 lines](https://github.com/huggingface/transformers/blob/0dd939bf1e01594eadf21b40e5cdb07001233cbf/src/transformers/integrations.py#L573-L574) to the init method instead of setting `self._log_model` to False?<|||||>This should be equivalent indeed, and probably easier to read<|||||>Looks great! |
transformers | 9,804 | closed | Finetuning ProphetNet with Seq2SeqTrainer fails. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Ubuntu 18
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1 (YES)
- Tensorflow version (GPU?):
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@LysandreJik @patrickvonplaten @sgugger
## Information
When trying to fine-tune ProphetNet on a summarization task (with transformers/examples/seq2seq/finetune_trainer.py), the model crashes just after performing the evaluation. This script has worked fine with Bart, Pegasus and T5, the other three models I've tried. The error trace is the following:
```{python}
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.:24, 2.57it/s]
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same `srcIndex < srcSelectDimSize` assertion repeated for every remaining thread in blocks [228,0,0] and [284,0,0]; output truncated ...]
{'loss': 8.933700561523438, 'learning_rate': 2.992816091954023e-05, 'epoch': 0.04782400765184122}
Traceback (most recent call last):
File "finetune_trainer.py", line 498, in <module>
main()
File "finetune_trainer.py", line 426, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 853, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 923, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer_seq2seq.py", line 96, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 1352, in evaluate
metric_key_prefix=metric_key_prefix,
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 1469, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer_seq2seq.py", line 175, in prediction_step
model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 1574, in prediction_step
outputs = model(**inputs)
File "/home/alejandro.vaca/miniconda/envs/spainai_hackaton/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1769, in forward
return_dict=return_dict,
File "/home/alejandro.vaca/miniconda/envs/spainai_hackaton/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1667, in forward
return_dict=return_dict,
File "/home/alejandro.vaca/miniconda/envs/spainai_hackaton/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1365, in forward
) = self.compute_buffered_relative_buckets(position_ids)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1496, in compute_buffered_relative_buckets
position_ids = torch.arange(1, self.max_target_positions).to(position_ids.device).repeat(1, 1)
RuntimeError: CUDA error: device-side assert triggered
0%| | 25/10440 [02:19<16:08:03, 5.58s/it]
```
Model I am using (Bert, XLNet ...): ProphetNet (`microsoft/prophetnet-large-uncased`)
The problem arises when using:
* [x] the official example scripts: (give details below)
It arises when using the official script for training Seq2Seq models.
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
A dataset with texts and their summaries.
## To reproduce
Steps to reproduce the behavior:
1. Run the script transformers/examples/seq2seq/finetune_trainer.py with any dataset you want, passing the ProphetNet checkpoint as the model argument. More concretely, call the script the following way (a CPU sketch that surfaces the underlying error follows the command):
```{bash}
python finetune_trainer.py --learning_rate=3e-5 --task summarization \
--do_train --do_eval --evaluation_strategy steps --model_name_or_path microsoft/prophetnet-large-uncased \
--data_dir mydatadir --output_dir myoutputdir \
--per_device_train_batch_size 8 --per_device_eval_batch_size 16 \
--eval_accumulation_steps 8 --gradient_accumulation_steps 8 --num_train_epochs=20 --eval_beams=1 \
--load_best_model_at_end --save_steps 25 --logging_steps 25 --fp16 \
--overwrite_output_dir
```
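A minimal sketch of how the same failure can be surfaced as a readable error on CPU (the document/summary strings below are placeholders, not my real data):

```python
# Sketch: run one over-long sample through the model on CPU, where the failing
# lookup raises a plain Python IndexError instead of an opaque device-side assert.
from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

model_name = "microsoft/prophetnet-large-uncased"
tokenizer = ProphetNetTokenizer.from_pretrained(model_name)
model = ProphetNetForConditionalGeneration.from_pretrained(model_name)  # kept on CPU

long_document = "..."  # placeholder: one of my documents, longer than the position limit
summary = "..."        # placeholder: its reference summary

enc = tokenizer(long_document, return_tensors="pt")            # deliberately no truncation
labels = tokenizer(summary, return_tensors="pt").input_ids

print("source length:", enc.input_ids.shape[1],
      "| max positions:", model.config.max_position_embeddings)

# On CPU this forward pass fails with an out-of-range index error whenever the
# tokenized length exceeds the model's position-embedding table.
outputs = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
```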
## Expected behavior
It should not crash when training ProphetNet, as it doesn't crash for Bart, Pegasus or T5... | 01-26-2021 11:59:12 | 01-26-2021 11:59:12 | Hey @alexvaca0,
Thanks for your issue. We have started to create a more general script called `run_seq2seq.py` with which fine-tuning ProphetNet should work rather easily.
Could you try to pull current master and do:
```
python examples/seq2seq/run_seq2seq.py --learning_rate=3e-5 --task summarization --do_train --do_eval --evaluation_strategy steps --model_name_or_path microsoft/prophetnet-large-uncased --output_dir myoutputdir --per_device_train_batch_size 8 --per_device_eval_batch_size 16 --eval_accumulation_steps 8 --gradient_accumulation_steps 8 --num_train_epochs=20 --eval_beams=1 --load_best_model_at_end --save_steps 25 --logging_steps 25 --fp16 --overwrite_output_dir --dataset_name cnn_dailymail --dataset_config_name 3.0.0
```
for the CNN/DailyMail dataset, for example.
Please let me know how it goes; I'm very interested in ProphetNet fine-tuning results.<|||||>Thank you very much for your quick response, @patrickvonplaten! As soon as I can, I'll try that command to check whether the new `run_seq2seq.py` script works fine with ProphetNet. When I have results/errors I'll let you know.
<|||||>I've tried to run the script you suggested, @patrickvonplaten, but it returns the following error when evaluating:
```{python}
All the weights of ProphetNetForConditionalGeneration were initialized from the model checkpoint at microsoft/prophetnet-large-uncased.
If your task is similar to the task the model of the checkpoint was trained on, you can already use ProphetNetForConditionalGeneration for predictions without further training.
Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-2def39d5bd2a9c76/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2/cache-7e4959c336c61e5a.arrow
Loading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-2def39d5bd2a9c76/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2/cache-b898db3404de8043.arrow
The following columns in the training set don't have a corresponding argument in `ProphetNetForConditionalGeneration.forward` and have been ignored: token_type_ids.
The following columns in the evaluation set don't have a corresponding argument in `ProphetNetForConditionalGeneration.forward` and have been ignored: token_type_ids.
***** Running training *****
Num examples = 33451
Num Epochs = 20
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 128
Gradient Accumulation steps = 16
Total optimization steps = 5220
{'loss': 5.5221, 'learning_rate': 4.760536398467433e-05, 'epoch': 0.96}
5% 250/5220 [16:57<5:41:20, 4.12s/it]***** Running Evaluation *****
Num examples = 2697
Batch size = 16
0% 0/169 [00:00<?, ?it/s]
[... evaluation progress output truncated; the bar advances to 105/169 batches before the first device-side assert appears ...]
63% 106/169 [00:13<00:08, 7.09it/s]/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same `srcIndex < srcSelectDimSize` assertion repeated for the remaining threads in blocks [37,0,0], [117,0,0], [77,0,0] and [157,0,0]; output truncated ...]
Traceback (most recent call last):
File "transformers/examples/seq2seq/run_seq2seq.py", line 541, in <module>
main()
File "transformers/examples/seq2seq/run_seq2seq.py", line 503, in main
train_result = trainer.train(model_path=model_path)
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py", line 924, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py", line 999, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer_seq2seq.py", line 96, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py", line 1447, in evaluate
metric_key_prefix=metric_key_prefix,
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py", line 1564, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer_seq2seq.py", line 175, in prediction_step
model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py", line 1670, in prediction_step
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1772, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1656, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1223, in forward
hidden_states = inputs_embeds + position_embeddings
RuntimeError: CUDA error: device-side assert triggered
5% 250/5220 [17:12<5:42:05, 4.13s/it]
```
I've run it with --no_cuda and there are no errors; it works properly in that setting, so it must be a CUDA-related issue. I've tried disabling fp16 and the error persists.<|||||>I confirm that it works with T5, so this is ProphetNet-related.<|||||>Who is in charge of developing the ProphetNet code? @patrickvonplaten @sgugger <|||||>Hey @alexvaca0, thanks for trying out the script! I'm quite sure that this is an indexing error that occurs because a data sample is too large for the model to handle. It should be easy to fix by simply adding:
```
--max_source_length 512
```
to the command above. Could you try this and let me know if it works? :-)<|||||>@patrickvonplaten Great! That was it, the sequence length!
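(For context, here is a minimal toy sketch, not the ProphetNet code, of why an over-long sample trips that assert, assuming learned position embeddings capped at 512 positions; truncating with `--max_source_length 512` keeps the position ids in range.)
```python
import torch

max_positions, hidden = 512, 16  # illustrative sizes
pos_emb = torch.nn.Embedding(max_positions, hidden)

seq_len = 600  # one sample longer than the model's maximum
position_ids = torch.arange(seq_len)

# On CPU this raises "IndexError: index out of range in self";
# on CUDA the same out-of-range lookup surfaces as the device-side
# `srcIndex < srcSelectDimSize` assert shown in the log above.
pos_emb(position_ids)
```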
Actually, I'm trying to fine-tune ProphetNet on a summarization task in which models like T5 and BART achieve eval losses of around 0.5-0.6, but with ProphetNet I'm not able to go below 5, and the eval loss doesn't actually decrease over training; it seems to be diverging. I've tried using the same parameters as with BART and T5, and also the parameters from the paper (https://arxiv.org/pdf/2001.04063.pdf) for CNN/DailyMail, that is, batch size 512 and learning rate 1e-04 with 1000 warmup steps (in my case I use fewer due to the training data size).
Any recommendations/suggestions? ProphetNet was expected to work similarly to BART, but its performance has been much worse so far...<|||||>I don't know if this warning provides some extra info: The following columns in the training set don't have a corresponding argument in `ProphetNetForConditionalGeneration.forward` and have been ignored: token_type_ids.
@patrickvonplaten <|||||>> @patrickvonplaten Great! That was it, the sequence length!
>
> Actually, I'm trying to fine-tune ProphetNet in a Summarization task, in which models like T5, BART etc achieve eval losses of around 0.5-0.6 (approx), but with ProphetNet I'm not able to go below 5, and the eval loss doesn't actually decrease over training, it seems like it's diverging. I've tried using the same parameters as with BART and T5, and also with the parameters of the paper (https://arxiv.org/pdf/2001.04063.pdf) for CNN/DailyMail, that is batch size 512, learning rate 1e-04 with warmup steps 1000 (in my case I use less due to training data size).
>
> Any recommendations/suggestions? ProphetNet was expected to work similarly to BART but its performance is much worse until now...
Interesting! Could you share the exact command you used here? Also pinging @qiweizhen - do you know what could be a problem for this? Are we sure that the n-gram loss is correctly implemented?<|||||>```{bash}
python transformers/examples/seq2seq/run_seq2seq.py \
--model_name_or_path microsoft/prophetnet-large-uncased \
--do_eval --do_train \
--task summarization \
--train_file train_df.csv \
--validation_file val_df.csv \
--output_dir prophetnet_0201 \
--overwrite_output_dir \
--per_device_train_batch_size=8 \
--per_device_eval_batch_size=16 \
--eval_accumulation_steps=10 \
--text_column text \
--max_source_length 364 \
--summary_column summary \
--max_target_length 60 \
--val_max_target_length 60 --evaluation_strategy steps \
--gradient_accumulation_steps 64 --num_train_epochs=20 --eval_beams=1 \
--load_best_model_at_end --save_steps 75 --logging_steps 75 --learning_rate 1e-04 --warmup_steps 200
```
This is the command I'm using. After trying some modifications I observe the same: no progress is made in evaluation, and almost no progress in training (loss 5.1 after almost 7 epochs), so it seems there may be some issue with the ProphetNet implementation...
@patrickvonplaten @qiweizhen <|||||>I ran into the same problem: training crashes in run_seq2seq.py. I suspect it is related to the sequence lengths in the configuration and the training set.
```bash
python ./run_seq2seq.py \
--model_name_or_path sshleifer/student_marian_en_ro_6_1 \
--do_train \
--do_eval \
--task translation_en_to_ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--source_lang en_XX \
    --target_lang ro_RO \
--output_dir ~/tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
transformers version: 4.4.0.dev0
Platform: Ubuntu 16.04.7
Python version: 3.8.5
PyTorch version (GPU?): 1.7.1 (YES)
Tensorflow version (GPU?):
Using GPU in script?: YES
Using distributed or parallel set-up in script?: Yes, it detects 6 GPUs.
Error:
> /opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [264,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
> [the same `srcIndex < srcSelectDimSize` assertion is repeated many times, interleaved across the 6 GPUs]
<|||||>Any updates on the ProphetNet loss? @patrickvonplaten <|||||>I have some more information on this. After training for 20 epochs, it learns almost nothing. Most interestingly, its outputs don't change when the inputs change; it always predicts the same thing. The predictions look like a mix of different summaries, taking elements from different types of summarizable texts, but the output is the same for every input... This makes me think that in some way the network is constructed so that the output layer must always produce the same thing, as if it had to improve on all batches at the same time, if that makes sense. It's clear that it is learning "something", in the sense that the summaries are clearly in the style of my corpus, but it's essentially learning to produce the same summary for every text. Since I'm using the same script as for other models, I guess there is some error in the network implementation...<|||||>Hey @alexvaca0,
I think I can reproduce your error. My training loss is also not improving after quite some time - will look into it!<|||||>Okay perfect! Please let me know when the issue is solved.<|||||>The original author @qiweizhen of the model was so nice to say he'll take a look. @qiweizhen - feel free to directly post any potential bugs in this PR.<|||||>Any updates on this? @qiweizhen @patrickvonplaten <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Ping<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>If prophetnet is not going to be fixed, then I think it should be removed from the library, as it is worthless having it here without being able to use it.<|||||>The model works fine in inference - it's the training that seems to be buggy. @qiweizhen - do you think we could take a look at ProphetNet together? <|||||>> The model works fine in inference - it's the training that seems to be buggy. @qiweizhen - do you think we could take a look at ProphetNet together?
> If prophetnet is not going to be fixed, then I think it should be removed from the library, as it is worthless having it here without being able to use it.
Sorry. Will fix it as soon as possible.
<|||||>It's strange that I get correct inference / forward results with beam search, but, as you pointed out, the model has a non-convergence problem. I tried loading both the pretrained checkpoint and the finetuned checkpoint and carrying out further fine-tuning; in both cases the loss is optimized down to 7.x and then stays there. With the finetuned checkpoint plus further fine-tuning, the results are still reasonable but a bit worse. I suspect most of the model is frozen and only a small part is trainable, but I failed to find this bug. I also tried overfitting experiments and the model still cannot converge. I will try 1) an old Transformers version and 2) the fairseq model to compare the intermediate hidden states with the latest Transformers ProphetNet model to localize the bug this weekend.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Has the problem of ProphetNet non-convergence been solved? I want to fine tune it based on its checkpoint.<|||||>I think the code to compute the loss may be wrong.
This is the code to compute the loss in [`ProphetNetForConditionalGeneration`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/prophetnet/modeling_prophetnet.py#L1968):
```python
predicting_streams = outputs[1].view(batch_size, self.config.ngram, sequence_length, -1)
predict_logits = self.lm_head(predicting_streams)
...
loss = None
if labels is not None:
loss = self._compute_loss(predict_logits, labels)
```
The shape of `predicting_streams` is `(batch_size, ngram, sequence_length, hidden_size)`.
The shape of `predict_logits` is `(batch_size, ngram, sequence_length, vocab_size)`.
The shape of `labels` is `(batch_size, sequence_length)`.
Then pass `predict_logits` and `labels` to `_compute_loss`, the code of [`_compute_loss`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/prophetnet/modeling_prophetnet.py#L2001) is:
```python
def _compute_loss(self, logits, labels, ignore_index=-100):
expend_targets = labels.new_zeros(self.config.ngram, labels.size(0), labels.size(1)).fill_(ignore_index)
for i in range(self.config.ngram):
if i > 0 and self.disable_ngram_loss:
break
expend_targets[i, :, :] = labels
lprobs = nn.functional.log_softmax(
logits.view(-1, logits.size(-1)),
dim=-1,
dtype=torch.float32,
)
loss = nn.functional.nll_loss(lprobs, expend_targets.view(-1), reduction="mean")
...
return loss
```
The shape of `expend_targets` is `(ngram, batch_size, sequence_length)`, and the shape of `expend_targets.view(-1)` is `(ngram * batch_size * sequence_length)`.
The shape of `lprobs` is `(batch_size * ngram * sequence_length, vocab_size)`.
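For illustration, here is a toy sketch of the two flattening orders (my own example with made-up sizes, not the library code; the `transpose` shown is just one possible way to align them, mirroring the row order used by the fairseq criterion mentioned below):
```python
import torch

# Toy sizes; any batch_size > 1 shows the problem.
batch_size, ngram, seq_len, vocab_size = 2, 2, 3, 5

logits = torch.randn(batch_size, ngram, seq_len, vocab_size)   # (batch, ngram, seq, vocab)
labels = torch.randint(0, vocab_size, (batch_size, seq_len))
expend_targets = labels.unsqueeze(0).expand(ngram, -1, -1)      # (ngram, batch, seq)

# Current pairing: logits rows are flattened batch-major while targets are
# flattened ngram-major, so row i of one side does not match row i of the other.
rows_as_flattened_now = logits.reshape(-1, vocab_size)

# One possible alignment (an assumption on my side): bring logits to
# (ngram, batch, seq, vocab) before flattening so both sides share the same order.
rows_aligned = logits.transpose(0, 1).reshape(-1, vocab_size)

lprobs = torch.nn.functional.log_softmax(rows_aligned, dim=-1, dtype=torch.float32)
loss = torch.nn.functional.nll_loss(lprobs, expend_targets.reshape(-1), reduction="mean")
```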
Computing the `nll_loss` of `lprobs` against `expend_targets.view(-1)` therefore pairs rows in two different orders (batch-major vs. ngram-major), which is the mismatch.<|||||>@patrickvonplaten <|||||>This is the code of the prophetnet [hub](https://github.com/microsoft/ProphetNet/blob/master/ProphetNet_En/prophetnet/ngram_criterions.py#L36).
You can see at [line 62](https://github.com/microsoft/ProphetNet/blob/master/ProphetNet_En/prophetnet/ngram_criterions.py#L62) that the shape of `logits` there is `(ngram * batch_size * sequence_length, vocab_size)`.<|||||>Hey @StevenTang1998,
Thanks a lot for taking a closer look here! Would you be interested in opening a PR to fix it?<|||||>OK, I will open a PR after the test is successful.<|||||>I have opened a [pr](https://github.com/huggingface/transformers/pull/13132).
I did a test, but I am unfamiliar with the submission conventions. Please review it.<|||||>@alexvaca0,
I think the fix by @StevenTang1998 may have fixed the training bug of ProphetNet. It would be amazing if you could give it a try with current master :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,803 | open | convert_graph_to_onnx.convert broken for model bart-large / wmt19-en-de | @stas00's edit on top:
I currently don't have the know-how in this domain, so if there are members of the community with ONNX experience and this issue resonates with you, please don't hesitate to comment if you'd like to work on resolving this. Thank you very much!
------------------------
## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
- ONNX Version: 1.5.2 (ONNX custom build w CUDA 11)
### Who can help
@stas00 (based on his suggestion to open a new issue in #9722 and run this with bart)
@patrickvonplaten (based on link of @stas00 in #9722)
@mfuntowicz (based on link of @stas00 in #9722)
@LysandreJik (based on link of @stas00 in #9722)
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large & facebook/wmt19-en-de
The problem arises when using:
* [X] the official example scripts: transformers.convert_graph_to_onnx.convert
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## Description
Initially, I was about to use the ONNX export of facebook/wmt19-en-de for our deployment. Yet it turns out that the exported models do not work properly. It seems that several things are broken in the export for this model type.
## To reproduce
### 1. Testing facebook/wmt19-en-de
```
import torch
import transformers
import numpy as np
import onnxruntime as rt
from pathlib import Path
from transformers import convert_graph_to_onnx
print(rt.__version__)
opt = rt.SessionOptions()
model_name = "facebook/wmt19-en-de"
pipeline_name = "translation_en_to_de"
model_pth = Path("encoder/en_de_trans.onnx")
if model_pth.exists():
model_pth.unlink()
nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
convert_graph_to_onnx.convert(
framework="pt",
model=model_name,
output=model_pth,
opset=12,
tokenizer=model_name,
use_external_format= False,
pipeline_name= pipeline_name,
)
sess = rt.InferenceSession(str(model_pth), opt)
spans = [
"My name is Bert", # passes facebook/wmt19-en-de
"My name is Bert and" # fails facebook/wmt19-en-de
]
for span in spans:
model_input = nlp.tokenizer.encode_plus(span)
model_input = {name : np.atleast_2d(value) for name, value in model_input.items()}
out = nlp.model(**nlp.tokenizer(span, return_tensors="pt"))
trans_1 = out[0].detach().cpu().numpy()
trans_2 = out[1].detach().cpu().numpy()
onnx_1, onnx_2 = sess.run(None, model_input)
assert np.allclose(trans_1, onnx_1, atol=1e-5)
assert np.allclose(trans_2, onnx_2, atol=1e-5)
```
Will raise the following exception:
```
Some weights of FSMTModel were not initialized from the model checkpoint at facebook/wmt19-en-de and are newly initialized: ['model.encoder.embed_positions.weight', 'model.decoder.embed_positions.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
ONNX opset version set to: 12
Loading pipeline (model: facebook/wmt19-en-de, tokenizer: facebook/wmt19-en-de)
Using framework PyTorch: 1.7.1
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Found output output_1 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
decoder_input_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
**[skipped warnings for brevity...]**
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
<ipython-input-2-f4eec5b0ac5f> in <module>
51 trans_1 = out[0].detach().cpu().numpy()
52 trans_2 = out[1].detach().cpu().numpy()
---> 53 onnx_1, onnx_2 = sess.run(None, model_input)
54 assert np.allclose(trans_1, onnx_1, atol=1e-5)
55 assert np.allclose(trans_2, onnx_2, atol=1e-5)
~/anaconda3/envs/dev/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
122 output_names = [output.name for output in self._outputs_meta]
123 try:
--> 124 return self._sess.run(output_names, input_feed, run_options)
125 except C.EPFail as err:
126 if self._enable_fallback:
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_74' Status Message: /data/shared/packages/onnxruntime/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,6}, requested shape:{5}
```
As stated in #9722, I'd assume that some dynamic shape was not inferred properly or not passed to the `dynamic_axes` of `torch.onnx.export`. But that's just a quick guess, based on what I run into when I build my own ONNX models. Important: the first string passes the assertions, the second one doesn't.
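For reference, this is the kind of explicit dynamic-axes declaration I mean; a hand-written sketch where `model`, `input_ids`, `attention_mask` and the axis names are placeholders, not what `convert_graph_to_onnx` currently emits:
```python
import torch

# Hypothetical manual export with explicitly declared dynamic axes.
# `model`, `input_ids` and `attention_mask` are assumed to already exist.
torch.onnx.export(
    model,
    (input_ids, attention_mask),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=12,
)
```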
### 2. Testing facebook/bart-large (feature extraction)
@stas00 suggested to re-test the behavior with the underlying BART model. Now, say we run the same script with the following parameters:
```
model_name = "facebook/bart-large"
pipeline_name = "feature-extraction"
model_pth = Path("generator/bart.onnx")
```
Raises
```
ONNX opset version set to: 12
Loading pipeline (model: facebook/bart-large, tokenizer: facebook/bart-large)
Using framework PyTorch: 1.7.1
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
**[skipped output axes for brevity...]**
Found output output_13 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
decoder_input_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py:1111: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output_1
warnings.warn('No names were found for specified dynamic axes of provided input.'
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-3362f5ef6ea8> in <module>
30 nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
31
---> 32 convert_graph_to_onnx.convert(
33 framework="pt",
34 model=model_name,
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name)
365 # Export the graph
366 if framework == "pt":
--> 367 convert_pytorch(nlp, opset, output, use_external_format)
368 else:
369 convert_tensorflow(nlp, opset, output)
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert_pytorch(nlp, opset, output, use_external_format)
277 ordered_input_names, model_args = ensure_valid_input(nlp.model, tokens, input_names)
278
--> 279 export(
280 nlp.model,
281 model_args,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
223
224 from torch.onnx import utils
--> 225 return utils.export(model, args, f, export_params, verbose, training,
226 input_names, output_names, aten, export_raw_ir,
227 operator_export_type, opset_version, _retain_param_name,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
83 else:
84 operator_export_type = OperatorExportTypes.ONNX
---> 85 _export(model, args, f, export_params, verbose, training, input_names, output_names,
86 operator_export_type=operator_export_type, opset_version=opset_version,
87 _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference, use_new_jit_passes)
627 if dynamic_axes is None:
628 dynamic_axes = {}
--> 629 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
630
631 graph, params_dict, torch_out = \
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
1115 for i, x in enumerate(value):
1116 if not isinstance(x, int):
-> 1117 raise ValueError("The type of axis index is expected to be an integer")
1118 if x in value_dict:
1119 warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'
ValueError: The type of axis index is expected to be an integer
```
### 3. Testing facebook/bart-large (text-generation)
```
model_name = "facebook/bart-large"
pipeline_name = "text-generation"
model_pth = Path("generator/bart.onnx")
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-d6fa1456dc0e> in <module>
28 model_pth.unlink()
29
---> 30 nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
31
32 convert_graph_to_onnx.convert(
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)
403 )
404
--> 405 model = model_class.from_pretrained(model, config=config, revision=revision, **model_kwargs)
406 if task == "translation" and model.config.task_specific_params:
407 for key in model.config.task_specific_params:
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1040 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1041 )
-> 1042 raise ValueError(
1043 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
1044 "Model type should be one of {}.".format(
ValueError: Unrecognized configuration class <class 'transformers.models.bart.configuration_bart.BartConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig.
```
### 4. Testing facebook/bart-large (fill-mask)
```
model_name = "facebook/bart-large"
pipeline_name = "fill-mask"
model_pth = Path("generator/bart.onnx")
```
```
ONNX opset version set to: 12
Loading pipeline (model: facebook/bart-large, tokenizer: facebook/bart-large)
Using framework PyTorch: 1.7.1
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
**[skipped for brevity]**
Found output output_13 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
decoder_input_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-d55ec01c8b87> in <module>
34 nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
35
---> 36 convert_graph_to_onnx.convert(
37 framework="pt",
38 model=model_name,
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name)
365 # Export the graph
366 if framework == "pt":
--> 367 convert_pytorch(nlp, opset, output, use_external_format)
368 else:
369 convert_tensorflow(nlp, opset, output)
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert_pytorch(nlp, opset, output, use_external_format)
277 ordered_input_names, model_args = ensure_valid_input(nlp.model, tokens, input_names)
278
--> 279 export(
280 nlp.model,
281 model_args,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
223
224 from torch.onnx import utils
--> 225 return utils.export(model, args, f, export_params, verbose, training,
226 input_names, output_names, aten, export_raw_ir,
227 operator_export_type, opset_version, _retain_param_name,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
83 else:
84 operator_export_type = OperatorExportTypes.ONNX
---> 85 _export(model, args, f, export_params, verbose, training, input_names, output_names,
86 operator_export_type=operator_export_type, opset_version=opset_version,
87 _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference, use_new_jit_passes)
627 if dynamic_axes is None:
628 dynamic_axes = {}
--> 629 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
630
631 graph, params_dict, torch_out = \
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
1115 for i, x in enumerate(value):
1116 if not isinstance(x, int):
-> 1117 raise ValueError("The type of axis index is expected to be an integer")
1118 if x in value_dict:
1119 warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'
ValueError: The type of axis index is expected to be an integer
```
## Expected behavior
Cases 1, 2 and 4 point in the direction that something is wrong with inferring the dynamic shapes, if I am right. Case 3 just popped up while I was testing the other pipelines.
In all cases, the export & usage should work properly. | 01-26-2021 10:35:14 | 01-26-2021 10:35:14 | Thank you very much, @oborchers for opening a new ticket and re-testing with other models and verifying that this problem is project-wide.
I hope @mfuntowicz gets a chance to have a look at it, or tag someone else who understands this sub-domain.<|||||>Hi @mfuntowicz @stas00 , is this a known issue with GPT2 as well? Please let me know if there is a workaround.
I was considering to convert ```gpt2``` or ```gpt2-medium``` to ONNX using the notebook provided [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb).
On executing the line of code below:
```convert(framework="pt", model="gpt2-medium", output=Path("onnx/gpt2-medium.onnx"), opset=11)```
I get this error:
```~/miniconda3/envs/onnx/lib/python3.9/site-packages/torch/onnx/utils.py in _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
1115 for i, x in enumerate(value):
1116 if not isinstance(x, int):
-> 1117 raise ValueError("The type of axis index is expected to be an integer")
1118 if x in value_dict:
1119 warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'
ValueError: The type of axis index is expected to be an integer```<|||||>I recently stumbled upon this issue myself, specifically case 2. The same error appears for `facebook/bart-large`, `facebook/bart-large-cnn`, and `IlyaGusev/mbart_ru_sum_gazeta`. The main issue here is that for some outputs the tokenizer/model returns not a single tensor but a **tuple of tensors**, which is then converted into a list of shape dicts.
`torch.onnx._validate_dynamic_axes` (line 1193 in the latest release) expects each value in `dynamic_axes` to be either a dict (which it leaves untouched) or a list of ints (for which it mocks up axis names); however, for the reason above, it gets a __list of dicts (int -> string maps)__
```python3
for key, value in dynamic_axes.items():
if key not in valid_names:
warnings.warn("Provided key {} for dynamic axes is not a valid input/output name".format(key))
if isinstance(value, list):
warnings.warn('No names were found for specified dynamic axes of provided input.'
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
value_dict = {}
for i, x in enumerate(value):
if not isinstance(x, int):
raise ValueError("The type of axis index is expected to be an integer")
if x in value_dict:
warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'
.format(x, key))
else:
value_dict[x] = str(key) + '_dynamic_axes_' + str(i + 1)
dynamic_axes[key] = value_dict
```
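For illustration, here is a toy reproduction (my own sketch with made-up values, not the transformers or torch code) of how a list-of-dicts value falls into that branch and trips the integer check:
```python
# What the exporter ends up passing for a tuple output (illustrative values only):
dynamic_axes = {
    "output_1": [{0: "batch", 1: "sequence"}, {0: "batch", 1: "sequence"}],
}

# Same structure as the torch.onnx code above: the value is a list,
# but its elements are dicts, not ints, so the ValueError is raised.
for key, value in dynamic_axes.items():
    if isinstance(value, list):
        for i, x in enumerate(value):
            if not isinstance(x, int):
                raise ValueError("The type of axis index is expected to be an integer")
```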
I will keep digging into this, but the core question here is why Bart and related models return tuples of tensors for some outputs (outputs 1 to 12; outputs 0 and 13 are fine). I'm not an expert in transformers, PyTorch or ONNX, though, so I might be missing something.
On a slight tangent here, is there a specific reason why `summarization` pipeline is not in the supported pipeline types for this script? <|||||>any update?<|||||>any update?<|||||>any update? <|||||>We're currently working on a rework of the ONNX implementation within Transformers, which is available here: https://github.com/huggingface/transformers/pull/11786
Instead of offering a script to enable conversions for all models (which was not kept up to date with recent model releases), we're opting for a case-by-case approach, while offering the tools to convert models manually in a straightforward and simple manner; by creating `OnnxConfig` configuration objects to specify the input and output types of each model.
Please take a look at the PR and give us your feedback.<|||||>@LysandreJik: Thank you very much! I think this is an excellent way to go. Having converted a dozen models myself, we internally went for something similar, albeit not nearly as streamlined / sophisticated.
```
@attr.s(auto_attribs=True)
class TransformersONNXConfig(BaseConfig):
"""Provides the basic configuration for all models."""
base_model: str
trans_cfg: PretrainedConfig
input_names: List[str]
output_names: List[str]
dynamic_axes: Dict
model_args: Set[torch.tensor]
tokenizer: PreTrainedTokenizerFast
extra_args: Dict
```
and
```
def create_and_export_onnx_model(self):
"""Creates a new model if the current model does not exist and exports it."""
torch.onnx.export(
self.create_torch_model(),
self.cfg.model_args,
f=self.onnx_posix_pth,
input_names=self.cfg.input_names,
output_names=self.cfg.output_names,
dynamic_axes=self.cfg.dynamic_axes,
do_constant_folding=True,
use_external_data_format=False,
enable_onnx_checker=True,
opset_version=12,
)
```
Where the most important part is `self.create_torch_model`, as we regularly modify the basic torch model with custom layers down the line. Is support for such a feature planned? If not, could it be considered? It would substantially ease the conversion of custom models, such as the [sbert](https://www.sbert.net) ones.
Furthermore, would it make sense to make `OnnxConfig` a part of the `PreTrainedModel` config to enable support from the get-go?
And finally, I assume this leaves us with the export, so that for seq2seq models we need still need to re-write the `.generate` function? Or is it possible to add support for an ONNX model from your side (probably difficult, as it's a part of the pre-trained model already, which would require double loading the model)? <|||||>Thanks @oborchers for your comments and use-cases.
I will let @LysandreJik speak about a potential integration of the `OnnxConfig` within the `PreTrainedModel` config, my initial plan was to have 100% backward compatibility, this explain why I put this somewhere else _(currently)_.
Regarding `generate`, this is something that might require some investigations but I'm seeing good opportunities to have something within the ONNX graph with [the recent knobs released by Microsoft](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2-OneStepSearch_OnnxRuntime_CPU.ipynb) folks on the ONNXRuntime project _(cc @tianleiwu for visibility on this)_.
Still, for this initial rework of the ONNX exporting capabilities we focused on "model only", with the ability to extend to full pipelines in the future. Generation is definitively one of the hardest task to get within the graph, but also one where I can see the biggest benefits.<|||||>@mfuntowicz: Thank you for your feedback! Yes, I understand the point for the compatibility to the fullest. After all, it's not that difficult to get to the config if done once or twice.
Regarding the `.generate` function. Thanks for the link! Will look into this more! Yes, absolutely!!<|||||>> Hi @mfuntowicz @stas00 , is this a known issue with GPT2 as well? Please let me know if there is a workaround.
>
> I was considering to convert `gpt2` or `gpt2-medium` to ONNX using the notebook provided [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb).
>
> On executing the line of code below: `convert(framework="pt", model="gpt2-medium", output=Path("onnx/gpt2-medium.onnx"), opset=11)`
>
> I get this error:
>
> ```python
> 1115 for i, x in enumerate(value):
> 1116 if not isinstance(x, int):
> -> 1117 raise ValueError("The type of axis index is expected to be an integer")
> 1118 if x in value_dict:
> 1119 warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'
>
> ValueError: The type of axis index is expected to be an integer```
> ```
Hello @mriganktiwari
Any update on this? I am still facing the issue with GPT-2. I have used the same code as yours. Please guide, thanks! |
transformers | 9,802 | closed | [trainer] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters | When running DDP:
```
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \
run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
```
I get:
> [W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive your model has flow control causing later iterations to have unused parameters. (function operator())
but it's not possible to turn it off from the Trainer, i.e. it's hardwired.
@sgugger | 01-26-2021 06:10:05 | 01-26-2021 06:10:05 | Edit: I am not sure why this param is always set to `True` except for gradient checkpointing. We can certainly make a training argument control its value to avoid hard-coding it. At least to experiment and benchmark whether it's best at True/False. |
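As a rough sketch of what a configurable flag could look like (my own illustration, not the actual Trainer code; `local_rank` and the argument name `ddp_find_unused_parameters` are placeholders):
```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model(model: torch.nn.Module, local_rank: int, ddp_find_unused_parameters: bool = False):
    # Let the caller decide instead of hard-coding find_unused_parameters=True;
    # gradient checkpointing is the case that actually needs True.
    return DDP(
        model,
        device_ids=[local_rank],
        output_device=local_rank,
        find_unused_parameters=ddp_find_unused_parameters,
    )
```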
transformers | 9,801 | closed | [trainer] a consistent way to limit the number of items | # 🚀 Feature request
We have:
1. `finetune_trainer.py` has
```
n_train: Optional[int] = field(default=-1, metadata={"help": "# training examples. -1 means use all."})
n_val: Optional[int] = field(default=-1, metadata={"help": "# validation examples. -1 means use all."})
n_test: Optional[int] = field(default=-1, metadata={"help": "# test examples. -1 means use all."})
```
2. some other `run_` scripts use `--n_obs`
3. `--max_steps` in the main trainer - which works only on the train_dataset - no ability to limit items on eval_dataset
Requests/Questions:
1. How does one use `--max_steps` if one needs to use a different number of items for train and eval?
2. Can we have a consistent way across examples to do this same thing?
Thank you.
@sgugger | 01-26-2021 05:43:51 | 01-26-2021 05:43:51 | Mmm, which scripts use `n_obs`? I don't remember seeing this one in the officially maintained examples.
`--max_steps` is different from `n_train`/`n_val`/`n_test`: `--max_steps` runs training for `max_steps`, using the *full training set*. `--n_train` restrains the training set to its first `n_train` samples. The first has its place inside `Trainer` for obvious reason, the second is part of the processing of the training (or eval/test) dataset so I don't think this has its place in `Trainer`.
As for a consistent way to do this in all examples, it doesn't really matter in non-seq2seq scripts as their evaluation runs quite fast. I imagine those arguments were introduced in the seq2seq script originally because its evaluation is super long. We can add them on an as-needed basis for other datasets, but I haven't felt the need to do this.<|||||>> Mmm, which scripts use `n_obs`? I don't remember seeing this one the official maintained examples.
all `seq2seq/run_*py`
> `--max_steps` is different from `n_train`/`n_val`/`n_test`: `--max_steps` runs training for `max_steps`, using the _full training set_. `--n_train` restrains the training set to its first `n_train` samples. The first has its place inside `Trainer` for obvious reason, the second is part of the processing of the training (or eval/test) dataset so I don't think this has its place in `Trainer`.
right, so this confusion leads to an incorrect benchmark. that's what I thought last night but it was too late to see.
https://github.com/huggingface/transformers/issues/9371#issuecomment-767323420
We need a way to be able to truncate the dataset to an identical size and then compare say 1-gpu vs 2-gpu benchmark on the same total number of input objects.
So how do we currently do that with other scripts that aren't `finetune_trainer.py`?
> As for a consistent way to do this in all examples, it doesn't really matter in non seq2seq scripts as their evaluation runs quite fast. I imagine those arguments were introduces in the seq2seq script originally because its evaluation is super long. We can add them with a need-to basis on other datasets, but I haven't felt the need to do this.
fast? try `run_clm.py` on gpt2/wiki - it's multiple hours
e.g. see: https://github.com/huggingface/transformers/issues/9371#issuecomment-759074475<|||||>> all seq2seq/run_*py
Those are not officially maintained examples except for the new `run_seq2seq`. No one has really touched them since Sam left and they are in need of cleanup ;-)
> fast? try run_clm.py on gpt2/wiki - it's multiple hours e.g. see: #9371 (comment)
You are pointing to a comment that does not contain any evaluation. So I stand by what I say. Evaluation on wikitext-2 runs in a couple of seconds.
> We need a way to be able to truncate the dataset to an identical size and then compare say 1-gpu vs 2-gpu benchmark on the same total number of input objects.
Like I said, if it's needed it can be added.
> So how do we currently do that with other scripts that aren't finetune_trainer.py?
By opening a PR adding this ;-)<|||||>Thank you for clarifying which is which, @sgugger
OK, so what should we call a new flag in HF Trainer that would be an equivalent of --n_train? or use the same?
Do you suggest it should be train-specific?<|||||>I think it should be in the scripts, not the Trainer, as it's part of the preprocessing. I don't think it should be train-specific, we can do eval/test like in the finetune_trainer script.<|||||>but then we have to change all the scripts. Why not have an option to truncate the dataset at trainer level and solve it at once for all scripts?<|||||>Because it doesn't have much to do with the Trainer itself IMO. It's like putting all the arguments of all the scripts about tokenization in the Trainer, it doesn't really make sense as the Trainer is supposed to take the lead after the data preprocessing.
Let's see if @LysandreJik and @patrickvonplaten think differently maybe?<|||||>This makes sense, then perhaps having a Trainer-subclass that all scripts can tap into?
Also may I suggest that `--max_steps` is an ambiguous argument, as it tells the user nothing about whether this is per GPU or for the whole run?<|||||>The documentation says number of training steps. I don't see how the number of GPUs intervenes here, as a training step is the full combination of forward, backward (perhaps multiple times if gradient accumulation is activated) and optimizer step.
One training step can have a different number of training samples depending on the number of GPUs, but also depending on the batch size, gradient accumulation steps etc. This information is logged at the beginning of training (`logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_train_batch_size}")` in Trainer.train)<|||||>Right, so what you're saying is that `--max_steps` is just the wrong tool for the truncating job and we need an explicit `--use-that-many-total-train-records`.
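(So concretely, with made-up numbers just to illustrate: with 2 GPUs, `per_device_train_batch_size=4` and `gradient_accumulation_steps=8`, one training step consumes 2 x 4 x 8 = 64 samples, and `--max_steps 200` therefore works through 12,800 training samples in total.)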
Honestly, I have been staring at all these different trainer options for a long time now and I still get confused at which is which, and which are impacted by number of gpus and which aren't. Every time this happens I have to go through the source code to see how it's used and then I get it. To me some of these arg names are hard to make sense of in the multi-gpu vs single gpu env.
* `--per_device_train_batch_size` is loud and clear.
* `--max_steps` is not.
I propose we use `total` and `per_device` prefix for any cl arg that behaves differently depending on the number of gpus.<|||||>The problem is that this then is a breaking change. I'm not necessarily super fond of the name `max_steps` myself but I'm not sure it's worth going through the trouble of a deprecation cycle for this one.<|||||>Do you think it's actually used a lot?
I agree with avoiding break changes, but since we are trying to make the API intuitive, such changes in the long run will benefit a much larger community than the annoyance it'd cause to those who use it right now.
I think the main issue we have here is that all these proposals to renames happen dynamically. But instead I think it'd make sense for a group of us to sit down, review all the cl args and do a single adjustment. Surely, this won't guarantee that in the future we won't find we missed something, but it's definitely better than doing it a little bit at a time, which is much more annoying.
In some previous projects for such things we also had a back-compat mode, which ones enabled supported a whole bunch of old ways until the user was ready to make the shift to the new code. Surely a rename of a cl arg could be easily supported by such feature. So here, instead of a deprecation cycle per item the approach is to keep anything old around but only if it's loaded from a helper module. So that the main code remains clean of deprecated things. This was in a different programming environment where it was developer, so I will have to think how to do the same here.<|||||>Note that this is not just a CI arg rename, since `TrainingArguments` is also a public class users may very well directly use in their code (you need to instantiate one each time you use a `Trainer`). We can certainly have a discussion around the arguments and decide which one we want to rename, though it should be in a separate issue. We're starting to derail this one ;-)
And from the issues, I'd say that half the users use `num_train_epochs` and half use `max_steps` to control the length of their training, so it is used a lot.<|||||>Thank you for flagging that we are diverging from the topic at hand, @sgugger
As you suggested I opened a new one: https://github.com/huggingface/transformers/issues/9821
And thank you for confirming that these are used a lot.<|||||>> Because it doesn't have much to do with the Trainer itself IMO. It's like putting all the arguments of all the scripts about tokenization in the Trainer, it doesn't really make sense as the Trainer is supposed to take the lead after the data preprocessing.
>
> Let's see if @LysandreJik and @patrickvonplaten think differently maybe?
So for the benefit of reviewers, and to bring us back to the focus of this Issue. I proposed to have a cl arg that will truncate the dataset (train, others?) (total!) across all example scripts.
@sgugger, correctly suggested that perhaps this shouldn't belong to Trainer, and then I suggested that perhaps there should be a sub-class that does such nice little tweaks consistently across all example scripts, rather than manually replicating the same code and which often leads to scripts diverging.
Plus, @sgugger points out that `examples/seq2seq/run*.py` haven't yet been converted to the new way.<|||||>I always thought that `max_steps` defines the total number of weight update steps (which is then not really influenced by other parameters such as number of GPUs or `gradient_accumalation_steps` or whatever). To me it defines: "How often do I want to update my weights?" or am I wrong here?. Think the name is clear and does not need to be changed, the documentation could be updated with a sentence that makes clear that `max_steps` = number of weight updates. Also, I use this arg quite often when training and think it's important to keep.
I agree with @sgugger here that I think a `--max_num_train_samples` arg (or whatever the name) should not go into the trainer, but should be added to all examples scripts. It's actually incredibly easy to do this with `datasets`:
```python
ds = load_dataset("crime_and_punish", split="train")
ds = ds.select(range(arg.max_num_train_samples))
```
I'm totally fine with having this as another cl arg for the scripts, but don't think it's the responsibility of the `trainer`.<|||||>I agree with Sylvain and Patrick about `max_steps`.
And for controlling the number of examples, this should go in scripts than `Trainer`, as we do all the pre-processing in the scripts. We could add two arguments to `DataTrainingArguments` in every script.
`--max_train_samples` = number of training examples
`--max_val_samples` = number of validation examples
These args are already there in the new `run_seq2seq.py` script.
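A rough sketch of how the two arguments could look in a script's `DataTrainingArguments` (field names, defaults and help texts here are illustrative, not the final API):
```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataTrainingArguments:
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={"help": "If set, truncate the training set to this many samples (debugging/benchmarks)."},
    )
    max_val_samples: Optional[int] = field(
        default=None,
        metadata={"help": "If set, truncate the validation set to this many samples."},
    )

# later in the script, after load_dataset(...):
# if data_args.max_train_samples is not None:
#     train_dataset = train_dataset.select(range(data_args.max_train_samples))
```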
<|||||>Thank you for your input, guys. Your suggestions work for me.
> We could add two arguments to DataTrainingArguments in every script.
> --max_train_samples = number of training examples
> --max_val_samples = number of validation examples
>
> These args are already there in the new run_seq2seq.py script.
but not in other `run_*.py` scripts.
and then we have `test` too - at least in `finetune_trainer.py`
I proposed to have a Trainer subclass that implements this for all scripts vs repeating the same cl arg definition and code in every script a new (and forgetting to sync some) - could you please address that?
---------------------------
The other slight confusion across some scripts is `val` vs `eval` - it's inconsistent - some reports say `val` others `eval` - train/val/test are splits and are orthogonal to train/evaluate/predict - and while they are the same for train, the rest are just confusing, since you can have predict for val split and evaluate for test split. Should we discuss this in a separate issue?<|||||>> I proposed to have a Trainer subclass that implements this for all scripts vs repeating the same cl arg definition and code in every script a new (and forgetting to sync some) - could you please address that?
I don't think this is a good idea personally. The goal of the scripts is to provide examples for our users. Having examples that don't use the main object of the library is counterproductive. It's one other instance where we have to bear the burden of duplicate code to make the user experience easier IMO.
> The other slight confusion across some scripts is val vs eval - it's inconsistent - some reports say val others eval - train/val/test are splits and are orthogonal to train/evaluate/predict - and while they are the same for train, the rest are just confusing, since you can have predict for val split and evaluate for test split. Should we discuss this in a separate issue?
I think this is mostly `finetune_trainer` (and maybe `run_seq2seq2` since I may have copied some names) not using the same terminology as the other scripts in this instance. So those two scripts should get aligned with the rest on this matter. Again, let's keep the examples simple (I feel like I'm repeating this all day long but *they are just examples* we cannot have scripts that will solve every use case and trying to do so make them un-understandable for our users) and match train/eval/test with what is done (training/evaluation/predict).<|||||>> > I proposed to have a Trainer subclass that implements this for all scripts vs repeating the same cl arg definition and code in every script a new (and forgetting to sync some) - could you please address that?
>
> I don't think this is a good idea personally. The goal of the scripts is to provide examples for our users. Having examples that don't use the main object of the library is counterproductive. It's one other instance where we have to bear the burden of duplicate code to make the user experience easier IMO.
You're correct. I didn't think of that.
So we have a conflict here between example scripts and them being used for more than that.
I, for one, need a solid set of scripts to do:
1. integration validation
2. benchmarking
In the absence of these I have been heavily relying on the example scripts. And this is probably where the conflict is.
So I keep on bringing this up - should we have a set of scripts that are not examples, but real production work horses and we treat them as such? Perhaps they can have much less functionality but do it consistently across different domains and simple?
Perhaps, instead of `run_(foo|bar|tar).py`, it's one script that can tap into any of these domains and then have a simple, identical set of cl args. All we change is the model names; most other args are almost the same.
> > The other slight confusion across some scripts is val vs eval - it's inconsistent - some reports say val others eval - train/val/test are splits and are orthogonal to train/evaluate/predict - and while they are the same for train, the rest are just confusing, since you can have predict for val split and evaluate for test split. Should we discuss this in a separate issue?
>
> I think this is mostly `finetune_trainer` (and maybe `run_seq2seq2` since I may have copied some names) not using the same terminology as the other scripts in this instance. So those two scripts should get aligned with the rest on this matter. Again, let's keep the examples simple (I feel like I'm repeating this all day long but _they are just examples_ we cannot have scripts that will solve every use case and trying to do so make them un-understandable for our users) and match train/eval/test with what is done (training/evaluation/predict).
You're absolutely correct, please see my response in the comment above.
<|||||>> So I keep on bringing this up - should we have a set of scripts that are not examples, but real production work horses and we treat them as such? Perhaps they can have much less functionality but do it consistently across different domains and simple?
If the basic examples do not suffice, then yes, definitely.<|||||>But we are walking in circles. If these are examples and they are treated as examples, these aren't tools to be relied upon. I hope you can see the irony...
I need a solid tool that will not change its API, start doing all the benchmarks in it so that we could go back to benchmarks from 6 months or a year ago and be able to run those and re-check.
<|||||>I'm not sure why you say we are walking in circles. I just said yes to having benchmark-specific scripts if the examples do not have all the functionality you need.<|||||>I see what you mean. But you asked a tricky question - can I figure out how to use the example scripts to meet my needs - mostly yes - but then every time I ask for something that ensures consistency, you say - but the audience is wrong - it should be for users. And I say, yes, of course, you're right. And we end up nowhere. Do you see where the circle is?
Ideally there should be just one benchmarking tool that can handle any model (or at least the majority of them) and support the different tasks and it probably won't need all the possible flags the various scripts have. If that makes sense.
I was using `finetune_trainer.py` for many things, but then a user asks to validate/benchmark/integrate a model not supported by that script, so I go into that subdomain in examples and things aren't the same there. And I know we are trying to make the example scripts consistent, but as the example of this issue shows, I know for a fact that when one manually copies the same feature across scripts they are bound to become inconsistent. At least that's the experience with transformers so far.
Complaints and expressions of frustration aside - perhaps we could start with the one script you think is the best model, make it a non-example, and start transforming it to support a multitude of tasks/models/features? Would that be a good way to move forward?
<|||||>The issue is derailing a bit as I think adding the `max_train_samples` etc to all scripts has been validated (and is useful to quickly test the example is running on the user data).
If you want to look at a benchmarking script, I think a good starting point is `run_glue` for fine-tuning on text classification, `run_mlm` for language modeling. Those are more for BERT-like models than seq2seq models however. `finetune_trainer` is aimed at being deprecated and once `run_seq2seq` has all its features, it can be the one good script to be based on for all things seq2seq.<|||||>> The issue is derailing a bit as I think adding the `max_train_samples` etc to all scripts has been validated (and is useful to quickly test the example is running on the user data).
Excellent!
> If you want to look at a benchmarking script, I think a good starting point is `run_glue` for fine-tuning on text classification, `run_mlm` for language modeling. Those are more for BERT-like models than seq2seq models however. `finetune_trainer` is aimed at being deprecated and once `run_seq2seq` has all its features, it can be the one good script to be based on for all things seq2seq.
I feel I'm not managing to successfully communicate the need here. I will let it go for now.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>This is getting resolved by https://github.com/huggingface/transformers/pull/10551
<|||||>> I always thought that `max_steps` defines the total number of weight update steps (which is then not really influenced by other parameters such as number of GPUs or `gradient_accumalation_steps` or whatever). To me it defines: "How often do I want to update my weights?" or am I wrong here?. Think the name is clear and does not need to be changed, the documentation could be updated with a sentence that makes clear that `max_steps` = number of weight updates. Also, I use this arg quite often when training and think it's important to keep.
>
> I agree with @sgugger here that I think a `--max_num_train_samples` arg (or whatever the name) should not go into the trainer, but should be added to all examples scripts. It's actually incredibly easy to do this with `datasets`:
>
> ```python
> ds = load_dataset("crime_and_punish", split="train")
> ds = ds.select(range(arg.max_num_train_samples))
> ```
>
> I'm totally fine with having this as another cl arg for the scripts, but don't think it's the responsibility of the `trainer`.
Hi, I want to use the crime_and_punish dataset to do evaluation on the Reformer model. Which task code should I use?<|||||>@LeopoldACC, it looks like you posted your question in a very unrelated discussion. Please try https://discuss.huggingface.co/. Thank you. |
transformers | 9,800 | closed | [trainer] fix --lr_scheduler_type choices | This PR fixes:
```
$ python ./run_clm.py -h | grep lr_scheduler_type
[--lr_scheduler_type {SchedulerType.LINEAR,SchedulerType.COSINE,SchedulerType.COSINE_WITH_RESTARTS,SchedulerType.POLYNOMIAL,SchedulerType.CONSTANT,SchedulerType.CONSTANT_WITH_WARMUP}]
```
to:
```
[--lr_scheduler_type {linear,cosine,cosine_with_restarts,polynomial,constant,constant_with_warmup}]
```
I'm not sure what the original intention was since the current suggestions do not work:
```
run_clm.py: error: argument --lr_scheduler_type: invalid SchedulerType value: 'SchedulerType.LINEAR'
```
I couldn't find any readily-available methods to do the same in the `enum` superclass: https://docs.python.org/3/library/enum.html
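For reference, the fix boils down to handing `argparse` the enum values rather than the enum members - a minimal sketch (assuming `SchedulerType` is importable from the top-level package):
```python
from transformers import SchedulerType

# e.g. ["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"]
choices = [x.value for x in SchedulerType]
```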
@sgugger | 01-26-2021 05:33:50 | 01-26-2021 05:33:50 | This fix works, but only for this particular enum. I'm wondering if it shouldn't be better to just change [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L83) in `HFArgumentParser`? For instance
```bash
$ python examples/text-classification/run_glue.py -h | grep evaluation_strategy
```
returns
```
--evaluation_strategy {EvaluationStrategy.NO,EvaluationStrategy.STEPS,EvaluationStrategy.EPOCH}
```
which probably does not work as well. (Or if does, we want to display "no"/"steps"/"epoch" here.)<|||||>That's an excellent suggestion, @sgugger - thank you! Please have a look at this variation.<|||||>Weird, somehow it broke `finetune_trainer.py`:
```
examples/seq2seq/test_finetune_trainer.py:84:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/test_finetune_trainer.py:76: in finetune_trainer_quick
output_dir = self.run_trainer(1, "12", MBART_TINY, 1, distributed, deepspeed, extra_args_str)
examples/seq2seq/test_finetune_trainer.py:210: in run_trainer
main()
examples/seq2seq/finetune_trainer.py:160: in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
src/transformers/hf_argparser.py:150: in parse_args_into_dataclasses
namespace, remaining_args = self.parse_known_args(args=args)
/usr/local/lib/python3.6/argparse.py:1773: in parse_known_args
self.error(str(err))
/usr/local/lib/python3.6/argparse.py:2393: in error
self.exit(2, _('%(prog)s: error: %(message)s\n') % args)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = HfArgumentParser(prog='finetune_trainer.py', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)
status = 2
message = "finetune_trainer.py: error: argument --evaluation_strategy: invalid choice: <EvaluationStrategy.STEPS: 'steps'> (choose from 'no', 'steps', 'epoch')\n"
```
The test is just passing `--evaluation_strategy steps`
<|||||>Ok, tested locally and doing this does not work indeed (e.g. you can't launch any script with `--evaluation_strategy steps` or `--lr_scheduler_type linear`). To have it work, we have to change the type of the enum to `str`, but then the actual values of the dataclass are strings.
So there is no easy solution. I'm fine with leaving as is or also to remove the choices and expand the help to show the actual possible values, but it looks like it won't work as is.<|||||>Thank you for validating it.
How about moving my initial PR's `get_arg_names` to `ExplicitEnum` so any sub-class has access to it - perhaps a different name? And then we change just `meta["choices"]` in the arg parser to get these values?<|||||>No the initial PR doesn't work either (this is not caught by the tests since the test do not use `--lr_scheduler_type` in any of the example scripts). The field ends up being a `str` if you try on your side (and not a `SchedulerType` despite the cast in the post_init so then all tests comparing `self.args.lr_scheduler_type` to `SchedulerType.XXX` will fail.<|||||>Ah, my bad! I was testing on the `parse` method of `HfArgumentParser`, not `parse_into_dataclasses`. There is a way to make this work :-)<|||||>I'm all ears.
<|||||>The easy fix is to force `kwargs["type"] = type(kwargs["choices"][0])` for the Enum subclasses, after your line `kwargs["choices"] = [x.value for x in field.type]`, and let the dataclass set them back to their proper enum types in the postinit (as is done right now).
I even have a function that will automagically do the casting back after the init, which is the following:
```
for dtype in self.dataclass_types:
    # All init fields, plus the subset whose declared type is an Enum subclass
    keys = {f.name for f in dataclasses.fields(dtype) if f.init}
    keys_to_enum_types = {
        f.name: f.type
        for f in dataclasses.fields(dtype)
        if isinstance(f.type, type) and issubclass(f.type, Enum)
    }
    inputs = {k: v for k, v in vars(namespace).items() if k in keys}
    for k in keys:
        # Cast the parsed string back to its proper Enum type before building the dataclass
        if k in keys_to_enum_types:
            inputs[k] = keys_to_enum_types[k](inputs[k])
        delattr(namespace, k)
    obj = dtype(**inputs)
    outputs.append(obj)
```
in the `parse_into_dataclasses` method. The same would need to be done in the other special parse methods for consistency.<|||||>Please feel free to take over, rather than me being the middle person as you know what you're doing and I will learn from your work when it's done. Thank you!<|||||>Fabulous!
This is not a user facing code, correct? There are 3 pretty big identical chunks of code which can be refactored then.<|||||>Not sure we actually need them since the dataclasses need to re-cast those args to the enum type anyway for when someone is using `TrainingArguments` not in CLI. I was hoping to remove the lines
```
self.evaluation_strategy = EvaluationStrategy(self.evaluation_strategy)
self.lr_scheduler_type = SchedulerType(self.lr_scheduler_type)
```
by using those four lines, but they are still necessary. So we'll probably just remove those three blocks of identical code (let's see what @LysandreJik thinks!) |
transformers | 9,799 | closed | Authorize last version of tokenizer | # What does this PR do?
This PR bumps the version pinned in the setup to authorize the latest version of tokenizers (which in particular contains fixes for the `run_qa` script). | 01-26-2021 01:20:21 | 01-26-2021 01:20:21 | Yeah I forgot to add this in a comment -> Talked to @n1t0 about it and he says those are unwanted breaking changes, so we will pin to the next patch release he is going to make to tokenizers. Leaving the PR open in the meantime!<|||||>Added a few more things to this PR:
- Use last tokenizers RC release
- Update the conversion from slow to fast tokenizers as described in https://github.com/huggingface/transformers/issues/9637
- Added a script to verify the conversion from slow to fast tokenizers looks good
- Fix some links to the hub<|||||>Regarding the masks, should that be applied to all SentencePiece-based tokenizers? Should it be added to XLNet/ALBERT/T5 as well?<|||||>Can't approve since this is my PR originally, but this looks good to me. We just need to make sure all special masks tokens are taken into account.<|||||>Most of the tests are unrelated (a rebase on `master` will make them pass), but these two aren't:
```
FAILED tests/test_tokenization_pegasus.py::PegasusTokenizationTest::test_mask_tokens_rust_pegasus
FAILED tests/test_tokenization_mbart.py::MBartTokenizationTest::test_embeded_special_tokens
=== 2 failed, 3352 passed, 3280 skipped, 2512 warnings in 765.06s (0:12:45) ====
``` |
transformers | 9,798 | closed | Smdistributed trainer | # What does this PR do?
This PR adds support for the variant of `torch.distributed` developed by AWS on SageMaker (`smdistributed`). It's been tested to work on the `run_glue` example.
The main steps are:
- to replace all operations from the `torch.distributed` module by their equivalent in `smdistributed.torch.distributed`.
- use their wrapper for the model instead of `DistributedDataParallel`. | 01-25-2021 22:53:29 | 01-25-2021 22:53:29 | Very cool integration.๐ ๐ ๐ฅ ๐ฅ
I'm doing some tests over the day for single-GPU and multi-GPU and let you know if I find something strange |
transformers | 9,797 | closed | Conversion of Electra checkpoint from official repo TF (pretrained on custom dataset) | ## Environment info
- `transformers` version: latest from pip install git+https://github.com/huggingface/transformers.git
- Platform: Colab
- Python version:
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): 1.15
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik if I am not wrong
## Information
Model I am using (Bert, XLNet ...):
Electra from google repo
The problem arises when using:
* [ x] the official example scripts: (give details below) : Yes
The tasks I am working on is:
* conversion of an Electra TF checkpoint, trained with the official repository on a custom dataset, to PyTorch
## To reproduce
Steps to reproduce the behavior:
1. install latest version of huggingface : pip install git+https://github.com/huggingface/transformers.git
2. train Electra for a few steps on a dataset with the official repo
3. use the script to convert electra checkpoint : transformers/src/transformers/models/electra/convert_electra_original_tf_checkpoint_to_pytorch.py
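For reference, an invocation along these lines (the paths are placeholders and the flag names should be double-checked against the script's argparse definitions):
```bash
python convert_electra_original_tf_checkpoint_to_pytorch.py \
  --tf_checkpoint_path ./electra_pretrain/model.ckpt-1000 \
  --config_file ./electra_pretrain/config.json \
  --pytorch_dump_path ./electra_pytorch \
  --discriminator_or_generator discriminator
```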
```
Initialize PyTorch weight ['discriminator_predictions', 'dense', 'bias'] discriminator_predictions/dense/bias
Skipping discriminator_predictions/dense/bias/adam_m ['discriminator_predictions', 'dense', 'bias', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping discriminator_predictions/dense/bias/adam_v ['discriminator_predictions', 'dense', 'bias', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['discriminator_predictions', 'dense', 'kernel'] discriminator_predictions/dense/kernel
Skipping discriminator_predictions/dense/kernel/adam_m ['discriminator_predictions', 'dense', 'kernel', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping discriminator_predictions/dense/kernel/adam_v ['discriminator_predictions', 'dense', 'kernel', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'bias'] discriminator_predictions/dense_1/bias
Skipping discriminator_predictions/dense_1/bias/adam_m ['discriminator_predictions', 'dense_prediction', 'bias', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping discriminator_predictions/dense_1/bias/adam_v ['discriminator_predictions', 'dense_prediction', 'bias', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'kernel'] discriminator_predictions/dense_1/kernel
Skipping discriminator_predictions/dense_1/kernel/adam_m ['discriminator_predictions', 'dense_prediction', 'kernel', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping discriminator_predictions/dense_1/kernel/adam_v ['discriminator_predictions', 'dense_prediction', 'kernel', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'beta'] electra/embeddings/LayerNorm/beta
Skipping electra/embeddings/LayerNorm/beta/adam_m ['electra', 'embeddings', 'LayerNorm', 'beta', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping electra/embeddings/LayerNorm/beta/adam_v ['electra', 'embeddings', 'LayerNorm', 'beta', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'gamma'] electra/embeddings/LayerNorm/gamma
Skipping electra/embeddings/LayerNorm/gamma/adam_m ['electra', 'embeddings', 'LayerNorm', 'gamma', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping electra/embeddings/LayerNorm/gamma/adam_v ['electra', 'embeddings', 'LayerNorm', 'gamma', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'position_embeddings'] electra/embeddings/position_embeddings
Skipping electra/embeddings/position_embeddings/adam_m ['electra', 'embeddings', 'position_embeddings', 'adam_m'] 'Embedding' object has no attribute 'adam_m'
Skipping electra/embeddings/position_embeddings/adam_v ['electra', 'embeddings', 'position_embeddings', 'adam_v'] 'Embedding' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'token_type_embeddings'] electra/embeddings/token_type_embeddings
Skipping electra/embeddings/token_type_embeddings/adam_m ['electra', 'embeddings', 'token_type_embeddings', 'adam_m'] 'Embedding' object has no attribute 'adam_m'
Skipping electra/embeddings/token_type_embeddings/adam_v ['electra', 'embeddings', 'token_type_embeddings', 'adam_v'] 'Embedding' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'word_embeddings'] electra/embeddings/word_embeddings
Skipping electra/embeddings/word_embeddings/adam_m ['electra', 'embeddings', 'word_embeddings', 'adam_m'] 'Embedding' object has no attribute 'adam_m'
Skipping electra/embeddings/word_embeddings/adam_v ['electra', 'embeddings', 'word_embeddings', 'adam_v'] 'Embedding' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings_project', 'bias'] electra/embeddings_project/bias
Skipping electra/embeddings_project/bias/adam_m ['electra', 'embeddings_project', 'bias', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping electra/embeddings_project/bias/adam_v ['electra', 'embeddings_project', 'bias', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings_project', 'kernel'] electra/embeddings_project/kernel
```
Is this a normal message? I remember that a few months ago these messages were not present. | 01-25-2021 22:51:13 | 01-25-2021 22:51:13 | You can see from the message that it's skipping the optimizer states. We're only saving the model, so it makes sense that the optimizer states are discarded :)<|||||>Thank you for your reply!
I will close the issue then.
transformers | 9,796 | closed | Improve pytorch examples for fp16 | # What does this PR do?
When `fp16` is True in the PyTorch training examples, pad to a multiple of 8 (if the data collator currently in use allows it) to speed up training. If no collator was used, add one using the padding option.
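A sketch of the pattern this introduces (the exact wiring differs a bit per script):
```python
from transformers import DataCollatorWithPadding

# Pad to a multiple of 8 under fp16 so that tensor cores can be used efficiently
data_collator = DataCollatorWithPadding(
    tokenizer,
    pad_to_multiple_of=8 if training_args.fp16 else None,
)
```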
Fixes #9752
## Who can review?
Trainer: @sgugger | 01-25-2021 22:04:47 | 01-25-2021 22:04:47 | Thanks for addressing the comments, this is good to go IMO.
Pro-tip, if you edit your description to replace `Issue #9752` by `Fixes #9752`, the issue will be automatically closed when we merge this PR. |
transformers | 9,795 | closed | does LED use distributed training by default? | Hello, I'm currently fine tuning the `allenai/led-base-16384` model and following allow this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=jpUr9QeebZ-n) .The node that I'm using has a couple of V100-SXM2-16GB GPUs in it. That said, does `Seq2SeqTrainer` automatically use distributed training by default?
I noticed it says
```
the inner model is wrapped in ``DeepSpeed`` and then again in ``torch.nn.DistributedDataParallel``.
```
but I just wanted to triple check that the training job would distributed across GPUs. I'd greatly appreciate the feedback | 01-25-2021 21:48:42 | 01-25-2021 21:48:42 | Ah, I just noticed `Seq2SeqTrainingArguments` sets `local_rank=-1` by default |
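For reference: the `Trainer` only wraps the model in `DistributedDataParallel` when it sees a non-negative `local_rank`, which the standard PyTorch launcher sets per process - a minimal sketch (the script name is just a placeholder):
```bash
# each spawned process receives its own local_rank, which turns on DDP in the Trainer
python -m torch.distributed.launch --nproc_per_node=2 finetune_led.py
```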
transformers | 9,794 | closed | [Flaky Generation Tests] Make sure that no early stopping is happening for beam search | # What does this PR do?
The PR fixes the flaky CI, which is probably caused by early stopping in beam search when the top `num_return_sequences` beams are shorter than the longest of the `num_beams` beams.
This PR should (hopefully) fix flaky CI failures, such as:
- https://app.circleci.com/pipelines/github/huggingface/transformers/18912/workflows/70862bd9-bc94-4f2d-9b07-b85be146c867/jobs/155831
- https://app.circleci.com/pipelines/github/huggingface/transformers/18910/workflows/284df8e3-373b-4168-bc3b-f7079c5aa17d/jobs/155797
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 01-25-2021 21:09:25 | 01-25-2021 21:09:25 | |
transformers | 9,793 | closed | Add the ability to skip runtime version check | # 🚀 Feature request
Add the ability to skip version check / raise a warning instead of an error if the checks failed.
## Motivation
Currently, Transformers performs a runtime check of library versions and raises an error if one of the requirements has a version different from the specified one.
While version check is a reasonable thing to do, an error seems like an overkill. As [I mentioned](https://github.com/huggingface/transformers/pull/8073#issuecomment-765632175) in #8073, this, for example, prevents from using the latest version of Tokenizers without modifying the source code.
## Your contribution
I suggest creating an environment variable (e.g. `TRANSFORMERS_VERSION_CHECK_STRICT`) that would control the reaction to a version mismatch. If it is set to `true`, the current behavior remains. If `false`, we log a warning instead of raising an error. Personally, I think that having this variable `false` by default will make it easier for the users to work in such cases.
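A rough sketch of what this could look like (the helper and variable names here are purely illustrative, not existing transformers code):
```python
import os
import warnings


def check_pinned_version(pkg: str, wanted: str, got: str) -> None:
    if wanted == got:
        return
    msg = f"{pkg}=={got} is installed, but {pkg}=={wanted} is required."
    # Only hard-fail when the user explicitly opts into strict checking
    if os.environ.get("TRANSFORMERS_VERSION_CHECK_STRICT", "false").lower() == "true":
        raise ImportError(msg)
    warnings.warn(msg)
```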
What's your opinion in that? Probably we can find an even better solution together. | 01-25-2021 20:19:43 | 01-25-2021 20:19:43 | If anyone else comes here, until this is fixed you can find the version in `transformers/dependency_versions_check.py` - e.g. change `"tokenizers": "tokenizers==0.9.4"` to `"tokenizers": "tokenizers>=0.9.4"`<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,792 | closed | Strange start token in MT5 generation | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?):1.7.1
### Who can help
Text Generation: @patrickvonplaten @TevenLeScao
T5: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): MT5
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
text = 'summarize: Bidirectional Encoder Representations from Transformers is a Transformer-based machine learning technique for natural language processing pre-training developed by Google'
inputs = tokenizer([text], max_length=512, truncation=True, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'])
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in summary_ids])
```
The output I got is ['<extra_id_0>.']
## Expected behavior
I tried a few input texts. The generated output always starts with <extra_id_0>, which doesn't happen with T5 generation. Does anyone know how to solve it?
| 01-25-2021 20:03:27 | 01-25-2021 20:03:27 | One more thing: this behavior still persists after I fine tuned the model on my own dataset<|||||>Hi @tomdzh
first of all, unlike the original T5, mT5 is not pre-trained on any supervised downstream task (like summarization, translation, etc.), so generation won't work without fine-tuning it.
Also, it would be hard to answer why it's happening in fine-tuned model without looking at any code.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,791 | closed | Fix broken links in the converting tf ckpt document | # What does this PR do?
This PR is to fix broken links in ["Converting TensorFlow Checkpoints"](https://huggingface.co/transformers/converting_tensorflow_models.html).
Advised by @LysandreJik in issue #9656, I updated the links.
I also refer to issue #8720 and it seems the issue is solved by this PR.
I think there are some outdated explanations.
As I discussed in issue #9657, I think it should be better to explain `from_pretrained()` instead of `torch.save()`.
Hence, I think the explanation below should be updated.
```
You can then disregard the TensorFlow
checkpoint (the three files starting with ``bert_model.ckpt``\ ) but be sure to keep the configuration file (\
``bert_config.json``\ ) and the vocabulary file (\ ``vocab.txt``\ ) as these are needed for the PyTorch model too.
```
I would be happy to get some advice on how to proceed with this PR.
Fixes #9656
Fixes #8720
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
albert, bert, XLM: @LysandreJik | 01-25-2021 19:40:58 | 01-25-2021 19:40:58 | I think the following comment of my own can be left as a PR for the future.
This change seems to require a modification of the `convert` code, and I think it may be necessary to separate it from this broken link issue.
I'm sorry for saying such a thing, even though the following is what I wrote.
> I think there are some outdated explanations.
> As I discussed in issue #9657, I think it should be better to explain `from_pretrained()` instead of `torch.save()`.
> Hence, I think the explanation below should be updated.
>
> ```
> You can then disregard the TensorFlow
> checkpoint (the three files starting with ``bert_model.ckpt``\ ) but be sure to keep the configuration file (\
> ``bert_config.json``\ ) and the vocabulary file (\ ``vocab.txt``\ ) as these are needed for the PyTorch model too.
> ```
>
I'll remove the WIP for now, but if you think this matter should be worked in this PR, please let me know. |
transformers | 9,790 | closed | RagTokenForGeneration: Fixed parameter name for logits_processor | # What does this PR do?
The parameter name for the beam_search and greedy_search functions of the GenerationMixin is (now?) logits_processor not pre_processor. This fix makes prefix_allowed_tokens_fn work (again?).
## Who can review?
Rag: @patrickvonplaten, @lhoestq
| 01-25-2021 19:31:07 | 01-25-2021 19:31:07 | |
transformers | 9,789 | closed | Allow RAG to output decoder cross-attentions | # What does this PR do?
This PR makes RAG output the generator model's decoder cross-attentions when `output_attentions=True`.
Motivation and context: before this PR, RAG's output objects had attributes for the generator's encoder self-attentions and decoder self-attentions, but no option for the encoder-decoder cross-attentions. So this simply allows cross-attentions to be extracted, as well as fixing a small bug where `output_attentions` wasn't being passed into the generator.
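In practice that means something along these lines (the exact name of the output attribute here is an assumption for illustration):
```python
outputs = model(input_ids, output_attentions=True)
# per-layer cross-attention tensors from the generator's decoder
cross_attentions = outputs.generator_cross_attentions
```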
Fixes #9468
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Yes - #9468
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? - I don't believe any new tests are necessary. Existing tests pass.
## Who can review?
@patrickvonplaten, @lhoestq
| 01-25-2021 19:02:46 | 01-25-2021 19:02:46 | @lhoestq Thanks for the suggestions! All the CI checks pass now. |
transformers | 9,788 | closed | Clean TF Bert | # What does this PR do?
This PR aims to clean the code base of BERT and the other models that depends of it because of the `#Copied from...`. It also clean the template accordingly to the same changes applied in BERT.
The other models will receive the same type of cleaning, but each model will have its own PR. | 01-25-2021 18:01:06 | 01-25-2021 18:01:06 | I fully rework the keywords addition part to keep only those that seemed the most meaningful. |
transformers | 9,787 | closed | Fix model parallel definition in superclass | The `model_parallel` attribute should be obtainable from every class instance that has the `is_parallelized` class attribute. Otherwise the following line in the trainer crashes:
https://github.com/huggingface/transformers/blob/626116b7d76efef5137c3b4a92e64e3bb57a6882/src/transformers/trainer.py#L244
cc @stas00 @alexorona | 01-25-2021 13:29:16 | 01-25-2021 13:29:16 | |
transformers | 9,786 | closed | Truncated Translations with mT5 model | ## Environment info
- `transformers` version: 4.2
- Platform: GCP
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Using GPU in script?: V100
- Using distributed or parallel set-up in script?: using pytorch-lightning `dp` setup
### Who can help
## Information
Model I am using (Bert, XLNet ...): I'm trying to fine-tune mT5 for neural machine translation. I trained the model on 3M data points with a `max_seq_len` of `200`. But when I perform `model.generate` I'm getting truncated outputs.
The input sequence I'm running inference on is not very long and nowhere close to 200 tokens.
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
I'm using my own dataset and training the model by wrapping the code around with pytorch-lightning.
I played around with the parameters of the `generate` method but all combinations still end up giving truncated outputs.
## Expected behaviour
An example output generated by the model
Generated output:
`I hope that representatives from abroad will get some time to see Delhi's`
The quality of translation is good but the sentence ends abruptly.
Expected output:
`I hope that delegates from abroad will have some time to see the history and pride of Delhi.` (generated with google translate)
@patrickvonplaten tagging you for help as this is a T5 related model
Thanks in Advance
<!-- A clear and concise description of what you would expect to happen. -->
| 01-25-2021 12:21:53 | 01-25-2021 12:21:53 | I've read these similar issue
[#5654](https://github.com/huggingface/transformers/issues/5656)
[#7500](https://github.com/huggingface/transformers/issues/7500)
but couldn't get the information needed.
@sshleifer @patil-suraj Any idea why this is happening? (Sorry for tagging everyone, I've been facing this issue for a while, looking for answers ๐)<|||||>Hi @sumanthd17 ,
I had this issue with standard T5 in which it was ignoring the `min_length` flag. I fixed this simply by upping the `num_beams` to 4.
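Something along these lines (a sketch - the parameter values are just the ones I'd try first):
```python
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,
    min_length=50,
    max_length=200,
)
```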
Could you test this and see if it's quick fix for you?<|||||>@FL33TW00D Thanks. Yeah the `min_length` flag is being ignored.
Thanks for the quick fix. I think we can close this issue for now, But it might be a good idea to know why the min_length is being ignored.<|||||>@sumanthd17
This blog post from Patrick explains why min_length may not be satisfied by the beam_search:
https://huggingface.co/blog/how-to-generate
Although I do agree perhaps it should throw a warning message when it is unable to satisfy the min_length flag with the current generation parameters. |
transformers | 9,785 | closed | GPT2 MNLI training using run_glue.py | Running this on Google Colab,
```
!python run_glue.py \
--model_name_or_path gpt2 \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_gpu_train_batch_size 10 \
--gradient_accumulation_steps 32\
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir models/gpt2/mnli/
```
I get the following error,
```
"Asking to pad but the tokenizer does not have a padding token. "
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
@LysandreJik : does the trainer need to be modified? Or am I supposed to be using additional commands for training using GPT2? (I have been following [this example](https://huggingface.co/transformers/v2.0.0/examples.html) using BERT)
| 01-25-2021 11:18:08 | 01-25-2021 11:18:08 | As explained in the documentation: "`run_glue.py`: This script can fine-tune the following models: BERT, XLM, XLNet and RoBERTa."
=> GPT-2 is a Transformer decoder, which can learn to generate text in an autoregressive way. It is not aimed at GLUE tasks, which are sequence classification tasks. <|||||>Hi! Actually we've recently added `GPT2ForSequenceClassification` to enable support for sequence classification tasks (like GLUE). The support was added to enable some models such as EDIT: linked wrong model. Updated: [DialogRPT](https://huggingface.co/microsoft/DialogRPT-updown)!
However, as you have seen @nlp-student, the GPT-2 model isn't trainable out of the box with batch size > 1, as it has no padding token defined. Furthermore, MNLI is a three-way classification so you would need to set the number of labels appropriately.
I invite you to run the following script to create a model that you can then use with the `run_glue.py` script, initialized from the GPT-2 weights:
```py
from transformers import GPT2ForSequenceClassification, GPT2Tokenizer
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=3)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# Define a padding token
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
# Save the model and tokenizer in a directory
model.save_pretrained("directory")
tokenizer.save_pretrained("directory")
```
Then you can launch the `run_glue.py` script by specifying that checkpoint.
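For example, pointing the script at the directory saved above (same flags as in the original post):
```bash
python run_glue.py \
  --model_name_or_path directory \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --output_dir models/gpt2/mnli/
```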
However, as @NielsRogge pointed out, GPT-2 will not obtain as good results as a bi-directional encoders such as BERT.<|||||>Thank you so much for your responses. I will try that out.
Is GPT-2 generally expected to perform worse than BERT on sequence classification ?
I've seen a lot of examples of BERT finetuned for classification, but not really for GPT-2. Is it not common (or that useful) to finetune GPT-2 for classification?
Thank you for any help or direction!<|||||>> Is GPT-2 generally expected to perform worse than BERT on sequence classification ?
Normally, yes. The reason for this is that GPT-2 is not really designed for such a task, whereas BERT is, as it has a special [CLS] token for classification tasks. GPT-2 is designed to process text autoregressively (i.e. left to right) in order to generate new text, whereas BERT is designed to process all tokens at once (hence creating a bidirectional representation of all input tokens), which is useful for tasks like sequence classification or extractive question answering for example.
That said, you can still use GPT-2 to perform sequence classification. You can simply let GPT-2 process a sentence word by word from left to right, and then train it to predict the class of the sentence by placing a linear layer on top of the hidden representation of the final token of the sentence, which is done by looking at the [code](https://github.com/huggingface/transformers/blob/285c6262a84490270d2f1a1c06ee9ccfc1b60e8f/src/transformers/models/gpt2/modeling_gpt2.py#L1233) of `GPT2ForSequenceClassification`.
So maybe an interesting thing to do is compare `BERTForSequenceClassification` and `GPT2ForSequenceClassification` on the same dataset, and see which one performs best. <|||||>Many thanks for your detailed response! It was very helpful for my understanding.<|||||>You're welcome, I'm closing this issue! Feel free to reopen if you have other issues down the road.<|||||>> Hi! Actually we've recently added `GPT2ForSequenceClassification` to enable support for sequence classification tasks (like GLUE). The support was added to enable some models such as EDIT: linked wrong model. Updated: [DialogRPT](https://huggingface.co/microsoft/DialogRPT-updown)!
>
> However, as you have seen @nlp-student, the GPT-2 model isn't trainable out of the box with batch size > 1, as it has no padding token defined. Furthermore, MNLI is a three-way classification so you would need to set the number of labels appropriately.
>
> I invite you to run the following script to create a model that you can then use with the `run_glue.py` script, initialized from the GPT-2 weights:
>
> ```python
> from transformers import GPT2ForSequenceClassification, GPT2Tokenizer
>
> model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=3)
> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
>
> # Define a padding token
> tokenizer.pad_token = tokenizer.eos_token
> model.config.pad_token_id = tokenizer.pad_token_id
>
> # Save the model and tokenizer in a directory
> model.save_pretrained("directory")
> tokenizer.save_pretrained("directory")
> ```
>
> Then you can launch the `run_glue.py` script by specifying that checkpoint.
>
> However, as @NielsRogge pointed out, GPT-2 will not obtain as good results as a bi-directional encoders such as BERT.
Hi, met the same error as the original post although but I've set the per_device_train_batch_size=1.
I was running run_swag.py with GPT2 (I added a class GPT2ForMultipleChoice referring to BertForMultipleChoice), could you offer some help with this problem?ใThanks a lot.<|||||>I follow the suggestion but got this error:
RuntimeError: Error(s) in loading state_dict for GPT2ForSequenceClassification: size mismatch for score.weight: copying a param with shape torch.Size([100, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
I then tried to set the `ignore_mismatched_sizes` flag to True as shown below, but I still get the same error.
```python
from transformers import GPT2ForSequenceClassification, GPT2Tokenizer
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=100, ignore_mismatched_sizes=True)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# Define a padding token
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
# Save the model and tokenizer in a directory
model.save_pretrained("directory")
tokenizer.save_pretrained("directory")
``` |
transformers | 9,784 | open | Translation Model in ONNX: Choosable Output Formats | # 🚀 Feature request
I am requesting to provide an option to specify the output format for the `translation_xx_to_yy` export to ONNX models. Currently, the output of [convert_graph_to_onnx.convert](https://github.com/huggingface/transformers/blob/6a346f0358a40f89ec384d441233bf54cac44f6a/src/transformers/convert_graph_to_onnx.py#L330) will provide the raw tensors as output (working prototype code under #9722)
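For context, a minimal sketch of the current export path (the keyword names are my reading of `convert_graph_to_onnx` and should be treated as assumptions):
```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

# Exports the translation pipeline's model; the resulting ONNX graph
# returns raw logits/tensors rather than decoded output tokens.
convert(
    framework="pt",
    model="t5-small",
    output=Path("onnx/translation.onnx"),
    opset=12,
    pipeline_name="translation_en_to_de",
)
```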
## Motivation
When putting the models into production it would be great if one could chose, whether one wants to have the actual tensors or the output-tokens returned when exporting a translation pipeline to ONNX. Thereby, one is not forced to do a custom re-implementation of the [model.generate](https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101) function, which then uses the ONNX model instead of the torch one.
As for now, the part which is could be replaced by an ONNX inference session lives under the [model.generate](https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L385) function. Using this in production would mean to keep a TranslationPipeline object with all corresponding model information and config plus an ONNX inference session.
## Your contribution
There may be multiple solutions to this problem:
1. User-specific re-implementation of model.generate (This is what I'll try to accomplish in the future)
2. Is it possible to rewrite the code under model.generate to full torch? Then it should be possible to create a custom model for all translation models, that just places this "generate layer" on top of it. I have provided an example [here](https://github.com/oborchers/sentence-transformers/blob/master/examples/onnx_inference/onnx_inference.ipynb) which adds a simple pooling layer on an already extant transformers model. (That would require more study from my side to develop a prototype and follows step 1)
3. Provide support for the [ort-customops](https://github.com/microsoft/ort-customops) library by Microsoft. Essentially, this enables ONNX to handle strings (but introduces dependency to a very experimental extension). For example, that way one can export the universal sentence encoder (including tokenizer) to ONNX. Example [here](https://github.com/onnx/tensorflow-onnx/issues/1260). I cannot provide anything useful here. | 01-25-2021 10:36:17 | 01-25-2021 10:36:17 | Hello, Thanks for you work on that @oborchers ! I also saw the notebook on SentenceTransformers and it helped a lot !
Any status about this feature ? I also need to run models in onnx but most of them need to call a `.generate` function which is for now not supported... (I could replicate all the generate code in nodejs but i'm sure there is a nicer solution)
Is there any fix, status update or hack ?
Thanks a lot in advance,
Have a great day. <|||||>Hi @lerezell! Welcome! Glad the notebook helped.
Not from my side, unfortunately. I had postponed the issue from our side, because we had more pressing stuff to work on. But lately, the need for this feature starts to become larger as well at our company. Does your solution of porting the `.generate` function work by placing it on top of the ONNX version?
**Edit:**
Just out of curiosity I went through the `.generate` code and it should be possible to place the existing `.generate` code on top of an `CausalLMOutput` model, very similar as done in the [notebook](https://github.com/oborchers/sentence-transformers/blob/master/examples/onnx_inference/onnx_inference.ipynb). This requires an extension of the forward method.
In an initial implementation, it should be perfectly sufficient to port just the `sample` section and see if it works. However, this does not necessarily apply to beam_search, which I haven't figured out how it works. And the raw implementation shouldn't be too complex, because one might strip away a set of the convenience functions/arguments.
Downsides of this are, that, there needs to be some way of defining the arguments of `.generate` at runtime for inference. For example, the `min_length` and `max_length` and `eos_token_id` parameter should be included in the `forward` method arguments, because otherwise they would be static and defined via configuration at runtime. This may be sensible for some applications, but requires re-exporting the model every-time those change, which isn't really a nice way of doing this. Or at least if I didn't miss something completely
Best regards and have a nice eastern<|||||>Hi @oborchers,
I still haven't implemented "my" solution as I wanted to know if there was any other solution than writing all the logic again.
I would rather not and exporting the logic in the forward (and then in the onnx model) seems to be the best solution.
For the `x_length` arguments, that's a downside; could passing them as optional arguments to the forward method work?
I need to focus on other things right now but I definitely keep an eye open for that !
Have a great day
<|||||>Hi, any update on how to export full pipelines to onnx?
For now, we're still obliged to keep a custom/hugging face lib code to handle the "post output embeddings" logic....
Thanks in advance,
Have a great day<|||||>Hi @Ierezell!
Sorry for not coming back on the issue. To be honest, for our use case there are quite a few problems we've encountered in exporting full pipelines to ONNX:
- How to best deal with caching (`past_key_values`)
- Less than optimal performance when used with some generative models (https://github.com/microsoft/onnxruntime/issues/7238)
- The problem of batching requests on inference servers which is very difficult due to the dynamic dimensions of `past_key_values`
- Similar gains in inference time by using custom kernels (e.g. deepspeed inference) + regular pytorch
This blog post from Microsoft may help though:
- https://cloudblogs.microsoft.com/opensource/2021/06/30/journey-to-optimize-large-scale-transformer-model-inference-with-onnx-runtime/
- https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2-OneStepSearch_OnnxRuntime_CPU.ipynb<|||||>Hi @oborchers, thanks a lot for the feedback!
Onnx is nice to be able to change stack for me (javascript etc...) but in the light of what you're saying it will be better to keep my GPU inference server.
Thanks a lot,
Have a great day !
<|||||>Hi,
Is there an alternative to ONNX that you'd recommend? Being able to keep and manipulate past_key_values is the most crucial part, and it's the one I cannot find in many inference optimizations.
Thank you! |
transformers | 9,783 | closed | Adding `skip_special_tokens=True` to FillMaskPipeline | # What does this PR do?
- It's backward incompatible.
- It makes more sense for pipelines to remove references to special_tokens
(all of the other pipelines do that).
- Keeping special tokens makes it hard for users to actually remove them
because all models have different tokens (`<s>`, `<cls>`,` [CLS]`, ....)
- It's actually closer to the docs specs :
```
- **sequence** (:obj:`str`) -- The corresponding input with the mask token prediction.
```
as the input does not include the special tokens.
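Concretely, something like this (the model choice and the exact top prediction are illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")
out = fill_mask("The capital of France is <mask>.")
# previously: out[0]["sequence"] == "<s>The capital of France is Paris.</s>"
# with this PR: out[0]["sequence"] == "The capital of France is Paris."
```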
Linked to : https://github.com/huggingface/transformers/issues/9518
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik @patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 01-25-2021 10:08:22 | 01-25-2021 10:08:22 | If it's a bug users relied on, then it's a feature, not a bug.
Anyway I'll merge this.<|||||>Good point, but I don't think that's the case here. Do you think users rely on the previous behavior? If that's the case, then we should revert this PR until we change of major version. |
transformers | 9,782 | closed | Add BlenderbotSmallForCausalLM for EncoderDecoder | # What does this PR do?
Implementing BlenderbotSmallForCausalLM
Issue #9066
PR #9128
@patrickvonplaten
| 01-25-2021 09:30:27 | 01-25-2021 09:30:27 | Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.
Also we need a decoder-only test for this. |
transformers | 9,781 | closed | Implementation of BlenderbotForCausalLM | # What does this PR do?
implementing BlenderbotForCausalLM for EncoderDecoder use, like ProphetNetForCausalLM
Fixes # (issue)
#9066
#9128
@patrickvonplaten
| 01-25-2021 09:26:29 | 01-25-2021 09:26:29 | Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.
Also we need a decoder-only test for this. |
transformers | 9,780 | closed | Calculating Confidence score for Question Answering Models | For QA task (extractive QA) the pipeline provides/returns 4 values.
1. a probability score
2. start index
3. end index
4. extracted answer
(https://huggingface.co/transformers/main_classes/pipelines.html#transformers.QuestionAnsweringPipeline)
But if I am using the model class directly (not using the pipeline), like the code below, then I am unable to find the probability score:
```python
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME, return_dict=True)
encoding = tokenizer.encode_plus(question, docText, return_tensors="pt", max_length=4096)
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]
start_scores, end_scores = model(input_ids, attention_mask=attention_mask).values()
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
```
This answer object has only 3 values: the extracted answer, the start score, and the end score. I can't find any way or function to calculate the probability score. I want it because I want to sort multiple answers according to their probability/confidence score.
This is a duplicate of Issue #5768, but that issue has been marked closed without any answer. | 01-25-2021 09:15:46 | 01-25-2021 09:15:46 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>A rough sketch of how to get a confidence score without the pipeline (the span scoring below mirrors what the pipeline does internally; the SQuAD checkpoint is just an example):
```python
import numpy as np
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "distilbert-base-cased-distilled-squad"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."
encoding = tokenizer(question, context, truncation=True, padding=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

# Normalize the start/end logits into probabilities
start = F.softmax(outputs.start_logits, dim=1)
end = F.softmax(outputs.end_logits, dim=1)

# Getting the start and end index of the best answer span
start_index = torch.argmax(start, dim=1)
end_index = torch.argmax(end, dim=1)

# Computing the score: joint probability of every (start, end) pair,
# keeping only spans with start <= end and a bounded length
outer = np.matmul(np.expand_dims(start, -1), np.expand_dims(end, 1))
max_answer_len = 512
candidates = np.tril(np.triu(outer), max_answer_len - 1)
scores = [candidates[i, start_index[i], end_index[i]] for i in range(len(candidates))]

# Extracting the answer text together with its confidence score
answer = tokenizer.decode(encoding["input_ids"][0][start_index[0]:end_index[0] + 1], skip_special_tokens=True)
print(answer, scores[0])
```
|
transformers | 9,779 | closed | I want to train a BART model for conditional text generation. I want to train the encoder and the decoder separately for a specific task. Can anyone help with the code? I am new to this. @patrickvonplaten | @## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 01-25-2021 09:03:17 | 01-25-2021 09:03:17 | Hi @Sai-Ashish
Please use the forum https://discuss.huggingface.co/ for asking such questions. Use issues to report bugs, feature requests, etc. Thanks!
Closing it.<|||||>Have you come up with a solution? |
transformers | 9,778 | closed | Link Not Working |
@patrickvonplaten
I was looking at the MarianMT docs, and in the "Examples" section, the links for "Fine-Tune on GPU" and "Fine-Tune on GPU with pytorch-lightning" are broken.
Kindly look into this issue.
| 01-25-2021 08:57:25 | 01-25-2021 08:57:25 | Thanks for pointing it out, the files are now renamed to `distil_marian_enro_teacher.sh` and `distil_marian_no_teacher.sh`, and are available here:
https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/distil_marian_enro_teacher.sh
https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/distil_marian_no_teacher.sh
Feel free to open a PR to fix the links if you want to contribute :)
Also, we now recommend using the new `Seq2SeqTrainer` for fine-tuning seq2seq models, https://github.com/huggingface/transformers/tree/master/examples/seq2seq<|||||>Thanks a lot. Also, is there a way to train the MarianMT model on my own dataset?<|||||>Sure, the script lets you pass your own dataset as well, have a look at the readme.<|||||>Thanks a lot for the help. |
transformers | 9,777 | closed | padding='max_length' allowing more than max length | I compared my tokenized data with `pad_to_max_length=True` and `padding='max_length'`. My `max_length` was set to 160. However I noticed there were a couple of data that were tokenized more than 160 when I used `padding='max_length'` versus `pad_to_max_length=True` | 01-25-2021 07:17:57 | 01-25-2021 07:17:57 | Could you post a short code snippet so we can reproduce ?<|||||>> Could you post a short code snippet so we can reproduce ?
This is my code.
```
class GPReviewDataset(Dataset):
def __init__(self, reviews, targets, tokenizer, max_len):
self.reviews = reviews
self.targets = targets
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.reviews)
def __getitem__(self, item):
review = str(self.reviews[item])
target = self.targets[item]
encoding = self.tokenizer.encode_plus(text=review, max_length=self.max_len,
add_special_tokens=True, padding='max_length',
return_attention_mask=True,
return_token_type_ids=False, return_tensors='pt')
return {'review': review,
'input_ids': encoding['input_ids'].flatten(),
'attention_mask': encoding['attention_mask'].flatten(),
'targets': torch.tensor(target, dtype=torch.long)}
error_list = []
for i in range(len(free_df)):
if len(GPReviewDataset(free_df['content'].to_numpy(), free_df['score'].to_numpy(), tokenizer, 160).__getitem__(i)['attention_mask']) != 160:
error_list.append((i, len(GPReviewDataset(free_df['content'].to_numpy(), free_df['score'].to_numpy(), tokenizer, 160).__getitem__(i)['input_ids'])))
error_list
```
and the results
```
[(95, 184),
(948, 218),
(1025, 162),
(3679, 204),
(3680, 164),
(4150, 220),
(6139, 185),
(7139, 165),
(7201, 166),
(7237, 256),
(7381, 181),
(7599, 254),
(7600, 204),
(7679, 170),
(8111, 202),
(8378, 193),
(8773, 583),
(9041, 583),
(9321, 161),
(10466, 279)]
```<|||||>You should set your truncation parameter as well, otherwise it won't truncate the texts that are too long. See the docs about the tokenizer methods [here (look for `truncation`)](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=truncation#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__)<|||||>> You should set your truncation parameter as well, otherwise it won't truncate the texts that are too long. See the docs about the tokenizer methods [here (look for `truncation`)](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=truncation#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__)
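For reference, this is roughly the fixed call (the same arguments as before, just adding `truncation=True`):
```python
encoding = self.tokenizer.encode_plus(text=review, max_length=self.max_len,
                                      add_special_tokens=True, padding='max_length',
                                      truncation=True,
                                      return_attention_mask=True,
                                      return_token_type_ids=False, return_tensors='pt')
```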
Thank you! That worked. |
transformers | 9,776 | closed | Auto-resume training from checkpoint | # What does this PR do?
Feature suggested by @madlag.
In the examples scripts, change the current behavior so if checkpoints are present in the output_dir passed to the script command, the training resumes from there. If the `output_dir` exists, is nonempty but has no checkpoint, the same behavior as before is applied (error). If it exists with checkpoints inside, the last checkpoint is grabbed to resume training from there. If `--overwrite_output_dir` is passed, the folder is destroyed as before.
This avoids user having to pass `output_dir/checkpoint-xxx` as their model name or path to resume training from a checkpoint, which is a nice improvement. The bad surprise can be if you set that `output_dir` with a trained model you like by mistake, but at the same time, the training is resumed from the last checkpoint so shouldn't be too long (and will converge to the same model) and interrupting before the end will not erase the model inside the folder, so I think the pros outweigh the cons.
Tweak the `run_glue` example script for now, will expand it to all scripts if accepted. | 01-25-2021 01:49:39 | 01-25-2021 01:49:39 | I was having a problem getting this to actually resume training on my system, and I had to make three small changes to the new code in trainer_utils.py:
1. `checkpoints = [path for path in content if _re_checkpoint.search(path) is not None and os.path.isdir(path)]` was returning empty. I changed `os.path.isdir(path)` to `os.path.isdir(os.path.join(folder, path))` and now it returns a list of the checkpoint folders as expected.
2. Similarly, the `get_last_checkpoint` function was returning the basename of the checkpoint folder, not the full path, which seems to be expected based on the updates to the example scripts. I changed the last line of the function to `return os.path.join(folder, max(checkpoints, key=lambda x: int(_re_checkpoint.search(x).groups()[0])))`
3. After I made those updates, it was resuming from the oldest checkpoint, not the newest. I noticed the checkpoint regex was only capturing the final digit in the directory name. I changed it to `_re_checkpoint = re.compile(r"^" + PREFIX_CHECKPOINT_DIR + r"\-(\d+)$")` with the `+` inside the capture group, and now `get_last_checkpoint` is giving me the newest checkpoint as expected. (A consolidated sketch of the updated function is below.)
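Putting the three tweaks together, my local `get_last_checkpoint` in trainer_utils.py now looks roughly like this (sketch):
```python
import os
import re

PREFIX_CHECKPOINT_DIR = "checkpoint"
# capture all the digits, not just the last one
_re_checkpoint = re.compile(r"^" + PREFIX_CHECKPOINT_DIR + r"\-(\d+)$")

def get_last_checkpoint(folder):
    content = os.listdir(folder)
    checkpoints = [
        path
        for path in content
        if _re_checkpoint.search(path) is not None and os.path.isdir(os.path.join(folder, path))
    ]
    if len(checkpoints) == 0:
        return
    # return the full path of the newest checkpoint, not just its basename
    return os.path.join(folder, max(checkpoints, key=lambda x: int(_re_checkpoint.search(x).groups()[0])))
```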
I'm just a novice, so I'm not sure if those tweaks would break anything on other systems. Does `os.listdir()` return full paths instead of basenames under some OS/python combinations?
But with these changes I'm able to resume an aborted training from within the same folder. <|||||>Oh, I went a bit too fast and all the issues you list are completely valid! You should make a PR with all your changes since you were the one to find the problems and fix them :-)<|||||>Got sucked back into work for my day job, but I'll try getting to that soonish.
Unrelated, do you guys have an office in DUMBO? I think we're in the same building (at least until we all started working from home)<|||||>DUMBO offices are at 20 Jay street, you should come and say hi for a coffee when the pandemic is over if you're around :-) <|||||>Yeah, I run a small animation studio up on the 10th floor. Turns out the fancy GPUs I have for animation are also good for messing around with machine learning :) And so my pandemic side project became teaching my computer to write poetry.
Someday when it's safe again I'll definitely stop by.<|||||>Haha small world @jncasey! I'm missing 20 Jay right now. What is your company's name? <|||||>We're called Mixtape Club. We moved into the building from Manhattan last January, and were only there for about 10 weeks before everyone started working from home. Now my business partner is the only one who goes in, since he can't bring our whole sound studio back to his apartment. |
transformers | 9,775 | closed | New API for TensorFlow saved models not compatible with T5 and MarianMT | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.2 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@jplu @patrickvonplaten
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): MarianMT and T5
I was trying to use the new Serving feature introduced in `4.2.0` (https://github.com/huggingface/transformers/pull/9419)
## To reproduce
Steps to reproduce the behavior:
1. `pip install tensorflow transformers sentencepiece`
2. Try to use `.save_pretrained("./serve_tf", saved_model=True)` with T5 or MarianMT
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Code to reproduce the error with T5:
```python
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration
TFT5ForConditionalGeneration.from_pretrained('t5-small')\
.save_pretrained("./serve_tf_t5", saved_model=True)
```
The error:
```
All the layers of TFT5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-c3e4b575f41e> in <module>()
4 from transformers import T5Tokenizer, TFT5ForConditionalGeneration
5
----> 6 TFT5ForConditionalGeneration.from_pretrained('t5-small').save_pretrained("./serve_tf_t5", saved_model=True)
14 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py:126 signature_wrapper *
structured_outputs, signature_function.name, signature_key)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py:174 _normalize_outputs **
.format(value, key, compat.as_str_any(function_name)))
ValueError: Got a dictionary containing non-Tensor value (<tf.Tensor 'StatefulPartitionedCall:2' shape=(1, 6, 4, None, 8, None, 64) dtype=float32>,) for key past_key_values in the output of the function __inference_serving_25314 used to generate a SavedModel signature. Dictionaries outputs for functions used as signatures should have one Tensor output per string key.
```
Code to reproduce the error with MarianMT:
```python
import tensorflow as tf
from transformers import TFMarianModel, TFMarianMTModel
# It seems `opus-mt-mt-en`, `opus-mt-en-zh` and `opus-mt-en-ROMANCE` are the only
# TF compatible models
TFMarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-mt-en')\
.save_pretrained("./serve_tf_marian", saved_model=True)
```
The error:
```
All the layers of TFMarianMTModel were initialized from the model checkpoint at Helsinki-NLP/opus-mt-mt-en.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMarianMTModel for predictions without further training.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-28-0fe7915512e9> in <module>()
5 from transformers import TFMarianModel, TFMarianMTModel, MarianMTModel
6
----> 7 TFMarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-mt-en', use_cache=False).save_pretrained("./serve_tf_marian", saved_model=True)
14 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py:126 signature_wrapper *
structured_outputs, signature_function.name, signature_key)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py:174 _normalize_outputs **
.format(value, key, compat.as_str_any(function_name)))
ValueError: Got a dictionary containing non-Tensor value (None,) for key past_key_values in the output of the function __inference_serving_109486 used to generate a SavedModel signature. Dictionaries outputs for functions used as signatures should have one Tensor output per string key.
```
I have tried setting `use_cache` to `False`, hoping `past_key_values` wouldn't be used as an output, but it didn't help.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Like other models such as BERT or ALBERT, these two should be saved to be served later via the TensorFlow Serving environment. | 01-24-2021 21:52:34 | 01-24-2021 21:52:34 | Pinging @jplu <|||||>Hello!!
Yes this is a known issue for the Seq2Seq models, that is already fixed in master.<|||||>Hi @jplu
I'll close the issue as the fix will be in the next release for sure.
Thanks again. |
transformers | 9,774 | closed | adding MarianForCausalLM for EncoderDecoder use | # What does this PR do?
adding MarianForCausalLM for EncoderDecoder use
Fixes #9066
PR #9128
@patrickvonplaten
| 01-24-2021 21:24:27 | 01-24-2021 21:24:27 | Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.
Also we need a test for this.<|||||>Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.
Also we need a decoder-only test for this. |
transformers | 9,773 | closed | RagRetriever question_hidden_states shape | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-5.4.0-60-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
RAG: @patrickvonplaten, @lhoestq
## Information
The shape of RagRetriever's argument question_hidden_states mentioned in the document is `(batch_size, vector_size)`.
However, when RagModel calls self.retriever in the forward function, it passes question_encoder_last_hidden_state `(batch_size, seq_len, vector_size)` to RagRetriever. Maybe question_encoder_last_hidden_state should be replaced with question_encoder_pooler_output?
The problem arises when using:
* [v] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [v] my own task or dataset: (give details below)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
maybe a small bug? | 01-24-2021 20:05:53 | 01-24-2021 20:05:53 | I'm pretty sure it's just a bad naming of that variable. It should be question_encoder_pooler_output.
Indeed this variable is computed as the first output of a `DPRQuestionEncoder`. The DPR encoders don't return the hidden states (they basically skip it) to return the pooler output (aka DPR embeddings) of shape `(batch_size, vector_size)` instead.
If you want to contribute, feel free to open a PR to rename this variable :) <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,772 | open | Add classes for Multi-label classification models? | # ๐ Feature request
It would be nice to add classes for Multi-label classification models to the library.
## Motivation
In my projects, I need to perform Multi-label classification. This problem setting is quite common in real-life modelling.
## Your contribution
As there isn't a class implemented for this in the library, I have implemented my own. I have modified, for example, BertForSequenceClassificationton to make it usable for multi-label modelling. Then, I train the model with the Trainer class as usual.
Is there any interest or plan to add this from the library maintainers? If so, I would be happy to collaborate or start working on a PR.
Thanks
| 01-24-2021 17:52:31 | 01-24-2021 17:52:31 | Hi @LysandreJik is this feature request being worked on by someone? If not, I would love to take it up. Thanks!<|||||>Hello!
There is existing support for this thanks to the `problem_type` configuration attribute for some Sequence classification models, see here: https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig
(search for `problem_type`)
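For reference, a minimal sketch (assuming a version of the library where `problem_type` is available, and float multi-hot labels):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=3,
    problem_type="multi_label_classification",  # switches the loss to BCEWithLogitsLoss
)

inputs = tokenizer("A film that is both funny and sad", return_tensors="pt")
labels = torch.tensor([[1.0, 0.0, 1.0]])  # multi-hot float labels, one column per label
outputs = model(**inputs, labels=labels)
print(outputs.loss, torch.sigmoid(outputs.logits))
```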
It could be better documented though by having an example for each model that supports this/more complete documentation on the model themselves. Would you like to try your hand at it? |
transformers | 9,771 | closed | TF loss function output inconsistent with Pytorch one for multiple tasks | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-5.10.7-gentoo-x86_64-AMD_Ryzen_9_3950X_16-Core_Processor-with-glibc2.2.5
- Python version: 3.8.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.4.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@jplu,
## Information
Model I am using (Bert, XLNet ...): TFGPT2LMHeadModel
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I was converting the example of perplexity calculation of fixed-length models [perplexity calculation of fixed-length models](https://huggingface.co/transformers/perplexity.html) to Tensorflow, and ran into an inconsistency in the implementation of compute_loss, compared to the implementation in the Pytorch version of the model.
For Tensorflow, when calling a model with inputs and labels (`model(input_ids=input_ids, labels=labels)`), there is no reduction being done on the output of the SparseCategoricalCrossentropy loss function (i.e. it is called explicitly with reduction=tf.keras.losses.Reduction.NONE for all tasks), as defined in modeling_tf_utils.py, while for Pytorch, the loss function CrossEntropyLoss() is called with the standard reduction (just the mean), which seems a bit unexpected to me.
After modifying the code to do an explicit tf.math.reduce_mean on the outcome of the model, I was able to reproduce the Pytorch outcome exactly.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Tensorflow version:
`outputs = model(input_ids, labels = target_ids)`
`log_likelihood = tf.math.reduce_mean(outputs[0] * trg_len)`
Pytorch version:
`outputs = model(input_ids, labels=target_ids)`
`log_likelihood = outputs[0] * trg_len`
## Expected behavior
Outcome of TFGPT2LMHeadModel.call(input_ids=input_ids,labels=labels) to have same tensor shapes as outcome of GPT2LMHeadModel.call(input_ids=input_ids,labels=labels)
| 01-24-2021 17:04:46 | 01-24-2021 17:04:46 | Hello!
This is the expected behavior, if you want any reduction on the loss, you have to do it yourself on your side, not inside the respective compute_loss function.<|||||>Hi, thanks for the explanation.
I realized it was probably by design, it's just odd that it differs so much in behavior from the Pytorch version. Is there any plan to bring those more inline in this regards? Probably a breaking change, I don't have a clear overview of how much would break, even internally within the Transformers library.<|||||>Nothing planed to align this with Python and we won't. The reason is because when training with a distribute strategy, TensorFlow doesn't allow a reduction other than `None` or `Sum`. Knowing that we have our own custom trainer and we cannot apply the change you would like as it will make it fails for such cases.<|||||>Makes sense, didn't think about the incompatibility of the AUTO reduction with the distribute strategies and the custom trainer.
I'll try to make a small patch over the weekend with an update to the documentation in the docstrings, as it's currently not in line with the actual (and intended) output.<|||||>That would be awesome! Thanks! |
transformers | 9,770 | closed | TFBartForConditionalGeneration with labels padded with -100 gives Nan loss. | I am pretraining T5 and Bart.
I noticed that the padding token for ```labels``` of these models should be -100 for ```decoder_input_ids```.
I changed the padding token for labels for T5 (pytorch, tensorflow) and Bart (pytorch), and it works well.
But Bart (tensorflow) gives NaN loss.
Because of this, I also get an error message during pretraining:
```tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of -100 which is outside the valid range of [0, 50265). Label values: 0 2387 2335 16 11962 2 -100 -100 -100 -100 -100 ...........```
## Environment info
- `transformers` version: 4.2.2
- Platform: ubuntu 18.04
- Python version: 3.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: yes (colab)
- Using distributed or parallel set-up in script?: no
Bart: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFBartForConditionalGeneration
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```
import tensorflow as tf
from transformers import BartTokenizer, TFBartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-base")
inputs = tokenizer("My dog is <mask>", return_tensors='tf', truncation=True, max_length=16, padding="max_length")
labels_ids = tokenizer("My dog is cute", return_tensors='tf', truncation=True, max_length=16, padding="max_length").input_ids
## labels padding_token = 1
loss = model(inputs, labels=labels_ids)[0]
print(labels_ids)
print(loss)
## labels padding_token = -100
labels_ids = tf.where(
labels_ids == 1, tf.fill(tf.shape(labels_ids), tf.constant(-100, dtype='int32')), labels_ids
)
loss = model(inputs, labels=labels_ids)[0]
print(labels_ids)
print(loss)
```
Results:
```
tf.Tensor(
[[ 0 2387 2335 16 11962 2 1 1 1 1 1 1
1 1 1 1]], shape=(1, 16), dtype=int32)
tf.Tensor(
[2.2291888e-05 4.8874615e-05 3.7073401e-05 7.9230859e-04 6.1941872e+00
1.1058841e+00], shape=(6,), dtype=float32)
tf.Tensor(
[[ 0 2387 2335 16 11962 2 -100 -100 -100 -100 -100 -100
-100 -100 -100 -100]], shape=(1, 16), dtype=int32)
tf.Tensor(
[2.2291888e-05 4.8755410e-05 3.7073401e-05 7.9242775e-04 6.1941872e+00
1.1058841e+00 nan nan nan nan
nan nan nan nan nan
nan], shape=(16,), dtype=float32)
``` | 01-24-2021 16:02:07 | 01-24-2021 16:02:07 | I found that TFBart models use ```padding token``` for masking ```decoder_input_ids``` instead of using ```-100 token```, which is different from T5 models. So this is not a bug, but a little confusing because some of the bart code and documents talk about ```-100 token```.
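In the meantime, a minimal workaround sketch (reusing `model`, `inputs` and the -100-padded `labels_ids` from the snippet above) is to map -100 back to the pad token before calling the TF model:
```python
import tensorflow as tf

pad_token_id = model.config.pad_token_id  # 1 for facebook/bart-base
# map the -100 ignore index back to the pad token that TFBart masks internally
tf_labels = tf.where(labels_ids == -100, tf.fill(tf.shape(labels_ids), pad_token_id), labels_ids)
loss = model(inputs, labels=tf_labels)[0]
```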
Also, losses from ```torch``` and ```tensorflow``` are different with the same dataset (text shown above).
I also directly converted ```pytorch_model.bin``` to ```tf_model.h5``` instead of using the uploaded model, but they still show different losses.<|||||>Hi @kiyoungkim1
In both T5 and BART, `decoder_input_ids` can never contain -100; for both models the padding is done using the pad token.
In labels, however, we usually replace pad tokens with -100 so as to not include them while computing the loss.
But what you pointed out is correct, TFBart uses pad token as the ignore index
https://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/models/bart/modeling_tf_bart.py#L1340-L1348
Where as TFT5 ignores -100
https://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/modeling_tf_utils.py#L146-L152
not an expert on TF side, so @jplu , @patrickvonplaten will know more<|||||>What says @patil-suraj is correct, in T5 we expect the pad token id to always be -100 while in TF BART it can be any digit assigned to `config.pad_token_id`.<|||||>Great catch @kiyoungkim1!
It's not very consistent what we are doing here...TFBart should have never ignored the `pad_token_id` as a default setting, but -100 as all other models do.
To fix the problem, I think we should add a couple of lines that check if -100 are in the labels and if yes replaces them with the `pad_token_id` to have consistency with PyTorch's Bart. It would be a pretty big breaking change to just replace `pad_token_id` with -100 so I think the first option is the better one. @kiyoungkim1 if you feel like opening a PR to correct this behavior we would be more than happy :-) <|||||>We also plan to turn all the loss computation, not anymore as a method but as a layer, so it will be much easier to use, to configure and TensorFlow workflow compliant. |
transformers | 9,769 | closed | saving best model only using modelcheckpoint keras | while using model.fit for training a Transformer model, how to use `tf.keras.callbacks.ModelCheckpoint` with `'save_best_only=True'` to save only the best model?
`model.fit([input_ids_train, attention_mask_train], train_labels,
validation_data=([input_ids_valid, attention_mask_valid], valid_labels),
epochs=50, batch_size=32)`
| 01-24-2021 14:58:05 | 01-24-2021 14:58:05 | Have you taken a look at the [Keras docs](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint)?<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>@LysandreJik Yes, I looked at the doc above and implemented it:
```
config = AutoConfig.from_pretrained(
PRETRAINED_MODEL_VERSION, num_labels=len(classes), label2id=label2id, id2label=id2label,
finetuning_task="text-classification")
model = TFAutoModelForSequenceClassification.from_pretrained(PRETRAINED_MODEL_VERSION, config=config)
loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.Accuracy()
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-6, epsilon=1e-08)
model.compile(loss=loss, optimizer=optimizer, metrics=[metric])
print(model.summary())
es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, verbose=1)
chkpt_calllback = tf.keras.callbacks.ModelCheckpoint(MODEL_DIRECTORY+"{epoch:02d}",
monitor='val_loss', verbose=2,
save_best_only=True, save_weights_only=False)
model.fit([input_ids_train, attention_mask_train], train_labels,
validation_data=([input_ids_valid, attention_mask_valid], valid_labels),
epochs=1, batch_size=32, callbacks=[es_callback, chkpt_calllback])
```
but doing so only saves a model checkpoint, and I'm not sure how to load these weights back into the **model**, as it only accepts the **.h5** format. I'm facing difficulty loading these weights:
```
checkpoint
02.index
02.data-00000-of-00001
``` |
transformers | 9,768 | closed | PegasusForCausalLM, analog to `ProphetNetForCausalLM` | # What does this PR do?
This PR implements PegasusForCausalLM for EncoderDecoder use, like ProphetNetForCausalLM, and it follows the work on BartForCausalLM in #9128.
issue #9066
For organisational reasons, each model is in its own PR. @patrickvonplaten
| 01-24-2021 14:16:43 | 01-24-2021 14:16:43 | Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.
Also we need a decoder-only test for this. |
transformers | 9,767 | closed | tensorflow training problem |
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from transformers import TFBartModel, BartConfig
from tensorflow import keras

npfile = np.load("train_dataset.npz")
inp_ids = npfile["arr_0"][:1000]
out_ids = npfile["arr_1"][:1000]
out_ids = pad_sequences(out_ids, padding="post", truncating="post", value=1, maxlen=inp_ids.shape[1])

config = BartConfig.from_json_file("config.json")
b_model = TFBartModel(config=config)

def models():
    inp = keras.layers.Input(shape=[inp_ids.shape[1]], dtype="int32")
    outputs = b_model(inp, training=True, use_cache=False)
    logits = keras.layers.Dense(config.vocab_size, activation="softmax")(outputs[0])
    return keras.models.Model(inp, logits)

model = models()
model.summary()

dataset = tf.data.Dataset.from_tensor_slices((tf.constant(inp_ids), tf.constant(out_ids))).shuffle(2000).batch(4)
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=["acc"])
model.fit(dataset, epochs=100)
model.save_weights("tf_model.h5")
```
I don't think there is a problem with the code, but the loss just doesn't go down. I also deliberately reduced the training set to 1000 samples, but it still has no effect. I wonder whether the BART network's weights are not being updated at all and only the self-defined Dense layer is being trained. | 01-24-2021 06:28:17 | 01-24-2021 06:28:17 |
transformers | 9,766 | closed | [wip] [doc] Parallelism notes | Perhaps this will end up in a blog post and/or a new document, for now collecting notes. This is a work in progress. Please give me some time to write the bulk of it and then you'll be welcome to ask questions, add contributions, etc.
------------------------
## Parallelism overview
In modern machine learning, various approaches to parallelism are used to:
1. fit very large models onto limited hardware - e.g. t5-11b is 45GB in just model params
2. significantly speed up training - finish training that would take a year in hours
We will first discuss in depth various 1D parallelism techniques and their pros and cons and then look at how they can be combined into 2D and 3D parallelism to enable an even faster training and to support even bigger models.
While the main concepts most likely will apply to any other framework, this article is focused in pytorch-based implementations.
## Data Parallel
Most users with just 2 GPUs already enjoy the increased training speed provided by DataParallel (DP) and DistributedDataParallel (DDP), which are almost trivial to use.
## ZeRO Data Parallel
ZeRO-powered data parallelism (ZeRO-DP) is described on the following diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)

It can be difficult to wrap one's head around it, but in reality the concept is quite simple. This is just the usual DataParallel (DP), except, instead of replicating the full model params, gradients and optimizer states, each GPU stores only a slice of it. And then at run-time when the full layer params are needed just for the given layer, all GPUs synchronize to give each other parts that they miss - this is it.
Consider this simple model with 3 layers, where each layer has 3 params:
```
La | Lb | Lc
---|----|---
a0 | b0 | c0
a1 | b1 | c1
a2 | b2 | c2
```
Lx being the layer and we have 3 layers, and ax being the weights - 3 weights
If we have 3 GPUs, the Sharded DDP (= Zero DP) splits the model onto 3 GPUs like so:
```
GPU0:
La | Lb | Lc
---|----|---
a0 | b0 | c0
GPU1:
La | Lb | Lc
---|----|---
a1 | b1 | c1
GPU2:
La | Lb | Lc
---|----|---
a2 | b2 | c2
```
In a way this is horizontal slicing, if you imagine the typical DNN diagram. Vertical slicing is where one puts whole layer-groups on different GPUs. But it's just the starting point.
Now each of these GPUs will get the usual mini-batch as it works in DP:
```
x0 => GPU0
x1 => GPU1
x2 => GPU2
```
The inputs are unmodified - they think they are going to be processed by the normal model.
So the inputs first hit the first layer La.
Let's focus just on GPU0: x0 needs a0, a1, a2 params to do its forward path, but GPU0 has only a0 - so what it does is it gets sent a1 from GPU1 and a2 from GPU2. Now the forward step can happen.
In parallel GPU1 gets mini-batch x1 and it only has a1, but needs a0 and a2 params, so it gets those from GPU0 and GPU2.
Same happens to GPU2 that gets input x2. It gets a0 and a1 from GPU0 and GPU1.
As soon as the calculation is done, the data that is no longer needed gets dropped - it's only used during the calculation.
The same is repeated at every other stage.
And the whole larger thing is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La.
To me this sounds like an efficient group backpacking weight distribution strategy:
1. person A carries the tent
2. person B carries the stove
3. person C carries the entertainment system
Now each night they all share what they have with others and get from others what they don't have, and in the morning they pack up their allocated type of gear and continue on their way. This is Sharded DDP / Zero DP.
Compare this strategy to the simple one where each person has to carry their own tent, stove and entertainment system, which would be far more inefficient. This is DataParallel in pytorch.
And I think pretty much everywhere I read Sharded == Partitioned, so I think those are synonyms in the context of distributed models.
If you pay close attention the way ZeRO partitions the model's data - it looks very similar to horizontal model parallelism which will be discussed later. This is because it partitions/shards each layer's data unlike vertical model parallelism which is discussed next.
Implementations:
- [DeepSpeed](https://www.deepspeed.ai/features/#the-zero-redundancy-optimizer) ZeRO-DP stages 1+2+3
- [Fairscale](https://github.com/facebookresearch/fairscale/#optimizer-state-sharding-zero) ZeRO-DP stages 1+2+3
- [`transformers` integration](https://huggingface.co/transformers/master/main_classes/trainer.html#trainer-integrations)
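To make the usage side concrete, here is a rough sketch with fairscale's sharded wrappers (`OSS` + `ShardedDataParallel`); treat the exact API as an assumption since it moves between releases, and it presupposes that `torch.distributed` has already been initialized for each rank:

```python
import torch
from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP

def setup_sharded(model, rank):
    model = model.to(rank)
    # optimizer state is sharded across the data-parallel ranks (ZeRO stage 1)
    optimizer = OSS(params=model.parameters(), optim=torch.optim.AdamW, lr=3e-5)
    # gradient reduction goes to the rank that owns each shard (~stage 2)
    model = ShardedDDP(model, optimizer)
    return model, optimizer

# the training loop then looks like plain DDP:
# loss = model(**batch).loss; loss.backward(); optimizer.step(); optimizer.zero_grad()
```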
## Naive Model Parallel (Vertical) and Pipeline Parallel
Naive Model Parallel (MP) is where one spreads groups of model layers across multiple GPUs. The mechanism is relatively simple - switch the desired layers `.to()` the desired devices and now whenever the data goes in and out those layers switch the data to the same device as the layer and leave the rest unmodified.
We refer to it as Vertical MP, because if you remember how most models are drawn, we slice the layers vertically. For example, if the following diagram shows an 8-layer model:
```
=================== ===================
| 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 |
=================== ===================
gpu0 gpu1
```
we just sliced it in 2 vertically, placing layers 0-3 onto gpu 0 and 4-7 to gpu 1.
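A minimal sketch of that naive split, assuming 2 visible GPUs and a toy 8-layer model of `nn.Linear` blocks:

```python
import torch
import torch.nn as nn

class NaiveMP(nn.Module):
    def __init__(self, hidden=1024):
        super().__init__()
        # layers 0-3 live on gpu0, layers 4-7 on gpu1
        self.part0 = nn.Sequential(*[nn.Linear(hidden, hidden) for _ in range(4)]).to("cuda:0")
        self.part1 = nn.Sequential(*[nn.Linear(hidden, hidden) for _ in range(4)]).to("cuda:1")

    def forward(self, x):
        x = self.part0(x.to("cuda:0"))
        # the only change vs a single-gpu model: copy the activations from gpu0 to gpu1
        return self.part1(x.to("cuda:1"))

model = NaiveMP()
out = model(torch.randn(8, 1024))
print(out.device)  # cuda:1
```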
Now while data travels from layer 0 to 1, 1 to 2 and 2 to 3 this is just the normal model. But when data needs to pass from layer 3 to layer 4 it needs to travel from gpu0 to gpu1 which introduces a communication overhead. If the participating GPUs are on the same node (e.g. same PC) this copying is pretty fast, but if the other gpus are on different nodes (e.g. another PC) the communication overhead could be significantly larger.
Then layers 4 to 5 to 6 to 7 behave as in a normal model, and when the 7th layer completes we often need to send the data back to layer 0 where the labels are (or alternatively send the labels to the last layer).
Problems:
- the main deficiency and why this one is called "naive", is that all but one GPU is idle at any given moment. So if 4 gpus are used - it's almost identical to quadrupling the amount of memory of a single GPU, and ignoring the rest of the hardware. Plus there is the overhead of copying the data between devices. So 4x 6GB cards will be able to accommodate the same size as 1x 24GB card using naive MP, except the latter will complete the training faster, since it doesn't have the data copying overhead. But, say, if you have 40GB cards and need to fit a 45GB model you can with 4x 40GB cards (barely because of the scheduler and optimizer data)
- shared embeddings may need to get copied back and forth between GPUs.
Pipeline Parallel (PP) is almost identical to a naive MP, but it solves the idling problem to a degree, by chunking the incoming batch into micro-batches and artificially creating a pipeline, which allows different GPUs to concurrently participate in the computation process.
The following illustration from the [GPipe paper](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html) shows first the naive MP, then the PP:

It's easy to see how PP has less dead zones where GPUs are idle.
PP introduces a new hyper-parameter to tune and it's `chunks` which defines how many pipeline stages are to be used. e.g. in the 2nd diagram of the image above you can see that `chunks=4`.
With `chunks=1` you end up with the naive MP. With a very large value you will find that the overhead of slicing the tensors will slow everything down. So one has to experiment to find the best value. It's also important to remember that to take advantage of the GPU, you need largish batches, ideally in multiples of 8.
So if the normal batch size is `bs=64` and `chunks=8`, each stage will receive a micro-batch of `8`. However, if you're tight on memory in the first place you may end up with the normal `bs=8`, and then if you choose `chunks=4`, you will end up with `4` pipeline segments with a micro-batch of just `2` - which would be very inefficient. Also `bs=8` and `chunks=3` won't go too well together either, as you will end up with uneven micro-batches of `[3,3,2]`.
While the diagram shows that there is a bubble of "dead" time that can't be parallelized because the last `forward` stage has to wait for `backward` to complete the pipeline, the purpose of finding the best value for `chunks` is to enable a high concurrent GPU utilization across all participating GPUs.
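To make this concrete, here is a minimal sketch of the same toy 8-layer model wired through the pytorch Pipe API (pytorch-1.8 nightly; Pipe requires the RPC framework to be initialized even in a single process):

```python
import os
import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

# first half of the layers on gpu0, second half on gpu1
part0 = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(4)]).to("cuda:0")
part1 = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(4)]).to("cuda:1")
model = Pipe(nn.Sequential(part0, part1), chunks=4)  # the batch gets split into 4 micro-batches

x = torch.randn(64, 1024, device="cuda:0")
out = model(x).local_value()  # Pipe returns an RRef
print(out.shape, out.device)
```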
Problems:
- have to modify the model quite heavily, because Pipeline requires one to rewrite the normal flow of modules into a `nn.Sequential` sequence of the same, which may require changes to the design of the model.
- currently the Pipeline API is very restricted. If you had a bunch of python variables being passed in the very first stage of the Pipeline, you will have to find a way around it. Currently, the pipeline interface requires either a single Tensor or a tuple of Tensors as the only input and output. These tensors must have batch size as the very first dimension, since pipeline is going to chunk the normal batch into micro-batches. Possible improvements are being discussed here https://github.com/pytorch/pytorch/pull/50693
- have to arrange each layer so that the output of one model becomes an input to the other model
Implementations:
- pytorch-1.8-to-be - no docs yet, but see [this](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py)
- [fairscale](https://fairscale.readthedocs.io/en/latest/tutorials/pipe.html)
- [deepspeed](https://www.deepspeed.ai/tutorials/pipeline/)
Other approaches:
SageMaker introduces the concept of an [Interleaved Pipeline](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html)

Here the bubble (idle time) is further minimized by prioritizing backward passes.
According to [the same document](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html), it might be able to automate the conversion to pipeline.
The only problem is that this is currently only available at AWS, so you can't run it on your own hardware.
## Model Parallel (Horizontal)
Megatron-LM
## 2D Parallelism
The following diagram from the DeepSpeed [pipeline tutorial](https://www.deepspeed.ai/tutorials/pipeline/) demonstrates how one combines DP with PP.

Here it's important to see how DP rank 0 doesn't see gpu2 and DP rank 1 doesn't see gpu3. To DP there are just gpus 0 and 1, where it feeds data as if there were just 2 gpus. GPU 0 "secretly" offloads some of its load to gpu 2 using PP, and gpu 1 does the same by enlisting gpu 3 to its aid.
XXX: will update this section once I get it working
## 3D Parallelism
## FlexFlow
[FlexFlow](https://github.com/flexflow/FlexFlow) is also solving the parallelization problem in a slightly different approach.
Paper: ["Beyond Data and Model Parallelism for Deep Neural Networks" by Zhihao Jia, Matei Zaharia, Alex Aiken](https://arxiv.org/abs/1807.05358)
It performs a sort of 4D Parallelism over Sample-Operator-Attribute-Parameter.
1. Sample = Data Parallelism
2. Operator = part vertical Layer Parallelism, but it can split the layer too - more refined level
3. Attribute = horizontal Model Parallelism (Megatron-LM style)
4. Parameter = Sharded model params
and they are working on Pipeline Parallelism. I guess ZeRO-DP is Sample+Parameter in this context.

The significance of this framework is that it takes resources like (1) GPU/TPU/CPU vs. (2) RAM/DRAM vs. (3) fast-intra-connect/slow-inter-connect and it automatically optimizes all these algorithmically deciding which parallelisation to use where.
One very important aspect is that FlexFlow is designed for optimizing DNN parallelizations for models with static and fixed workloads, since models with dynamic behavior may prefer different parallelization strategies across iterations.
So the promise is very attractive - it runs say a 30min simulation on the cluster of choice and it comes up with the best strategy to utilise this specific environment. If you add/remove/replace any parts it'll run and re-optimize the plan for that. And then you can train. A different setup will have its own custom optimization.
| 01-24-2021 06:04:46 | 01-24-2021 06:04:46 | |
transformers | 9,765 | closed | [wip] [pipeline parallel] t5 - experiment | This PR is not ready for reviews.
I'm putting it up primarily for those who want an early preview of a possible Pipeline solution. @PeterAJansen, you wanted to see if you could get it working with 4x 40GB rig and t5-11b. Please give it a try.
-------------------
## Intention
We want to replace the naive model parallel (MP) implementation with a more efficient pipeline parallel (PP) implementation, which takes advantage of all participating gpus, instead of having one gpu run while the rest idle, which is the case with the naive MP.
To give you a visual from the [GPipe paper](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html),

You will find a new argument `chunks`, which is how many pipeline stages you want to add; in the 2nd diagram of the image above you can see that `chunks=4`.
So with `chunks=1` you get the naive mp, but it'd be even slower than the naive MP because of the RPC overhead.
## Overview
Porting t5 to Pipeline Parallelism proved to be a study in hacking, due to the very restrictive original pipeline interface which only allows tensors or tuples of tensors as `input`/`output` arguments in `forward`, and in `transformers` we have a ton of very complex variables to pass to `forward` and return from it.
We are trying to change the Pipeline design to be much more user-friendly: https://github.com/pytorch/pytorch/pull/50693
This implementation tries to take advantage of 2 natural stacks, so I implemented it as 2 pipes:
```
T5ForConditionalGeneration->
T5Stack(encoder)->Pipe(Sequential([T5StackPipeSegment * 6])
T5Stack(decoder)->Pipe(Sequential([T5StackPipeSegment * 6])
```
6 for `t5-small`.
Please don't even bother looking at the code, it is one big hack which took many hours to come up with to make the pipeline work, so clearly it is not something very portable or readable.
## Setup
**important: you need pytorch-nightly to be able to use this.**
```
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
```
Just create another conda env so as not to mess up your normal env; pt-nightly is a solid piece of software, I use it all the time. Here is a quick copy-n-paste of what you will need - just edit the location of the transformers checkout dir.
```
conda create -y -n py38-pt18 python=3.8
conda activate py38-pt18
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
git clone https://github.com/huggingface/transformers
cd transformers
gh pr checkout 9765 # or use whatever other method to checkout this PR
pip install -e .[dev]
pip install -r examples/_tests_requirements.txt
```
Down the road I will look at using also fairscale/deepspeed but for now pytorch is just more accessible and hopefully will be more flexible soon.
## Deployment: script
You can deploy PP directly via your own trainer/script, e.g. this is what I have been using while developing it:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
import transformers.models.t5.modeling_t5
import transformers.utils.logging
transformers.models.t5.modeling_t5.logger.setLevel(transformers.logging.INFO)
mname = "t5-large"
tokenizer = T5Tokenizer.from_pretrained(mname)
model = T5ForConditionalGeneration.from_pretrained(mname, return_dict=True)
model.to("cuda:0")
model.pipeline_enable(chunks=2, device_map=None)
texts = ["This is good", "This is bad", "This is really bad", "This is fantastic",]
texts = ["translate English to French: "+x for x in texts]
batch = tokenizer.prepare_seq2seq_batch(texts, return_tensors="pt")
batch.to("cuda:0")
outputs = model.generate(**batch)
for x in outputs:
decoded = tokenizer.decode(x, skip_special_tokens=True)
print(decoded)
model.pipeline_finalize()
```
## Deployment: HF Trainer
But you can also use HF trainer. I tweaked the trainer to activate PP with:
```
--pipeline "chunks=4"
```
This will let the program do the partitioning for you. But you can control the partitioning manually by passing:
```
--pipeline "chunks=4 device_map=0:0-3,1:3-12"
```
Here we basically pass the equivalent of a dict `{0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}` which btw, you can pass in your script as:
```
device_map = {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}
model.pipeline_enable(chunks=30, device_map=device_map)
```
The syntax is what you'd pass to `range`, so `device_map=0:0-3,1:3-12` is the same as:
```
device_map = {0: list(range(0, 3)), 1: list(range(3, 12))}
```
the keys are the gpu ids.
The number of layers is at the moment just the depth of the encoder stack, so 12 for t5-base, 6 for t5-small, etc.
Later we should have a different way as well, where we define the desired balance, rather than the specific layers.
Since each `t5` model has a different number of blocks, the easiest way is to first run without the device map and then check the logger output which will show you which device map it's using. Then I recommend to re-balance it so that gpu0 has less layers than the remaining gpus.
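For example, a hypothetical helper to build such a map, giving gpu 0 a smaller share (the name and the split heuristic are made up for illustration):

```python
def make_device_map(n_layers, n_gpus, gpu0_share=0.25):
    # give gpu 0 fewer layers since it also holds the embeddings and the inputs
    first = max(1, int(n_layers * gpu0_share))
    per_gpu = (n_layers - first) // (n_gpus - 1)
    device_map, start = {0: list(range(0, first))}, first
    for gpu in range(1, n_gpus):
        end = start + per_gpu if gpu < n_gpus - 1 else n_layers
        device_map[gpu] = list(range(start, end))
        start = end
    return device_map

print(make_device_map(12, 2))  # {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}
```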
## Benchmarks
example for 2x 24GB cards
```
export BS=160 MODEL=t5-base; rm -r output_dir; PYTHONPATH=src USE_TF=0 examples/seq2seq/run_seq2seq.py \
--model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps \
--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 \
--max_target_length 128 --val_max_target_length 128 --num_train_epochs 1 --overwrite_output_dir \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler \
--warmup_steps 50 \
--task translation_en_to_ro --dataset_name wmt16 --dataset_config ro-en --source_prefix "translate English to Romanian: " \
--max_train_samples 10 --max_val_samples 10 \
--pipeline "chunks=4 device_map=0:0-3,1:3-12" --dataloader_num_workers 4
```
Performance-wise:
- prediction speed is terrible - just as bad as the naive MP we have in t5 and others
- training/eval w/o prediction is slightly slower (20-40%) than the baseline with just one process - this is primarily due to data copying and the currently quite inefficient implementation forced by the Pipeline API restrictions.
- the key is to find the value for `chunks` so that there is enough in the pipe that the gpus don't idle, but not too big as performance goes down. But I wasn't able to get above 50%/gpu utilization, so it's not much different from the naive implementation - don't know yet why - probably data copying takes most of the overhead.
- I think on 4 gpus it'd be good to try an experiment and put the encoder stack on gpu 0+1 and decoder on gpu 2+3, instead of copying data between 4 devices as it's happening now - this will require a more complex device map, that I designed for the Bart MP, which has separate encoder and decoder sub-maps. But then it'd affect the pipeline as half the gpus will surely idle while encoder is running - so not great either. We will have to experiment with real data once I have access to a rig with 4 gpus and see. That's why I don't think this is urgent to work on. But such change would be easy to do. We will have to do it anyway for other models whose stacks aren't necessarily symmetrical.
Here are some stats on 2x 24GB Titan RTX:
Baseline: (1gpu)
```
export BS=64 MODEL=t5-base; rm -r output_dir; CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src USE_TF=0 \
examples/seq2seq/run_seq2seq.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --do_eval \
--do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 \
--max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler \
--val_max_target_length 128 --warmup_steps 50 --max_train_samples 1000 --max_val_samples 1000 \
--task translation_en_to_ro --dataset_name wmt16 --dataset_config ro-en --source_prefix "translate English to Romanian: "
train_runtime = 6.9149
eval_loss = 3.5492
eval_runtime = 3.2802
```
XXX: need to re-test with rebased code-base
Now with pipeline:
- can run much higher batch-size
- note that I'm using a user-provided device map that has more layers on gpu 1, since gpu 0 needs much more RAM
```
# device_map=0:0-3,1:3-12 - so splitting 1:4
# {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}
```
```
export BS=160 MODEL=t5-base; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=src USE_TF=0 \
examples/seq2seq/run_seq2seq.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --do_eval \
--do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 \
--max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler \
--val_max_target_length 128 --warmup_steps 50 --max_train_samples 1000 --max_val_samples 1000 \
--task translation_en_to_ro --dataset_name wmt16 --dataset_config ro-en --source_prefix "translate English to Romanian: " \
--pipeline "chunks=4 device_map=0:0-3,1:3-12" --dataloader_num_workers 4
```
XXX: need to re-test with rebased code-base
## Future
I'm also trying to instrument this feature with reporting that will help users fine-tune chunks/device_map.
This is the `model.pipeline_finalize()` call. Things I think would be useful:
* [ ] gpu utilization stats (average/peak) - probably need to fire off a thread that samples pynvml gpu utilization, then calculates average + peak (see the rough sketch after this list)
* [ ] peak memory usage per device report that I added seems to be too low - I think it has to do with pipeline threads - need to sort it out
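Here is a rough sketch of the sampler idea (assumes the `pynvml` package is installed; this is not part of the PR, just an illustration of the approach):
```python
import threading
import time

import pynvml


def sample_gpu_utilization(device_ids, stats, stop_event, interval=0.5):
    # Poll nvml at a fixed interval and append per-gpu utilization percentages.
    pynvml.nvmlInit()
    handles = {i: pynvml.nvmlDeviceGetHandleByIndex(i) for i in device_ids}
    while not stop_event.is_set():
        for i, handle in handles.items():
            stats[i].append(pynvml.nvmlDeviceGetUtilizationRates(handle).gpu)
        time.sleep(interval)


stats = {0: [], 1: []}
stop = threading.Event()
sampler = threading.Thread(target=sample_gpu_utilization, args=([0, 1], stats, stop), daemon=True)
sampler.start()
# ... training / eval happens here ...
stop.set()
sampler.join()
for gpu_id, samples in stats.items():
    avg = sum(samples) / max(len(samples), 1)
    peak = max(samples, default=0)
    print(f"gpu {gpu_id}: avg util {avg:.1f}%, peak {peak}%")
```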
Any other ideas/requests/needs?
@PeterAJansen, please let me know if you managed to run this on your 4x gpu setup.
Next, I think I'm going to scratch the current implementation and try a new one afresh.
Also this PR should be good enough to try to figure out how to use with DeepSpeed, once I get access to 4 gpus (need at least 4 gpus to do 2D parallelism).
I did warn you not to look at the code.
I also removed big chunks of MP code for now as it was getting in the way and adding noise; I will restore it once I've sorted this all out.
```
git clone https://www.github.com/huggingface/transformers.git
cd transformers/
gh pr checkout 9765
conda create -y -n py38-pt18 python=3.8
conda activate py38-pt18
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
pip install -e .[dev]
pip install -r examples/_tests_requirements.txt
cd examples/seq2seq/
ln -s ~/github/transformers/examples/seq2seq/wmt_en_ro wmt_en_ro
export BS=160 MODEL=t5-base; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 50 --n_train 1000 --n_val 1000 --pipeline "chunks=4 device_map=0:0-3,1:3-12" --dataloader_num_workers 4
```
Output:
```
export BS=160 MODEL=t5-base; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 50 --n_train 1000 --n_val 1000 --pipeline "chunks=4 device_map=0:0-3,1:3-12" --dataloader_num_workers 4
01/25/2021 11:39:51 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4, distributed training: False, 16-bits training: False
01/25/2021 11:39:51 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='output_dir', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=160, per_device_eval_batch_size=160, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-06, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_steps=50, logging_dir='runs/Jan25_11-39-51_seahorse', logging_first_step=True, logging_steps=1000, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, pipeline='chunks=4 device_map=0:0-3,1:3-12', dataloader_drop_last=False, eval_steps=25000, dataloader_num_workers=4, past_index=-1, run_name='output_dir', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.1, adafactor=False, group_by_length=False, report_to=['tensorboard'], sortish_sampler=True, predict_with_generate=False)
[INFO|configuration_utils.py:445] 2021-01-25 11:39:51,546 >> loading configuration file https://huggingface.co/t5-base/resolve/main/config.json from cache at /home/pajansen/.cache/huggingface/transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637
[INFO|configuration_utils.py:481] 2021-01-25 11:39:51,547 >> Model config T5Config {
"architectures": [
"T5WithLMHeadModel"
],
"d_ff": 3072,
"d_kv": 64,
"d_model": 768,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "relu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"n_positions": 512,
"num_decoder_layers": 12,
"num_heads": 12,
"num_layers": 12,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
"transformers_version": "4.3.0.dev0",
"use_cache": true,
"vocab_size": 32128
}
[INFO|configuration_utils.py:445] 2021-01-25 11:39:51,740 >> loading configuration file https://huggingface.co/t5-base/resolve/main/config.json from cache at /home/pajansen/.cache/huggingface/transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637
[INFO|configuration_utils.py:481] 2021-01-25 11:39:51,741 >> Model config T5Config {
"architectures": [
"T5WithLMHeadModel"
],
"d_ff": 3072,
"d_kv": 64,
"d_model": 768,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "relu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"n_positions": 512,
"num_decoder_layers": 12,
"num_heads": 12,
"num_layers": 12,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
"transformers_version": "4.3.0.dev0",
"use_cache": true,
"vocab_size": 32128
}
[INFO|tokenization_utils_base.py:1766] 2021-01-25 11:39:52,522 >> loading file https://huggingface.co/t5-base/resolve/main/spiece.model from cache at /home/pajansen/.cache/huggingface/transformers/684a47ca6257e4ca71f0037771464c5b323e945fbc58697d2fad8a7dd1a2f8ba.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d
[INFO|tokenization_utils_base.py:1766] 2021-01-25 11:39:52,523 >> loading file https://huggingface.co/t5-base/resolve/main/tokenizer.json from cache at /home/pajansen/.cache/huggingface/transformers/90de37880b5ff5ac7ab70ff0bd369f207e9b74133fa153c163d14c5bb0116207.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529
[INFO|modeling_utils.py:1027] 2021-01-25 11:39:52,809 >> loading weights file https://huggingface.co/t5-base/resolve/main/pytorch_model.bin from cache at /home/pajansen/.cache/huggingface/transformers/ab4e948915b067f5cb6e5105f6f85044fd717b133f43240db67899a8fc7b29a2.26934c75adf19ceac3c268b721ba353356b7609c45f5627550326f275a2163b4
[INFO|modeling_utils.py:1143] 2021-01-25 11:39:58,232 >> All model checkpoint weights were used when initializing T5ForConditionalGeneration.
[INFO|modeling_utils.py:1151] 2021-01-25 11:39:58,233 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
01/25/2021 11:39:58 - INFO - utils - setting model.config to task specific params for translation_en_to_ro:
{'early_stopping': True, 'max_length': 300, 'num_beams': 4, 'prefix': 'translate English to Romanian: '}
01/25/2021 11:39:58 - INFO - utils - note: command line args may override some of these
[INFO|modeling_t5.py:1536] 2021-01-25 11:39:58,479 >> enabling pipeline with chunks=4
[INFO|modeling_t5.py:1545] 2021-01-25 11:39:58,479 >> using user-provided device_map
[INFO|modeling_t5.py:1563] 2021-01-25 11:39:58,479 >> using pipeline partitioning: {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}
[W ProcessGroupGloo.cpp:532] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1
01/25/2021 11:40:01 - INFO - __main__ - *** Train ***
[INFO|trainer.py:807] 2021-01-25 11:40:01,659 >> ***** Running training *****
[INFO|trainer.py:808] 2021-01-25 11:40:01,659 >> Num examples = 1000
[INFO|trainer.py:809] 2021-01-25 11:40:01,659 >> Num Epochs = 1
[INFO|trainer.py:810] 2021-01-25 11:40:01,659 >> Instantaneous batch size per device = 160
[INFO|trainer.py:811] 2021-01-25 11:40:01,659 >> Total train batch size (w. parallel, distributed & accumulation) = 160
[INFO|trainer.py:812] 2021-01-25 11:40:01,659 >> Gradient Accumulation steps = 1
[INFO|trainer.py:813] 2021-01-25 11:40:01,659 >> Total optimization steps = 7
2021-01-25 11:40:01.766436: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
0%| | 0/7 [00:00<?, ?it/s]Traceback (most recent call last):
File "./finetune_trainer.py", line 373, in <module>
main()
File "./finetune_trainer.py", line 303, in main
train_result = trainer.train(
File "/home/pajansen/stass-test1/transformers/src/transformers/trainer.py", line 904, in train
tr_loss += self.training_step(model, inputs)
File "/home/pajansen/stass-test1/transformers/src/transformers/trainer.py", line 1271, in training_step
loss = self.compute_loss(model, inputs)
File "/home/pajansen/stass-test1/transformers/src/transformers/trainer.py", line 1301, in compute_loss
outputs = model(**inputs)
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/stass-test1/transformers/src/transformers/models/t5/modeling_t5.py", line 1704, in forward
encoder_outputs = self.encoder(
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/stass-test1/transformers/src/transformers/models/t5/modeling_t5.py", line 1088, in forward
outputs = block_pipe(inputs)
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipe.py", line 362, in forward
self.pipeline.run(batches)
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipeline.py", line 117, in run
self.compute(batches, schedule, skip_trackers)
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipeline.py", line 257, in compute
raise exc_info[0].with_traceback(exc_info[1], exc_info[2])
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/worker.py", line 79, in worker
batch = task.compute()
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/worker.py", line 60, in compute
return self._compute()
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipeline.py", line 222, in compute
return batch.call(partition)
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/microbatch.py", line 70, in call
return Batch(function(self.value))
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
input = module(input)
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/stass-test1/transformers/src/transformers/models/t5/modeling_t5.py", line 838, in forward
layer_outputs = self.layer_module(hidden_states,
File "/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'head_mask'
```
<|||||>It looks more like a warning as it recovers with a fallback, make sure you have:
```
$ cat /etc/hosts
127.0.0.1 localhost
```
It looks like I forgot to commit the last change. My apologies. Could you please update and try again?<|||||>Thanks -- appears to be working -- on t5-3B it spreads it evenly across the 4 A100s (13.0-13.3GB each with a batch size of 1). For t5-11B there's an out of memory error -- I suppose (naively) if 11b is ~3.7x larger than 3B then it would require ~49gb per card without some form of offloading?
<|||||>Thank you for confirming that you were able to use it with t5-3b on your 4 gpus.
Were you able to get a decent gpu utilization across the board? Or were they all under 25%?
--------------
Please make sure you read my notes on the balancing in OP and experiment with the device map so that all gpus get a balanced GPU memory usage. gpu0 is already busy with many things, so I'd try a spread of 2/4/4/4 parts or perhaps 1/3/3/3 in your definition of:
`--pipeline "chunks=4 device_map=0:0-3,1:3-12"`
in this example we have a 1/3 split between gpu 0 and gpu 1, i.e. 3 times more layers for gpu 1.
Of course, it needs to be adjusted to 4 gpus and I don't remember how many encoder blocks t5-11b has, but as I mentioned if you look at the logs you will find a ready map there, just re-adjust it to balance things better. Please let me know if I communicated clearly what I'm trying to say - we want all 4 gpus to have about the same memory usage - then we maximize the chance to fit t5-11b on those 4 gpus.
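For example, assuming a hypothetical 24-block stack (check the logged auto-map for the real number), a 3/7/7/7 split in the script form would look something like:
```python
# Hypothetical 4-gpu split for a 24-block stack -- adjust the ranges to the block
# count the logs report for your model; gpu 0 deliberately gets fewer blocks.
device_map = {
    0: list(range(0, 3)),    # gpu 0: 3 blocks
    1: list(range(3, 10)),   # gpu 1: 7 blocks
    2: list(range(10, 17)),  # gpu 2: 7 blocks
    3: list(range(17, 24)),  # gpu 3: 7 blocks
}
model.pipeline_enable(chunks=4, device_map=device_map)
```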
--------------
Next we need to try to bolt DeepSpeed on it. So we will try to use 2 gpus for pipeline and 2 gpus for ZeRO-DP and perhaps some ZeRO-Offload too. I should get access to 4 gpus soon and I will start working on figuring that step out. I will post back once I have something practical to share.<|||||>Thanks -- the autobalancing (just "chunks=4") actually seemed to give nearly entirely even results on -3B (the ~13.0-3GB each), so I tried that with 11B instead of manually supplying the device map (since it seemed a bit uneven when I tested on -base) -- but I'll tinker on 11B and report back. <|||||>> the autobalancing
FYI, currently the automatic device map just tries to split `n_layers/n_gpus` per gpu, without taking gpu0's extra load into account. Once everything else is working we will come up with much better heuristics based on actual gpu capacity and each layer's real memory demands.
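Roughly, the current heuristic amounts to something like this (a sketch for illustration, not the exact code):
```python
def naive_device_map(n_layers, n_gpus):
    per_gpu = n_layers // n_gpus
    device_map = {gpu: list(range(gpu * per_gpu, (gpu + 1) * per_gpu)) for gpu in range(n_gpus)}
    device_map[n_gpus - 1].extend(range(n_gpus * per_gpu, n_layers))  # any leftover blocks
    return device_map

print(naive_device_map(12, 2))  # {0: [0, 1, 2, 3, 4, 5], 1: [6, 7, 8, 9, 10, 11]}
```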
<|||||>What's interesting is that I'm not generally observing GPU0 to have a higher load. Here's an example with unifiedqa-t5-3b (essentially just a further pre-trained t5-3b, not relevant here), chunks=4, autobalancing (with a different visualization tool). They all tend to show about the same RAM usage over time. The graph also shows the utilization (also generally under 30% most of the time):

BTW -- I tinkered with different manual device_map settings for t5-11b, but it always quickly gave out of memory errors.<|||||>Oh, what tool is that? I want it too!
It looks like different GPUs behave differently, it will take some experimentation to make sense of it all.
But clearly you're also not seeing much benefit from the pipeline over the naive MP. Same here. Either my workaround to make it work slows everything down, or there is another problem elsewhere. As I mentioned, I'd like to redesign my implementation in the hope
of reducing the unnecessary logic and data-copying.
> BTW -- I tinkered with different manual device_map settings for t5-11b, but it always quickly gave out of memory errors.
Thank you for the experimentation. I'm still waiting to get access to a 4-gpu setup and when it happens will immediately start experimenting with bolting DeepSpeed on it and then will get back to you.<|||||>Thanks -- this handy cool visualization tool is nvtop -- I just found it to plot the relative changes rather than stare at nvidia-smi and hope to keep it all in my brain. It's available with apt ( sudo apt-get install nvtop ).
Happy to offer my rig for some testing if you need a 4 GPU setup sooner. :) <|||||>Oh, yes, I had it and forgot about its usefulness. Thank you!
I typically use
```
alias wn='watch -n 1 nvidia-smi'
```
but this is a way better.
> Happy to offer my rig for some testing if you need a 4 GPU setup sooner. :)
If I don't find access by tomorrow I will gladly accept your generous offer, @PeterAJansen. Thank you!<|||||>hmm, how do you get a split screen per card in nvtop? for some reason my version reports both cards as one card. I don't see any command line options to configure that.<|||||>hmmm, it actually worked out-of-the-box for me (but looks very different depending on the dimensions of the terminal). Does it show only 1 GPU (with memory for both?), or two separate GPUs? <|||||>It reports 2 gpus but shows the report only for gpu 0. Could be a bug. I just saw that for you it showed all 4 gpus.

<|||||>What happens if you make the window really tall/wide? It changes the display for me if I resize the terminal -- if I make it really tiny, it looks something like yours:

<|||||>Sorry, I forgot to mentioned I tried this already to no avail. I give it a huge console.
I even tried various terminals - same.
I think it may have to do with my 2nd card being rtx-3090 - and it doesn't work with cuda < 11.1 - most likely nvtop was built against cuda-10, so while it replicates the nvidia-smi stats, it can't access nvml for that card and thus doesn't show the graph.
Yup, installed nvtop on a machine with 2 normal gpus and it shows them both in the same-size terminal. So it just can't handle rtx-30* unless it's rebuilt from source against cuda-11.1+
But even then when it works it gives no way to separate the 2 gpu other than colors and 4 lines often around the same magnitude for different things are impossible to make sense of. This is an odd design. <|||||>:-/ That's unfortunate (though I suppose the cost of using bleeding-edge hardware). The A100s are supported with CUDA 11.0, so they must just squeak in on the current version available with apt.
(And, the usability is a little unusual, but with ASCII graphics there are strong limits... :) )<|||||>pytorch w/ cuda-11.2 nightly should be available any day now. cuda-11.2 has been out for a month now.
>(And, the usability is a little unusual, but with ASCII graphics there are strong limits... :) )
This is a good point. But at least one could use a double line or asterisks or something to differentiate 4 different things. Perhaps some people can track 4 similar colors and remember which is which. Not me. I guess the source code is there, if I really need to I could probably hack it to do be more user-friendly.<|||||>Update: this overload of the term MP to mean totally different things is a big problem.
I was sure I could easily combine non-DeepSpeed pipeline with Deepspeed after reading
https://www.deepspeed.ai/features/#support-for-custom-model-parallelism
Except, I have just now realized that it's not PP but the super-confusing-mean-different-things-in-different-contexts abbreviation MP, which in this particular context means horizontal MP and not vertical MP/PP. And there are no instructions on how to integrate non-DeepSpeed PP. So I have been trying to fix the wrong thing. https://github.com/microsoft/DeepSpeed/issues/710
So this particular branch takes us nowhere closer to integration of PP with DeepSpeed.
Back to the drawing board.<|||||>too long. closing.<|||||>We will test this branch soon.<|||||>There are probably some things that can be salvaged from this PR, but the main utility of it is to see the difficulties I run into. And of course, this is not a good solution not only because the code is absolutely nuts, but because it's very inefficient.
As I mentioned in the other thread, pytorch now has a better API, so some of the encoding/decoding of non-tensor inputs/outputs I did won't be needed anymore as it now supports non-tensor inputs/output. |
transformers | 9,764 | closed | index mismatch in "offset_mapping" with TokenizerFast and pre-tokenized input | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## Information
I am not sure if this is the expected behavior or not, but when I use BertTokenizerFast with pre-tokenized input (so I set the parameter "is_split_into_words" to True) I get a mismatch in the offset_mapping. It treats every token as standalone and restarts the start index from zero.
## To reproduce
Steps to reproduce the behavior:
```
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
tokenized_one_string = tokenizer(
["This is not what i was expecting"],
return_offsets_mapping=True,
)
tokenized_with_pre_tokenized_input = tokenizer(
[["This", "is", "not", "what", "i", "was", "expecting"]],
is_split_into_words=True,
return_offsets_mapping=True,
)
print(tokenized_one_string["offset_mapping"])
print(tokenized_with_pre_tokenized_input["offset_mapping"])
```
and this is the output:
```
[[(0, 0), (0, 4), (5, 7), (8, 11), (12, 16), (17, 18), (19, 22), (23, 32), (0, 0)]]
[[(0, 0), (0, 4), (0, 2), (0, 3), (0, 4), (0, 1), (0, 3), (0, 9), (0, 0)]]
```
## Expected behavior
I expected to get the same "offset_mapping" even from the pre-tokenized input.
| 01-24-2021 00:07:22 | 01-24-2021 00:07:22 | Hi @simoneorlando
This is indeed intended behavior. The values in `offset_mapping` return a mapping to the original input, and when you provide pre-tokenized input, each of them is treated individually. In this case, you can use the word mapping to know where you should do your extraction:
```python
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
# Pre-tokenized input
labels = ["This", "is", "not", "what", "i", "was", "expecting"]
tokenized_with_pre_tokenized_input = tokenizer(
labels,
is_split_into_words=True,
return_offsets_mapping=True,
)
print(tokenized_with_pre_tokenized_input["offset_mapping"])
for token_id, offsets in enumerate(tokenized_with_pre_tokenized_input["offset_mapping"]):
word_id = tokenized_with_pre_tokenized_input.token_to_word(token_id)
if word_id is not None:
print(offsets, labels[word_id][offsets[0] : offsets[1]])
```
Gives the following output:
```python
[(0, 0), (0, 4), (0, 2), (0, 3), (0, 4), (0, 1), (0, 3), (0, 9), (0, 0)]
(0, 4) This
(0, 2) is
(0, 3) not
(0, 4) what
(0, 1) i
(0, 3) was
(0, 9) expecting
``` |
transformers | 9,763 | closed | squad_v2 crashes during evaluation | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.9.5-64-nvidia-418.43-x86_64-with-debian-jessie-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* [ ] the official example scripts: examples/question-answering/run_qa.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: SQuAD **v2**
## To reproduce
Steps to reproduce the behavior:
Run:
```
python run_qa.py --model_name_or_path bert-base-uncased --dataset_name squad_v2 --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/ --evaluation_strategy steps --fp16
```
```
***** Running Evaluation *****
Num examples = 12134
Batch size = 8
01/23/2021 21:32:15 - INFO - utils_qa - Post-processing 11873 example predictions split into 12134 features.#######################################################6| 1514/1517 [01:04<00:00, 23.34it/s]
100%|##############################################################################################################################################################| 11873/11873 [00:40<00:00, 291.51it/s]
01/23/2021 21:32:56 - INFO - utils_qa - Saving predictions to /tmp/debug_squad/predictions.json.####################################################################| 1517/1517 [01:23<00:00, 23.34it/s]
01/23/2021 21:32:56 - INFO - utils_qa - Saving nbest_preds to /tmp/debug_squad/nbest_predictions.json.##########################################################9| 11868/11873 [00:40<00:00, 303.46it/s]
Traceback (most recent call last):
File "run_qa.py", line 495, in <module>
main()
File "run_qa.py", line 457, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/olab/kirstain/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 929, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/olab/kirstain/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1004, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/specific/netapp5_3/rent_public/olab-01-08-2021/kirstain/transformers/examples/question-answering/trainer_qa.py", line 63, in evaluate
metrics = self.compute_metrics(eval_preds)
File "run_qa.py", line 439, in compute_metrics
return metric.compute(predictions=p.predictions, references=p.label_ids)
File "/home/olab/kirstain/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/metric.py", line 398, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/specific/netapp5_3/rent_public/olab-01-08-2021/kirstain/.cache/huggingface/modules/datasets_modules/metrics/squad/4791a1e1b37b2b0b8d8d4b7d4793349432fe03a61be5b08c8b30c6b4d86363f1/squad.py", li$e 100, in _compute
score = evaluate(dataset=dataset, predictions=pred_dict)
File "/specific/netapp5_3/rent_public/olab-01-08-2021/kirstain/.cache/huggingface/modules/datasets_modules/metrics/squad/4791a1e1b37b2b0b8d8d4b7d4793349432fe03a61be5b08c8b30c6b4d86363f1/evaluate.py", line 68, in evaluate
exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths)
File "/specific/netapp5_3/rent_public/olab-01-08-2021/kirstain/.cache/huggingface/modules/datasets_modules/metrics/squad/4791a1e1b37b2b0b8d8d4b7d4793349432fe03a61be5b08c8b30c6b4d86363f1/evaluate.py", line 53, in metric_max_over_ground_truths
return max(scores_for_ground_truths)
ValueError: max() arg is an empty sequence
```
## Expected behavior
completing the evaluation without an exception
Thank you! :) | 01-23-2021 19:44:26 | 01-23-2021 19:44:26 | It might be because I didn't provide the version_2_with_negative argument. Sorry! :) <|||||>https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering mentions it. Although it is likely to be ignored. : )
transformers | 9,762 | closed | Fix a typo in `Trainer.hyperparameter_search` docstring | `compute_objectie` => `compute_objective`
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo in `Trainer.hyperparameter_search` docstring: `"compute_objectie"` => `"compute_objective"`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 01-23-2021 19:40:50 | 01-23-2021 19:40:50 | |
transformers | 9,761 | closed | Fix broken [Open in Colab] links (#9688) | Resolves https://github.com/huggingface/transformers/issues/9688 | 01-23-2021 09:28:49 | 01-23-2021 09:28:49 | ping @patil-suraj <|||||>Thank you for fixing this! |
transformers | 9,760 | closed | fix text summarization evaluation bugs when calculate rouge |
# What does this PR do?
Fix text summarization evaluation bugs when calculating ROUGE.
This is important for computing ROUGE correctly.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patil-suraj @sshleifer @patrickvonplaten
| 01-23-2021 02:30:27 | 01-23-2021 02:30:27 | only two lines changed but it took much effort to pass the checks!
I think it may be important for the evaluation result.
They are simple but true bugs<|||||>Great catch @ShichaoSun and thank you for working on this!
We are in the process of finishing the [new standalone seq2seq script](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) with the same functionality but without depending on the utils or other helpers. The utils and other helpers will probably be removed once we completely test the new script.<|||||>Great job! Looking forward to seeing that. |
transformers | 9,759 | closed | fix a small bug | swap pred_lns with tgt_lns
just a typo | 01-23-2021 02:17:01 | 01-23-2021 02:17:01 | |
transformers | 9,758 | closed | save tokenizer and model from fine tuned LED model | Hello, I have been following this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=jpUr9QeebZ-n) to fine-tune the `patrickvonplaten/led-large-16384-pubmed` model on my own data. However, after training, when I tried doing:
```python
model.save_pretrained("new_model")
tokenizer.save_pretrained("new_model")
```
to save the model and tokenizer, I noticed when I check out the `config.json` for it, it says
`"_name_or_path": "patrickvonplaten/led-large-16384-pubmed",`
that said, is it actually saving the fine tuned model or just resaving `patrickvonplaten/led-large-16384-pubmed`? I'd greatly appreciate any feedback on this | 01-22-2021 20:21:24 | 01-22-2021 20:21:24 | |
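For reference, one way to sanity-check this (a sketch only -- the attribute path assumes the BART-style layout of LED, and `_name_or_path` in `config.json` is just metadata recording where the config was originally loaded from, not which weights were saved):
```python
import torch
from transformers import LEDForConditionalGeneration

finetuned = LEDForConditionalGeneration.from_pretrained("new_model")
original = LEDForConditionalGeneration.from_pretrained("patrickvonplaten/led-large-16384-pubmed")

same = torch.equal(
    finetuned.led.encoder.layers[0].fc1.weight,
    original.led.encoder.layers[0].fc1.weight,
)
print("identical to the hub checkpoint:", same)  # expected False if fine-tuning updated this layer
```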
transformers | 9,757 | closed | Extra indicators for BPE for Unicode Characters | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-45-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
tokenizers: @mfuntowicz
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* my own modified scripts: (give details below)
The tasks I am working on is:
* my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. The code
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
words = ['(cid:3)', 'ํ์จ์ต๋๊น', 'ํ๋ค']
tokenizer.batch_encode_plus(
[words],
max_length=512,
truncation=True,
padding=True,
is_split_into_words=True,
return_offsets_mapping=True,
return_special_tokens_mask=True,
return_tensors="pt",
)
```
2. The `offset_mapping` in the output is
```python
tensor([[[0, 0], # [CLS]
[0, 1], # for '('
[1, 4], # for 'cid'
[4, 5], # for ':'
[5, 6], # for '3'
[6, 7], # for ')'
[0, 5], # for 'ํ์จ์ต๋๊น'
[0, 1], # for 'ํ'
[0, 1], # for 'ํ'
[1, 2], # for '๋ค'
[1, 2], # for '๋ค'
[0, 0]]])
```
3. As you can see, it generates four tokens for `ํ๋ค`. The output is correct according to byte-pair encoding. However, it generates duplicated `[0,1]` and `[1,2]` entries, which changes the structure of the outputs (for regular words, there can only be one `[0,x]`, which can be used to project the encoded tokens back to their original positions). Therefore, we need extra indicators for positions where byte-pair encoding is used.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
1. An additional output showing the mapping for input_ids -> original_token_ids . In this case, it should be something like:
```
[0, 1, 1, 1, 1, 1, 2, 3, 3, 3, 3, 0]
```
Therefore, we could use this map to figure out that byte-code encoding is used for the 3rd token.
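In the meantime, something close to that mapping can already be derived from the fast tokenizer's word alignment, e.g. (a sketch; special tokens come back as `None` and word ids are 0-based rather than the 1-based list above):
```python
enc = tokenizer(words, is_split_into_words=True)
token_to_word = [enc.token_to_word(i) for i in range(len(enc["input_ids"]))]
print(token_to_word)
# e.g. [None, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, None]
```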
Updated - @n1t0 | 01-22-2021 18:34:27 | 01-22-2021 18:34:27 | Pinging @n1t0 <|||||>Thanks! I think it's something related to the Byte-Level Subwords trick (https://arxiv.org/pdf/1909.03341.pdf)? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,756 | closed | Remove a TF usage warning and rework the documentation | # What does this PR do?
Recently we moved the warning for boolean argument misused in TF models from `warnings.warning(...)` to `tf.Print` to avoid the overhelm of messages in the output. Nevertheless, the usage of `tf.Print` prevent all our TF models to be compiled and executed with XLA and quantized with TFLite because the `Print` operator is not supported in XLA and TFLite.
As a solution for both issues (logging overhelm + XLA compilation/execution) I propose to simply remove the logs and state the use case directly inside the documentation for all the TF models.
| 01-22-2021 17:31:22 | 01-22-2021 17:31:22 | Not in favor of this to be honest...Warnings / Print statements are super important IMO. It's weird that `tf.Print` is not supported in XLA -> how are logs / warnings / print statements then produced in XLA?<|||||>XLA/Graph/TFlite execution are not made to have things to be printed, this includes some of the assert usage as well.<|||||>@patrickvonplaten TF XLA has few other known issues https://www.tensorflow.org/xla/known_issues<|||||>> @patrickvonplaten TF XLA has few other known issues https://www.tensorflow.org/xla/known_issues
Thanks for sharing! I'm still very surprised by your message:
> Nevertheless, the usage of tf.Print prevent all our TF models to be compiled and executed with XLA and quantized with TFLite
To me, this means that every TF Repo that wants to be executable with XLA has no `tf.Print(...)` ops. That's a pretty hard requirement no? <|||||>> To me, this means that every TF Repo that wants to be executable with XLA has no tf.Print(...) ops. That's a pretty hard requirement no?
I agree, it is, but I see very rarely `tf.print(..)` to be used. As far as I know I never seen it implemented in official TF models during runtime (you can easily check with a quick search on https://github.com/tensorflow/models), usually it is used when evaluating a model, which is a use case that is not XLA related.<|||||>Can we use
```
import tensorflow as tf
tf_logger = tf.get_logger()
tf_logger.warn(xxx)
```
intead?<|||||>@sgugger yes, using the internal TF logger should work as expected, but I'm not sure if it will bring any conflict with the actual transformers logger in terms of configuration.
Do you want me to use it instead, and we will see later if there is indeed any conflict?<|||||>Yeah I think it would be best to have a warning the user can't silence with our centralized logging that none.<|||||>Ok done! I restored the warning but with the internal TF logger. |
transformers | 9,755 | closed | Fix a TF test | # What does this PR do?
Fix a mistakenly changed test.
| 01-22-2021 16:30:50 | 01-22-2021 16:30:50 | |
transformers | 9,754 | closed | Improve the `run_xlni` example to use the Datasets library | The [`run_xnli`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_xnli.py) example should be improved (following the model of [`run_glue`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py)) to use the Datasets library to download and preprocess the datasets.
Ideally, copying the `run_glue` example and adapting the relevant parts should be the way to go. | 01-22-2021 15:47:21 | 01-22-2021 15:47:21 | Really good idea! I was looking at the `run_xnli` script in the last few days - because I wanted to test the "Language-Agnostic" models from [here](https://github.com/AIPHES/Language-Agnostic-Contextualized-Encoders) - and datasets integration would make experiments a lot easier. |
transformers | 9,753 | closed | named_parameters not showing embedding matrix of RobertaLMHead (more a question than a bug) | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): RobertaForCausalLM
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load standard RobertaForCausalLM
2. Record the difference between .named_parameters() and .state_dict() of the model
```
from transformers import RobertaForCausalLM, RobertaConfig
import torch
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaForCausalLM.from_pretrained('roberta-base', config=config)
named_params = [n for n, _ in model.named_parameters()]
print("Difference: ", [n for n in list(model.state_dict().keys()) if n not in named_params])
```
This outputs:
```
Difference: ['roberta.embeddings.position_ids', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
```
## Expected behavior
I would expect the parameters `'lm_head.decoder.weight'` and `'lm_head.decoder.bias'` to show up in `.named_parameters()` - why don't they? That `'roberta.embeddings.position_ids'` does not show up in `.named_parameters()` is expected, as it is not a learned parameter but just a helper buffer for getting the position embeddings, and it is always the same I believe.
I would like to tie my input embedding matrix weights to my output embedding matrix. But now I am not really sure how to go about this. I thought of just doing:
`model.lm_head.decoder.weight = model.roberta.embeddings.word_embeddings.weight` but because of this thing with the named_parameters, I am not sure if this will work as expected. Also, the output embedding has a bias `lm_head.decoder.bias`, while the input embeddings don't. My goal is to have the initial hidden state space be the same space as the last hidden state space.
Thanks in advance,
Claartje
| 01-22-2021 15:46:59 | 01-22-2021 15:46:59 | I found out that the input embedding and output embedding weights are tied by default, which is why the output embedding weights end up in state_dict, but not in the named_parameters (they are overcomplete).<|||||>I've found this issue too when trying to log parameters/gradients to Weights & Biases. It doesn't log `roberta.lm_head.decoder`.
I can't quite work out the logic of [this code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py#L1143-L1145) in `RobertaLMHead`:
```py
self.decoder = nn.Linear(config.hidden_size, config.vocab_size)
self.bias = nn.Parameter(torch.zeros(config.vocab_size))
self.decoder.bias = self.bias
```
If I understand what @ClaartjeBarkhof is saying, this is the reason for `decoder` not showing up in `named_parameters()`. Is that right? I can't quite make the connection between tying weights and then the thing no longer being considered a named parameter. And follow up question, what does that code actually do? How does it differ from just having the first line?
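If it helps, the deduplication is plain PyTorch behaviour rather than anything Roberta-specific: `named_parameters()` yields each `Parameter` object only once, while `state_dict()` lists every name that points at it. A toy illustration (not the Roberta code):
```python
import torch
from torch import nn


class Tied(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(10, 4)
        self.decoder = nn.Linear(4, 10)
        self.decoder.weight = self.embed.weight  # tie the two weights


m = Tied()
print([name for name, _ in m.named_parameters()])
# ['embed.weight', 'decoder.bias'] -- the shared weight shows up only once
print(list(m.state_dict().keys()))
# ['embed.weight', 'decoder.weight', 'decoder.bias'] -- both names are kept here
```
For the actual model, checking `model.get_input_embeddings().weight is model.get_output_embeddings().weight` should return `True` when `config.tie_word_embeddings` is left at its default, which is a quick way to confirm the tying.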
|
transformers | 9,752 | closed | Improve PyTorch examples for FP16 | To get the full speed-up of FP16 training, every tensor passed through the model should have all its dimensions be a multiple of 8. In the new PyTorch examples, when using dynamic padding, the tensors are padded to the length of the biggest sentence of the batch, but that number is not necessarily a multiple of 8.
The examples should be improved to pass along the option `pad_to_multiple_of=8` when `fp16` is True, if using a data collator that applies padding (or replace the `None` passed along to `Trainer` for `data_collator` by a `DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)`). | 01-22-2021 15:44:49 | 01-22-2021 15:44:49 | Hi @sgugger
This is not done, although the issue is closed - please have a look at https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
for the line-by-line mode, padding is not a multiple of 8 - thanks
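A sketch of what the suggested change could look like in one of the scripts (illustrative only -- `tokenizer`, `model`, `training_args` and `train_dataset` are assumed to be set up as in the examples):
```python
from transformers import DataCollatorWithPadding, Trainer

# Pad every batch to a multiple of 8 only when fp16 is enabled, so all dims stay
# Tensor-Core friendly; otherwise fall back to plain dynamic padding.
data_collator = DataCollatorWithPadding(
    tokenizer, pad_to_multiple_of=8 if training_args.fp16 else None
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)
```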
transformers | 9,751 | closed | AdaFactor: avoid updating group["lr"] attributes | This affects Adafactor with `relative_step=False` and `scale_parameter=True`.
Updating `group["lr"]` makes the result of ._get_lr() depend on the previous call, i.e., on the scale of other parameters. This isn't supposed to happen.
# What does this PR do?
I've observed weird behaviors when using Adafactor with `relative_step=False` and `scale_parameter=True` and an LR scheduler. I think the problem is that the code [updates the `lr` attribute of the current parameter group](https://github.com/huggingface/transformers/blob/490b39e6142ca8f2ccb84c5436402899ae54e44f/src/transformers/optimization.py#L549), and then uses the updated attribute to [calculate the next attribute](https://github.com/huggingface/transformers/blob/490b39e6142ca8f2ccb84c5436402899ae54e44f/src/transformers/optimization.py#L469). I don't think this is supposed to happen.
A simple fix would be replacing the update operation with an assignment to a local variable.
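To illustrate the compounding effect (a toy simulation, not the library code):
```python
# Stand-ins for max(eps2, RMS(p)) of three parameters in the same group.
param_scales = [10.0, 0.1, 5.0]

def get_lr(group, scale):
    return group["lr"] * scale  # mimics _get_lr with relative_step=False, scale_parameter=True

buggy, fixed = [], []

group = {"lr": 1e-3}
for scale in param_scales:
    group["lr"] = get_lr(group, scale)  # current behaviour: write the scaled lr back
    buggy.append(group["lr"])

group = {"lr": 1e-3}
for scale in param_scales:
    lr = get_lr(group, scale)  # proposed fix: keep it in a local variable
    fixed.append(lr)

print(buggy)  # each value depends on all the previous parameters' scales
print(fixed)  # each value depends only on that parameter's own scale
```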
I'm not entirely sure if I understand the problem correctly, so I apologize in advance if this is a stupid PR. I'd appreciate it if someone could point out where I am wrong. Thanks!
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@moscow25 @sshleifer
| 01-22-2021 15:36:27 | 01-22-2021 15:36:27 | Can you provide evidence that supports the following:
> Updating group["lr"] makes the result of ._get_lr() depends on the previous call, i.e., on the scale of other parameters. **This isn't supposed to happen.**
Thanks!
<|||||>> Can you provide evidence that supports the following:
>
> > Updating group["lr"] makes the result of ._get_lr() depends on the previous call, i.e., on the scale of other parameters. **This isn't supposed to happen.**
>
> Thanks!
Hi,
Thanks for the quick reply.
This is taken from the AdaFactor paper (screenshots of the step-size definitions, omitted here):
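From memory, the relevant definitions read roughly as follows (my own transcription, so please double-check against the paper):
```
\alpha_t = \max\left(\epsilon_2, \operatorname{RMS}(X_{t-1})\right)\,\rho_t,
\qquad
\rho_t = \min\left(10^{-2},\, \tfrac{1}{\sqrt{t}}\right) \quad \text{(proposed relative step size)}
```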


As you can see, ρ only depends on the step number if we use relative steps. And if we switch to any other learning rate schedule (in my case, linear warmup + cosine decay), it doesn't make sense to make the ρ part depend on the scale of the other parameters, nor can I find any reference to this approach in the paper.
If we (loosely) factor the α<sub>t</sub> in the original implementation into α<sub>i,t</sub>, where `i` indexes the parameters in the `for p in group["params"]` loop, then the original implementation essentially made α<sub>i,t</sub> depend on α<sub>i-1,t</sub> (i.e., it made ρ<sub>i,t</sub> = α<sub>i-1,t</sub>).
<|||||>> I've observed weird behaviors when using Adafactor with relative_step=False and scale_parameter=True and an LR scheduler.
I should probably clarify what I meant by "weird behaviors." The model (T5 v1.1) never converged when trained with Adafactor with `relative_step=False` and `scale_parameter=True`. After this patch, I managed to get convergence and even better results than the built-in LR schedule in the `relative_step=True` mode (with `warmup_init=True`).
This looks like a reasonable change to me!<|||||>Thank you all for your time and for accepting the patch! Glad to have made a tiny contribution to this great library.
> BTW, if you have some working code for how to train a `google/t5v1_1` model I think it would be super helpful to post it here, on the forum or as a community notebook! Many people have been asking for good t5v1_1 training scripts :-)
I don't have anything that is sufficiently readable yet. Nonetheless, I have these notebooks published on Kaggle that use the patched Adafactor: one for [T5 v1.1](https://www.kaggle.com/ceshine/preprocess-and-finetune-t5-1-1-full/) and one for [mT5](https://www.kaggle.com/ceshine/preprocess-and-finetune-mt5). They are based on this [Github repo](https://github.com/ceshine/finetuning-t5/tree/mt5-classifier-trim-lm-head/mnli), which is quite messy at this moment. The part that set up the optimizer is located [here](https://github.com/ceshine/finetuning-t5/blob/de34e0c735568d00f9244e0b6f019c3f5cb64576/mnli/train.py#L314).
|
transformers | 9,750 | closed | ValueError: Couldn't instantiate the backend tokenizer while loading model tokenizer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Colab
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@mfuntowicz @patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
T5
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
https://github.com/allenai/unifiedqa Loading the model mentioned here for tokenizer does not work
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Follow the instructions here https://github.com/allenai/unifiedqa to get the sample code
2. Copy paste it in Colab to run it.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "allenai/unifiedqa-t5-small" # you can specify the model size here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
return tokenizer.batch_decode(res, skip_special_tokens=True)
```
## Expected behavior
The following code should load the model without errors.
## Error
But the following error is obtained:
```
ValueError Traceback (most recent call last)
<ipython-input-4-ee10e1c1c77e> in <module>()
2
3 model_name = "allenai/unifiedqa-t5-small" # you can specify the model size here
----> 4 tokenizer = AutoTokenizer.from_pretrained(model_name)
5 model = T5ForConditionalGeneration.from_pretrained(model_name)
6
4 frames
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
94 else:
95 raise ValueError(
---> 96 "Couldn't instantiate the backend tokenizer from one of: "
97 "(1) a `tokenizers` library serialization file, "
98 "(2) a slow tokenizer instance to convert or "
ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
```
<!-- A clear and concise description of what you would expect to happen. -->
| 01-22-2021 13:02:12 | 01-22-2021 13:02:12 | Hey @rsanjaykamath,
I cannot reproduce the error on `master`. When running:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "allenai/unifiedqa-t5-small" # you can specify the model size here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
I don't encounter any errors...could you try to update transformers to the newest version and try again?<|||||>Hi @patrickvonplaten ,
That's strange. I just tried it on Colab with the version 4.2.2 of transformers and the same error occurs again.
Have you tried it on colab? or local machine?
<|||||>I see it's the classic sentencepiece error - I should have read your error message more carefully ;-)
Here the colab to show how it works: https://colab.research.google.com/drive/1QybYdj-1bW0MHD0cutWBPWas5IFEhSjC?usp=sharing<|||||>Also see: https://github.com/huggingface/transformers/issues/8963<|||||>Ok got it. Installing sentencepiece and restarting the kernel did the trick for me.
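For anyone else hitting this, the fix is simply `pip install sentencepiece` (or `pip install "transformers[sentencepiece]"`), followed by restarting the runtime so the tokenizer backend picks it up.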
Thanks for your help :) Closing the issue. <|||||>I think the error message should be more clear |
transformers | 9,749 | closed | Use object store to pass trainer object to Ray Tune (makes it work with large models) | # What does this PR do?
Tuning large models with Ray Tune did not work recently. By passing the trainer object via the object store we avoid serialization of the global object, fixing these issues.
I could reproduce the issue in #9146 on an AWS p2.xlarge node and could confirm it is resolved by these changes.
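For illustration, a minimal sketch of the general object-store pattern (the names are illustrative; this is not the actual PR code):
```python
import ray

ray.init(ignore_reinit_error=True)

big_object = {"payload": list(range(1_000_000))}  # stand-in for a large Trainer object
obj_ref = ray.put(big_object)                     # stored once in the object store

@ray.remote
def run_trial():
    local = ray.get(obj_ref)   # each trial fetches it by reference
    return len(local["payload"])

print(ray.get(run_trial.remote()))  # avoids re-pickling the big object for every trial
```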
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9146
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-22-2021 11:36:45 | 01-22-2021 11:36:45 | |
transformers | 9,748 | closed | Trainer object empties dataset | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.14.81.bm.20-amd64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...): XLM-Roberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: NER
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a multilingual dataset by concatenating a handful of languages from wikiann
2. Instantiate the model
3. Instantiate the Trainer object

## Expected behavior
The dataset object should not have been modified by the Trainer object | 01-22-2021 10:19:31 | 01-22-2021 10:19:31 | Your dataset is not really emptied (I agree it looks like this however). It's just viewed without the columns the model can accept. You can restore all your columns with
```
dataset["train"].set_format(columns=list(dataset["train"].features.keys())
```
or more simply
```
dataset["train"].reset_format()
```
I agree that this is strange and we're seeing how we can have the same behavior without changing the dataset you pass to Trainer.
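As a side note (a hedged sketch, assuming a recent enough version): if you prefer the Trainer not to hide any columns at all, there is a flag for it:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",
    remove_unused_columns=False,  # keep all dataset columns visible to the data collator
)
```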
Note that you didn't preprocess your data, so you won't be able to train in this example.
<|||||>Hi Sylvain,
Thanks so much for prompt response.
I was not aware of this behaviour so was left very puzzled. I tried what you suggested and indeed got back all my data.
Yes, I agree with you, the data was not yet ready for training. I was just trying to pinpoint at which stage in my script the dataset became "empty".
Thanks again :) |
transformers | 9,747 | closed | mT5 additional_special_tokens seems not to work | I want to add some special tokens such as `<POS>` and `<CON_START>`. But T5Tokenizer/MT5Tokenizer both can't tokenize correctly after setting the additional_special_tokens parameter; they still split these special tokens into subwords.
<img width="1015" alt="ๆชๅ 2021-01-22 ไธๅ5 18 00" src="https://user-images.githubusercontent.com/26171212/105473581-3d88d100-5cd8-11eb-8568-6fedd19513e2.png">
It works when using OpenAIGPTTokenizer's additional_special_tokens parameter. It's clear that after declaring the additional_special_tokens parameter, OpenAIGPTTokenizer tokenizes `<POS>` as one token rather than splitting it.
<img width="979" alt="ๆชๅ 2021-01-22 ไธๅ5 54 57" src="https://user-images.githubusercontent.com/26171212/105475970-049e2b80-5cdb-11eb-8470-576fd8f38999.png">
<img width="697" alt="ๆชๅ 2021-01-22 ไธๅ5 55 10" src="https://user-images.githubusercontent.com/26171212/105475992-0962df80-5cdb-11eb-9cae-205b57818e95.png">
The version of transformers is 4.2.2
And I'm not sure whether this problem is related to [issue624](https://github.com/google-research/text-to-text-transfer-transformer/issues/624) in T5, which talks about the SentencePiece extra vocab.
Thank you for your feedback | 01-22-2021 09:45:31 | 01-22-2021 09:45:31 | Hi! Could you either post a link to your notebook or your code as actual code? Images makes it impossible to copy/paste or for others to search with similar issues. Thanks.<|||||>> Hi! Could you either post a link to your notebook or your code as actual code? Images makes it impossible to copy/paste or for others to search with similar issues. Thanks.
Here is colab notebook link. Thanks for your reply.
https://colab.research.google.com/drive/1fbp7VvnUvbf5r8CSitOg2pDZyM47y9xj?usp=sharing<|||||>Hi! Indeed, this is a bit misleading. Special tokens are considered as those that were in the pre-training, that is: unknown tokens, bos tokens, eos tokens, etc.
If you want to use special tokens that you use as special tokens, I would argue it is better to define them as simple tokens. Therefore doing the following:
```py
>>> from transformers import MT5Tokenizer, MT5ForConditionalGeneration
... import torch
... special_tokens = ['<POS>', '<NEG>','<CON_START>','<START>','<END>'] # Set the special tokens
... mt5_add_tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
... mt5_add_tokenizer.add_tokens(special_tokens)
... print(mt5_add_tokenizer.tokenize("<POS> <CON_START> the biscuits and gravy were . <START>"))
```
You'll get the following output:
```out
['<POS>', '<CON_START>', '▁the', '▁b', 'iscuit', 's', '▁and', '▁grav', 'y', '▁were', '▁', '.', '<START>']
````
Let me know if that makes sense.<|||||>Thank you very much, it works well.
I was confused by [issue5940](https://github.com/huggingface/transformers/issues/5940), which mentioned that "special tokens are carefully handled by the tokenizer (they are never split)". If I use the add_tokens() method, could the additional simple tokens still be split?
In addition, the code in Colab notebook as above link, OpenaiTokenizer should use add_tokens method rather than add_special_tokens (define them as a simple tokens) ?<|||||>For tokens that cannot be identified as being either:
- `eos_token`
- `bos_token`
- `cls_token`
- `unk_token`
- `pad_token`
- `sep_token`
- `mask_token`
Then I would recommend using the `add_token` method. These tokens shouldn't split either:
```py
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> tokenizer.add_tokens(["<CON_START>", "<CON_ST"])
2
>>> tokenizer.tokenize("<CON_START>")
['<CON_START>']
>>> tokenizer.tokenize("<CON_STAR>")
['<CON_ST', 'AR', '>']
```<|||||>Thank for your help and recommend.<|||||>> For tokens that cannot be identified as being either:
>
> * `eos_token`
> * `bos_token`
> * `cls_token`
> * `unk_token`
> * `pad_token`
> * `sep_token`
> * `mask_token`
>
> Then I would recommend using the `add_token` method. These tokens shouldn't split either:
>
> ```python
> >>> from transformers import BertTokenizer
> >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
> >>> tokenizer.add_tokens(["<CON_START>", "<CON_ST"])
> 2
> >>> tokenizer.tokenize("<CON_START>")
> ['<CON_START>']
> >>> tokenizer.tokenize("<CON_STAR>")
> ['<CON_ST', 'AR', '>']
> ```
Hi, it seems `add_tokens` is not working with `AutoTokenizer`, but works with a specifically defined tokenizer like `BertTokenizer`:
```python
special_tokens = ["-Title-"]
#tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(special_tokens)
tokenizer.tokenize("-Title-")
# ['-', 'title', '-']
```<|||||>He! Thanks for reporting. This should be fixed by #23909 |
transformers | 9,746 | closed | Fix an efficiency-related bug in the "prediction_loop" of trainer_tf.py | # The "numpy.append" method is not suitable for large evaluation/test datasets
It causes a blocked-memory-ops issue (i.e., the system waits for a large area of contiguous memory to expand the array), because a numpy array requires contiguous memory and numpy will try to reallocate RAM whenever the current capacity is not enough.
### This procedure is quite slow (as if the prediction loop were blocked); tested when the dataset size is larger than 10K.
### We need to use the concatenate strategy (for efficiency) or a batch-generator strategy (when the data size is very large), as sketched below.
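A minimal sketch of the accumulate-then-concatenate pattern (the shapes are illustrative):
```python
import numpy as np

batches = [np.random.rand(32, 10) for _ in range(100)]  # stand-in for per-batch predictions

# np.append reallocates and copies the growing array on every call (O(n^2) overall).
# Accumulating in a Python list and concatenating once avoids that:
preds = []
for logits in batches:
    preds.append(logits)
all_preds = np.concatenate(preds, axis=0)  # single allocation, shape (3200, 10)
```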
### The method I provided here will significantly boost the prediction speed. | 01-22-2021 09:32:53 | 01-22-2021 09:32:53 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,745 | closed | fine tune patrickvonplaten/longformer2roberta-cnn_dailymail-fp16 using LED updates | I was wondering if there was any way to fine tune the
`patrickvonplaten/longformer2roberta-cnn_dailymail-fp16` model instead of `patrickvonplaten/led-large-16384-pubmed`? When I tried fine tuning it in the past I ran into the
`TypeError: forward() got an unexpected keyword argument 'head_mask'` issue given that `EncoderDecoderModel` wasn't intended for longformer. So I'm now trying to see if I can use `LEDForConditionalGeneration` for it but I noticed when I try doing:
```python
from transformers import LEDTokenizer, LEDForConditionalGeneration
article = """(CNN)James Holmes made his introduction to the world in a Colorado cinema filled with spectators watching a midnight showing of the new Batman movie, "The Dark Knight Rises," in June 2012. The moment became one of the deadliest shootings in U.S. history. Holmes is accused of opening fire on the crowd, killing 12 people and injuring or maiming 70 others in Aurora, a suburb of Denver. Holmes appeared like a comic book character: He resembled the Joker, with red-orange hair, similar to the late actor Heath Ledger\'s portrayal of the villain in an earlier Batman movie, authorities said. But Holmes was hardly a cartoon. Authorities said he wore body armor and carried several guns, including an AR-15 rifle, with lots of ammo. He also wore a gas mask. Holmes says he was insane at the time of the shootings, and that is his legal defense and court plea: not guilty by reason of insanity. Prosecutors aren\'t swayed and will seek the death penalty. Opening statements in his trial are scheduled to begin Monday. Holmes admits to the shootings but says he was suffering "a psychotic episode" at the time, according to court papers filed in July 2013 by the state public defenders, Daniel King and Tamara A. Brady. Evidence "revealed thus far in the case supports the defense\'s position that Mr. Holmes suffers from a severe mental illness and was in the throes of a psychotic episode when he committed the acts that resulted in the tragic loss of life and injuries sustained by moviegoers on July 20, 2012," the public defenders wrote. Holmes no longer looks like a dazed Joker, as he did in his first appearance before a judge in 2012. He appeared dramatically different in January when jury selection began for his trial: 9,000 potential jurors were summoned for duty, described as one of the nation\'s largest jury calls. Holmes now has a cleaner look, with a mustache, button-down shirt and khaki pants. In January, he had a beard and eyeglasses. If this new image sounds like one of an academician, it may be because Holmes, now 27, once was one. Just before the shooting, Holmes was a doctoral student in neuroscience, and he was studying how the brain works, with his schooling funded by a U.S. government grant. Yet for all his learning, Holmes apparently lacked the capacity to command his own mind, according to the case against him. A jury will ultimately decide Holmes\' fate. That panel is made up of 12 jurors and 12 alternates. They are 19 women and five men, and almost all are white and middle-aged. The trial could last until autumn. When jury summonses were issued in January, each potential juror stood a 0.2% chance of being selected, District Attorney George Brauchler told the final jury this month. He described the approaching trial as "four to five months of a horrible roller coaster through the worst haunted house you can imagine." The jury will have to render verdicts on each of the 165 counts against Holmes, including murder and attempted murder charges. Meanwhile, victims and their relatives are challenging all media outlets "to stop the gratuitous use of the name and likeness of mass killers, thereby depriving violent individuals the media celebrity and media spotlight they so crave," the No Notoriety group says. They are joined by victims from eight other mass shootings in recent U.S. history. 
Raised in central coastal California and in San Diego, James Eagan Holmes is the son of a mathematician father noted for his work at the FICO firm that provides credit scores and a registered nurse mother, according to the U-T San Diego newspaper. Holmes also has a sister, Chris, a musician, who\'s five years younger, the newspaper said. His childhood classmates remember him as a clean-cut, bespectacled boy with an "exemplary" character who "never gave any trouble, and never got in trouble himself," The Salinas Californian reported. His family then moved down the California coast, where Holmes grew up in the San Diego-area neighborhood of Rancho Peรฑasquitos, which a neighbor described as "kind of like Mayberry," the San Diego newspaper said. Holmes attended Westview High School, which says its school district sits in "a primarily middle- to upper-middle-income residential community." There, Holmes ran cross-country, played soccer and later worked at a biotechnology internship at the Salk Institute and Miramar College, which attracts academically talented students. By then, his peers described him as standoffish and a bit of a wiseacre, the San Diego newspaper said. Holmes attended college fairly close to home, in a neighboring area known as Southern California\'s "inland empire" because it\'s more than an hour\'s drive from the coast, in a warm, low-desert climate. He entered the University of California, Riverside, in 2006 as a scholarship student. In 2008 he was a summer camp counselor for disadvantaged children, age 7 to 14, at Camp Max Straus, run by Jewish Big Brothers Big Sisters of Los Angeles. He graduated from UC Riverside in 2010 with the highest honors and a bachelor\'s degree in neuroscience. "Academically, he was at the top of the top," Chancellor Timothy P. White said. He seemed destined for even higher achievement. By 2011, he had enrolled as a doctoral student in the neuroscience program at the University of Colorado Anschutz Medical Campus in Aurora, the largest academic health center in the Rocky Mountain region. The doctoral in neuroscience program attended by Holmes focuses on how the brain works, with an emphasis on processing of information, behavior, learning and memory. Holmes was one of six pre-thesis Ph.D. students in the program who were awarded a neuroscience training grant from the National Institutes of Health. The grant rewards outstanding neuroscientists who will make major contributions to neurobiology. A syllabus that listed Holmes as a student at the medical school shows he was to have delivered a presentation about microRNA biomarkers. But Holmes struggled, and his own mental health took an ominous turn. In March 2012, he told a classmate he wanted to kill people, and that he would do so "when his life was over," court documents said. Holmes was "denied access to the school after June 12, 2012, after he made threats to a professor," according to court documents. About that time, Holmes was a patient of University of Colorado psychiatrist Lynne Fenton. Fenton was so concerned about Holmes\' behavior that she mentioned it to her colleagues, saying he could be a danger to others, CNN affiliate KMGH-TV reported, citing sources with knowledge of the investigation. Fenton\'s concerns surfaced in early June, sources told the Denver station. Holmes began to fantasize about killing "a lot of people" in early June, nearly six weeks before the shootings, the station reported, citing unidentified sources familiar with the investigation. 
Holmes\' psychiatrist contacted several members of a "behavioral evaluation and threat assessment" team to say Holmes could be a danger to others, the station reported. At issue was whether to order Holmes held for 72 hours to be evaluated by mental health professionals, the station reported. "Fenton made initial phone calls about engaging the BETA team" in "the first 10 days" of June, but it "never came together" because in the period Fenton was having conversations with team members, Holmes began the process of dropping out of school, a source told KMGH. Defense attorneys have rejected the prosecution\'s assertions that Holmes was barred from campus. Citing statements from the university, Holmes\' attorneys have argued that his access was revoked because that\'s normal procedure when a student drops enrollment. What caused this turn for the worse for Holmes has yet to be clearly detailed. In the months before the shooting, he bought four weapons and more than 6,000 rounds of ammunition, authorities said. Police said he also booby-trapped his third-floor apartment with explosives, but police weren\'t fooled. After Holmes was caught in the cinema parking lot immediately after the shooting, bomb technicians went to the apartment and neutralized the explosives. No one was injured at the apartment building. Nine minutes before Holmes went into the movie theater, he called a University of Colorado switchboard, public defender Brady has said in court. The number he called can be used to get in contact with faculty members during off hours, Brady said. Court documents have also revealed that investigators have obtained text messages that Holmes exchanged with someone before the shooting. That person was not named, and the content of the texts has not been made public. According to The New York Times, Holmes sent a text message to a fellow graduate student, a woman, about two weeks before the shooting. She asked if he had left Aurora yet, reported the newspaper, which didn\'t identify her. No, he had two months left on his lease, Holmes wrote back, according to the Times. He asked if she had heard of "dysphoric mania," a form of bipolar disorder marked by the highs of mania and the dark and sometimes paranoid delusions of major depression. The woman asked if the disorder could be managed with treatment. "It was," Holmes wrote her, according to the Times. But he warned she should stay away from him "because I am bad news," the newspaper reported. It was her last contact with Holmes. After the shooting, Holmes\' family issued a brief statement: "Our hearts go out to those who were involved in this tragedy and to the families and friends of those involved," they said, without giving any information about their son. Since then, prosecutors have refused to offer a plea deal to Holmes. For Holmes, "justice is death," said Brauchler, the district attorney. In December, Holmes\' parents, who will be attending the trial, issued another statement: They asked that their son\'s life be spared and that he be sent to an institution for mentally ill people for the rest of his life, if he\'s found not guilty by reason of insanity. "He is not a monster," Robert and Arlene Holmes wrote, saying the death penalty is "morally wrong, especially when the condemned is mentally ill." "He is a human being gripped by a severe mental illness," the parents said. The matter will be settled by the jury. CNN\'s Ana Cabrera and Sara Weisfeldt contributed to this report from Denver."""
tokenizer = LEDTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LEDForConditionalGeneration.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")#.to("cuda").half()
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
I get strange results for that pretrained model
```
considerations considerations considerations lag lag lag Sith Sith Sith miracle miracle miracle Sith Sith Metropolitan Metropolitan Metropolitan Sith SithHERHERHER miracle miracle Hurt Hurt Hurt miracle miracle Joey Joey Joey Sith Sith ticking ticking ticking memorial memorial memorial tee tee tee miracle miracle Holder Holder Holder miracle miracle raspberry raspberry raspberry Sith Sithamoamoamo Sith Sith dominate dominate dominate miracle miracleDashDashDash miracle miracle scored scored scored dominate dominate Sith Sith (* (* (* dominate dominate Joey Joey miracle miracle hide hide hide miracle miracle characteristics characteristics characteristics miracle miracletighttighttight raspberry raspberry hal hal halomeveromeveromever miracle miracle ticking ticking dominate dominate Metropolitan Metropolitan dominate dominate Dek dominate dominate AWS AWS AWS sentencing sentencing sentencingCasCasCas customer customer customer Joey Joey dominate dominatetighttight miracle miracle AWS
```
if I try using `LEDForConditionalGeneration` instead of `EncoderDecoderModel` for model `patrickvonplaten/longformer2roberta-cnn_dailymail-fp16`. Is there something I'm missing? I'd greatly appreciate any feedback/help with this
| 01-22-2021 04:30:03 | 01-22-2021 04:30:03 | Hi @mmoya01
You could fine-tune a `longformer2roberta` model using the `EncoderDecoder` model class. `patrickvonplaten/longformer2roberta-cnn_dailymail-fp16` is already fine-tuned on CNN/DailyMail, but as the model card says it was fine-tuned just for a demo, so you should fine-tune a new `longformer2roberta`. You could follow the training script given in the [model card](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16) or you can refer to this [notebook](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)
Also in your example, you are loading the `longformer2roberta` model using `LEDForConditionalGeneration` which doesn't seem right. It should be loaded using `EncoderDecoderModel`<|||||>hi @patil-suraj , thank you for the reply! So if I'm understanding this correctly, I would have to train a new `longformer2roberta` from scratch? I was trying to avoid that because the model card mentions how it took 90 hours to fine tune roberta on cnn-daily news
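For reference, a minimal sketch of loading that checkpoint with `EncoderDecoderModel`, as suggested above:
```python
from transformers import EncoderDecoderModel, LongformerTokenizer

model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
```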
The reason I was trying to use `LEDForConditionalGeneration` is because I wanted to fine tune it where the pretrained `model` was `longformer2roberta` instead of `allenai/longformer-base-4096`
so, to fine tune `longformer2roberta` model in the past, I tried pip installing the [more_general_trainer_metric](https://github.com/huggingface/transformers/archive/more_general_trainer_metric.zip) branch given the note about the trainer and then running
```python
#!/usr/bin/env python3
import nlp
import logging
from nlp import arrow_dataset
from transformers import LongformerTokenizer, EncoderDecoderModel, Trainer, TrainingArguments
logging.basicConfig(level=logging.INFO)
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
#load dataset
train_bytes = s3_client.get_object(train_uri)
train = pq.read_table(BytesIO(train_bytes),columns=['reference_summary','extractive_summary'])
test_bytes = s3_client.get_object(test_uri)
test = pq.read_table(BytesIO(test_bytes),columns=['reference_summary','extractive_summary'])
train_dataset = arrow_dataset.Dataset(train)
val_dataset = arrow_dataset.Dataset(test)
# enable gradient checkpointing for longformer encoder
model.encoder.config.gradient_checkpointing = True
# set decoding params
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.early_stopping = True
model.length_penalty = 2.0
model.num_beams = 4
encoder_length = 2048
decoder_length = 128*2
batch_size = 16
# map data correctly
def map_to_encoder_decoder_inputs(batch):
# Tokenizer will automatically set [BOS] <text> [EOS]
# cut off at Longformer at 2048
inputs = tokenizer(batch["extractive_summary"], padding="max_length", truncation=True, max_length=encoder_length)
# force summarization <= 256
outputs = tokenizer(batch["reference_summary"], padding="max_length", truncation=True, max_length=decoder_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
# set 128 tokens to global attention
batch["global_attention_mask"] = [[1 if i < 128*2 else 0 for i in range(sequence_length)] for sequence_length in len(inputs.input_ids) * [encoder_length]]
batch["decoder_input_ids"] = outputs.input_ids
batch["labels"] = outputs.input_ids.copy()
# mask loss for padding
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
]
batch["decoder_attention_mask"] = outputs.attention_mask
assert all([len(x) == encoder_length for x in inputs.input_ids])
assert all([len(x) == decoder_length for x in outputs.input_ids])
return batch
def compute_metrics(pred):
labels_ids = pred.label_ids
pred_ids = pred.predictions
# all unnecessary tokens are removed
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
labels_ids[labels_ids == -100] = tokenizer.eos_token_id
label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
}
# make train dataset ready
train_dataset = train_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["extractive_summary", "reference_summary"],
)
train_dataset.set_format(
type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# same for validation dataset
val_dataset = val_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["extractive_summary", "reference_summary"],
)
val_dataset.set_format(
type="torch", columns=["input_ids", "global_attention_mask", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
output_dir="./",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
predict_from_generate=True,
evaluate_during_training=True,
do_train=True,
do_eval=True,
logging_steps=100,
save_steps=100,
eval_steps=100,
overwrite_output_dir=True,
warmup_steps=200,
save_total_limit=3,
fp16=False,
)
# instantiate trainer
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
# start training
trainer.train()
```
^but that gave me `TypeError: forward() got an unexpected keyword argument 'head_mask'` because the `EncoderDecoderModel` did not work with Longformer, whereas `LEDForConditionalGeneration` does
but I'm gathering it is not possible to fine tune the `longformer2roberta` like I can with `patrickvonplaten/led-large-16384-pubmed` [here](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb), right? I would have to fine tune/create my own `longformer2roberta` trained on cnn daily, then fine tune further with my `train` data listed above, right? If so, should I stay away from using a tokenizer/model that uses `roberta-base` and instead use `"allenai/led-base-16384"` (which I think uses BART as the base model)
Thank you for your feedback either way, I greatly appreciate it<|||||>Hey @mmoya01, you don't have to train it from scratch - you can "warm-start" the model from the pretrained checkpoints. This blog post gives an in-detail explanation on how to do so: https://huggingface.co/blog/warm-starting-encoder-decoder <|||||>Hi @patrickvonplaten thank you for your reply and the blog post. I was following your [notebook](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) and trying to adapt it to the [longformer2roberta-cnn_dailymail-fp16](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16) work using my own `train_data` and `val_data`. wondering, how could I warm-start from `patrickvonplaten/longformer2roberta-cnn_dailymail-fp16`?
I noticed I was able to do
```python
roberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
```
But I would love to do something like
```python
roberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")
```
or warm-start the `longformer2roberta-cnn_dailymail-fp16` checkpoint if possible rather than warm-start from `allenai/longformer-base-4096`? I'd greatly appreciate your feedback<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,744 | closed | Wrong offsets mapping in XLMRobertaTokenizerFast | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-124-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@mfuntowicz @thomwolf
## Information
Model I am using (Bert, XLNet ...): XLMRobertaTokenizerFast
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import XLMRobertaTokenizerFast
tokenizer = XLMRobertaTokenizerFast.from_pretrained('xlm-roberta-large')
text = "！……"
tokenized = tokenizer(text, return_offsets_mapping=True)
print('Text:', text)
print('Tokens:', tokenizer.convert_ids_to_tokens(tokenized.input_ids))
print('Mapped to:', [text[start:end] for start, end in tokenized.offset_mapping])
```
Observed behavior:
```
Text: ！……
Tokens: ['<s>', '▁?', '......', '</s>']
Mapped to: ['', '！', '！', '']
```
Expected behavior:
```
Text: ！……
Tokens: ['<s>', '▁?', '......', '</s>']
Mapped to: ['', '！', '……', '']
```
## Expected behavior
I'm using XLM-R for Chinese text, and I would expect offset mappings to work correctly even in the presence of various Unicode punctuation symbols. It looks like XLM-R tokenizes "！……" as two tokens ('▁?' and '......'), which I would expect to map back to the appropriate locations in the input. Instead, the offset mapping from these tokens is identical.
This example is an ending of an actual sentence in the Chinese Treebank -- I removed the sentence itself because it doesn't matter for reproducing the bug.
| 01-22-2021 03:26:44 | 01-22-2021 03:26:44 | The wrong alignments are caused by the `Precompiled` normalizer in `tokenizers`. This will be fixed in version 0.10.1 of the library.<|||||>This is fixed for any version of `transformers>=4.3.0` |
transformers | 9,743 | closed | DistilGPT2 extremely strange model behaviour | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-91-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0a0+1606899 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, 2
- Using distributed or parallel set-up in script?: Yes
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
examples/distillation: @VictorSanh
## Information
Model I am using (Bert, XLNet ...): DistilGPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X ] my own modified scripts: (give details below)
Had to slightly modify the official scripts to even get it to run - other than that, effectively no difference
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X ] my own task or dataset: (give details below)
Chat logs. In the format
```
[22-Jul-20 04:15 AM] User
Message
[22-Jul-20 04:15 AM] Another user
Message
[22-Jul-20 04:16 AM] Etc.
You get the point
```
## To reproduce
Steps to reproduce the behavior:
1. (Fix and) run example scripts for distillation with a custom GPT2 model.
When trying to generate, the model will generate the same text with no change, regardless of using ``pipeline`` or ``model.generate``. If attempting to use input texts, the network will output the texts immediately followed by the phrase. In the two times I've run this experiment, it was the space character twice and ``[Nov-Nov-20] User`` once (after I tried removing lines that were under a certain token value)
## Expected behavior
The model to work as a normal transformer.
<!-- A clear and concise description of what you would expect to happen. -->
| 01-22-2021 01:42:36 | 01-22-2021 01:42:36 | Extra info:
command: ``python3 -m torch.distributed.launch \
--nproc_per_node=$N_GPU_NODE \
--nnodes=$N_NODES \
--node_rank $NODE_RANK \
train.py \
--force \
--fp16 \
--n_epoch 3 \
--checkpoint_interval 2000 \
--batch_size 12 \
--n_gpu $WORLD_SIZE \
--student_type gpt2 \
--student_config training_configs/distilgpt2.json \
--teacher_type gpt2 \
--teacher_name gpt2-large \
--freeze_pos_embs \
--dump_path /root/distil/ \
--data_file data/binarized_text.gpt2-large.pickle \
--token_counts data/token_counts.gpt2-large.pickle``
Not using a custom tokenizer, just the gpt2-large tokenizer<|||||>When using the original `distilgpt2` checkpoint, can you generate coherent text?<|||||>> When using the original `distilgpt2` checkpoint, can you generate coherent text?
Yep, works perfectly. Maybe worth it to note that I was using 2x 3090 GPUs (I know how amperes can be sometimes)
Since my dataset is made up of a lot of `\n`, my first thought was that it might have just copied that to attempt to minimize loss, so I set the `lm_seq_dataset.py` to remove any lines under 4 tokens. It did change the string, but it makes absolutely no sense to me that it would learn to copy a string like this.
IMO (and I haven't really had the time to look into this so take it with a whole chunk of salt) this is probably a training issue, since the phrase is different on different data and the generation works on the default model. If it's a training issue, my best guess is that it might be iterating over the same line? But this is flawed because the line contains `[Nov-Nov`, which is incorrect.
Super strange bug all in all.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,742 | closed | --fp16 fine-tuning appears to be taking more memory (4.3.0). | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-5.4.0-62-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: 4x A100-SXM4-40GB
- Using distributed or parallel set-up in script?: Yes
### Who can help
@alexorona @stas00 @sgugger
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] my own modified scripts: (give details below)
Official trainer, optionally modified by adding "model.parallelize() " after loading. (Results shown with and without).
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Regular seq2seq on data.
Run script:
```
export BS=1; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1,2,3 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir xsum --fp16 \
--do_train --learning_rate 3e-5 \
--logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 \
--overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--warmup_steps 5 \
```
## Brief summary
1. When fine-tuning T5, I'm observing memory usage increase when using --fp16, though not as much as previously reported in #8403 .
2. (Optional) Possibly related: I'm trying to squeeze T5-11B in 4x40GB A100s using model parallelism. I seemed to be able to do it yesterday on 4.1.1 with a sequence length of 128, and I remember observing a fairly moderate seqlength vs memory usage dependence (as expected from the comment here ( https://github.com/huggingface/transformers/issues/8771#issuecomment-764058315 ), though I'm not an expert and I'm not sure if this increase only applies >512 tokens, and if what I saw yesterday was a fluke/error on my part somewhere). Today on a fresh env/pull I'm not observing this dependence (though I'm not sure why -- it might be my issue -- data is reported at the bottom side).
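For context on point 2, an illustrative sketch of the model-parallel invocation (the device_map below is a made-up layout over t5-large's 24 blocks, not a tuned T5-11B map):
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-large")
device_map = {
    0: list(range(0, 6)),
    1: list(range(6, 12)),
    2: list(range(12, 18)),
    3: list(range(18, 24)),
}
model.parallelize(device_map)  # or model.parallelize() for the default balanced split
```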
## To reproduce
Steps to reproduce the behavior:
1. 4.3.0, runscript as above, run with and without --fp16 option. Different model sizes (and with/without model.parallelize() added, since I wasn't sure if that was the issue)
## Data
Below are three cases of memory usage with/without --fp16:
1. with model.parallelize()
2. without model.parallelize() (but GPUs still visible -- extra info, I thought it was interesting it still takes up extra memory on the other GPUs)
3. without model.parallelize() (only 1 GPU visible)
-
```
*** WITH MODEL.PARALLELIZE() ***
t5-3b, --max_source_length 128 --max_target_length 128
WITHOUT --fp16
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 28C P0 77W / 400W | 13598MiB / 40537MiB | 33% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 29C P0 80W / 400W | 12874MiB / 40537MiB | 25% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 24C P0 81W / 400W | 12874MiB / 40537MiB | 4% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 25C P0 80W / 400W | 12874MiB / 40537MiB | 23% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
WITH --fp16: (takes more memory)
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 27C P0 108W / 400W | 15138MiB / 40537MiB | 6% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 28C P0 99W / 400W | 14214MiB / 40537MiB | 9% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 23C P0 85W / 400W | 14214MiB / 40537MiB | 12% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 25C P0 92W / 400W | 14216MiB / 40537MiB | 11% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
*** WITHOUT MODEL.PARALLELIZE, but all GPUs still visible ( CUDA_VISIBLE_DEVICES=0,1,2,3 ) ***
t5-large, --max_source_length 128 --max_target_length 128
WITHOUT --fp16
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 28C P0 93W / 400W | 20362MiB / 40537MiB | 1% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 27C P0 78W / 400W | 6046MiB / 40537MiB | 3% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 22C P0 78W / 400W | 6046MiB / 40537MiB | 3% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 24C P0 79W / 400W | 6022MiB / 40537MiB | 7% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
t5-large, --max_source_length 128 --max_target_length 128
WITH --fp16
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 28C P0 91W / 400W | 20318MiB / 40537MiB | 2% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 27C P0 80W / 400W | 7304MiB / 40537MiB | 4% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 23C P0 78W / 400W | 7304MiB / 40537MiB | 5% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 24C P0 79W / 400W | 7280MiB / 40537MiB | 5% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
*** WITHOUT MODEL.PARALLELIZE, ONLY 1 GPU VISIBLE ( CUDA_VISIBLE_DEVICES=0 ) ***
t5-large, --max_source_length 128 --max_target_length 128
WITHOUT --fp16
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 29C P0 101W / 400W | 13790MiB / 40537MiB | 32% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 26C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 21C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 23C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
t5-large, --max_source_length 128 --max_target_length 128
WITH --fp16 (more memory)
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 28C P0 101W / 400W | 15012MiB / 40537MiB | 42% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 26C P0 70W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 21C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 23C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
```
WRT sequence length vs data dependence, in a model parallel setting, I am observing today:
(varying --max_source_length and --max_target_length )
| model | seq length | gpu0 | gpu1 | gpu2 | gpu3 |
|---|---|---|---|---|---|
| t5-large | 32 | 5.7GB | 4.7GB | 4.7GB | 4.7GB |
| t5-large | 64 | 5.7GB | 4.7GB | 4.7GB | 4.7GB |
| t5-large | 128 | 5.8GB | 4.8GB | 4.8GB | 4.8GB |
| t5-large | 512 | 6.0GB | 5.2GB | 5.2GB | 5.2GB |
| t5-3b | 64 | 15.2GB | 14.3GB | 14.3GB | 14.3GB |
| t5-3b | 128 | 15.2GB | 14.3GB | 14.3GB | 14.3GB |
| t5-3b | 256 | 15.5GB | 14.7GB | 14.7GB | 14.7GB |
| t5-3b | 512 | 16.2GB | 15.2GB | 15.2GB | 15.2GB |
Essentially very minimal change in RAM requirements vs sequence length. Though perhaps I have misconfigured something here.
## Expected behavior
1. Less memory usage with --fp16 (should it be about half? suggested from https://github.com/huggingface/transformers/issues/8403#issuecomment-725562117 )
2. (Optional) Nominally, smaller sequence length models taking up significantly less memory?
| 01-22-2021 00:53:44 | 01-22-2021 00:53:44 | Note that the memory as reported by nvidia-smi is not a perfectly reliable source (@stas00 could explain better than me why), the reliable metric is the batch size at which you get OOM. It is known on our side that FP16 does not save any memory for this particular script (I was investigating a general memory regression for this script that I fixed today and could see the memory being the same with or without FP16) but only for that script (memory is indeed saved on `run_glue` or other maintained examples). It seems to have been there for at least three months, so it's not a recent regression, I think it was always like this.
Didn't have time to investigate the source yet. Maybe it's something in the seq2seq models (this is on T5 here, on my side I noticed the usage being the same for mBART) or something in the script itself. Nevertheless, it's an issue to tackle indeed. I wonder if it only appears in a distributed setting or one GPU already, should run some tests to check that tomorrow.<|||||>> Note that the memory as reported by nvidia-smi is not a perfectly reliable source
## How to reliably use nvidia-smi/pynvml to measure memory used by your application.
1. when you just started the program and do `torch.ones(1)` you will see your `nvidia-smi` reporting a usage of 0.5-1.5GB depending on the card - this is the memory that CUDA allocates for its kernels. So this is not the memory used by your application. Therefore in any memory benchmark I first do `torch.ones(1)` to force the kernel preloading. And then you can somewhat rely on `nvidia-smi`
2. but it won't show you any memory cached by pytorch, so these reports can be quite meaningless. Therefore you have to call `gc.collect(); torch.cuda.empty_cache()` before you take a snapshot using `nvidia-smi`. `gc.collect()` forces garbage collection - it's not immediate in Python when you delete a variable or exit a function.
3. it's totally unreliable if you have multiple processes using the same gpu
In general for memory benchmarking it's better to use pynvml, since it's easier to use programmatically. But it has the exact same issues as `nvidia-smi`.
If you were to follow the above 3 rules exactly, then you can use `nvidia-smi`/`pynvml` to reliably measure memory.
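Put together, a tiny helper along these lines follows the three rules (a rough sketch, assuming a single process owns the GPU):
```
import gc
import torch
import pynvml

def gpu_mem_used_mbs(idx=0):
    # rule 1: force CUDA context/kernel preloading so it is counted once, up front
    torch.ones(1).to(f"cuda:{idx}")
    # rule 2: flush python garbage and pytorch's cached blocks before the snapshot
    gc.collect()
    torch.cuda.empty_cache()
    # rule 3: this only makes sense if no other process is using this GPU
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(idx)
    return pynvml.nvmlDeviceGetMemoryInfo(handle).used / 2**20
```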
Otherwise, use torch.cuda memory functions which give you exact numbers in any situation. https://pytorch.org/docs/stable/cuda.html#memory-management<|||||>I have been seeing this fp16 behavior for many months, but we blamed it on my hardware. Since I have one old 1070 card and one new but not fully supported 3090. Waiting for cuda-11.2 support.
Do you notice any difference if you use apex instead of the native amp?
DeepSpeed implements their own fp16.
> Less memory usage with --fp16 (should it be about half? suggested from https://github.com/huggingface/transformers/issues/8403#issuecomment-725562117
Well, that was a huge bug in pytorch, related to autocast/fp16 but it has been fixed in pt-nightly and pt-1.7.1 - though I won't be surprised if there are still some bugs there - this is new in pytorch. That's why I suggest to test with apex.
Also you might want to try to disable `with autocast()`, perhaps you are hitting another caching bug - it will be slower since now fp16 will have to be reconverted many times, but you will be able to see if perhaps the memory overhead is due to caching still.
It's in 3 places, grep for `autocast`:
```
src/transformers/trainer.py: with autocast():
src/transformers/trainer.py: with autocast():
src/transformers/trainer_seq2seq.py: with autocast():
```<|||||>Thanks both --
> I have been seeing this fp16 behavior for many months, but we blamed it on my hardware. Since I have one old 1070 card and one new but not fully supported 3090. Waiting for cuda-11.2 support.
>
> Do you notice any difference if you use apex instead of the native amp?
The original numbers (above) are with apex -- single-GPU --fp16 gives 15.0GB. It looks like if I uninstall nvidia-apex it reduces to 13.8GB.
> > Less memory usage with --fp16 (should it be about half? suggested from [#8403 (comment)](https://github.com/huggingface/transformers/issues/8403#issuecomment-725562117)
>
> Well, that was a huge bug in pytorch, related to autocast/fp16 but it has been fixed in pt-nightly and pt-1.7.1 - though I won't be surprised if there are still some bugs there - this is new in pytorch. That's why I suggest to test with apex.
>
> Also you might want to try to disable `with autocast()`, perhaps you are hitting another caching bug - it will be slower since now fp16 will have to be reconverted many times, but you will be able to see if perhaps the memory overhead is due to caching still.
>
> It's in 3 places, grep for `autocast`:
>
> ```
> src/transformers/trainer.py: with autocast():
> src/transformers/trainer.py: with autocast():
> src/transformers/trainer_seq2seq.py: with autocast():
> ```
It looks like disabling 'with autocast()' in those 3 places also brings the single-GPU --fp16 memory on T5-Large from 15.0gb down to 13.8gb (the same as without the --fp16 option) -- so they're at parity in terms of memory in that case.
> DeepSpeed implements their own fp16.
Just for completeness I tried it with DeepSpeed, and in the single-GPU setting I'm seeing 10.8GB with CPU-offloading enabled (about 3gb savings), and ~21.5GB without offloading (significantly higher... I'm not sure I know enough about DeepSpeed to know whether that would be expected, or whether it may be a config error on my part). (EDIT: And DeepSpeed, CPU offload, with 4 GPUs, appears to use 12.1-12.4GB per GPU).
On my last (optional) bit in the original post -- just as a sanity check, any sense of whether the memory allocation vs sequence length looks as expected, or whether there should be much larger differences as sequence length increases/decreases? (I'm trying to get a sense of whether it's likely for me to fit T5-11b in these 4x40gb cards in the near term, or whether I'll need to wait for the full model parallelism/fp16/deep speed offloading, but these new models are new territory for me) <|||||>Sleeping on it I would like to amend my first statement. The components on GPU memory are the following:
- the model weights
- the forward activations saved for gradient computation
- the gradients
- the optimizer state
If we look at what's happening with FP16 training (mixed precision) we have:
- the model in full precision so no memory saved there
- the forward activations saved for gradient computation are in mixed precision
- the gradients are computed in mixed precision *but* converted to full precision for the update, so no saving there
- the optimizer state is in full precision as all the updates are done in full precision
So the savings only happen for the forward activations saved for the backward computation, and there is a slight overhead because the gradients are stored both in half and full precision. (This is probably over-simplified but I think it's enough to explain what follows.)
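To put rough numbers on the part that does not shrink, here is a back-of-the-envelope sketch (assumed Adam optimizer and the ~770M parameters of t5-large discussed above; not an exact accounting):
```
params = 770_000_000      # roughly t5-large
bytes_per_param = (
    4      # fp32 model weights
    + 4    # fp32 gradients used for the optimizer update
    + 8    # Adam state: two fp32 moments per parameter
)
print(params * bytes_per_param / 2**30)
# ~11.5 GiB before any activations, CUDA kernels or transient fp16 gradient copies,
# which is roughly why --fp16 barely moves the t5-large numbers reported above
```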
Now let's look at a simple text-classification fine-tuning on 2 GPUs (I'm giving the command for reference):
```
export BS=16
python -m torch.distributed.launch \
--nproc_per_node 2 examples/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size $BS \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mrpc \
--overwrite_output_dir \
--fp16
```
Since the only savings we get are in the model activations saved for the backward pass, it's logical that the bigger those activations are, the bigger the saving will be. If we try different batch sizes, I indeed get (this is with nvidia-smi so not completely reliable as said above but it will be a fair comparison):
| batch size | without --fp16 | with --fp16 | FP16 savings |
|:-:|:-:|:-:|:-:|
| 8 | 4247 | 4163 | 84 |
| 16 | 4971 | 4793 | 178 |
| 32 | 6827 | 6207 | 620 |
| 64 | 10037 | 8061 | 1976 |
So there is only a real memory saving if we train at a high batch size (and it's not half) and at batch sizes lower than 8, you actually get a bigger memory footprint (because of the overhead mentioned above). The gain for FP16 training is that in each of those cases, the training with the flag `--fp16` is twice as fast, which does require every tensor to have every dimension be a multiple of 8 (so if your batch size is not a multiple of 8, you won't get that speed-up, and the script `finetune_trainer.py` does not pad the tensors to a sequence length that is a multiple of 8).
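For what it's worth, padding to a multiple of 8 is a one-argument change on the tokenizer side (a small sketch; the checkpoint name is only for illustration and this is not what `finetune_trainer.py` currently does):
```
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
batch = tok(
    ["a short example", "another slightly longer example"],
    padding=True,
    pad_to_multiple_of=8,   # sequence dimension becomes a multiple of 8
    return_tensors="pt",
)
print(batch["input_ids"].shape)
```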
TL;DR: FP16 with apex or AMP will only give you some memory savings with a reasonably high batch size.<|||||>Ahhh, very helpful, thanks. So right now --fp16 is mostly for speed, and since most things are stored in full precision, there are essentially no expected memory savings at BS=1.
Is storing everything at fp16 in the long term plans, or are there technical reasons why that's not a good idea? (e.g. significant degradation in task performance?) <|||||>You can't do the full training in FP16, it would not converge, which is why there is this mixed precision approach. Maybe DeepSpeed integrates further optimizations and helps to save more memory.<|||||>@sgugger, this is such a clear and well done foray into fp16 performance, let's not get it lost in the sea of Issues.
I was thinking we should start a new doc `performance.md` (markdown pretty please) where we discuss each of these issues. And you have just written a perfect entry on fp16.<|||||>> The original numbers (above) are with apex -- single-GPU --fp16 gives 15.0GB. It looks like if I uninstall nvidia-apex it reduces to 13.8GB.
As you are on pt-1.7.1 it will always use the native amp, unless you specifically request to use `apex` with `--fp16_backend apex`
> > Also you might want to try to disable `with autocast()`, perhaps you are hitting another caching bug - it will be slower since now fp16 will have to be reconverted many times, but you will be able to see if perhaps the memory overhead is due to caching still.
My apologies, this wasn't a good suggestion at all since it basically disabled the mixed-precision. I was trying to think how to disable just the caching to rule any leaks there, but I don't think there is a way.
If you want to read about autocast caching, this comment from its creator is excellent:
https://discuss.pytorch.org/t/autocast-and-torch-no-grad-unexpected-behaviour/93475/3
> On my last (optional) bit in the original post -- just as a sanity check, any sense of whether the memory allocation vs sequence length looks as expected, or whether there should be much larger differences as sequence length increases/decreases? (I'm trying to get a sense of whether it's likely for me to fit T5-11b in these 4x40gb cards in the near term, or whether I'll need to wait for the full model parallelism/fp16/deep speed offloading, but these new models are new territory for me)
It'd be nice to know when either DeepSpeed or fairscale get a chance to release ZeRO stage 3 support, but I'm not sure how to find it out - perhaps ask them if they have some possible projections? That would be the most desired news since then you will probably be able to fit a 45GB model over 4x40GB.
I think it'd be great to calculate all the different components and their memory requirements - then we can do the math easier. That is calculating how many bytes each component takes and how many of those we need.
Otherwise, I hope to have some Pipeline Parallelism working soon and perhaps we could try it on your 4x rig.
Also has anybody attempted to distill t5-11b? If you could shave off some weight from it w/o losing much quality, perhaps it'd be much easier to fit.
<|||||>> I was thinking we should start a new doc performance.md (markdown pretty please) where we discuss each of these issues. And you have just written a perfect entry on fp16.
I agree, and there is the table in the text-classification example that summarizes the speed gains. I have no time to do this this week, so if you want to go ahead and start a PR, feel free to do so!<|||||>Done: https://github.com/huggingface/transformers/issues/9824<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,741 | closed | examples: fix XNLI url | Hi,
this PR fixes the URL for the XNLI dataset in the text classification example.
(/cc @sleepinyourhat :hugs: ) | 01-22-2021 00:39:14 | 01-22-2021 00:39:14 | Thank you for fixing this! |
transformers | 9,740 | closed | RAG Model without DPR | Hello everyone,
I am interested in studying how the RAG generator answers questions without the DPR retriever, but with other passages / contexts identified by other methods.
For example in the code below
```
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
retriever = RagRetriever.from_pretrained("./rag-token-nq", indexed_dataset=dataset)
tokenizer = RagTokenizer.from_pretrained("./rag-token-nq")
model = RagTokenForGeneration.from_pretrained("./rag-token-nq", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt")
input_ids = input_dict["input_ids"]
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
generated_ids = model.generate(input_ids=input_ids, labels=input_dict["labels"])
generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_string)
```
In the line
```input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt") ```
, I want to use "How many people live in Paris?" as the question and "In Paris, there are 10 million people." as the passage/context which should be used to generate the answer.
Kindly let me know how to do this?
Is my understanding of the code correct and if not, how to go about it?
Thanks,
Krishanu | 01-21-2021 22:12:43 | 01-21-2021 22:12:43 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Feel free to tag @patrickvonplaten or @lhoestq on the forum, they'll probably be able to answer your questions :).
Thanks! |
transformers | 9,739 | closed | Error using TFAutoModelForSequenceClassification with Tensorflow 2.2.0 | Hello.
I am trying to implement `TFAutoModelForSequenceClassification` in my code following the example for sequence classification as shown [here](https://huggingface.co/transformers/task_summary.html#sequence-classification)
The code is as follows:
```
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")
classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="tf")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="tf")
paraphrase_classification_logits = model(paraphrase)[0]
not_paraphrase_classification_logits = model(not_paraphrase)[0]
paraphrase_results = tf.nn.softmax(paraphrase_classification_logits, axis=1).numpy()[0]
not_paraphrase_results = tf.nn.softmax(not_paraphrase_classification_logits, axis=1).numpy()[0]
# Should be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
# Should not be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
```
Here is the error readout:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-20-c48d70c01597> in <module>
2 import tensorflow as tf
3 tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
----> 4 model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")
5 classes = ["not paraphrase", "is paraphrase"]
6 sequence_0 = "The company HuggingFace is based in New York City"
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1189
1190 if type(config) in TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING.keys():
-> 1191 return TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING[type(config)].from_pretrained(
1192 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1193 )
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1216
1217 # Instantiate model.
-> 1218 model = cls(config, *model_args, **model_kwargs)
1219
1220 if from_pt:
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, *inputs, **kwargs)
1369
1370 self.num_labels = config.num_labels
-> 1371 self.bert = TFBertMainLayer(config, name="bert")
1372 self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
1373 self.classifier = tf.keras.layers.Dense(
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/modeling_tf_utils.py in wrapped_init(self, *args, **kwargs)
105 elif isinstance(config, PretrainedConfig):
106 if len(args) > 0:
--> 107 initializer(self, *args, **kwargs)
108 else:
109 initializer(self, config, *args, **kwargs)
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, add_pooling_layer, **kwargs)
590 self.return_dict = config.use_return_dict
591 self.embeddings = TFBertEmbeddings(config, name="embeddings")
--> 592 self.encoder = TFBertEncoder(config, name="encoder")
593 self.pooler = TFBertPooler(config, name="pooler") if add_pooling_layer else None
594
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, **kwargs)
430 super().__init__(**kwargs)
431
--> 432 self.layer = [TFBertLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)]
433
434 def call(
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in <listcomp>(.0)
430 super().__init__(**kwargs)
431
--> 432 self.layer = [TFBertLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)]
433
434 def call(
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, **kwargs)
410 super().__init__(**kwargs)
411
--> 412 self.attention = TFBertAttention(config, name="attention")
413 self.intermediate = TFBertIntermediate(config, name="intermediate")
414 self.bert_output = TFBertOutput(config, name="output")
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, **kwargs)
344 super().__init__(**kwargs)
345
--> 346 self.self_attention = TFBertSelfAttention(config, name="self")
347 self.dense_output = TFBertSelfOutput(config, name="output")
348
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, **kwargs)
254 self.num_attention_heads = config.num_attention_heads
255 self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
--> 256 self.query = tf.keras.layers.experimental.EinsumDense(
257 equation="abc,cde->abde",
258 output_shape=(None, config.num_attention_heads, self.attention_head_size),
AttributeError: module 'tensorflow.keras.layers.experimental' has no attribute 'EinsumDense'
```
I cannot seem to find a lot of information on solving this on Google. Any ideas? I am using a dockerized version of tensorflow 2.2.0 with Jupyter. This is a fresh install of transformers.
| 01-21-2021 20:11:19 | 01-21-2021 20:11:19 | @jplu could answer here<|||||>Hello!
The last version of Transformers needs TensorFlow 2.3 as the min version.<|||||>Got it. I switched to Pytorch for testing it, so I have not faced the same issue. Out of curiosity, if I wanted to run huggingface transformers on TF 2.2.0, what version of transformers do I need to use? Thanks!
transformers | 9,738 | closed | [fsmt] onnx triu workaround | This PR
* solves
```
RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator.
```
as reported in https://github.com/huggingface/transformers/issues/9737.
It adds a workaround for `triu` not being supported by pytorch's onnx set, as proposed here: https://github.com/pytorch/pytorch/issues/32968#issuecomment-733240232 with some modifications to make it work with transformers. The original workaround couldn't handle a matrix with `-inf`.
* adds an onnx export test
The workaround fix is localized to fsmt, but transformers has a handful of those:
```
src/transformers/pipelines/question_answering.py: candidates = np.tril(np.triu(outer), max_answer_len - 1)
src/transformers/models/fsmt/modeling_fsmt.py: causal_mask = torch.triu(fill_with_neg_inf(torch.zeros(tgt_len, tgt_len)), 1).to(
src/transformers/models/transfo_xl/modeling_transfo_xl.py: dec_attn_mask = (torch.triu(all_ones, 1 + mlen) + torch.tril(all_ones, -mask_shift_len))[:, :, None] # -1
src/transformers/models/transfo_xl/modeling_transfo_xl.py: dec_attn_mask = torch.triu(word_emb.new_ones((qlen, klen), dtype=torch.uint8), diagonal=1 + mlen)[
src/transformers/models/xlnet/modeling_xlnet.py: mask_up = torch.triu(attn_mask, diagonal=1)
src/transformers/models/ctrl/modeling_ctrl.py: mask = torch.triu(torch.ones(seq_len + past_length, seq_len + past_length), 1).to(inputs_embeds.device)
src/transformers/models/prophetnet/modeling_prophetnet.py: left_block[stream_idx].triu_(-stream_idx + 1)
src/transformers/models/prophetnet/modeling_prophetnet.py: causal_mask = torch.triu(causal_mask, 1)
```
Perhaps the workaround wrapper should be somewhere in the common tools and used in other places?
Or merge this to let the user who needed this in first place move forward and then refactor to other places which haven't been spoken for. But actually if you look at https://github.com/pytorch/pytorch/issues/32968 many of the me-too comments talk about `transformers`.
https://github.com/pytorch/pytorch/issues/32968 proposes other solutions too. I tried them and either they didn't work, or were inefficient as spotted by @patrickvonplaten
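For reference, the shape of the working wrapper is roughly the following (an illustrative sketch for the square causal-mask case only, not the exact code merged in this PR):
```
import torch

def triu_onnx(x, diagonal=0):
    # build the upper-triangular mask from index comparisons instead of the
    # unsupported aten::triu op; x is assumed square (the fsmt causal mask)
    n = x.shape[-1]
    arange = torch.arange(n, device=x.device)
    keep = arange.expand(n, n) >= (arange + diagonal).unsqueeze(-1)
    # masked_fill rather than multiplying by the mask, so -inf entries survive
    return x.masked_fill(~keep, 0)
```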
Fixes: https://github.com/huggingface/transformers/issues/9737
@LysandreJik, @mfuntowicz | 01-21-2021 18:25:28 | 01-21-2021 18:25:28 | I'm a bit worried that this fix will blow up the memory for very long sequences, *e.g.* applying this to a max_length of 16384 (which we already have for LED) would create a 16384 ** 2 = 1GB tensor<|||||>Wouldn't this work-around be better in terms of memory? https://github.com/pytorch/pytorch/issues/32968#issuecomment-733240232<|||||>Glad you have the deep understanding of this one, @patrickvonplaten
wrt, https://github.com/pytorch/pytorch/issues/32968#issuecomment-733240232 - I tried it originally and while onnx was happy, everything else failed with it.
Perhaps it can be re-written not to use `triu`? some suggest using `np.triu`<|||||>> Wouldn't this work-around be better in terms of memory? [pytorch/pytorch#32968 (comment)](https://github.com/pytorch/pytorch/issues/32968#issuecomment-733240232)
I managed to fix this more efficient version to work with `-inf`. It's in the PR now.<|||||>> Did you run the FSMT integration tests to ensure that it doesn't diverge?
Yes, the function produces the same output as `triu` for the inputs we use.<|||||>This workaround seems to only work for squared-sized tensors, while `torch.triu` works for any shape (PyTorch 1.9.0). For example:
```
a = torch.randn(8, 4)
triu_onnx(a)
```
raises an `RuntimeError: The size of tensor a (8) must match the size of tensor b (4) at non-singleton dimension 1`.
Is there any additional workaround when working with non-squared shape tensors?
Thank you and best regards,
Gustavo.<|||||>Indeed, the workaround was written for that sort of inputs.
Are you running into this problem with `transformers`? Could you please file a new issue then including the way to reproduce the problem?
But I also see that perhaps we can switch back to pytorch implementation as reported here:
https://github.com/pytorch/pytorch/issues/32968#issuecomment-827054124
it's called `trilu`. I think it should be in pt-1.9.0.
https://github.com/onnx/onnx/blob/29e7aa7048809784465d06e897f043a4600642b2/docs/Operators.md#Trilu
Would you like to experiment with it and see if it solves the problem? If it works you may consider creating a PR that switches to that version instead and we can work together on polishing out the details (as we need to support older pytorch as well).
To clarify: what I'm trying to propose is to try pytorch-1.9.0 and use its built-in `triu` and see if it now works. One way you could test is by reverting my original PR that introduced the workaround and see if it just works. alternatively, if you have the right know-how you can write some test code that tests torch's `triu` directly with onnx export.
<|||||>update:
> 'triu' support added in PT-ONNX exporter in opset14 https://github.com/pytorch/pytorch/pull/59486
which I suppose should be available in pytorch-1.10 when it comes out as it was merged on july-14. So once it's released we could re-do this workaround and fallback to it for pt<1.10. |
transformers | 9,737 | closed | [fsmt] Exporting the operator triu to ONNX opset version 12 is not supported | As a follow up to https://github.com/huggingface/transformers/issues/9722
```
import torch
import transformers
from transformers import convert_graph_to_onnx
from pathlib import Path
convert_graph_to_onnx.convert(
framework="pt",
model="facebook/wmt19-en-de",
output=Path("encoder/en_de_trans.onnx"),
opset=12,
tokenizer="facebook/wmt19-en-de",
use_external_format= False,
pipeline_name= "translation_en_to_de",
)
```
after applying the fix from https://github.com/huggingface/transformers/pull/9736
it then crashes with:
```
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/transformers-master/porting/onnx2.py", line 26, in <module>
convert_graph_to_onnx.convert(
File "/hf/transformers-master/src/transformers/convert_graph_to_onnx.py", line 367, in convert
convert_pytorch(nlp, opset, output, use_external_format)
File "/hf/transformers-master/src/transformers/convert_graph_to_onnx.py", line 279, in convert_pytorch
export(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/__init__.py", line 271, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 86, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 671, in _export
_model_to_graph(model, args, verbose, input_names,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 450, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 204, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/__init__.py", line 309, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 970, in _run_symbolic_function
symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 927, in _find_symbolic_in_registry
return sym_registry.get_registered_op(op_name, domain, opset_version)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/symbolic_registry.py", line 112, in get_registered_op
raise RuntimeError(msg)
RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator.
Process finished with exit code 1
```
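A standalone repro that skips the whole pipeline hits the same operator limitation (minimal sketch):
```
import torch

class Triu(torch.nn.Module):
    def forward(self, x):
        return torch.triu(x, 1)

# fails with the same "Exporting the operator triu ... is not supported" error
torch.onnx.export(Triu(), torch.zeros(4, 4), "triu.onnx", opset_version=12)
```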
Need to look at workarounds proposed at https://github.com/pytorch/pytorch/issues/32968 | 01-21-2021 18:07:34 | 01-21-2021 18:07:34 | Applied a workaround in https://github.com/huggingface/transformers/pull/9738 |
transformers | 9,736 | closed | [fsmt] token_type_ids isn't used | This PR fixes a bug discovered in https://github.com/huggingface/transformers/issues/9722
`token_type_ids` was returned by default by the tokenizer, but it isn't used by the model.
The original fsmt port was a frankenstein of bart for the model and xlm for the tokenizer, hence the discrepancy.
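The symptom is easy to see from the tokenizer output alone (a quick sketch):
```
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")
model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-de")
enc = tokenizer("Machine learning is great", return_tensors="pt")
print(enc.keys())  # before this fix the dict includes "token_type_ids",
                   # which the model's forward does not use or accept
```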
I thought the common tests included onnx tests, but it doesn't seem to be the case. I added a local test in the related PR: https://github.com/huggingface/transformers/pull/9738
With this fix `convert_graph_to_onnx.convert` still fails with:
```
RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator
```
but that's a totally different issue. Fixed in https://github.com/huggingface/transformers/pull/9738
Fixes: https://github.com/huggingface/transformers/issues/9722
@LysandreJik, @patrickvonplaten
| 01-21-2021 18:04:47 | 01-21-2021 18:04:47 | |
transformers | 9,735 | closed | Add `report_to` training arguments to control the integrations used | # What does this PR do?
This PR introduces a new `report_to` training argument that controls which of the multiple reporting tools to use in a training round. Currently, `Trainer` automatically uses everything installed, which can cause trouble when:
- one platform is installed but not properly set up.
- one platform is installed but the user doesn't want to use it today.
In my opinion the current behavior is too magical and does not fit our philosophy. To avoid any breaking change, the current default for this `report_to` argument is to use everything installed, but I would like to switch this to an empty list at the next major release, so the user has to opt-in the platforms they want to use.
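A usage sketch of the opt-in behavior (the integration names here are just examples):
```
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output",
    report_to=["wandb"],   # only report to the platforms you explicitly list
    # report_to=[]         # or opt out of every integration
)
```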
| 01-21-2021 17:13:27 | 01-21-2021 17:13:27 | |
transformers | 9,734 | closed | Fixes to run_seq2seq and instructions | # What does this PR do?
This PR fixes two issues in the new `run_seq2seq` script and adds instructions on how to run it. The fixes are:
- default of `val_max_target_length` to `max_target_length` (had forgotten to do this in the initial PR)
- add an optional prefix to the source text (for T5 models) | 01-21-2021 16:35:58 | 01-21-2021 16:35:58 | |
transformers | 9,733 | closed | [WIP] Small improvement in shape manipulation in t5, makes exporting | # What does this PR do?
(torchscript + ONNX) easier because shape inference is not static.
- Still requires a test that would make sure there's no regression there.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 01-21-2021 16:30:32 | 01-21-2021 16:30:32 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,732 | closed | OSError: [Errno 116] Stale file handle | ## Environment info
- `transformers` version:
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: - only 1 GPU
### Who can help
Trainer: @sgugger
examples/seq2seq: @patil-suraj
## Information
Hi, I am training finetune_trainer.py on the wmt dataset. I am getting the following error sometimes, do you have an idea what might cause it? Thanks for any suggestion.
```
File "finetune_trainer.py", line 342, in <module>
main()
File "finetune_trainer.py", line 256, in main
if (os.path.isdir(training_args.output_dir) and not training_args.optimize_from_scratch) else None,
File "/home/jim/trainer.py", line 814, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/jim/trainer.py", line 885, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/jim/trainer.py", line 916, in _save_checkpoint
torch.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
File "/home/jim/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/serialization.py", line 374, in save
_legacy_save(obj, opened_file, pickle_module, pickle_protocol)
File "/home/jim/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/serialization.py", line 214, in __exit__
self.file_like.close()
OSError: [Errno 116] Stale file handle
```
Model I am using T5.
## To reproduce
this is not happening all the time, but it does happen
## Expected behavior
to run the codes
| 01-21-2021 15:31:01 | 01-21-2021 15:31:01 | This probably seems more related to your system rather than `Trainer` or `finetune_trainer.py` <|||||>thanks, but I just do not see why this happens, any idea is appreciated
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,731 | closed | Mismatch of the mask token id of BART between fairseq and huggingface | ## 🐛 Bug
The mask token id of BART is different between fairseq (torch.hub) and huggingface, and this discrepancy leads to different results in mask_filling. So I wonder which token id is actually correct.
(After checking the norm of the embedding at each mask token id, I feel that torch.hub might be correct. I have posted the same issue at fairseq github and been waiting for the reply.)
### To Reproduce
#### Code sample
```
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base", force_bos_token_to_be_generated=True)
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
assert tokenizer.mask_token_id == 50264
example_english_phrase = "<mask> cat is <mask>."
batch = tokenizer(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'],return_dict_in_generate = True, num_beams=10, num_return_sequences=1, output_scores = True)
print(" ".join(tokenizer.convert_ids_to_tokens(generated_ids[0][0])))
# </s> <s> This Ġcat Ġis Ġadorable . </s>
import torch
bart = torch.hub.load('pytorch/fairseq', 'bart.base')
bart.eval()
assert bart.task.source_dictionary.indices["<mask>"] == 51200
assert bart.task.source_dictionary.indices["madeupword0003"] == tokenizer.mask_token_id
# Somehow the huggingface model has a smaller vocab size, and 51200 is out of index
assert len(model.model.encoder.embed_tokens.weight) == 50265
assert len(bart.model.encoder.embed_tokens.weight) == 51201
# But the embedding at tokenizer.mask_token_id is the same between the two models
assert all(bart.model.encoder.embed_tokens.weight[tokenizer.mask_token_id] == model.model.encoder.embed_tokens.weight[tokenizer.mask_token_id])
def fill_mask(
model,
masked_inputs,
topk = 1,
match_source_len = False,
masked_token = '<mask>',
**generate_kwargs
):
batch_tokens = []
for masked_input in masked_inputs:
assert masked_token in masked_input, \
"please add one {} token for the input".format(masked_token)
text_spans = masked_input.split(masked_token)
text_spans_bpe = (' {0} '.format(masked_token)).join(
[model.bpe.encode(text_span.rstrip()) for text_span in text_spans]
).strip()
tokens = model.task.source_dictionary.encode_line(
'<s> ' + text_spans_bpe + ' </s>',
append_eos=False,
add_if_not_exist=False,
).long()
batch_tokens.append(tokens)
generate_kwargs['beam'] = max(
topk,
generate_kwargs.get('beam', -1),
)
generate_kwargs['match_source_len'] = match_source_len
batch_hypos = model.generate(batch_tokens, **generate_kwargs)
return batch_hypos
masked_inputs=[example_english_phrase]
generate_kwargs = {}
generate_kwargs['beam'] = 10
generate_kwargs['match_source_len'] = False
batch_hypos = fill_mask(bart,masked_inputs, **generate_kwargs)
print(" ".join(tokenizer.convert_ids_to_tokens(batch_hypos[0][0]["tokens"])))
# <s> The Ġcat Ġis Ġdead . </s>
#### replace <mask> with madeupword0003 ####
example_english_phrase = "madeupword0003 cat is madeupword0003."
masked_inputs=[example_english_phrase]
batch_hypos = fill_mask(bart,masked_inputs, masked_token = "madeupword0003", **generate_kwargs)
print(" ".join(tokenizer.convert_ids_to_tokens(batch_hypos[0][0]["tokens"])))
# <s> This Ġcat Ġis Ġadorable . </s>
```
### Environment
- PyTorch Version: 1.5.1+cu101
- OS (e.g., Linux): Linux
- Python version: 3.6.10
- transformers version: 4.2.1
- CUDA version: 10.1
| 01-21-2021 13:59:02 | 01-21-2021 13:59:02 | hi @twadada
> Somehow the huggingface model has a smaller vocab size, and 51200 is out of index
50,265 is the actual vocab size, the rest of the tokens are dummy tokens as you can see in this issue pytorch/fairseq#2242
So I don't think 51200 can be the `mask_token_id`<|||||>Hi @patil-suraj
Thank you for your quick reply!
> 50,265 is the actual vocab size, the rest of the tokens are dummy tokens as you can see in this issue pytorch/fairseq#2242
Yes, I am aware of that. But, the embedding of the mask token in huggingface-BART is exactly the same as that of the dummy token "madeupword0003" in torch.hub-BART, as confirmed in the following line
```
assert bart.task.source_dictionary.indices["madeupword0003"] == tokenizer.mask_token_id
assert all(bart.model.encoder.embed_tokens.weight[tokenizer.mask_token_id] == model.model.encoder.embed_tokens.weight[tokenizer.mask_token_id])
```
Here, "bart" and "model" are the torch.hub and huggingface models, resp.
And this embedding looks very similar to that of the other dummy tokens, and the \<mask> token embedding in torch.hub-BART looks more accurate.
```
for dummy in ["madeupword0003", "madeupword0030","madeupword0130", "madeupword0230", "<mask>"]:
tokenid = bart.task.source_dictionary.indices[dummy]
emb_mean = bart.model.encoder.embed_tokens.weight[tokenid].mean().data
emb_norm = bart.model.encoder.embed_tokens.weight[tokenid].norm().data
print(emb_mean, emb_norm)
# tensor(-0.0083) tensor(0.9653) madeupword0003 (= huggingface mask embedding)
# tensor(-0.0087) tensor(0.9633) madeupword0030
# tensor(-0.0085) tensor(0.9688) madeupword0130
# tensor(-0.0084) tensor(0.9645) madeupword0230
# tensor(-0.0010) tensor(1.6455) torch.hub <mask> embedding
# torch.hub <mask> embedding is similar to that of other frequent words in terms of the norm.
for word in [".", ",", "Ġthe"]:
tokenid = tokenizer.convert_tokens_to_ids(word)
emb_mean = bart.model.encoder.embed_tokens.weight[tokenid].mean().data
emb_norm = bart.model.encoder.embed_tokens.weight[tokenid].norm().data
print(emb_mean, emb_norm)
# tensor(0.0391) tensor(1.6430)
# tensor(0.0512) tensor(1.8719)
# tensor(0.0483) tensor(1.8041)
```
<|||||>Hi, any update on this? I've also tried loading BART from fairseq repository and confirmed that the model is identical to the one at torch.hub. Given that the original model is at fairseq, I assume there would be something wrong with the huggingface model.
I think that registering a wrong embedding for the mask token is rather a serious bug and it needs fixing asap.<|||||>This issue has been stale for 1 month.<|||||>Has it been fixed?<|||||>I checked the HF Bart checkpoint and I agree with @twadada that the mismatch is a potential bug. Seems the HF model directly uses the first 50264 embeddings from fairseq. It should be the first 50263 embeddings concatenated with the last embedding (the embedding of `<mask>`).
But I guess it's not a severe bug unless `<mask>` token exists in the input text. <|||||>@zhaochaocs Thanks for confirming that. I think it can be a critical bug when you use BART as a masked language model, such as when you use it as a mask-filling model (e.g. a cloze test
for probing), or when you fine-tune it on additional monolingual data using the MLM objective.
<|||||>@twadada Yeah, I entirely agree. I tried some prompt ideas in the last project and found BART performed not as well as other PTMs even after post-training. Maybe the mismatch of the mask token is one of the reasons.
Hope HF can fix that someday, or we can temporarily replace the embedding of `<mask>` with the fairseq parameter.<|||||>Yea, we've gotta replace it with the correct embedding (assuming the fairseq parameter would be the correct one). Honestly, it was a little disappointing that this issue was sort of ignored and closed. Not sure whether this closed issue will draw attention from HF anymore.<|||||>Seems this issue was automatically closed by the Github bot. Maybe @patil-suraj or @patrickvonplaten can help us re-check if the embedding of `<mask>` token in BART should be fixed.<|||||>Leaving it to @patil-suraj as he has already looked at the PR, but I'm not so sure whether we have the wrong <mask> token simple because this example works quite well:
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", force_bos_token_to_be_generated=True)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == ['UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria']
```<|||||>Taken from: https://huggingface.co/transformers/model_doc/bart.html#mask-filling<|||||>I think that's why this potential bug is overlooked.
The example is a good demo but not a reliable test case for this issue. If you replace the `<mask>` token of `example_english_phrase` with a random token (say `refin`), the generator will return the same sentence as using the `<mask>` token. On my machine it returns `UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria`
<|||||>@patil-suraj I think we actually indeed use the wrong token for <mask> as can be verified by running the following code:
```python
import torch
from transformers import BartModel, BartTokenizer
# fsq bart base
bart = torch.hub.load('pytorch/fairseq', 'bart.base')
mask_token_id = bart.task.source_dictionary.indices["<mask>"]
mask_token_weight_fairseq = bart.model.encoder.embed_tokens.weight[mask_token_id].detach()
## Hf bart-base
hf_tok = BartTokenizer.from_pretrained("facebook/bart-base")
mask_token_id_hf = hf_tok.mask_token_id
hf_model = BartModel.from_pretrained("facebook/bart-base")
mask_token_weight_hf = hf_model.encoder.embed_tokens.weight[mask_token_id_hf].detach()
(mask_token_weight_hf - mask_token_weight_fairseq).abs().max() # => gives value > 1.0
```
=> @zhaochaocs I'm afraid however that we can't do anything about it really...the original fairseq model weights have a length of 51201 whereas the HF model weights have only 50265 -> so we can't even access the "real" mask_token_id in HF. We could go into the official model weights and change the mask_token weights to the correct values, but this could lead to some ugly backward compatibility problems. And given how much bart is used in HF I'm not really in favor of doing this...The mask token is also only really relevant for pretraining and the mask-filling task IMO whereas for pretraining it's initialized from scratch anyways so this bug doesn't affect many use cases.
Keen to hear your opinion here @patil-suraj
Also gently pinging @sshleifer - do you remember why we have different embedding shapes in HF vs. Fairseq?
<|||||>@patrickvonplaten
> given how much bart is used in HF I'm not really in favor of doing this...
I don't think it is a good idea to leave the bug unfixed just because many people have already used it. Rather, I believe it should be the reason for the bug to be fixed asap.
> The mask token is also only really relevant for pretraining and the mask-filling task
I think it can be a critical bug because nowadays more people are using cloze tests
to probe the linguistic knowledge stored in LM models. Some people also use MLM models for lexical substitution. It can also be a problem when you fine-tune BART on additional monolingual data using the MLM objective (only the mask token embedding is trained from scratch, with a small learning rate used for fine-tuning). <|||||>Thanks a lot, @zhaochaocs and @twadada for bringing this to our attention :)
@patrickvonplaten I tend to agree with what @twadada said. Leaving this unfixed might cause issues when fine-tuning BART for MLM and this is a very sneaky bug so would be hard to detect.
IMO we could update the weights of the official models since BART is primarily used for downstream tasks mostly summarization and zero-shot classification which does not involve the mask token, so it wouldn't cause any issues for such models. Models which are already fine-tuned won't also be affected since we will only update the official pre-trained weights.
This should only probably break the mask-filling task, but will actually give better results as the current mask embeddings are incorrect. <|||||>Here is a temporary solution.
I replace HF's <mask> embedding with fairseq's <mask> embedding.
Here is the model
https://huggingface.co/liangtaiwan/bart-base-correct-mask-embedding
You can verify the new weight is corrected by the following script.
```python
import torch
from transformers import BartModel, BartTokenizer
# fsq bart-base
bart = torch.hub.load('pytorch/fairseq', 'bart.base')
mask_token_id = bart.task.source_dictionary.indices["<mask>"]
mask_token_weight_fairseq = bart.model.encoder.embed_tokens.weight[mask_token_id].detach()
# my bart-base
hf_tok = BartTokenizer.from_pretrained("liangtaiwan/bart-base-correct-mask-embedding")
mask_token_id_hf = hf_tok.mask_token_id
hf_model = BartModel.from_pretrained("liangtaiwan/bart-base-correct-mask-embedding")
mask_token_weight_hf = hf_model.encoder.embed_tokens.weight[mask_token_id_hf]
assert torch.equal(mask_token_weight_hf, mask_token_weight_fairseq)
# HF bart-base
hf_original_model = BartModel.from_pretrained("facebook/bart-base")
hf_original_model_state_dict = hf_original_model.state_dict()
hf_model_state_dict = hf_model.state_dict()
embeddings = ["shared.weight", "encoder.embed_tokens.weight", "decoder.embed_tokens.weight"]
# check weight
for k in hf_model_state_dict.keys():
if k in embeddings:
continue
assert torch.equal(hf_model_state_dict[k], hf_original_model_state_dict[k])
# check embedding
for k in embeddings:
assert torch.equal(hf_model_state_dict[k][:-1], hf_original_model_state_dict[k][:-1])
```
However, I did some prompt language model experiments. The results are almost identical. The result of HF's one is even better sometimes.
<|||||>> Thanks a lot, @zhaochaocs and @twadada for bringing this to our attention :)
>
> @patrickvonplaten I tend to agree with what @twadada said. Leaving this unfixed might cause issues when fine-tuning BART for MLM and this is a very sneaky bug so would be hard to detect.
>
> IMO we could update the weights of the official models since BART is primarily used for downstream tasks mostly summarization and zero-shot classification which does not involve the mask token, so it wouldn't cause any issues for such models. Models which are already fine-tuned won't also be affected since we will only update the official pre-trained weights.
>
> This should only probably break the mask-filling task, but will actually give better results as the current mask embeddings are incorrect.
Ok - I think I'm happy to go forward with this solution. Actually would be nice to get another opinion here. @sgugger, @LysandreJik - would it be ok for you to updated existing pre-trained checkpoints?<|||||>I'm okay with updating the weights in a new commit since this fixes issues, as people can still revert to the previous commit if they really need to.<|||||>@patil-suraj - let me know if you want to handle it or if I should do it (I have some time tomorrow or next week if you're very busy ;-)) <|||||>Ok for me too!<|||||>Great! I will update the weights today :) <|||||>I've updated the weights for all 3 checkpoints pt, tf, flax https://huggingface.co/facebook/bart-base/tree/main
this issue is only associated with `bart-base`, `bart-large` does not have this problem, so no need to change the weights there :)<|||||>read the thread. i am the one who pushed the bug. the fix sounds good!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this issue now. |
transformers | 9,730 | closed | Docs suggest to use discriminator weights for ElectraForMaskedLM instead of generator | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-5.3.0-64-generic-x86_64-with-Ubuntu-19.10-eoan
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): ELECTRA
The problem arises when using:
* [x] the official example scripts: https://huggingface.co/transformers/model_doc/electra.html#codecell3
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: I want to check if some word fits into the context of a sentence, for which I use the prediction probability of that word at the MASK position
## To reproduce
Steps to reproduce the behavior:
1. `from transformers import ElectraForMaskedLM`
2. `model = ElectraForMaskedLM.from_pretrained('google/electra-small-discriminator')`
## My issue
I get the following warning:
`Some weights of ElectraForMaskedLM were not initialized from the model checkpoint at google/electra-small-discriminator and are newly initialized: ['generator_predictions.LayerNorm.weight', 'generator_predictions.LayerNorm.bias', 'generator_predictions.dense.weight', 'generator_predictions.dense.bias', 'generator_lm_head.weight', 'generator_lm_head.bias']`
I understand that I'm loading the discriminator weights, whereas ElectraForMaskedLM needs the generator weights for the MLM output. Why do the docs tell me to use the discriminator? Am I missing something?
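For reference, loading the generator checkpoint is what I would have expected the docs to show; it should not trigger the warning above (a sketch, using the matching checkpoint from the same model family):
```
from transformers import ElectraForMaskedLM

# the generator checkpoint carries the MLM head, so no weights are newly initialized
model = ElectraForMaskedLM.from_pretrained('google/electra-small-generator')
```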
| 01-21-2021 13:47:30 | 01-21-2021 13:47:30 | Hello! Indeed, this is a mistake! Do you want to update the docs to show the generator instead?<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,729 | closed | Changing model default for TableQuestionAnsweringPipeline. | # What does this PR do?
Niels removed his tapas model from the Hub, so we need to update the default to `google` organization
- Discussion: https://discuss.huggingface.co/t/table-question-answering-is-not-an-available-task-under-pipeline/3284/6
- Had to update the slow test that was out-of-sync I think, @LysandreJik can you confirm ?
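A quick smoke test of the new default could look like this (a sketch; it requires pandas and the TAPAS dependencies such as torch-scatter, and the exact resolved checkpoint is whatever the `google` organization hosts):
```
from transformers import pipeline

tqa = pipeline("table-question-answering")  # no explicit model -> new default
table = {"Repository": ["transformers", "datasets"], "Stars": ["40000", "7000"]}
print(tqa(table=table, query="How many stars does transformers have?"))
```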
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik
@thomwolf
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
--> | 01-21-2021 13:08:11 | 01-21-2021 13:08:11 | |
transformers | 9,728 | closed | Fix some TF slow tests | # What does this PR do?
This PR fixes several slow tests related to saved model creation.
| 01-21-2021 12:09:22 | 01-21-2021 12:09:22 | |
transformers | 9,727 | closed | ERROR about using layer_past and use_cache in Attention Layer of GPT2 | Hi,
I am trying to use "use_cache" and "past_key_values" to speed up the decode steps.
But I have some questions about the Attention Layer, here are some codes in forward function:
[layer_past code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L230)
``` python
query = self.split_heads(query)
key = self.split_heads(key, k=True)
value = self.split_heads(value)
if layer_past is not None:
past_key, past_value = layer_past[0].transpose(-2, -1), layer_past[1] # transpose back cf below
key = torch.cat((past_key, key), dim=-1)
value = torch.cat((past_value, value), dim=-2)
```
If I pass the layer_past value, it raises a size-mismatch ERROR.
It shows that the shapes of "key" and "value" do not match the attention mask.
Maybe the "key" before torch.cat has the same shape as the attention mask, but after torch.cat with past_key, the shape of "key" changes.
Here is an example:
``` python
import torch
from transformers import GPT2Model, GPT2Config
config = GPT2Config()
config.use_cache = True
model = GPT2Model(config=config)
input_ids = torch.randint(0, 100, (2, 6))
attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]], dtype=torch.bool)
past_key_values = None
outputs = model(input_ids=input_ids, attention_mask=attention_mask, past_key_values=None)
logits = outputs[0]
past_key_values = outputs[1]
print(logits.size())
print(len(past_key_values))
print([_kv.size() for _kv in past_key_values])
# we get the past_key_values and add the next step decoder input.
added_input_ids = torch.randint(0, 100, (2, 1))
added_attention_mask = torch.tensor([[1], [1]], dtype=torch.bool)
input_ids = torch.cat([input_ids, added_input_ids], dim=1)
attention_mask = torch.cat([attention_mask, added_attention_mask], dim=1)
print(input_ids.size(), attention_mask.size())
outputs = model(input_ids=input_ids, attention_mask=attention_mask, past_key_values=past_key_values)
# here occur the ERROR
logits = outputs[0]
past_key_values = outputs[1]
print(logits.size())
print(len(past_key_values))
print([_kv.size() for _kv in past_key_values])
```
```
/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py in _attn(self, q, k, v, attention_mask, head_mask, output_attentions)
175 if attention_mask is not None:
176 # Apply the attention mask
--> 177 w = w + attention_mask
178
179 w = nn.Softmax(dim=-1)(w)
RuntimeError: The size of tensor a (13) must match the size of tensor b (7) at non-singleton dimension 3
``` | 01-21-2021 11:23:22 | 01-21-2021 11:23:22 | Hey @ouwenjie03,
you should not do this step:
```python
input_ids = torch.cat([input_ids, added_input_ids], dim=1)
```
It should just be
```python
input_ids = added_input_ids
```
When passing `past_key_values`, the input_ids should correspond **only** to the last tokens. I think if you take a look at this test: https://github.com/huggingface/transformers/blob/3f290e6c8403c6a2cf80dce068869793bde49540/tests/test_modeling_gpt2.py#L446 you'll understand a bit better.
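For clarity, a minimal sketch of the corrected loop (adapted from your snippet above, not an official example):
```python
import torch
from transformers import GPT2Model, GPT2Config

config = GPT2Config()
config.use_cache = True
model = GPT2Model(config=config)

# first pass: the full prompt, no cache yet
input_ids = torch.randint(0, 100, (2, 6))
attention_mask = torch.ones(2, 6, dtype=torch.bool)
outputs = model(input_ids=input_ids, attention_mask=attention_mask, past_key_values=None)
past_key_values = outputs[1]

# second pass: only the NEW token goes into input_ids,
# while the attention mask covers past + new tokens
added_input_ids = torch.randint(0, 100, (2, 1))
attention_mask = torch.cat([attention_mask, torch.ones(2, 1, dtype=torch.bool)], dim=1)
outputs = model(input_ids=added_input_ids, attention_mask=attention_mask, past_key_values=past_key_values)
```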
<|||||>Oh, got it! Thank you so much!
transformers | 9,726 | closed | fix T5 head mask in model_parallel | # What does this PR do?
`head_mask` in T5 is not parallelized correctly in model parallel mode; each layer's head mask should be put on that layer's device if it is not `None`.
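For context, a rough sketch of the scenario this addresses (my own reconstruction, not code from the PR; it assumes at least two visible GPUs):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.parallelize()  # spreads the blocks over the visible GPUs

inputs = tokenizer("translate English to German: Hello", return_tensors="pt").to("cuda:0")
head_mask = torch.ones(model.config.num_layers, model.config.num_heads).to("cuda:0")

# Without moving each layer's slice of head_mask to that layer's device,
# the layers living on other GPUs could hit a cross-device error.
out = model(**inputs, decoder_input_ids=inputs.input_ids, head_mask=head_mask)
```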
Fixes #9718 | 01-21-2021 10:51:37 | 01-21-2021 10:51:37 | That's a better solution actually! Thanks @patil-suraj <|||||>Also, there is a whole bunch of issues, including this one I believe, that are fixed in this PR: https://github.com/huggingface/transformers/pull/9323, where we no longer do it one by one.
transformers | 9,725 | closed | AutoModel doesn't work with DPRContextEncoder | ## Environment info
- `transformers` version: 4.2.2
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@lhoestq
@patrickvonplaten
## Information
If I run:
```python
model = AutoModel.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base')
```
AutoModel infers `DPRQuestionEncoder` rather than the correct class (i.e. `DPRContextEncoder`), so we can't use the correct model weights.
The output is:
```
Some weights of the model checkpoint at facebook/dpr-ctx_encoder-single-nq-base were not used when initializing DPRQuestionEncoder: ['ctx_encoder.bert_model.embeddings.word_embeddings.weight', 'ctx_encoder.bert_model.embeddings.position_embeddings.weight', 'ctx_encoder.bert_model.embeddings.token_type_embeddings.weight', 'ctx_encoder.bert_model.embeddings.LayerNorm.weight', 'ctx_encoder.bert_model.embeddings.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.0.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.0.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.0.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.0.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.0.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.0.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.0.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.0.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.1.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.1.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.1.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.1.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.1.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.1.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.1.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.1.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.2.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.2.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.2.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.2.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.2.output.dense.weight', 
'ctx_encoder.bert_model.encoder.layer.2.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.2.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.2.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.3.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.3.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.3.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.3.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.3.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.3.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.3.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.3.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.4.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.4.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.4.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.4.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.4.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.4.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.4.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.4.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.5.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.5.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.5.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.5.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.5.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.5.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.5.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.5.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.query.weight', 
'ctx_encoder.bert_model.encoder.layer.6.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.6.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.6.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.6.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.6.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.6.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.6.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.6.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.6.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.6.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.6.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.7.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.7.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.7.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.7.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.7.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.7.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.8.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.8.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.8.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.8.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.8.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.8.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.8.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.8.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.self.value.weight', 
'ctx_encoder.bert_model.encoder.layer.9.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.9.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.9.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.9.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.9.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.9.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.9.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.9.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.9.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.9.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.10.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.10.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.10.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.10.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.10.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.10.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.10.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.10.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.query.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.query.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.key.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.key.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.value.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.self.value.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.11.attention.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.11.attention.output.LayerNorm.bias', 'ctx_encoder.bert_model.encoder.layer.11.intermediate.dense.weight', 'ctx_encoder.bert_model.encoder.layer.11.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.11.output.dense.weight', 'ctx_encoder.bert_model.encoder.layer.11.output.dense.bias', 'ctx_encoder.bert_model.encoder.layer.11.output.LayerNorm.weight', 'ctx_encoder.bert_model.encoder.layer.11.output.LayerNorm.bias', 'ctx_encoder.bert_model.pooler.dense.weight', 'ctx_encoder.bert_model.pooler.dense.bias']
- This IS expected if you are initializing DPRQuestionEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DPRQuestionEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DPRQuestionEncoder were not initialized from the model checkpoint at facebook/dpr-ctx_encoder-single-nq-base and are newly initialized: ['bert_model.embeddings.word_embeddings.weight', 'bert_model.embeddings.position_embeddings.weight', 'bert_model.embeddings.token_type_embeddings.weight', 'bert_model.embeddings.LayerNorm.weight', 'bert_model.embeddings.LayerNorm.bias', 'bert_model.encoder.layer.0.attention.self.query.weight', 'bert_model.encoder.layer.0.attention.self.query.bias', 'bert_model.encoder.layer.0.attention.self.key.weight', 'bert_model.encoder.layer.0.attention.self.key.bias', 'bert_model.encoder.layer.0.attention.self.value.weight', 'bert_model.encoder.layer.0.attention.self.value.bias', 'bert_model.encoder.layer.0.attention.output.dense.weight', 'bert_model.encoder.layer.0.attention.output.dense.bias', 'bert_model.encoder.layer.0.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.0.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.0.intermediate.dense.weight', 'bert_model.encoder.layer.0.intermediate.dense.bias', 'bert_model.encoder.layer.0.output.dense.weight', 'bert_model.encoder.layer.0.output.dense.bias', 'bert_model.encoder.layer.0.output.LayerNorm.weight', 'bert_model.encoder.layer.0.output.LayerNorm.bias', 'bert_model.encoder.layer.1.attention.self.query.weight', 'bert_model.encoder.layer.1.attention.self.query.bias', 'bert_model.encoder.layer.1.attention.self.key.weight', 'bert_model.encoder.layer.1.attention.self.key.bias', 'bert_model.encoder.layer.1.attention.self.value.weight', 'bert_model.encoder.layer.1.attention.self.value.bias', 'bert_model.encoder.layer.1.attention.output.dense.weight', 'bert_model.encoder.layer.1.attention.output.dense.bias', 'bert_model.encoder.layer.1.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.1.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.1.intermediate.dense.weight', 'bert_model.encoder.layer.1.intermediate.dense.bias', 'bert_model.encoder.layer.1.output.dense.weight', 'bert_model.encoder.layer.1.output.dense.bias', 'bert_model.encoder.layer.1.output.LayerNorm.weight', 'bert_model.encoder.layer.1.output.LayerNorm.bias', 'bert_model.encoder.layer.2.attention.self.query.weight', 'bert_model.encoder.layer.2.attention.self.query.bias', 'bert_model.encoder.layer.2.attention.self.key.weight', 'bert_model.encoder.layer.2.attention.self.key.bias', 'bert_model.encoder.layer.2.attention.self.value.weight', 'bert_model.encoder.layer.2.attention.self.value.bias', 'bert_model.encoder.layer.2.attention.output.dense.weight', 'bert_model.encoder.layer.2.attention.output.dense.bias', 'bert_model.encoder.layer.2.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.2.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.2.intermediate.dense.weight', 'bert_model.encoder.layer.2.intermediate.dense.bias', 'bert_model.encoder.layer.2.output.dense.weight', 'bert_model.encoder.layer.2.output.dense.bias', 'bert_model.encoder.layer.2.output.LayerNorm.weight', 'bert_model.encoder.layer.2.output.LayerNorm.bias', 'bert_model.encoder.layer.3.attention.self.query.weight', 'bert_model.encoder.layer.3.attention.self.query.bias', 'bert_model.encoder.layer.3.attention.self.key.weight', 'bert_model.encoder.layer.3.attention.self.key.bias', 'bert_model.encoder.layer.3.attention.self.value.weight', 'bert_model.encoder.layer.3.attention.self.value.bias', 'bert_model.encoder.layer.3.attention.output.dense.weight', 'bert_model.encoder.layer.3.attention.output.dense.bias', 
'bert_model.encoder.layer.3.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.3.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.3.intermediate.dense.weight', 'bert_model.encoder.layer.3.intermediate.dense.bias', 'bert_model.encoder.layer.3.output.dense.weight', 'bert_model.encoder.layer.3.output.dense.bias', 'bert_model.encoder.layer.3.output.LayerNorm.weight', 'bert_model.encoder.layer.3.output.LayerNorm.bias', 'bert_model.encoder.layer.4.attention.self.query.weight', 'bert_model.encoder.layer.4.attention.self.query.bias', 'bert_model.encoder.layer.4.attention.self.key.weight', 'bert_model.encoder.layer.4.attention.self.key.bias', 'bert_model.encoder.layer.4.attention.self.value.weight', 'bert_model.encoder.layer.4.attention.self.value.bias', 'bert_model.encoder.layer.4.attention.output.dense.weight', 'bert_model.encoder.layer.4.attention.output.dense.bias', 'bert_model.encoder.layer.4.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.4.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.4.intermediate.dense.weight', 'bert_model.encoder.layer.4.intermediate.dense.bias', 'bert_model.encoder.layer.4.output.dense.weight', 'bert_model.encoder.layer.4.output.dense.bias', 'bert_model.encoder.layer.4.output.LayerNorm.weight', 'bert_model.encoder.layer.4.output.LayerNorm.bias', 'bert_model.encoder.layer.5.attention.self.query.weight', 'bert_model.encoder.layer.5.attention.self.query.bias', 'bert_model.encoder.layer.5.attention.self.key.weight', 'bert_model.encoder.layer.5.attention.self.key.bias', 'bert_model.encoder.layer.5.attention.self.value.weight', 'bert_model.encoder.layer.5.attention.self.value.bias', 'bert_model.encoder.layer.5.attention.output.dense.weight', 'bert_model.encoder.layer.5.attention.output.dense.bias', 'bert_model.encoder.layer.5.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.5.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.5.intermediate.dense.weight', 'bert_model.encoder.layer.5.intermediate.dense.bias', 'bert_model.encoder.layer.5.output.dense.weight', 'bert_model.encoder.layer.5.output.dense.bias', 'bert_model.encoder.layer.5.output.LayerNorm.weight', 'bert_model.encoder.layer.5.output.LayerNorm.bias', 'bert_model.encoder.layer.6.attention.self.query.weight', 'bert_model.encoder.layer.6.attention.self.query.bias', 'bert_model.encoder.layer.6.attention.self.key.weight', 'bert_model.encoder.layer.6.attention.self.key.bias', 'bert_model.encoder.layer.6.attention.self.value.weight', 'bert_model.encoder.layer.6.attention.self.value.bias', 'bert_model.encoder.layer.6.attention.output.dense.weight', 'bert_model.encoder.layer.6.attention.output.dense.bias', 'bert_model.encoder.layer.6.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.6.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.6.intermediate.dense.weight', 'bert_model.encoder.layer.6.intermediate.dense.bias', 'bert_model.encoder.layer.6.output.dense.weight', 'bert_model.encoder.layer.6.output.dense.bias', 'bert_model.encoder.layer.6.output.LayerNorm.weight', 'bert_model.encoder.layer.6.output.LayerNorm.bias', 'bert_model.encoder.layer.7.attention.self.query.weight', 'bert_model.encoder.layer.7.attention.self.query.bias', 'bert_model.encoder.layer.7.attention.self.key.weight', 'bert_model.encoder.layer.7.attention.self.key.bias', 'bert_model.encoder.layer.7.attention.self.value.weight', 'bert_model.encoder.layer.7.attention.self.value.bias', 'bert_model.encoder.layer.7.attention.output.dense.weight', 
'bert_model.encoder.layer.7.attention.output.dense.bias', 'bert_model.encoder.layer.7.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.7.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.7.intermediate.dense.weight', 'bert_model.encoder.layer.7.intermediate.dense.bias', 'bert_model.encoder.layer.7.output.dense.weight', 'bert_model.encoder.layer.7.output.dense.bias', 'bert_model.encoder.layer.7.output.LayerNorm.weight', 'bert_model.encoder.layer.7.output.LayerNorm.bias', 'bert_model.encoder.layer.8.attention.self.query.weight', 'bert_model.encoder.layer.8.attention.self.query.bias', 'bert_model.encoder.layer.8.attention.self.key.weight', 'bert_model.encoder.layer.8.attention.self.key.bias', 'bert_model.encoder.layer.8.attention.self.value.weight', 'bert_model.encoder.layer.8.attention.self.value.bias', 'bert_model.encoder.layer.8.attention.output.dense.weight', 'bert_model.encoder.layer.8.attention.output.dense.bias', 'bert_model.encoder.layer.8.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.8.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.8.intermediate.dense.weight', 'bert_model.encoder.layer.8.intermediate.dense.bias', 'bert_model.encoder.layer.8.output.dense.weight', 'bert_model.encoder.layer.8.output.dense.bias', 'bert_model.encoder.layer.8.output.LayerNorm.weight', 'bert_model.encoder.layer.8.output.LayerNorm.bias', 'bert_model.encoder.layer.9.attention.self.query.weight', 'bert_model.encoder.layer.9.attention.self.query.bias', 'bert_model.encoder.layer.9.attention.self.key.weight', 'bert_model.encoder.layer.9.attention.self.key.bias', 'bert_model.encoder.layer.9.attention.self.value.weight', 'bert_model.encoder.layer.9.attention.self.value.bias', 'bert_model.encoder.layer.9.attention.output.dense.weight', 'bert_model.encoder.layer.9.attention.output.dense.bias', 'bert_model.encoder.layer.9.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.9.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.9.intermediate.dense.weight', 'bert_model.encoder.layer.9.intermediate.dense.bias', 'bert_model.encoder.layer.9.output.dense.weight', 'bert_model.encoder.layer.9.output.dense.bias', 'bert_model.encoder.layer.9.output.LayerNorm.weight', 'bert_model.encoder.layer.9.output.LayerNorm.bias', 'bert_model.encoder.layer.10.attention.self.query.weight', 'bert_model.encoder.layer.10.attention.self.query.bias', 'bert_model.encoder.layer.10.attention.self.key.weight', 'bert_model.encoder.layer.10.attention.self.key.bias', 'bert_model.encoder.layer.10.attention.self.value.weight', 'bert_model.encoder.layer.10.attention.self.value.bias', 'bert_model.encoder.layer.10.attention.output.dense.weight', 'bert_model.encoder.layer.10.attention.output.dense.bias', 'bert_model.encoder.layer.10.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.10.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.10.intermediate.dense.weight', 'bert_model.encoder.layer.10.intermediate.dense.bias', 'bert_model.encoder.layer.10.output.dense.weight', 'bert_model.encoder.layer.10.output.dense.bias', 'bert_model.encoder.layer.10.output.LayerNorm.weight', 'bert_model.encoder.layer.10.output.LayerNorm.bias', 'bert_model.encoder.layer.11.attention.self.query.weight', 'bert_model.encoder.layer.11.attention.self.query.bias', 'bert_model.encoder.layer.11.attention.self.key.weight', 'bert_model.encoder.layer.11.attention.self.key.bias', 'bert_model.encoder.layer.11.attention.self.value.weight', 'bert_model.encoder.layer.11.attention.self.value.bias', 
'bert_model.encoder.layer.11.attention.output.dense.weight', 'bert_model.encoder.layer.11.attention.output.dense.bias', 'bert_model.encoder.layer.11.attention.output.LayerNorm.weight', 'bert_model.encoder.layer.11.attention.output.LayerNorm.bias', 'bert_model.encoder.layer.11.intermediate.dense.weight', 'bert_model.encoder.layer.11.intermediate.dense.bias', 'bert_model.encoder.layer.11.output.dense.weight', 'bert_model.encoder.layer.11.output.dense.bias', 'bert_model.encoder.layer.11.output.LayerNorm.weight', 'bert_model.encoder.layer.11.output.LayerNorm.bias', 'bert_model.pooler.dense.weight', 'bert_model.pooler.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
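In the meantime, a sketch of a workaround I'd assume works, resolving the concrete class from the checkpoint's config (assuming `config.architectures` lists `DPRContextEncoder` for this checkpoint):
```python
import transformers
from transformers import AutoConfig

name = "facebook/dpr-ctx_encoder-single-nq-base"
config = AutoConfig.from_pretrained(name)
# e.g. config.architectures == ["DPRContextEncoder"]
model_class = getattr(transformers, config.architectures[0])
model = model_class.from_pretrained(name)
```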
I think it could be useful to generalise this behaviour and automatically detect whether the model is a context/document encoder or a question encoder. | 01-21-2021 10:42:24 | 01-21-2021 10:42:24 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>I ran into this recently and posted a question about it [on the forum](https://discuss.huggingface.co/t/dpr-pretrained-context-encoder-unused-weight-warning/11265), awaiting response.
I don't have a solution but, as a workaround, if you use `DPRContextEncoder` instead of `AutoModel`,
```
from transformers import DPRContextEncoder
context_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
```
you don't get the runtime warning. |
transformers | 9,724 | closed | Run_ner.py falsely aligns prediction list | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-5.9.1-kd-cluster-x86_64-with-glibc2.10
- Python version: 3.8.0
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@stefan-it
## Information
I am using the bert-base-german-cased model, which I trained on an NER task to predict entity labels.
When I run prediction on the test dataset, BERT tells me that the maximum sequence length (300) is exhausted and so no predictions will be made on a number of items.
However, this seems to be a wrong error report. The problem appears to be that in the align_predictions function, the preds_list variable ends up misaligned: the predictions are made correctly, but the index is shifted, so they point to the wrong words.
An example of what I mean:
```
Amtsgericht Ort
Leipzig Ort
Abteilung O
fรผr O
Strafsachen O
```
becomes
```
Amtsgericht O
Leipzig O
Abteilung O
fรผr Ort
Strafsachen Ort
```
in the preds_list.
The write_predictions_to_file function then gets tangled up by this shifted index and (I think falsely) declares a sequence length error.
Strangely, another test document works just fine, without any difference between them that I could find. No sequence length error and no false indices there.
The problem arises when using:
* [] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I'm afraid it isn't so easy to reproduce because it requires the trained model and the data.
The align_predictions function:
```
def align_predictions(predictions: np.ndarray, label_ids: np.ndarray) -> Tuple[List[int], List[int]]:
preds = np.argmax(predictions, axis=2)
batch_size, seq_len = preds.shape
out_label_list = [[] for _ in range(batch_size)]
preds_list = [[] for _ in range(batch_size)]
for i in range(batch_size):
for j in range(seq_len):
if label_ids[i, j] != nn.CrossEntropyLoss().ignore_index:
out_label_list[i].append(label_map[label_ids[i][j]])
preds_list[i].append(label_map[preds[i][j]])
return preds_list, out_label_list
```
Console output:
```
01/21/2021 11:07:01 - INFO - filelock - Lock 140422091422400 released on /home/IAIS/tschmude/bert_remote/examples/token-classification/Data_processing_scripts/CrossVal_Files/Rotation/Test_file_swap/cached_test_BertTokenizer_340.lock
/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:64: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
0%| | 0/1 [00:00<?, ?it/s]/home/IAIS/tschmude/anaconda3/envs/bert_env_remote/lib/python3.8/site-packages/seqeval/metrics/v1.py:57: UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
01/21/2021 11:07:30 - INFO - run_ner -
precision recall f1-score support
Datum 1.00 1.00 1.00 1
Gestaendnis_ja 1.00 1.00 1.00 1
Ort 0.50 1.00 0.67 1
Schadensbetrag 1.00 1.00 1.00 1
Strafe_Gesamtfreiheitsstrafe_Dauer 1.00 1.00 1.00 1
Strafe_Tatbestand 0.00 0.00 0.00 0
Taeter_Drogenbezug_ja 1.00 1.00 1.00 1
micro avg 0.55 1.00 0.71 6
macro avg 0.79 0.86 0.81 6
weighted avg 0.92 1.00 0.94 6
/tmp/pycharm_project_44/src/transformers/trainer.py:1174: FutureWarning: This method is deprecated, use `Trainer.is_world_process_zero()` instead.
warnings.warn("This method is deprecated, use `Trainer.is_world_process_zero()` instead.", FutureWarning)
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'Angeklagte'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'trรคgt'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'die'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'Kosten'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'des'.
01/21/2021 11:08:52 - WARNING - tasks - Maximum sequence length exceeded: No prediction for 'Verfahrens'.
(and a lot more of these)
```
Config that the model was trained on:
```
Training Arguments:
Data directory: ...ng_scripts/CrossVal_Files/Rotation/Train_file_swap
Model: ...bert-base-german-cased
Epochs: 8
Seq length: 300
Learning rate: 5e-05
Batch size: 16
Seed: 105
Do Train: True
Do Eval: True
Do Test: True
```
## Expected behavior
That the predictions for the documents are correctly listed and written to file without a sequence length error or shifted indices.
Thank you for your help! | 01-21-2021 10:10:34 | 01-21-2021 10:10:34 | You can check https://github.com/uf-hobi-informatics-lab/ClinicalTransformerNER
We essentially wrap the models in transformers with our NER implementation that can handle sentences longer than the max_len.<|||||>Hi @Stimmot ,
did you run the `scripts/preprocess.py` to make sure that there are no sentences > 300 subtokens in your final data splits :thinking:
This should largely prevent this kind of "maximum sequence length exceeded" error :)<|||||>Thank you for the response @stefan-it, but I don't think it actually has to do with the sequence lengths. The longest sentences in the documents are around 200 tokens, not even near the maximal length.
Besides, the script crashes at some point with the following error:
```
Traceback (most recent call last):
File "/tmp/pycharm_project_44/examples/token-classification/run_ner_crossval.py", line 226, in <module>
main(sys.argv[1])
File "/tmp/pycharm_project_44/examples/token-classification/run_ner_crossval.py", line 153, in main
result_string, pred_dict = run_ner.main(json_config)
File "/tmp/pycharm_project_44/examples/token-classification/run_ner.py", line 359, in main
token_classification_task.write_predictions_to_file(writer, f, preds_list)
File "/tmp/pycharm_project_44/examples/token-classification/tasks.py", line 53, in write_predictions_to_file
elif preds_list[example_id]:
IndexError: list index out of range
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 1.70it/s]
```
The script doesn't work through the predictions list as it normally would, since the indices are shifted. Any other ideas what this could be?
(Thank you @bugface as well but as said above I don't think it's because of the sequence length)<|||||>@stefan-it I now also used the preprocess.py script just to be sure, but unfortunately it didn't change anything.<|||||>Hi @Stimmot ,
I think I found an interesting piece of information in your provided log: `cached_test_BertTokenizer_340.lock`
That means that the dataset was initially pre-processed with a sequence length of 340! Then I think you changed the max. sequence length to 300, but your NER script is still using the cached pre-processed test dataset that has a max. sequence length of 340.
Could you try to remove all `cached*` files, so that the dataset features are newly written :thinking: Hope this helps :)<|||||>Thanks @stefan-it, it had indeed to do with the cached models.
I built a script that takes test documents one by one and runs the run_ner.py script on them; however, it seems that a cached version of the pre-processed features is saved for the first document (which is predicted correctly) and then reused for all subsequent ones. The cached dimensions of the first document don't work on the new ones, so, naturally, it runs into errors.
The solution is to let the cached features be rebuilt after each run, without reusing cached versions.
Another quick question on that: is there an option to tell the run_ner.py script not to build cached models, so that it will use new ones each time it predicts?<|||||>Yeah, this can be done via `--overwrite_cache` argument :hugs: <|||||>Thank you very much, worked perfectly!<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,723 | closed | [LED] Reduce Slow Test required GPU RAM from 16GB to 8GB | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR prevents the slow test:
`tests/test_modeling_led.py::LEDModelIntegrationTests::test_seq_to_seq_generation` from failing due by reducing the required GPU RAM to 8GB
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-21-2021 09:36:31 | 01-21-2021 09:36:31 | |
transformers | 9,722 | closed | convert_graph_to_onnx.convert broken for translation model facebook/wmt19-en-de | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@mfuntowicz (based on initial commit of convert_graph_to_onnx)
@stas00 (based on model used here)
@thomwolf (based on history)
## Information
Model I am using (Bert, XLNet ...): facebook/wmt19-en-de
The problem arises when using:
* [X] the official example scripts: transformers.convert_graph_to_onnx.convert
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: converting the translation model to onnx
## To reproduce
Steps to reproduce the behavior:
```
import torch
import transformers
from transformers import convert_graph_to_onnx
from pathlib import Path
nlp = transformers.pipeline("translation_en_to_de", model="facebook/wmt19-en-de", tokenizer="facebook/wmt19-en-de")
convert_graph_to_onnx.convert(
framework="pt",
model="facebook/wmt19-en-de",
output=Path("encoder/en_de_trans.onnx"),
opset=12,
tokenizer="facebook/wmt19-en-de",
use_external_format= False,
pipeline_name= "translation_en_to_de",
)
```
Raises:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-d46bec961b86> in <module>
5
6 nlp = transformers.pipeline("translation_en_to_de", model="facebook/wmt19-en-de", tokenizer="facebook/wmt19-en-de")
----> 7 convert_graph_to_onnx.convert(
8 framework="pt",
9 model="facebook/wmt19-en-de",
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name)
365 # Export the graph
366 if framework == "pt":
--> 367 convert_pytorch(nlp, opset, output, use_external_format)
368 else:
369 convert_tensorflow(nlp, opset, output)
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert_pytorch(nlp, opset, output, use_external_format)
274
275 with torch.no_grad():
--> 276 input_names, output_names, dynamic_axes, tokens = infer_shapes(nlp, "pt")
277 ordered_input_names, model_args = ensure_valid_input(nlp.model, tokens, input_names)
278
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in infer_shapes(nlp, framework)
196 tokens = nlp.tokenizer("This is a sample output", return_tensors=framework)
197 seq_len = tokens.input_ids.shape[-1]
--> 198 outputs = nlp.model(**tokens) if framework == "pt" else nlp.model(tokens)
199 if isinstance(outputs, ModelOutput):
200 outputs = outputs.to_tuple()
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
```
The call that raises can be boiled down to the shape inference done for [torch.onnx.export](https://github.com/huggingface/transformers/blob/6a346f0358a40f89ec384d441233bf54cac44f6a/src/transformers/convert_graph_to_onnx.py#L196)
I think this may be due to an incompatibility between tokenizer() and tokenizer.encode() for this particular model.
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/wmt19-en-de")
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("facebook/wmt19-en-de")
string = "Hello. How are you?"
# model.generate(tokenizer(string, return_tensors="pt")) # Fails
model.generate(tokenizer.encode(string, return_tensors="pt")) # Succeeds
```
## Expected behavior
Model export should work properly.
| 01-21-2021 09:25:00 | 01-21-2021 09:25:00 | Thank you for this excellent report, @oborchers - I'll investigate and report back.<|||||>Fixed in https://github.com/huggingface/transformers/pull/9736
But found another problem: https://github.com/huggingface/transformers/issues/9737. Fixed in https://github.com/huggingface/transformers/pull/9738
So you will need both PRs for your task to work in case you want to try before they are merged.
<|||||>Awesome! Thank you, @stas00! Looking forward to try it out after PRs have been merged. Much appreciated <|||||>The problem you reported has been fixed in https://github.com/huggingface/transformers/pull/9736 (merged already)
But then another one poped up in https://github.com/huggingface/transformers/issues/9737
You can just use the https://github.com/huggingface/transformers/pull/9738 branch - since it contains both fixes.
Not sure how quickly it will get merged, since we might want to solve this for other models too. I made only a local for fsmt fix in that PR.<|||||>Great, thank you for the fast response and issue handling. I will provide a followup on #9738. While export works as intended, there is an issue I encounter while running the following code (built on 1st example):
```
sess = rt.InferenceSession(str(Path("encoder/en_de_trans.onnx")), opt)
spans = [
"My name is Bert", # Succeeds
"My name is Bert and" # Fails
]
for span in spans:
model_input = nlp.tokenizer.encode_plus(span)
model_input = {name : np.atleast_2d(value) for name, value in model_input.items()}
out = nlp.model(**nlp.tokenizer(span, return_tensors="pt"))
trans_1 = out[0].detach().cpu().numpy()
trans_2 = out[1].detach().cpu().numpy()
onnx_1, onnx_2 = sess.run(None, model_input)
assert np.allclose(trans_1, onnx_1, atol=1e-5)
assert np.allclose(trans_2, onnx_2, atol=1e-5)
```
"My name is Bert and" will raise:
```
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
<ipython-input-3-3ef2da9bdd5e> in <module>
10 trans_1 = out[0].detach().cpu().numpy()
11 trans_2 = out[1].detach().cpu().numpy()
---> 12 onnx_1, onnx_2 = sess.run(None, model_input)
13 assert np.allclose(trans_1, onnx_1, atol=1e-5)
14 assert np.allclose(trans_2, onnx_2, atol=1e-5)
~/anaconda3/envs/dev/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
122 output_names = [output.name for output in self._outputs_meta]
123 try:
--> 124 return self._sess.run(output_names, input_feed, run_options)
125 except C.EPFail as err:
126 if self._enable_fallback:
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_74' Status Message: /data/shared/packages/onnxruntime/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,6}, requested shape:{5}
```
Solely based on intuition I'd assume that some dynamic shape of was not inferred properly/not passed to the dynamic_shapes of torch.onnx.export. But thats just a quick guess. Or did I miss something?
I see that I would have to look-into/re-implement the generate function, as only the tensors are passed back. I'm going to create a feature suggestion to support the [ORT Custom Ops](https://github.com/microsoft/ort-customops). Perhaps It would be possible to retrieve the actual translated string in the far future, instead of the tensors (or specify the output).
As promised follow up feature request + suggestion under #9784 <|||||>Honestly, I don't know much about the ONNX-side of things. I asked @mfuntowicz to hopefully have a look and address this.
Also tagging @LysandreJik and @patrickvonplaten who perhaps may have some answers as well.
I wonder if this is an issue project-wise, e.g. do you have the same issue if you do this on a Bart model? I'm asking since fsmt is Bart with some tweaks.
Also I think it's best to open a new issue, since now we are dealing with a different issue, so it'd be easier to track and monitor.<|||||>Thank you for your help, @stas00! I followed your advice and created a new issue.<|||||>@oborchers It seems that it is a problem of the pythorch export of the dynamic_axes.
Using the nightly version (torch-1.9.0.dev20210212 + cpu) it works.
On the other hand, I am interested in using the onnx models to generate, (translate and summarize).
Could you give me some indication of how to do a custom forward using the onnx model, to use in the generation_utils.generate function.
PS: for what you comment here [9784](https://github.com/huggingface/transformers/issues/9784) you plan to work on a User-specific re-implementation.
Thanks |
transformers | 9,721 | closed | [T5] Fix T5 model parallel tests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Those tests were failing previously:
```
FAILED tests/test_modeling_t5.py::T5ModelTest::test_model_parallel_beam_search
FAILED tests/test_modeling_t5.py::T5ModelTest::test_model_parallel_equal_results
FAILED tests/test_modeling_t5.py::T5EncoderOnlyModelTest::test_model_parallel_equal_results
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-21-2021 09:23:55 | 01-21-2021 09:23:55 | |
transformers | 9,720 | closed | Temporarily deactivate TPU tests while we work on fixing them | Temporarily deactivates TPU tests while we work on fixing them. | 01-21-2021 09:14:32 | 01-21-2021 09:14:32 | |
transformers | 9,719 | closed | [PretrainedModel] add tie_weights to init | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When someone wants to pretrain a model from scratch with `config.tie_word_embeddings=True`, one would expect that even
when doing:
```python
model = BertModel(BertConfig())
```
that the word embedding weights are tied. However, this is not the case at the moment; this PR fixes it.
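To illustrate, here is a quick way to check whether the embeddings come out tied after a plain `__init__` (this snippet is only for illustration and not part of the PR; it uses `BertForMaskedLM` because the base `BertModel` has no output embeddings):
```python
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(tie_word_embeddings=True)
model = BertForMaskedLM(config)  # plain init, no from_pretrained

# If tying happened at init, input and output embeddings share the same Parameter object
print(model.get_input_embeddings().weight is model.get_output_embeddings().weight)
```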
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-21-2021 08:49:47 | 01-21-2021 08:49:47 | Actually the init takes care of this so no need for this PR |
transformers | 9,718 | closed | T5 Model Parallelism in 4.3.0 | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-5.4.0-62-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.0.dev20210120 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes -- 4x A100-SXM4-40GB
- Using distributed or parallel set-up in script?: Yes
### Who can help
@stas00 @alexorona @sgugger
## Related
Related to the discussion in #8771 ( https://github.com/huggingface/transformers/issues/8771#issuecomment-764069755 ), which suggests MP can be done in 4.3.0 just by calling model.parallelize() after loading. I made a separate issue rather than hijack that one, which is about MP improvements in general.
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] my own modified scripts: (give details below)
Added one line to finetune_trainer.py after the model is loaded ( model.parallelize(), see below)
```
+++ b/examples/seq2seq/finetune_trainer.py
@@ -215,6 +215,9 @@ def main():
# use task specific params
use_task_specific_params(model, data_args.task)
+ # PJ: Parallelize model
+ model.parallelize()
+
# set num_beams for evaluation
if data_args.eval_beams is None:
data_args.eval_beams = model.config.num_beams
```
The task I am working on is:
* [ ] an official GLUE/SQUaD task: Running the example on an official task/dataset (seq2seq)
## To reproduce
Steps to reproduce the behavior:
On 4.3.0-dev (tonight):
1. Fresh pull of transformers. Add change above ( model.parallelize() ).
2. Run the runscript (below). The error appears to reproduce for any model size (e.g. I'm using t5-11b, but it also happens with t5-large).
```
python finetune_trainer.py \
--learning_rate=3e-5 \
--do_train --do_eval --do_predict \
--evaluation_strategy steps \
--predict_with_generate \
--n_val 1000 \
--data_dir xsum \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path t5-large \
--fp16 \
"$@"
```
3. The error:
```
...
[INFO|modeling_utils.py:1152] 2021-01-21 00:52:03,923 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-large.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
01/21/2021 00:52:03 - INFO - utils - setting model.config to task specific params for summarization:
{'early_stopping': True, 'length_penalty': 2.0, 'max_length': 200, 'min_length': 30, 'no_repeat_ngram_size': 3, 'num_beams': 4, 'prefix': 'summarize: '}
01/21/2021 00:52:03 - INFO - utils - note: command line args may override some of these
[INFO|trainer.py:362] 2021-01-21 00:52:14,376 >> Using amp fp16 backend
01/21/2021 00:52:14 - INFO - __main__ - *** Train ***
[INFO|trainer.py:813] 2021-01-21 00:52:14,383 >> ***** Running training *****
[INFO|trainer.py:814] 2021-01-21 00:52:14,383 >> Num examples = 204016
[INFO|trainer.py:815] 2021-01-21 00:52:14,383 >> Num Epochs = 1
[INFO|trainer.py:816] 2021-01-21 00:52:14,383 >> Instantaneous batch size per device = 8
[INFO|trainer.py:817] 2021-01-21 00:52:14,383 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:818] 2021-01-21 00:52:14,383 >> Gradient Accumulation steps = 1
[INFO|trainer.py:819] 2021-01-21 00:52:14,383 >> Total optimization steps = 25502
0%| | 0/25502 [00:00<?, ?it/s]Traceback (most recent call last):
File "finetune_trainer.py", line 370, in <module>
main()
File "finetune_trainer.py", line 301, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/trainer.py", line 910, in train
tr_loss += self.training_step(model, inputs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/trainer.py", line 1272, in training_step
loss = self.compute_loss(model, inputs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/trainer.py", line 1300, in compute_loss
outputs = model(**inputs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/torch/nn/modules/module.py", line 873, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1500, in forward
return_dict=return_dict,
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/torch/nn/modules/module.py", line 873, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pajansen/anaconda3/envs/transformers-4.1.1a/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 938, in forward
head_mask = head_mask.to(hidden_states.device)
AttributeError: 'list' object has no attribute 'to'
0%| | 0/25502 [00:00<?, ?it/s]
```
4. It's worth noting that the behavior on 4.1.1 is different and it works (essentially the same change, but with the device map specified as per https://huggingface.co/transformers/model_doc/t5.html?highlight=parallel#transformers.T5EncoderModel.parallelize , and the runscript also has the --model_parallel flag).
- `transformers` version: 4.1.1
- Platform: Linux-5.4.0-62-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.0.dev20210120 (True)
Change:
```
+++ b/examples/seq2seq/finetune_trainer.py
@@ -231,6 +231,13 @@ def main():
# use task specific params
use_task_specific_params(model, data_args.task)
+ # PJ: Parallelize
+ device_map = {0: [0, 1, 2],
+ 1: [3, 4, 5, 6, 7, 8, 9],
+ 2: [10, 11, 12, 13, 14, 15, 16],
+ 3: [17, 18, 19, 20, 21, 22, 23]}
+ model.parallelize(device_map)
+
# set num_beams for evaluation
if data_args.eval_beams is None:
data_args.eval_beams = model.config.num_beams
```
Runscript:
```
python finetune_trainer.py \
--learning_rate=3e-5 \
--fp16 \
--do_train \
--data_dir xsum \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path t5-large \
--max_source_length 96 \
--max_target_length 96 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--model_parallel \
"$@"
```
Output (works fine):
```
01/21/2021 01:25:01 - INFO - __main__ - *** Train ***
[INFO|trainer.py:703] 2021-01-21 01:25:01,016 >> ***** Running training *****
[INFO|trainer.py:704] 2021-01-21 01:25:01,016 >> Num examples = 999
[INFO|trainer.py:705] 2021-01-21 01:25:01,016 >> Num Epochs = 1
[INFO|trainer.py:706] 2021-01-21 01:25:01,016 >> Instantaneous batch size per device = 1
[INFO|trainer.py:707] 2021-01-21 01:25:01,016 >> Total train batch size (w. parallel, distributed & accumulation) = 1
[INFO|trainer.py:708] 2021-01-21 01:25:01,016 >> Gradient Accumulation steps = 1
[INFO|trainer.py:709] 2021-01-21 01:25:01,017 >> Total optimization steps = 999
0%| | 0/999 [00:00<?, ?it/s]/home/pajansen/anaconda3/envs/transformers-4.1.1/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
{'loss': nan, 'learning_rate': 1.4984984984984986e-05, 'epoch': 0.5005005005005005}
 50%|█████████████████████████                         | 500/999 [02:25<02:20, 3.54it/s][INFO|trainer.py:1226] 2021-01-21 01:27:26,134 >> Saving model checkpoint to xsum-mini_results/checkpoint-500
[INFO|configuration_utils.py:289] 2021-01-21 01:27:26,138 >> Configuration saved in xsum-mini_results/checkpoint-500/config.json
[INFO|modeling_utils.py:814] 2021-01-21 01:27:29,444 >> Model weights saved in xsum-mini_results/checkpoint-500/pytorch_model.bin
100%|██████████████████████████████████████████████████| 999/999 [04:54<00:00, 3.30it/s][INFO|trainer.py:862] 2021-01-21 01:29:55,140 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'epoch': 1.0}
100%|██████████████████████████████████████████████████| 999/999 [04:54<00:00, 3.40it/s]
[INFO|trainer.py:1226] 2021-01-21 01:29:55,141 >> Saving model checkpoint to xsum-mini_results
[INFO|configuration_utils.py:289] 2021-01-21 01:29:55,146 >> Configuration saved in xsum-mini_results/config.json
[INFO|modeling_utils.py:814] 2021-01-21 01:29:58,207 >> Model weights saved in xsum-mini_results/pytorch_model.bin
01/21/2021 01:29:58 - INFO - __main__ - ***** train metrics *****
01/21/2021 01:29:58 - INFO - __main__ - train_samples_per_second = -0.003
01/21/2021 01:29:58 - INFO - __main__ - train_runtime = 294.1311
01/21/2021 01:29:58 - INFO - __main__ - train_n_ojbs = -1
```
Note: I substituted the xsum dataset above with a shorter version I made with `head`, using just the first 1000 lines of each file, to see if it would run to completion without taking 15 hours on the full example dataset. It looks okay. It's worth noting that if the validation arguments are added:
```
--evaluation_strategy steps \
--predict_with_generate \
--n_val 1000 \
```
then 4.1.1 will die at the checkpoints (500 iterations) with "RuntimeError: Input, output and indices must be on the current device". (I don't fully understand that one -- I'm assuming it means train/eval has to be done separately with MP, which is entirely manageable. #9336 showed a similar error, but that person was using BART (which doesn't have MP in 4.1.1) instead of T5, so I don't think it's the same thing).
## Expected behavior
Model parallelism -- spreading large models across multiple GPUs.
| 01-21-2021 08:39:48 | 01-21-2021 08:39:48 | If you run into other issues please try this PR: https://github.com/huggingface/transformers/pull/9323
which has lots of improvements. It just hasn't been merged since we are waiting (for me, I think) to sort out the whole MP/PP situation before moving forward. |
transformers | 9,717 | closed | ConvBERT Model | 01-21-2021 08:24:27 | 01-21-2021 08:24:27 | > Also, I think you forgot to add the model to the README.md (I forget it all the time as well :D)
Oh and while you are at it, a short entry in the `model_summary` would be great too! |
|
transformers | 9,716 | closed | CUDA out of memory error on Trainer hyperparameter_search | Hi,
I am using Colab to run a grid search on a dataset. The dataset has 7000 samples. I use a 1/5 shard and a batch size of 1. No matter what I do, I get this error.
Env Related Info
-----------
tranformers (4.2.1)
datasets (1.2.1)
Model Used
----------------
[SpanBERT Large](https://huggingface.co/SpanBERT/spanbert-large-cased)
Snippets
------------
My dataset has 7934 train examples and 690 eval examples, with a maximum of around 300 tokens per example.
Tokenization Details:
```
max_length: 384
doc_stride: 128
```
```
def model_init():
return AutoModelForQuestionAnswering.from_pretrained(model_checkpoint_name, num_labels=2)
args = TrainingArguments(
output_dir = os.path.join(checkpoint_dir,'/grid_search/'),
logging_dir= os.path.join(runs_dir,'/grid_search/'),
evaluation_strategy='epoch',
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
gradient_accumulation_steps=1,
learning_rate=2e-5,
weight_decay=0.01,
num_train_epochs=3.0,
lr_scheduler_type="linear",
warmup_steps=0,
logging_steps=200,
save_steps=200,
seed=42
)
grid_search_trainer = Trainer(
model_init=model_init,
args=args,
train_dataset=tokenized_train.shard(index=1,num_shards=5),
eval_dataset=tokenized_val.shard(index=1,num_shards=5),
data_collator=data_collator,
tokenizer=tokenizer
)
best_run = grid_search_trainer.hyperparameter_search(n_trials=10, direction="minimize")
```
Sometimes, this happens after the trainer is done with one epoch. Is the trainer initializing the model without deleting/clearing the previous one? Would that affect the GPU? How can I prevent this issue?
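In case it helps, this is the manual cleanup I run before re-launching the search after a crash (just a sketch; `grid_search_trainer` is the trainer defined above, only `gc.collect()` and `torch.cuda.empty_cache()` are standard calls, and the rest is my own guess at a workaround):
```
import gc
import torch

# Drop the reference to the last trial's model so it can be garbage collected,
# then release cached blocks and check how much memory is still held.
grid_search_trainer.model = None
gc.collect()
torch.cuda.empty_cache()
print(f"{torch.cuda.memory_allocated() / 1024**3:.2f} GiB still allocated")
```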

@patil-suraj | 01-21-2021 04:11:34 | 01-21-2021 04:11:34 | There's not enough information here to help you. Please provide env info, the model/task you are using, and a short code snippet to reproduce the error. <|||||>@patil-suraj I have edited the comment. Please help me out. Thanks.<|||||>@patil-suraj Same issue here
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,715 | closed | Error when passing --line_by_line to run_mlm.py | I am trying to run the `run_mlm.py` examples/language-modeling using my own data file. It works fine if I don't pass the `--line_by_line` parameter, but if I do, it breaks.
I can't figure this out using the error trace, can anyone give me a hand?
```
python3 run_mlm.py --model_name_or_path bert-base-uncased --train_file full_dataset_lm_train.txt --validation_file full_dataset_lm_dev.txt --do_train --do_eval --output_dir /tmp/moral_foundation_lm/ --line_by_line
```
```
Traceback (most recent call last):
File "run_mlm.py", line 446, in <module>
main()
File "run_mlm.py", line 322, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1260, in map
update_data=update_data,
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/usr/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 1129, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 1315, in save_type
obj.__bases__, _dict), obj=obj)
File "/usr/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/usr/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/usr/lib/python3.6/pickle.py", line 634, in save_reduce
save(state)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 902, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/usr/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/usr/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/homes/pachecog/.local/lib/python3.6/site-packages/dill/_dill.py", line 1148, in save_dictproxy
raise ReferenceError("%s does not reference a class __dict__" % obj)
ReferenceError: {'help': 'The name of the dataset to use (via the datasets library).'} does not reference a class __dict__
```
Update: this error was present when using Python 3.6.9; it disappeared when using Python 3.7.5.
I found the hint at: https://github.com/huggingface/transformers/issues/8212 <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,714 | closed | Slow BERT Tokenizer adds UNK when calling tokenize() | Hi! I've run into an inconsistency between the base tokenizer docstring and the slow BERT tokenizer. Specifically, when calling `tokenizer.encode()`, the `[UNK]` token is inserted for unknown tokens, even though the [docstring](https://github.com/huggingface/transformers/blob/v4.0.1/src/transformers/tokenization_utils_base.py#L2026) says that such tokens should be unchanged. Here's how I'm calling the tokenizer:
```python
tokenizer = BertTokenizer.from_pretrained(
save_dir, do_lower_case=False, strip_accents=False, tokenize_chinese_chars=True
)
sentence = "RINDIRIZZA Ġwann Marija Vianney"
print(tokenizer.tokenize(sentence))
```
and the output is
```
['RI', '##ND', '##IR', '##I', '##Z', '##ZA', '[UNK]', 'Marija', 'Via', '##nne', '##y']
```
(notice the `[UNK]` in the middle).
So, it seems that this particular slow tokenizer isn't following the docstring. Is this expected?
If not, is there a way to prevent replacement of unknown tokens? I wanted to use the slow BERT tokenizer over the fast one for exactly this reason, and it'd be great if there's a way to make this work.
I'm using `transformers` v4.0.1, but it looks like this docstring hasn't changed between [`master`](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L2027) and `4.0.1`.
Thanks! | 01-21-2021 00:22:19 | 01-21-2021 00:22:19 | I wonder why the docstring says that; I believe both the slow and fast tokenizers replace unknown pieces with the unknown token. @thomwolf do you remember if that wasn't the case when you wrote the docstring?<|||||>Hi @thomwolf, any thoughts on this?
Also @LysandreJik, do you know if there's a way to prevent the replacement of the unknown token, or at least to identify what string it replaced?<|||||>If it did return strings that it could not understand, the model would crash; that is why the docstring is surprising and should be changed imo, and why we don't have a flag that would allow it. We can work around it, however.
The BERT tokenizer uses a whitespace pre-tokenizer, which means that the first step is to split the input sequence on whitespace before trying to convert each piece (or each "word") to tokens. When it fails to do that, it replaces that piece with the unknown token, so we can be confident that the unknown tokens are always whitespace-delimited strings.
Therefore, we can do the following:
```py
text_with_unknown_words = "Let's try it with some 🤗 emojis 🤗 every 🤗 where 🤗."
# Strip it and split it on whitespace
list_of_space_separated_pieces = text_with_unknown_words.strip().split()
# Let's try it with the BERT-base-cased tokenizer
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
# Now let's encode the list word by word, removing the special tokens for simplicity
ids = [tokenizer(piece, add_special_tokens=False)["input_ids"] for piece in list_of_space_separated_pieces]
# [[2421, 112, 188], [2222], [1122], [1114], [1199], [100], [9712, 1186, 3454, 1116], [100], [1451], [100], [1187], [100, 119]]
# The tokenizer's unknown token is 100 and we can identify it easily here
# Identify the tokens which are unknown according to the tokenizer's `unk_token_id`
unk_indices = [i for i, encoded in enumerate(ids) if tokenizer.unk_token_id in encoded]
# Retrieve the strings that were converted into unknown tokens
unknown_strings = [piece for i, piece in enumerate(list_of_space_separated_pieces) if i in unk_indices]
# ['🤗', '🤗', '🤗', '🤗.'] Victory!
```
Let me know if that helps.<|||||>Thanks @LysandreJik -- this is really helpful! I ended up adding a couple more steps from BERT's basic tokenizer, since it also splits on punctuation, etc.
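For anyone who lands here later, the extra step looked roughly like this (a rough approximation of the basic tokenizer's punctuation handling that I wrote for myself, not the tokenizer's actual code):
```python
import re

def rough_pre_tokenize(text):
    # Approximate BERT's basic tokenizer: split on whitespace, then split punctuation
    # off into its own pieces before checking which pieces become [UNK].
    pieces = []
    for chunk in text.strip().split():
        pieces.extend(p for p in re.split(r"([^\w\s])", chunk) if p)
    return pieces

print(rough_pre_tokenize("RINDIRIZZA Ġwann, Marija!"))
# ['RINDIRIZZA', 'Ġwann', ',', 'Marija', '!']
```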
Not sure if I should close this issue now, or if it should stay open until the docstring issue is figured out?<|||||>Yes, the docstring should be updated. Do you want to take a stab at contributing? :)<|||||>Sure -- would it just involve fixing the docstrings that say this in the python code and then building the docs as specified [here](https://github.com/huggingface/transformers/blob/master/docs/README.md)? Or is there more to it?<|||||>Actually just fixing the docstrings and committing! As soon as we merge the PR, the documentation of the `master` branch will be updated.
transformers | 9,713 | closed | Fix memory regression in Seq2Seq example | # What does this PR do?
This PR fixes the memory regression introduced when putting the `Seq2SeqTrainer` inside the main library. The root of the memory regression comes from the fact that when doing label smoothing, we ended up computing the log softmax of the logits twice, once in the cross entropy loss, and a second time inside the label smoother.
To fix this, the loss computation needs to be done entirely inside the label smoother, so the labels must be extracted from the batch before being passed to the model. As a result, the `decoder_input_ids` must be computed in the `Seq2SeqDataCollator` and not in the model for this to work. I've just reverted the code from #9343; I don't know if it exactly matches what happens inside the models. Maybe we should have a method to compute those `decoder_input_ids` accessible from inside those models, or a flag to tell them whether to compute the loss or not (in the latter case, computing the loss inside the model would not only be slower, it would also bring back the memory regression).
The same fix will need to be applied to the `Seq2SeqDataCollator` now inside the library as well as the new `run_seq2seq` script, but I will do it once we have agreed on a long-term solution for the decoder input ids above.
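For reference, a rough sketch of the shape such a shared helper could take (the name and exact signature are placeholders, not a decision on the final API):
```python
import torch

def shift_tokens_right(labels: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    """Build decoder_input_ids from labels outside the model (illustrative sketch only)."""
    decoder_input_ids = labels.new_zeros(labels.shape)
    decoder_input_ids[:, 1:] = labels[:, :-1].clone()
    decoder_input_ids[:, 0] = decoder_start_token_id
    # labels use -100 for positions ignored by the loss; the decoder inputs need real pad tokens there
    decoder_input_ids.masked_fill_(decoder_input_ids == -100, pad_token_id)
    return decoder_input_ids
```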
Fixes #9261 | 01-21-2021 00:13:04 | 01-21-2021 00:13:04 | I agree with Patrick here, the collator was added to encode the text and to prepare the `decoder_input_ids` and `labels`, replace pad with -100, etc. Now we could encode and prepare `labels` in datasets.map(...) so the collator won't be needed anymore.
The only thing we need IMO is to be able to prepare `decoder_input_ids` outside of the model for label smoothing, as Sylvain said. Could we maybe add a `shift_right` method to every s2s model to be able to prepare the `decoder_input_ids` outside of the model?<|||||>Note that this is fixing the old script with the old data collator. The new one will be fixed with the proper fix (once we agree on it, and there seems to be a consensus on having a model with a `shift_right` method), but a collator is still necessary to do dynamic padding. The `Dataset.map` method is very nice for static things, but when you want to pad to the length of the biggest sample in the batch, you need a special data collator, especially if it has to pad special keys like `"labels"`, `"decoder_input_ids"`...
The old `Seq2SeqDataCollator` in the utils file will be removed in a couple of weeks when the new seq2seq example is perfectly running, so I think it's fine to merge the quick hack in the meantime :-) |
transformers | 9,712 | closed | [trainer] no --deepspeed and --sharded_ddp together | This PR fixes an invalid if branch that failed to detect concurrent use of `--deepspeed` and `--sharded_ddp`, which should never be used together.
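For context, the restored guard is roughly of this shape (paraphrased for illustration, not the exact diff):
```python
def check_backend_args(deepspeed_enabled: bool, sharded_ddp_enabled: bool) -> None:
    # --deepspeed and --sharded_ddp should never be used together, so reject the combination up front.
    if deepspeed_enabled and sharded_ddp_enabled:
        raise ValueError("--deepspeed and --sharded_ddp cannot be used together; please pick one.")
```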
@sgugger | 01-20-2021 23:29:36 | 01-20-2021 23:29:36 | Thanks for fixing! |