url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9830/comments | https://api.github.com/repos/huggingface/transformers/issues/9830/events | https://github.com/huggingface/transformers/pull/9830 | 794,865,469 | MDExOlB1bGxSZXF1ZXN0NTYyMzA4NTU5 | 9,830 | [MT5 Import init] Fix typo | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry about that!"
] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9830/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9830",
"html_url": "https://github.com/huggingface/transformers/pull/9830",
"diff_url": "https://github.com/huggingface/transformers/pull/9830.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9830.patch",
"merged_at": 1611738597000
} |
https://api.github.com/repos/huggingface/transformers/issues/9829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9829/comments | https://api.github.com/repos/huggingface/transformers/issues/9829/events | https://github.com/huggingface/transformers/pull/9829 | 794,850,959 | MDExOlB1bGxSZXF1ZXN0NTYyMjk2Mzcw | 9,829 | Update run_xnli.py to use Datasets library | {
"login": "Qbiwan",
"id": 69753975,
"node_id": "MDQ6VXNlcjY5NzUzOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/69753975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qbiwan",
"html_url": "https://github.com/Qbiwan",
"followers_url": "https://api.github.com/users/Qbiwan/followers",
"following_url": "https://api.github.com/users/Qbiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Qbiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qbiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qbiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Qbiwan/orgs",
"repos_url": "https://api.github.com/users/Qbiwan/repos",
"events_url": "https://api.github.com/users/Qbiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qbiwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just tested the script locally and it seems to work great, congrats! We are almost done! The last part would be top adapt the end of the README in the text-classification folder to reflect how to use the new script (since the arguments are a bit different).",
"> Just tested the script locally and it seems to work great, congrats! We are almost done! The last part would be top adapt the end of the README in the text-classification folder to reflect how to use the new script (since the arguments are a bit different).\r\n\r\nI've changed the script in README from\r\n```\r\nexport XNLI_DIR=/path/to/XNLI\r\n\r\npython run_xnli.py \\\r\n --model_name_or_path bert-base-multilingual-cased \\\r\n --language de \\\r\n --train_language en \\\r\n --do_train \\\r\n --do_eval \\\r\n --data_dir $XNLI_DIR \\\r\n --per_device_train_batch_size 32 \\\r\n --learning_rate 5e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 128 \\\r\n --output_dir /tmp/debug_xnli/ \\\r\n --save_steps -1\r\n```\r\nto \r\n```\r\npython run_xnli.py \\\r\n --model_name_or_path bert-base-multilingual-cased \\\r\n --language de \\\r\n --train_language en \\\r\n --do_train \\\r\n --do_eval \\\r\n --per_device_train_batch_size 32 \\\r\n --learning_rate 5e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 128 \\\r\n --output_dir /tmp/debug_xnli/ \\\r\n --save_steps -1\r\n```\r\n\r\nI've also removed these sentences below from [Fine-tuning on XNLI](https://github.com/huggingface/transformers/blob/master/examples/text-classification/README.md#fine-tuning-on-xnli)\r\n```\r\nThe data for XNLI can be downloaded with the following links and should be both saved (and un-zipped) in a $XNLI_DIR directory.\r\n\r\n XNLI 1.0\r\n XNLI-MT 1.0\r\n```",
"> This is in good shape to be merged, thanks a lot for your work! I just have a few comments on how to simplify things here and there since there is only one task to deal with in the new script.\r\n> \r\n> One question I have is, is the tokenizer the same for the training and evaluation datasets, even if the languages, can be different?\r\n\r\nI'm puzzled. Is `Trainer()` class doing the magic under the hood when the languages are different? or is it `AutoTokenizer.from_pretrained`?\r\n",
"I missed this is a multinlingual checkpoint, so there is no need for different tokenizers.\r\n\r\n@patil-suraj it's good to merge IMO, I'll let you review one last time and merge if you approve.",
" @sgugger Yay :)\r\n@patil-suraj let me know if there's anything you would like me to change further\r\n",
"Thanks @sgugger @patil-suraj for your helpful comments and guidance. I was jumping in at the deep end when I attempted this PR to be honest, but yay it's merged 😀",
"Great job adding this example and thanks a lot for your PR! Don't hesitate to brag a little bit on Twitter about your contribution ;-) "
] | 1,611 | 1,613 | 1,613 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9754
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9829/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9829",
"html_url": "https://github.com/huggingface/transformers/pull/9829",
"diff_url": "https://github.com/huggingface/transformers/pull/9829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9829.patch",
"merged_at": 1613019444000
} |
https://api.github.com/repos/huggingface/transformers/issues/9828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9828/comments | https://api.github.com/repos/huggingface/transformers/issues/9828/events | https://github.com/huggingface/transformers/pull/9828 | 794,844,448 | MDExOlB1bGxSZXF1ZXN0NTYyMjkwODAy | 9,828 | [LedFastTokenizer] Correct missing None statement | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for fixing!"
] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
LEDTokenizerFast was not set to None when not being imported, which broke this script, e.g.:
https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/huggingface_pytorch-transformers.ipynb.
This PR should fix it.
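For context, the fix amounts to the usual conditional-import pattern used across the library; the snippet below is an illustrative sketch of that pattern (module paths as used in `transformers` at the time, not the exact diff):
```
from transformers.file_utils import is_tokenizers_available

if is_tokenizers_available():
    from transformers.models.led.tokenization_led_fast import LEDTokenizerFast
else:
    # without this fallback, code that checks `LEDTokenizerFast is not None`
    # raises a NameError when the `tokenizers` package is missing
    LEDTokenizerFast = None
```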
The CI failure is unrelated.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
cc @sgugger @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9828/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9828",
"html_url": "https://github.com/huggingface/transformers/pull/9828",
"diff_url": "https://github.com/huggingface/transformers/pull/9828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9828.patch",
"merged_at": 1611733395000
} |
https://api.github.com/repos/huggingface/transformers/issues/9827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9827/comments | https://api.github.com/repos/huggingface/transformers/issues/9827/events | https://github.com/huggingface/transformers/issues/9827 | 794,830,235 | MDU6SXNzdWU3OTQ4MzAyMzU= | 9,827 | I am trying to Fine tune on BartForConditionalGeneration but I end up getting all <pad_tokens>. Can you please help resolve it? | {
"login": "Sai-Ashish",
"id": 30151156,
"node_id": "MDQ6VXNlcjMwMTUxMTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/30151156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sai-Ashish",
"html_url": "https://github.com/Sai-Ashish",
"followers_url": "https://api.github.com/users/Sai-Ashish/followers",
"following_url": "https://api.github.com/users/Sai-Ashish/following{/other_user}",
"gists_url": "https://api.github.com/users/Sai-Ashish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sai-Ashish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sai-Ashish/subscriptions",
"organizations_url": "https://api.github.com/users/Sai-Ashish/orgs",
"repos_url": "https://api.github.com/users/Sai-Ashish/repos",
"events_url": "https://api.github.com/users/Sai-Ashish/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sai-Ashish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,611 | 1,611 | 1,611 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9827/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/9826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9826/comments | https://api.github.com/repos/huggingface/transformers/issues/9826/events | https://github.com/huggingface/transformers/pull/9826 | 794,829,651 | MDExOlB1bGxSZXF1ZXN0NTYyMjc4NDk3 | 9,826 | Delete a needless duplicate condition | {
"login": "tomohideshibata",
"id": 16042472,
"node_id": "MDQ6VXNlcjE2MDQyNDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/16042472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomohideshibata",
"html_url": "https://github.com/tomohideshibata",
"followers_url": "https://api.github.com/users/tomohideshibata/followers",
"following_url": "https://api.github.com/users/tomohideshibata/following{/other_user}",
"gists_url": "https://api.github.com/users/tomohideshibata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomohideshibata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomohideshibata/subscriptions",
"organizations_url": "https://api.github.com/users/tomohideshibata/orgs",
"repos_url": "https://api.github.com/users/tomohideshibata/repos",
"events_url": "https://api.github.com/users/tomohideshibata/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomohideshibata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you!\r\n"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Delete a needless duplicate condition in the class `PrefixConstrainedLogitsProcessor` (`src/transformers/generation_logits_process.py`).
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9826/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9826",
"html_url": "https://github.com/huggingface/transformers/pull/9826",
"diff_url": "https://github.com/huggingface/transformers/pull/9826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9826.patch",
"merged_at": 1611742523000
} |
https://api.github.com/repos/huggingface/transformers/issues/9825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9825/comments | https://api.github.com/repos/huggingface/transformers/issues/9825/events | https://github.com/huggingface/transformers/pull/9825 | 794,794,785 | MDExOlB1bGxSZXF1ZXN0NTYyMjQ5MDE3 | 9,825 | Add tpu_zone and gcp_project in training_args_tf.py | {
"login": "kiyoungkim1",
"id": 37245002,
"node_id": "MDQ6VXNlcjM3MjQ1MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/37245002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiyoungkim1",
"html_url": "https://github.com/kiyoungkim1",
"followers_url": "https://api.github.com/users/kiyoungkim1/followers",
"following_url": "https://api.github.com/users/kiyoungkim1/following{/other_user}",
"gists_url": "https://api.github.com/users/kiyoungkim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiyoungkim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiyoungkim1/subscriptions",
"organizations_url": "https://api.github.com/users/kiyoungkim1/orgs",
"repos_url": "https://api.github.com/users/kiyoungkim1/repos",
"events_url": "https://api.github.com/users/kiyoungkim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiyoungkim1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger \r\nI got the error message with ```make style```.\r\n\r\n```\r\nkiyoung@medical-ubuntu:~/transformers$ make style\r\nrunning deps_table_update\r\nupdating src/transformers/dependency_versions_table.py\r\nblack examples tests src utils\r\nmake: black: Command not found\r\nMakefile:42: recipe for target 'style' failed\r\nmake: *** [style] Error 127\r\n```",
"You need to follow the steps of the [contributing guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) do be able to make PRs. In particular you didn't follow the installation part by running `pip install -e \".[dev]\"` since you don't have `black` installed.",
"@sgugger \r\nThanks, I did it.",
"Thanks for fixing! Now the problem comes from a new release of jax, which has been fixed in master so this is safe to merge.",
"This PR introduced a `datasets` submodule. I'm removing it in #9868."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Add ```tpu_zone``` and ```gcp_project``` in ```training_args_tf.py```.
To use TPUs created in a zone different from the VM's zone, ```tpu_zone``` must be specified.
See the official BERT repo:
https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/run_pretraining.py#L426
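For reference, these arguments typically end up feeding the TPU cluster resolver; below is a minimal sketch of how they are commonly wired up in TF2 (the TPU name, zone and project values are placeholders, and the exact plumbing inside `TFTrainingArguments` may differ):
```
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu="my-tpu-name",         # --tpu_name
    zone="europe-west4-a",     # --tpu_zone, needed when the TPU zone differs from the VM zone
    project="my-gcp-project",  # --gcp_project
)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
```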
- trainer: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9825/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9825",
"html_url": "https://github.com/huggingface/transformers/pull/9825",
"diff_url": "https://github.com/huggingface/transformers/pull/9825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9825.patch",
"merged_at": 1611755110000
} |
https://api.github.com/repos/huggingface/transformers/issues/9824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9824/comments | https://api.github.com/repos/huggingface/transformers/issues/9824/events | https://github.com/huggingface/transformers/issues/9824 | 794,708,501 | MDU6SXNzdWU3OTQ3MDg1MDE= | 9,824 | [wip] [doc] Performance and Scalability notes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2690307185,
"node_id": "MDU6TGFiZWwyNjkwMzA3MTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Performance",
"name": "Performance",
"color": "207F32",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"The automatic mixed precision and performance tuning recipes may be helpful.\r\nhttps://pytorch.org/tutorials/recipes/recipes/amp_recipe.html\r\nhttps://pytorch.org/tutorials/recipes/recipes/tuning_guide.html\r\n",
"thank you very much, @mcarilli - this is exactly what I was looking for!",
"Going to make it into a real doc here: https://github.com/huggingface/transformers/pull/12258"
] | 1,611 | 1,624 | 1,624 | CONTRIBUTOR | null | Let's start another doc. I think it works the best to work on these as an issue and not a PR since anybody can read these easily, rather than reading a markdown.
As in the other similar [work-in-progress-doc](https://github.com/huggingface/transformers/issues/9766), let me write the bulk of it out and then you can ask questions / make requests and clarifications.
---------------------------------------------
# Performance and Scalability: How To Fit a Bigger Model and Train It Faster
Quick notes:
This section gives brief ideas on how to make training faster and support bigger models. Later sections will expand, demonstrate and elucidate each of these.
### Faster Training
HW:
- fast connectivity between GPUs
* same node: NVLink
* multiple nodes: ???
SW:
- Data Parallel / Distributed Data Parallel
- fp16 (autocast caching)
### Bigger Models
HW:
- bigger GPUs
SW:
- ZeRO-Offload
- ZeRO-DP
- Pipeline Parallelism
- fp16 (smaller data)
## Hardware
### Multi-GPU Connectivity
If you use multiple GPUs, the way the cards are inter-connected can have a huge impact on the total training time.
If the GPUs are on the same physical node, you can run:
```
nvidia-smi topo -m
```
and it will tell you how the GPUs are inter-connected.
On a machine with dual-GPU and which are connected with NVLink, you will most likely see something like:
```
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X NV2 0-23 N/A
GPU1 NV2 X 0-23 N/A
```
On a different machine w/o NVLink we may see:
```
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X PHB 0-11 N/A
GPU1 PHB X 0-11 N/A
```
The report includes this Legend:
```
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
```
So the first report, `NV2`, tells us the GPUs are interconnected with 2 NVLinks, and the second report, `PHB`, tells us we have a typical consumer-level PCIe+Bridge setup.
Check what type of connectivity you have on your setup. Some of these will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB).
Depending on the type of scalability solution used, the connectivity speed could have a major or a minor impact. If the GPUs need to sync rarely, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to send messages to each other often, as in ZeRO-DP, then faster connectivity becomes super important to achieve faster training.
### NVlink
[NVLink](https://en.wikipedia.org/wiki/NVLink) is a wire-based serial multi-lane near-range communications link developed by Nvidia.
Each new generation provides a faster bandwidth, e.g. here is a quote from [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf):
> Third-Generation NVLink®
> GA102 GPUs utilize NVIDIA’s third-generation NVLink interface, which includes four x4 links,
> with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four
> links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth
> between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink.
> (Note that 3-Way and 4-Way SLI configurations are not supported.)
So the higher the `X` in the `NVX` report in the output of `nvidia-smi topo -m`, the better. The generation will depend on your GPU architecture.
Let's compare the execution of a gpt2 language model training over a small sample of wikitext.
The results are:
|type| time secs |
|----|-----|
| w/ NVlink| 101 |
| w/o NVlink | 131 |
You can see that NVLink completes the training ~23% faster.
In the second benchmark we use `NCCL_P2P_DISABLE=1` to tell the GPUs not to use NVLink.
Here is the full benchmark code and outputs:
```
# DDP w/ NVLink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \
examples/language-modeling/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm
--per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
# DDP w/o NVLink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 python -m torch.distributed.launch \
--nproc_per_node 2 examples/language-modeling/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm \
--per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`)
Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
## Software
### Anatomy of Model's Memory
The components on GPU memory are the following:
- the model weights
- the forward activations saved for gradient computation
- the gradients
- the optimizer state
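As a rough back-of-the-envelope illustration (assuming fp32 weights and a plain Adam/AdamW optimizer, and ignoring activations, whose size depends on batch size and sequence length):
```
n_params = 1.5e9                         # e.g. a ~1.5B-parameter model (illustrative)
weights  = 4 * n_params                  # fp32 weights: 4 bytes/param
grads    = 4 * n_params                  # fp32 gradients: 4 bytes/param
adam     = 8 * n_params                  # Adam keeps 2 fp32 moments: 8 bytes/param
print((weights + grads + adam) / 2**30)  # ~22 GiB before any activations
```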
### `forward` vs `backward` Execution Speed
For convolutions and linear layers there are 2x the flops in the backward compared to the forward, which generally translates into a backward pass that is ~2x slower (sometimes more, because sizes in the backward tend to be more awkward). Activations are usually bandwidth-limited, and it's typical for an activation to have to read more data in the backward than in the forward (e.g. the activation forward reads once and writes once, while the activation backward reads twice, the gradOutput and the output of the forward, and writes once, the gradInput).
### fp16
AMP = Automatic Mixed Precision
If we look at what's happening with FP16 training (mixed precision) we have:
- the model in full precision so no memory saved there
- the forward activations saved for gradient computation are in mixed precision
- the gradients are computed in mixed precision *but* converted to full precision for the update, so no saving there
- the optimizer state is in full precision as all the updates are done in full precision
So the savings only happen for the forward activations saved for the backward computation, and there is a slight overhead because the gradients are stored in both half and full precision. (This is probably over-simplified but I think it's enough to explain what follows.)
Now let's look at a simple text-classification fine-tuning on 2 GPUs (I'm giving the command for reference):
```
export BS=16
python -m torch.distributed.launch \
--nproc_per_node 2 examples/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size $BS \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mrpc \
--overwrite_output_dir \
--fp16
```
Since the only savings we get are in the model activations saved for the backward pass, it's logical that the bigger those activations are, the bigger the saving will be. If we try different batch sizes, I indeed get (this is with nvidia-smi, so not completely reliable, but it will be a fair comparison):
| batch size | without --fp16 | with --fp16 | FP16 savings |
|:-:|:-:|:-:|:-:|
| 8 | 4247 | 4163 | 84 |
| 16 | 4971 | 4793 | 178 |
| 32 | 6827 | 6207 | 620 |
| 64 | 10037 | 8061 | 1976 |
So there is only a real memory saving if we train at a high batch size (and it's not half); at batch sizes lower than 8, you actually get a bigger memory footprint (because of the overhead mentioned above). The gain of FP16 training is that in each of those cases, training with the flag `--fp16` is twice as fast, which does require every tensor to have every dimension be a multiple of 8 (so if your batch size is not a multiple of 8, you won't get that speed-up, and the script `finetune_trainer.py` does not pad the tensors to a sequence length that is a multiple of 8).
TL;DR: FP16 with apex or AMP will only give you some memory savings with a reasonably high batch size.
Some amazing tutorials to read on mixed precision:
- @sgugger wrote a great explanation of mixed precision [here](https://docs.fast.ai/callback.fp16.html#A-little-bit-of-theory)
- Aleksey Bilogur's [A developer-friendly guide to mixed precision training with PyTorch](https://spell.ml/blog/mixed-precision-training-with-pytorch-Xuk7YBEAACAASJam)
### fp16 caching
pytorch's `autocast`, which performs AMP, includes a caching feature that speeds things up by caching fp16-converted values. Here is the full description from this [comment](https://discuss.pytorch.org/t/autocast-and-torch-no-grad-unexpected-behaviour/93475/3):
Autocast maintains a cache of the FP16 casts of model params (leaves). This helps streamline parameter reuse: if the same FP32 param is used in several different FP16list ops, like several matmuls, instead of re-casting the param to FP16 on entering each matmul, the cast will occur on the first matmul, the casted FP16 copy will be cached, and for all later matmuls the FP16 copy will be reused. The cache is maintained only within a particular outermost autocast context. When you exit the autocast context the cache is dropped. For recommended usage, in which autocast wraps the forward pass, and then you exit the context before calling backward(), this means the cache only lasts the duration of the forward pass each iteration, and will be rebuilt next iteration. (The cache of FP16-casted copies MUST be rebuilt each iteration. The FP32 params get updated by the optimizer, so the FP16 copies must be recreated, otherwise the FP16 values will be stale.)
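A minimal sketch of the recommended usage described above (requires a CUDA GPU; the toy model and random data are placeholders for a real training loop):
```
import torch
from torch import nn
from torch.cuda.amp import GradScaler, autocast

model = nn.Linear(1024, 1024).cuda()   # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = GradScaler()

for _ in range(10):
    x = torch.randn(8, 1024, device="cuda")
    optimizer.zero_grad()
    with autocast():                    # the fp16 cast cache is built inside this context
        loss = model(x).pow(2).mean()
    # leaving the context drops the cache; backward and the optimizer step run outside it
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```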
### DP vs DDP
`DistributedDataParallel` (DDP) is typically faster than `DataParallel` (DP), but it is not always the case:
* while DP is python-threads-based, DDP is multiprocess-based - and as such it has none of the python threading limitations, such as the GIL
* on the other hand, slow inter-connectivity between the GPU cards could lead to DDP actually being slower
Here are the main differences in the inter-GPU communication overhead between the two modes:
[DDP](https://pytorch.org/docs/master/notes/ddp.html):
- At the start time the main process replicates the model once from gpu 0 to the rest of gpus
- Then for each batch:
1. each gpu consumes each own mini-batch of data directly
2. during `backward`, once the local gradients are ready, they are then averaged across all processes
[DP](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html):
For each batch:
1. gpu 0 reads the batch of data and then sends a mini-batch to each gpu
2. replicates the up-to-date model from gpu 0 to each gpu
3. runs `forward` and sends output from each gpu to gpu 0, computes loss
4. scatters loss from gpu 0 to all gpus, runs `backward`
5. sends gradients from each gpu to gpu 0 and averages those
The only communication DDP performs per batch is sending gradients, whereas DP does 5 different data exchanges per batch.
DP copies data within the process via python threads, whereas DDP copies data via [torch.distributed](https://pytorch.org/docs/master/distributed.html).
Under DP gpu 0 performs a lot more work than the rest of the gpus, thus resulting in under-utilization of gpus.
You can use DDP across multiple machines, but this is not the case with DP.
There are other differences between DP and DDP but they aren't relevant to this discussion.
If you want to go really deep into understanding these 2 modes, this [article](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/) is highly recommended, as it has great diagrams, includes multiple benchmarks and profiler outputs on various hardware, explains all the nuances that you may need to know.
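To make the difference concrete, here is a minimal sketch of how a model is wrapped in each mode (the tiny model is a placeholder; the DDP half assumes the script is launched with `python -m torch.distributed.launch --nproc_per_node N` on a single node, where the global rank doubles as the local device index):
```
import torch
from torch import nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# DP: a single process, python threads, model replicated from gpu 0 every batch
dp_model = nn.DataParallel(nn.Linear(10, 10).cuda())

# DDP: one process per gpu, gradients averaged across processes during backward
dist.init_process_group(backend="nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)
ddp_model = DDP(nn.Linear(10, 10).cuda(), device_ids=[rank])
```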
Let's look at an actual benchmark:
|type| time secs |
|----|-----|
| 2:DP w/ NVlink| 110 |
| 2:DDP w/ NVlink| 101 |
| 2:DDP w/o NVlink | 131 |
Analysis:
Here DP is ~10% slower than DDP w/ NVlink, but ~15% faster than DDP w/o NVlink
The real difference will depend on how much data each GPU needs to sync with the others - the more there is to sync, the more a slow link will slow down the total runtime.
Here is the full benchmark code and outputs:
`NCCL_P2P_DISABLE=1` was used to disable the NVLink feature on the corresponding benchmark.
```
# DP
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69}
# DDP w/ NVlink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python -m torch.distributed.launch --nproc_per_node 2 examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}
# DDP w/o NVlink
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
python -m torch.distributed.launch --nproc_per_node 2 examples/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```
Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`)
Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
### Batch Sizes
The best performance is achieved when the tensor's batch size dimension is a multiple of 8. What matters is the final batch size of the tensor that actually gets passed to the GPU for computation.
Examples:
- if you use a DP or DDP on 2 GPUs you want to have a total batch size of at least 16 (2x8), or a higher multiple. If your total batch size is 8, then each GPU will get a mini-batch of 4.
- if you use a Pipeline you want to make sure that after chunking you end up with micro-batches that are multiples of 8. For example if `chunks=3` is used, you want the batch size to be 24 (or a higher multiple of 8). Because if you use a batch size of 16, you will end up with 3 micro-batches of size 6,5,5.
There is no harm in using smaller batch sizes (at times one can barely squeeze in a batch size of 1 before getting OOM); it just won't be as fast as it could be.
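A quick sanity check one can run when picking batch sizes for a pipeline setup (numbers taken from the example above):
```
total_batch_size = 24
chunks = 3
micro_batch_size = total_batch_size // chunks
assert total_batch_size % chunks == 0 and micro_batch_size % 8 == 0, "pick a different batch size"
print(micro_batch_size)  # 8
```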
### DataLoader
One of the important requirements to reach great training speed is the ability to feed the GPU at the maximum speed it can handle. By default everything happens in the main process, which might not be able to read the data from disk fast enough, thus creating a bottleneck and leading to GPU under-utilization.
- `DataLoader(pin_memory=True, ...)` ensures that the data gets preloaded into pinned memory on the CPU, which typically leads to much faster transfers from CPU to GPU memory.
- `DataLoader(num_workers=4, ...)` spawns several workers to pre-load data faster - during training watch the GPU utilization stats and if it's far from 100%, experiment with raising the number of workers. Of course, the problem could be elsewhere, so a very big number of workers won't necessarily lead to better performance.
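A minimal sketch combining both settings (the random `TensorDataset` is just a placeholder for a real dataset):
```
import torch
from torch.utils.data import DataLoader, TensorDataset

train_dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 2, (1024,)))
train_loader = DataLoader(
    train_dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4,    # pre-load batches in background worker processes
    pin_memory=True,  # page-locked host memory speeds up CPU->GPU copies
)
```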
### Faster optimizer
pytorch-nightly introduced `torch.optim._multi_tensor`, which should significantly speed up the optimizers for situations with lots of small feature tensors. It should eventually become the default, but if you want to experiment with it sooner and don't mind using the bleeding edge, see: https://github.com/huggingface/transformers/issues/9965
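A hedged way to try it without committing to the private API (this assumes the nightly exposes `torch.optim._multi_tensor.AdamW` with the usual constructor signature, and falls back to the regular optimizer otherwise; the tiny model is a placeholder):
```
import torch

model = torch.nn.Linear(10, 10)
try:
    from torch.optim import _multi_tensor
    optimizer_cls = _multi_tensor.AdamW
except (ImportError, AttributeError):
    optimizer_cls = torch.optim.AdamW  # fall back to the standard implementation

optimizer = optimizer_cls(model.parameters(), lr=5e-5)
```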
-----------------
## Credits
It'd be difficult to track and record every contribution, so in order to keep things practical I will try to keep track of major contributors. And I have a huge gratitude to everybody who has ever asked or answered a question on forums/issues/slacks/SO/etc., parts or summaries of which were integrated into this article. Thank you!
The major contributors:
- @sgugger: fp16 section from [here](https://github.com/huggingface/transformers/issues/9742#issuecomment-765488087)
- @moyix: ideas on NVLink testing https://github.com/huggingface/transformers/issues/9371
- @ngimel: multiple insights on pytorch slack/issues
- @mcarilli: pytorch autocast
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9824/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9824/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9823/comments | https://api.github.com/repos/huggingface/transformers/issues/9823/events | https://github.com/huggingface/transformers/pull/9823 | 794,699,382 | MDExOlB1bGxSZXF1ZXN0NTYyMTcyMTE1 | 9,823 | Allow --arg Value for booleans in HfArgumentParser | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
The primary reason I dived into this PR is because when launching a training with sagemaker, bool arguments need to be passed along with something (e.g. we can't do `--do_train`, we have to do `--do_train True` because the arguments are passed as a dict). This is refused by the current argparser.
This PR changes a little bit the way `HfArgumentParser` handles bool fields in dataclasses. Up until now:
- a bool arg `foo` with `True` as a default gives a flag `--no_foo` that stores `False` in foo
- a bool arg `bar` with no default value or `False` as a default value gives a flag `bar` that stores `True` in bar
- an optional bool arg `opt` gives the same as a bool if its default is True or False. If the default is None, it gives a flag `--opt` that requires an argument that accepts anything and stores the value as a string (which is obviously a bug)
After this PR, the following happens:
- a bool arg `foo` with `True` as a default gives a flag `--no_foo` that stores `False` in foo, it also gives a flag `--foo` that can be used as is (will store `True` in foo), or by using any truthy/falsy value (`--foo yes`, `--foo True`, `--foo no`...) that will store the result as a proper bool.
- a bool arg `bar` with no default value or `False` gives a flag `bar` that can be used as is (will store `True` in bar), or by using any truthy/falsy value (`--bar yes`, `--bar True`, `--bar no`...) that will store the result as a proper bool.
- an optional bool arg `opt` gives the same as a bool if its default is True or False. If the default is None, it gives a flag `--opt` that requires an argument that accepts a truthy value and stores the value as a proper bool.
In all cases above, when a truthy value is expected but something else is passed (that is not `true`, `false`, `yes`, `no`, `1`, `0`, `t`, `f`, `y`, `n`), an error is raised.
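A small sketch of the intended behavior after this PR, using the field names from the description above (`foo`, `bar`, `opt` are illustrative):
```
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser

@dataclass
class Flags:
    foo: bool = field(default=True)
    bar: bool = field(default=False)
    opt: Optional[bool] = field(default=None)

parser = HfArgumentParser(Flags)
# bools can now be passed as flags or with an explicit truthy/falsy value
(args,) = parser.parse_args_into_dataclasses(["--no_foo", "--bar", "yes", "--opt", "False"])
print(args)  # Flags(foo=False, bar=True, opt=False)
```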
So no breaking changes at all and all bool values can be used with an argument so that sagemaker is happy. Tests are updated and improved to check the behaviors summarized above are all correct. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9823/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9823",
"html_url": "https://github.com/huggingface/transformers/pull/9823",
"diff_url": "https://github.com/huggingface/transformers/pull/9823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9823.patch",
"merged_at": 1611757903000
} |
https://api.github.com/repos/huggingface/transformers/issues/9822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9822/comments | https://api.github.com/repos/huggingface/transformers/issues/9822/events | https://github.com/huggingface/transformers/pull/9822 | 794,665,170 | MDExOlB1bGxSZXF1ZXN0NTYyMTQ0Mjk2 | 9,822 | Fix auto-resume training from checkpoint | {
"login": "jncasey",
"id": 31020859,
"node_id": "MDQ6VXNlcjMxMDIwODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jncasey",
"html_url": "https://github.com/jncasey",
"followers_url": "https://api.github.com/users/jncasey/followers",
"following_url": "https://api.github.com/users/jncasey/following{/other_user}",
"gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jncasey/subscriptions",
"organizations_url": "https://api.github.com/users/jncasey/orgs",
"repos_url": "https://api.github.com/users/jncasey/repos",
"events_url": "https://api.github.com/users/jncasey/events{/privacy}",
"received_events_url": "https://api.github.com/users/jncasey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh you need to run `make style` on your branch for the styling test to pass. Let me know if you run into any issue doing that!",
"Sorry! I'm pretty rusty on the software dev stuff - college was 16 years ago. I think I've fixed it now."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This fixes a few minor issues with training auto-resume, as discussed [here](https://github.com/huggingface/transformers/pull/9776#issuecomment-767841895)
1. `checkpoints = [path for path in content if _re_checkpoint.search(path) is not None and os.path.isdir(path)]` was returning empty. I changed `os.path.isdir(path)` to `os.path.isdir(os.path.join(folder, path))` and now it returns a list of the checkpoint folders as expected.
2. Similarly, the `get_last_checkpoint` function was returning the basename of the checkpoint folder, not the full path, which seems to be expected based on the updates to the example scripts. I changed the last line of the function to `return os.path.join(folder, max(checkpoints, key=lambda x: int(_re_checkpoint.search(x).groups()[0])))`
3. After I made those updates, it was resuming from the oldest checkpoint, not the newest. I noticed the checkpoint regex was only capturing the final digit in the directory name. I changed it to `_re_checkpoint = re.compile(r"^" + PREFIX_CHECKPOINT_DIR + r"\-(\d+)$")` with the `+` inside the capture group, and now `get_last_checkpoint` is giving me the newest checkpoint as expected. Putting the three changes together gives roughly the helper sketched below.
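A rough sketch of the resulting helper with all three fixes applied (approximate, not the exact merged code):
```python
import os
import re

PREFIX_CHECKPOINT_DIR = "checkpoint"
_re_checkpoint = re.compile(r"^" + PREFIX_CHECKPOINT_DIR + r"\-(\d+)$")


def get_last_checkpoint(folder):
    content = os.listdir(folder)
    checkpoints = [
        path
        for path in content
        if _re_checkpoint.search(path) is not None and os.path.isdir(os.path.join(folder, path))
    ]
    if len(checkpoints) == 0:
        return None
    # pick the folder with the highest step number and return it as a full path
    return os.path.join(folder, max(checkpoints, key=lambda x: int(_re_checkpoint.search(x).groups()[0])))
```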
## Who can review?
- trainer: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9822/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9822",
"html_url": "https://github.com/huggingface/transformers/pull/9822",
"diff_url": "https://github.com/huggingface/transformers/pull/9822.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9822.patch",
"merged_at": 1611737299000
} |
https://api.github.com/repos/huggingface/transformers/issues/9821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9821/comments | https://api.github.com/repos/huggingface/transformers/issues/9821/events | https://github.com/huggingface/transformers/issues/9821 | 794,583,399 | MDU6SXNzdWU3OTQ1ODMzOTk= | 9,821 | [trainer] renaming cl args/ trainer attributes to be clear per-gpu vs total | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I'd love to get feedback on that. Thank you!",
"As it doesn't seem to resonate as a real need, I'm closing this one."
] | 1,611 | 1,616 | 1,616 | CONTRIBUTOR | null | As we started discussing here https://github.com/huggingface/transformers/issues/9801#issuecomment-767825869 perhaps we could have a design session where we look at all of the trainer cl args (and their class attribute counterparts) and see which of them contain ambiguity wrt per-gpu vs total (and perhaps other important renames where we find things are confusing).
The intention is to make the API more intuitive and minimize the number of times we introduce breaking changes, but to attempt to do that in one go as much as possible.
One such item we started to discuss is `--max_steps`, then @sgugger mentioned `--num_train_epochs` and there are probably others.
I also proposed to potentially entertain creating a back-compat module to minimize the breaking changes pain where it's possible - renames fall perfectly into this category. I wrote:
> In some previous projects for such things we also had a back-compat mode, which once enabled supported a whole bunch of old ways until the user was ready to make the shift to the new code. Surely a rename of a cl arg could be easily supported by such a feature. So here, instead of a deprecation cycle per item, the approach is to keep anything old around but only if it's loaded from a helper module, so that the main code remains clean of deprecated things. This was in a different programming environment where it was developed, so I will have to think how to do the same here.
@LysandreJik, @patrickvonplaten, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9821/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9820/comments | https://api.github.com/repos/huggingface/transformers/issues/9820/events | https://github.com/huggingface/transformers/pull/9820 | 794,583,162 | MDExOlB1bGxSZXF1ZXN0NTYyMDc1MzM5 | 9,820 | Add a flag for find_unused_parameters | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR adds a flag to control whether `find_unused_parameters` is set to `True` or not in DDP training, while keeping the current behavior as default to avoid any breaking change.
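Usage sketch (assuming the flag is exposed on `TrainingArguments` as `ddp_find_unused_parameters`; leaving it unset keeps the old behavior):
```python
from transformers import TrainingArguments

# Set the flag explicitly to False when you know every parameter receives a gradient,
# which lets DDP skip the (slow) unused-parameter search.
training_args = TrainingArguments(
    output_dir="output",
    ddp_find_unused_parameters=False,
)
```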
Fixes #9802 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9820/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9820",
"html_url": "https://github.com/huggingface/transformers/pull/9820",
"diff_url": "https://github.com/huggingface/transformers/pull/9820.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9820.patch",
"merged_at": 1611746286000
} |
https://api.github.com/repos/huggingface/transformers/issues/9819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9819/comments | https://api.github.com/repos/huggingface/transformers/issues/9819/events | https://github.com/huggingface/transformers/pull/9819 | 794,579,936 | MDExOlB1bGxSZXF1ZXN0NTYyMDcyNjA3 | 9,819 | Add head_mask and decoder_head_mask to FSMT | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I know than one can add, for example, a line like this\r\n```\r\n# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->FSMT\r\n```\r\nbefore `Attention` module in FSMT. However, this does not copy only additions, but the whole module from BART, which is, in this case, undesirable, I guess, as these modules are a little bit different. But maybe there is another way I am not aware of.",
"@LysandreJik, @patrickvonplaten - how can we make sure fsmt gets tracked and synced with all the bart-family changes? while the tokenizer is different, the model is ~95% identical.",
"as @stancld said, we can do that with some statements of the following kind:\r\n\r\n```\r\n# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->FSMT\r\n```\r\n\r\nThe difference between the BART and FSMT implementation of the targeted object must only be the \"BART\" occurrences that change to \"FSMT\". @sgugger can tell you more about it.",
"Thank you, @LysandreJik\r\n\r\nI think this is really a question to @patrickvonplaten - who I remember was planning to refactor FSMT to match what he did for Bart. So if this is still planned, Patrick perhaps you could add this item to the agenda - keeping FSMT in sync with the Bart-family (modeling only - tokenizer is similar to xlm).\r\n\r\nSo the currently proposed solution can't be used, since Bart diverged since FSMT forked it.\r\n\r\nIt might help to treat FSMT as Bart with the main difference of it having a dual vocab and no tied weights - and a few layers that are different - but identical otherwise. (again for the model only).",
"> I think this is really a question to @patrickvonplaten - who I remember was planning to refactor FSMT to match what he did for Bart. So if this is still planned, Patrick perhaps you could add this item to the agenda - keeping FSMT in sync with the Bart-family (modeling only - tokenizer is similar to xlm).\r\n\r\nYes, the FSTM / ProphetNet refactor is still on my ToDo List (think next week is reasonable). After the refactor I'll try to add as many # Copied from statements to keep the models in sync. Nevertheless, this PR can be merged as it is now!\r\n\r\nGreat work @stancld"
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | This PR implements `head_mask` and `decoder_head_mask` for FSMT and it is the follow-up to the open issue #9814.
**Motivation:** This PR is a part of an endeavour to enable the usage of `head_mask` and `decoder_head_mask` for all encoder-decoder transformers following the recent work on BART-like models (#9569).
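A hedged usage sketch (the checkpoint choice and the dummy decoder inputs are only for illustration; the mask shapes follow the `(num_layers, num_heads)` convention used for the BART-like models in #9569):
```python
import torch
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")
model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-de")

inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
head_mask = torch.ones(model.config.encoder_layers, model.config.encoder_attention_heads)
decoder_head_mask = torch.ones(model.config.decoder_layers, model.config.decoder_attention_heads)
head_mask[0, 0] = 0.0  # silence the first head of the first encoder layer

outputs = model(
    **inputs,
    decoder_input_ids=inputs["input_ids"][:, :5],  # dummy decoder inputs, just for the sketch
    head_mask=head_mask,
    decoder_head_mask=decoder_head_mask,
)
print(outputs.logits.shape)
```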
<hr>
Fixes: https://github.com/huggingface/transformers/issues/9814
Reviewer: @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9819/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9819/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9819",
"html_url": "https://github.com/huggingface/transformers/pull/9819",
"diff_url": "https://github.com/huggingface/transformers/pull/9819.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9819.patch",
"merged_at": 1612161022000
} |
https://api.github.com/repos/huggingface/transformers/issues/9818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9818/comments | https://api.github.com/repos/huggingface/transformers/issues/9818/events | https://github.com/huggingface/transformers/pull/9818 | 794,567,640 | MDExOlB1bGxSZXF1ZXN0NTYyMDYyMjI4 | 9,818 | When resuming training from checkpoint, Trainer loads model | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If I may add, under `def train():`, I think the initialisation of `self._globalstep_last_logged = 0` should be `self._globalstep_last_logged=self.state.global_step`, to ensure that the first logging of the loss is correct when you later divide by `self.state.global_step-self._globalstep_last_logged`? "
] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
The Trainer was not reloading the model when resuming training from a checkpoint, which was confusing for users (see #9099) and was also preventing the recent auto-reload from checkpoint from fully working.
This isn't a breaking change (if users were passing a model with the checkpoint already loaded, it is just loaded twice). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9818/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9818",
"html_url": "https://github.com/huggingface/transformers/pull/9818",
"diff_url": "https://github.com/huggingface/transformers/pull/9818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9818.patch",
"merged_at": 1611757879000
} |
https://api.github.com/repos/huggingface/transformers/issues/9817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9817/comments | https://api.github.com/repos/huggingface/transformers/issues/9817/events | https://github.com/huggingface/transformers/pull/9817 | 794,551,009 | MDExOlB1bGxSZXF1ZXN0NTYyMDQ4MjA0 | 9,817 | [docs] expand install instructions | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR: expands the "install from source" section in the instruction file to:
- clarify that the user is not installing the release version but the bleeding edge
- expand how to update it
- give a shortcut for doing it all in one command without needing to keep the checkout folder around
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9817/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9817",
"html_url": "https://github.com/huggingface/transformers/pull/9817",
"diff_url": "https://github.com/huggingface/transformers/pull/9817.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9817.patch",
"merged_at": 1611855407000
} |
https://api.github.com/repos/huggingface/transformers/issues/9816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9816/comments | https://api.github.com/repos/huggingface/transformers/issues/9816/events | https://github.com/huggingface/transformers/pull/9816 | 794,535,798 | MDExOlB1bGxSZXF1ZXN0NTYyMDM1MzY4 | 9,816 | Setup logging with a stdout handler | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
Explicitly add stdout as a handler for the logging configuration in the example scripts, otherwise no logs are reported when training on SageMaker. Also consistently set the logging level outside of the config method, as otherwise it does not take effect (probably a bug in the logging module).
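The pattern now looks roughly like this (a sketch of the shared snippet, not the exact diff):
```python
import logging
import sys

logger = logging.getLogger(__name__)

logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    handlers=[logging.StreamHandler(sys.stdout)],  # make sure logs land on stdout (e.g. on SageMaker)
)
logger.setLevel(logging.INFO)  # set the level on the logger itself, not inside basicConfig
logger.info("Logging is now visible on stdout")
```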
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9816/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9816",
"html_url": "https://github.com/huggingface/transformers/pull/9816",
"diff_url": "https://github.com/huggingface/transformers/pull/9816.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9816.patch",
"merged_at": 1611736752000
} |
https://api.github.com/repos/huggingface/transformers/issues/9815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9815/comments | https://api.github.com/repos/huggingface/transformers/issues/9815/events | https://github.com/huggingface/transformers/pull/9815 | 794,492,489 | MDExOlB1bGxSZXF1ZXN0NTYxOTk5Mzg5 | 9,815 | Fix a bug in run_glue.py (#9812) | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the fix!",
"Thank you for your quick response!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
The `if` statement guarding `label_to_id` seems to be wrong.
There should be a `not` before `is_regression`.
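The corrected condition looks like this (a sketch of the relevant lines in `run_glue.py`, shown here out of context):
```python
if (
    model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id
    and data_args.task_name is not None
    and not is_regression  # the missing `not`: only remap labels for classification tasks
):
    # Some have all caps in their config, some don't.
    label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()}
```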
Fixes #9812
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9815/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9815",
"html_url": "https://github.com/huggingface/transformers/pull/9815",
"diff_url": "https://github.com/huggingface/transformers/pull/9815.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9815.patch",
"merged_at": 1611689540000
} |
https://api.github.com/repos/huggingface/transformers/issues/9814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9814/comments | https://api.github.com/repos/huggingface/transformers/issues/9814/events | https://github.com/huggingface/transformers/issues/9814 | 794,486,591 | MDU6SXNzdWU3OTQ0ODY1OTE= | 9,814 | Missing head_mask and decoder_head_mask arguments in encoder-decoder models | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | # 🚀 Feature request
Following the PRs #9569, #9634 and #9639, there are other encoder-decoder models which either do not support the `head_mask` and `decoder_head_mask` input arguments at all, or can only be provided with a single `head_mask` argument used for head masking in both the encoder and the decoder. It would therefore be nice to make this feature uniform across all the encoder-decoder models.
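For context, the convention these PRs follow is a float mask of shape `(num_layers, num_heads)` per stack, e.g. (illustrative sizes only):
```python
import torch

num_layers, num_heads = 6, 8  # illustrative sizes
head_mask = torch.ones(num_layers, num_heads)
head_mask[0, :4] = 0.0  # mask out half of the heads in the first layer
decoder_head_mask = torch.ones(num_layers, num_heads)
```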
<hr>
**Models:**
| Model | Pytorch | TensorFlow | PR | Copy dependency |
| ------ | :------: | :---------: | :--: | :-----: |
| BERTGeneration | ☑️ | ✖️ | - | - |
| EncoderDecoderModel | ☑️ | ✖️ | - | - |
| FSMT | ✅ | ✖️ | #9819 | - |
| LED | ✅ | ☑️ | PT - #9856 ; TF - #9988 | - |
| ProphetNet | ☑️ | ✖️ | #9964 | - |
| Longformer | ✅ | ☑️ | PT - #9856; TF - #9988 | LED |
## Your contribution
I'm happy to add this feature in the following days, both for PyTorch and TensorFlow models. (Likely in shorter PRs in order not to create large, overwhelming PRs)
<hr>
Reviewers: @patrickvonplaten, @jplu, @sgugger, @LysandreJik, @stas00 . | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9814/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9814/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9813/comments | https://api.github.com/repos/huggingface/transformers/issues/9813/events | https://github.com/huggingface/transformers/pull/9813 | 794,473,758 | MDExOlB1bGxSZXF1ZXN0NTYxOTgzODgz | 9,813 | ADD BORT | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2669577093,
"node_id": "MDU6TGFiZWwyNjY5NTc3MDkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PR%20for%20Model%20Addition",
"name": "PR for Model Addition",
"color": "5319e7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"> Thanks for adding this new model! When referencing other pages in the documentation, it's better to use `:doc:` instead of a hard link, as it will then work in all versions of the documentation (which don't have the same base url).\r\n\r\nSorry that was my bad! I copied it from DialoGPT -> Updated it there as well",
"Ah sorry @patrickvonplaten, the `model_doc/` should be removed are the pages are in the same folder. That should resolve the build doc error.",
"Great job @stefan-it "
] | 1,611 | 1,612 | 1,611 | COLLABORATOR | null | Hi,
this is a "clean" follow-up PR to the first attempt of adding Bort to Transformers (see #9112).
As Bort is based on the BERT architecture, there's no need to define dedicated model classes such as `BortModel`. This is done in the main Bort configuration via:
```json
"model_type": "bert"
```
Bort uses the same vocab as RoBERTa, so the tokenizer instance is also configured in the model configuration:
```json
"tokenizer_class": "RobertaTokenizer"
```
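Loading should therefore work through the Auto classes alone, e.g. (a sketch; the hub identifier used below is an assumption):
```python
from transformers import AutoModel, AutoTokenizer

# `model_type` and `tokenizer_class` in the config resolve these to BertModel / RobertaTokenizer
tokenizer = AutoTokenizer.from_pretrained("amazon/bort")
model = AutoModel.from_pretrained("amazon/bort")

inputs = tokenizer("Hello, Bort!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```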
Basic integration tests and a (hopefully verbose) conversion script are also included in this PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9813/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9813/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9813",
"html_url": "https://github.com/huggingface/transformers/pull/9813",
"diff_url": "https://github.com/huggingface/transformers/pull/9813.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9813.patch",
"merged_at": 1611771912000
} |
https://api.github.com/repos/huggingface/transformers/issues/9812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9812/comments | https://api.github.com/repos/huggingface/transformers/issues/9812/events | https://github.com/huggingface/transformers/issues/9812 | 794,467,745 | MDU6SXNzdWU3OTQ0Njc3NDU= | 9,812 | `label_to_id` in `run_glue.py` seems to have a wrong `if` statement | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, there should be a not here. Do you want to open a PR since you found the problem and its fix?",
"Thanks, I'd love to open a PR! Please wait a minute.",
"I've opened a PR to fix this issue, and all checks have passed.\r\nI would be grateful if you could check it when you have time. "
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.4.0-179-generic-x86_64-with-glibc2.10
- Python version: 3.8.0
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Bert, xlm-roberta-large
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
The `if` statement for `label_to_id` seems to be wrong.
https://github.com/huggingface/transformers/blob/eba418ac5df71d08927efb7e3b738833998162ff/examples/text-classification/run_glue.py#L316-L333
Regarding `and is_regression` in L320: shouldn't it be `and not is_regression`?
I inserted `logging.info` to check the True/False as below:
```python
label_to_id = None
logger.info("--- label_to_id if statement check ---")
logger.info(f"{model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id}")
logger.info(f"{data_args.task_name is not None}")
logger.info(f"{is_regression}")
if (
model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id
and data_args.task_name is not None
and is_regression
):
logger.info("loading model.config.label2id")
# Some have all caps in their config, some don't.
label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()}
```
Then I got:
```python
01/27/2021 03:23:24 - INFO - __main__ - --- label_to_id if statement check ---
01/27/2021 03:23:24 - INFO - __main__ - False
01/27/2021 03:23:24 - INFO - __main__ - True
01/27/2021 03:23:24 - INFO - __main__ - False
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.02ba/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 20.86ba/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 12.80ba/s]
01/27/2021 03:23:25 - INFO - __main__ - Sample 2619 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 2916, 'input_ids': [0, 581, 172337, 5180, 3542, 39958, 1257, 678, 117303, 1010, 22230, 1810, 150, 592, 2363, 7225, 26548, 2022, 112478, 6, 4, 16454, 3912, 37967, 111, 60525, 1810, 150, 592, 747, 125682, 7, 26548, 4049, 6, 5, 2, 2, 84607, 26420, 5180, 3542, 39958, 1257, 678, 117303, 1010, 22230, 1810, 150, 592, 2363, 7225, 26548, 2022, 112478, 6, 4, 16454, 10, 3912, 9, 22469, 94309, 1363, 31330, 47, 70, 29685, 6, 5, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'label': 1, 'sentence1': 'The proceedings were taken up with prosecutors outlining their case against Amrozi , reading 33 pages of documents outlining allegations against him .', 'sentence2': 'Proceedings were taken up with prosecutors outlining their case against Amrozi , reading a 33-page accusation letter to the court .'}.
```
When `is_regression` is False (when the task is `classification`), `model.config.label2id` is never used for `label_to_id`.
If I'm not mistaken, wouldn't this behave differently from what is intended?
I am sorry that I could not find an appropriate task/model combination to show when all other conditions would be true.
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9812/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9811/comments | https://api.github.com/repos/huggingface/transformers/issues/9811/events | https://github.com/huggingface/transformers/pull/9811 | 794,463,287 | MDExOlB1bGxSZXF1ZXN0NTYxOTc1MjQz | 9,811 | adapt mbart and generate for Mbart50 | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I like the idea of using `LogitsProcessor`, my only concern is now each time the user wants to use a different `max_length` they would need to pass the `forced_pos_id_pairs= {max_length: eos_token_id}`.\r\n\r\nAlso IMO `prefix_token` sounds more intuitive than `forced_pos_id_pairs`. So I think we could add `ForceTokenProcessor ` and keep both `prefix_token` and `forced_pos_id_pairs` arguments. \r\n\r\nAnd if `prefix_token` is passed or `config.force_bos_token_to_be_generated` is `True` we set\r\n```python\r\nforced_pos_id_pairs = {2: prefix_token or bos_token_id, max_length: eos_token_id}\r\n```\r\n\r\nthis would avoid breaking change.\r\n",
"Putting our discussion with @patil-suraj down here for everyone to see. @patil-suraj brought up a good point that `forced_pos_id_pairs` is not super user friendly and might be hard to read and people usually never force \"in the middle\" tokens to be generated. So I like the proposed approach making two LogitsProcessor better as well => we should therefore make \r\na `ForcedBosTokenLogitsProcessor` that takes a `token` as input and always forces the first token to be generated and a `ForcedEosTokenLogitsProcessor` that also takes a `token` as input and forces this token to be generated at `max_length`. \r\nAs discussed we should delete all `adjust_logits` functionality and also get rid of Bart's `config.force_eos_to_be_generated` parameter while keeping full backwards compatibility as discussed.",
"Thanks a lot for making this work @patil-suraj! \r\n\r\nWill review the PR now. I saw that you changed the config for `facebook/bart-large-cnn` online which is nice, but should not delete: `force_bos_token_to_be_generated` from the config or it won't be backwards compatible (previous transformer versions need to still use this param). Also it seems like you accidently uploaded a `.ipynb_checkpoints` folder: force_bos_token_to_be_generated",
"> We have to make sure that all slow tests for Bart, MBart, Pegasus, Marian, Blenderbot, BlenderbotSmall, FSMT, RAG pass for both PT and TF. I think some of the RAGTokenGeneration could have been broken here\r\n\r\nall slow tests are passing for PT, will run the RAG tests now. Also, there is no `force_bos_token_id_to_be_generated` parameter in `RagConfig`, it's in the `generator` config if the `generator` is BART, and `BartConfig` already handles this.\r\n\r\n> Not sure whether we want to do inheritance for the MBart50Tokenizer\r\n\r\nNo strong opinion here will remove the inheritance.\r\n\r\nRegarding the `prepare_seq2seq_batch` method:\r\nThese checkpoints are mostly intended for multilingual fine-tuning/translation, in this case, it's actually nice to be able to pass lang_id directly for encoding rather than setting `src_lang` and `tgt_lang` on tokenizer each time we have a new language pair.\r\n\r\nIf we have fixed src and target language then in that case `prepare_seq2seq_batch` definitely doesn't make much sense."
] | 1,611 | 1,613 | 1,613 | MEMBER | null | # What does this PR do?
This PR adapts `MBartForConditionalGeneration` and `generate` for mbart-50 models.
There are two main differences between mbart-50 and existing mbart-cc25 models
1. for mbart-50, both the source and target language texts begin with the `<language_token>`, whereas in mbart-cc25 the `<language_token>` is used as a suffix token.
2. Also the `decoder_input_ids` begin with `[eos] [tgt_lang_token] ...`, so for generation we need to use `eos` as the `decoder_start_token_id` and force the `tgt_lang_token` as the first generated token.
This PR
1. adds `MBart50Tokenizer` which encodes the text as described above. IMO adding a new tokenizer makes sense, as it makes it explicit that mbart-50 encodes the text differently.
2. introduces two new `generate` arguments and `LogitsProcessor`
- `forced_bos_token_id` and `forced_eos_token_id`, to force a specific start and end token. This is particularly useful for
many to many and one to many translation models, so we can pass different language tokens as `forced_bos_token_id` to
`generate`,
- `ForcedBosTokenLogitsProcessor` and `ForcedEosTokenLogitsProcessor`
3. Remove `adjust_logits_during_generation` method from all models (except from `Marian`) and handle that use case using the newly introduced logits processors.
4. remove the `force_bos_token_to_be_generated` argument from `BartConfig`
For `Marian` we still need to keep the `adjust_logits_during_generation` method to force the model to not generate the pad token. Adding the pad token to `bad_words_ids` does not resolve this issue; the score of `pad_token_id` needs to be set to `-inf` before calling `log_softmax`.
Below is an example of mbart-50 model using `forced_bos_token_id`
```python
from transformers import MBartForConditionalGeneration, MBart50Tokenizer
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-50-large-many-to-many")
tokenizer = MBart50Tokenizer.from_pretrained("facebook/mbart-50-large-many-to-many")
# translate Hindi to French
encoded_hi = tokenizer.prepare_seq2seq_batch(src_texts=article_hi, src_lang="hi_IN", return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire dans la Syrie."
# translate Arabic to English
encoded_ar = tokenizer.prepare_seq2seq_batch(src_texts=article_ar, src_lang="ar_AR", return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."
```
TODOs:
- [x] Make sure all generation related slow integration tests pass for affected models
- [x] BART
- [x] mBART (one test is failing, but it's failing on master as well, so not related to this PR)
- [x] Blender
- [x] FSMT
- [x] Marian
- [x] Pegasus
- [x] Generation integration test
- [x] add tests for `ForcedBosTokenLogitsProcessor` and `ForcedEosTokenLogitsProcessor`
- [x] document mBART-50
- [x] Add model cards ([all mbart-50 models](https://huggingface.co/models?filter=mbart-50))
- [x] add the forced params to `facebook/bart-large-cnn`'s config on the hub
- [ ] notebook explaining how to use the one-to-many and many-to-many translation models
Fixes #7060 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9811/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9811",
"html_url": "https://github.com/huggingface/transformers/pull/9811",
"diff_url": "https://github.com/huggingface/transformers/pull/9811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9811.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9810/comments | https://api.github.com/repos/huggingface/transformers/issues/9810/events | https://github.com/huggingface/transformers/issues/9810 | 794,439,866 | MDU6SXNzdWU3OTQ0Mzk4NjY= | 9,810 | Can I use a smaller base model than allenai/led-base-16384 for LED? | {
"login": "mmoya01",
"id": 17535683,
"node_id": "MDQ6VXNlcjE3NTM1Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/17535683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmoya01",
"html_url": "https://github.com/mmoya01",
"followers_url": "https://api.github.com/users/mmoya01/followers",
"following_url": "https://api.github.com/users/mmoya01/following{/other_user}",
"gists_url": "https://api.github.com/users/mmoya01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmoya01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmoya01/subscriptions",
"organizations_url": "https://api.github.com/users/mmoya01/orgs",
"repos_url": "https://api.github.com/users/mmoya01/repos",
"events_url": "https://api.github.com/users/mmoya01/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmoya01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'd suggest to still use `allenai/led-base-16384` and just pad the input to a maximum length of only `2048`. Also you could think about reducing `config.attention_window` of the model's config to something like 512 or 256 to make your model more efficient for `2048` input",
"E.g. in this notebook: https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing, I just pad the max length of every input to `8192` and use `allenai/led-base-16384` which works very well!",
"Sounds great, I'll look into setting`led.config.attention_window=512` instead of 1024 and the `max_input_length=2048` for the encoder. Thank you for your feedback!"
] | 1,611 | 1,611 | 1,611 | NONE | null | Hello, I'm trying to fine tune my own Longformer Encoder Decoder following this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=jpUr9QeebZ-n). However, I was wondering if there was a way to consider a base model like
`allenai/longformer-base-4096`
instead of
`led-base-16384`?
When I try doing
```python
led = AutoModelForSeq2SeqLM.from_pretrained(
"allenai/longformer-base-4096",
config="roberta-base",
gradient_checkpointing=True,
use_cache=False,
)
```
but that gives me
```
led = AutoModelForSeq2SeqLM.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/modeling_auto.py", line 1221, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.longformer.configuration_longformer.LongformerConfig'> for this kind of AutoModel: AutoModelForSeq2SeqLM.
Model type should be one of LEDConfig, BlenderbotSmallConfig, MT5Config, T5Config, PegasusConfig, MarianConfig, MBartConfig, BlenderbotConfig, BartConfig, FSMTConfig, EncoderDecoderConfig, XLMProphetNetConfig, ProphetNetConfig.
```
my `encoder_max_length` is only 2048 since I'm not planning on feeding transcripts of 8k words to summarize but rather transcripts of 2k words. So using the smaller base model would work great. I'm basically trying to replicate the fine-tuning for this
https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16
^but I would want to use that model more as a checkpoint to fine tune further on domain specific data. Hence why I'm trying to create:
1.) a model fine-tuned on cnn dailymail data (basically replicating the [longformer2roberta](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16) model, but as a version that we can fine-tune further)
2.) using part 1 as a checkpoint, fine tune that on domain specific transcript+summary data
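For reference, here is a minimal sketch of what I'm considering, following the suggestion above (keep `allenai/led-base-16384`, cap the input at 2048 tokens, and shrink `attention_window`; overriding the window through `from_pretrained` kwargs is an assumption on my part):
```python
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
led = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384", attention_window=512)

batch = tokenizer(
    "a long transcript goes here ...",
    max_length=2048,        # pad/truncate to 2048 instead of 16384
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)
summary_ids = led.generate(batch["input_ids"], attention_mask=batch["attention_mask"], max_length=256)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```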
@patrickvonplaten or others in the community, I'd greatly appreciate any advice on this | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9810/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9809/comments | https://api.github.com/repos/huggingface/transformers/issues/9809/events | https://github.com/huggingface/transformers/pull/9809 | 794,370,868 | MDExOlB1bGxSZXF1ZXN0NTYxODk5MTE2 | 9,809 | Fix fine-tuning translation scripts | {
"login": "mbiesialska",
"id": 7369819,
"node_id": "MDQ6VXNlcjczNjk4MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7369819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbiesialska",
"html_url": "https://github.com/mbiesialska",
"followers_url": "https://api.github.com/users/mbiesialska/followers",
"following_url": "https://api.github.com/users/mbiesialska/following{/other_user}",
"gists_url": "https://api.github.com/users/mbiesialska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbiesialska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbiesialska/subscriptions",
"organizations_url": "https://api.github.com/users/mbiesialska/orgs",
"repos_url": "https://api.github.com/users/mbiesialska/repos",
"events_url": "https://api.github.com/users/mbiesialska/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbiesialska/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
In the [seq2seq README.md](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#new-script) there are some errors. This PR fixes typos that cause the following problem:
```
Traceback (most recent call last):
File "transformers/examples/seq2seq/run_seq2seq.py", line 536, in <module>
main()
File "transformers/examples/seq2seq/run_seq2seq.py", line 419, in main
load_from_cache_file=not data_args.overwrite_cache,
File ".../lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1240, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File ".../lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1211, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "transformers/examples/seq2seq/run_seq2seq.py", line 388, in preprocess_function
inputs = [ex[source_lang] for ex in examples["translation"]]
File "transformers/examples/seq2seq/run_seq2seq.py", line 388, in <listcomp>
inputs = [ex[source_lang] for ex in examples["translation"]]
KeyError: 'en-XX'
```
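If I read the scripts correctly (this is my understanding, not part of the traceback), the language codes need to use the underscore form of the mBART codes (e.g. `en_XX`, `ro_RO`): assuming the script does something like `source_lang.split("_")[0]`, a hyphenated code is never reduced to the key used in the `translation` dicts:
```python
# illustration of the failure mode, assuming the script strips the locale suffix on "_"
ok = "en_XX".split("_")[0]      # -> "en", matches example["translation"]["en"]
broken = "en-XX".split("_")[0]  # -> "en-XX", which raises the KeyError shown above
print(ok, broken)
```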
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
<!--
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? -->
## Who can review?
@sgugger, @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9809/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9809",
"html_url": "https://github.com/huggingface/transformers/pull/9809",
"diff_url": "https://github.com/huggingface/transformers/pull/9809.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9809.patch",
"merged_at": 1611678632000
} |
https://api.github.com/repos/huggingface/transformers/issues/9808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9808/comments | https://api.github.com/repos/huggingface/transformers/issues/9808/events | https://github.com/huggingface/transformers/pull/9808 | 794,336,337 | MDExOlB1bGxSZXF1ZXN0NTYxODcwMzcy | 9,808 | Adding a test to prevent late failure in the Table question answering pipeline. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
- If the table is empty, then the line that contains `answer[0]` will fail.
- This PR adds a check to guard that `answer[0]` access.
- Also adds an early check for the presence of `table` and `query` to
prevent late failures and give a better error message (a rough sketch of such checks follows the list).
- Adds a few tests to make sure these errors are correctly raised.
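A minimal sketch of what such early validation can look like (the function name, messages, and example table below are illustrative, not the actual pipeline code):
```py
# Illustrative sketch of early input validation for a table question answering call.
def validate_tqa_inputs(table, query):
    if table is None or len(table) == 0:
        raise ValueError("table is empty")
    if query is None or len(query) == 0:
        raise ValueError("query is empty")

# Failing early like this avoids a late, harder-to-read failure on `answer[0]`
# when there is nothing for the model to select an answer from.
validate_tqa_inputs({"Repository": ["transformers"], "Stars": ["40000"]}, "How many stars?")
```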
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9808",
"html_url": "https://github.com/huggingface/transformers/pull/9808",
"diff_url": "https://github.com/huggingface/transformers/pull/9808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9808.patch",
"merged_at": 1611738653000
} |
https://api.github.com/repos/huggingface/transformers/issues/9807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9807/comments | https://api.github.com/repos/huggingface/transformers/issues/9807/events | https://github.com/huggingface/transformers/pull/9807 | 794,304,150 | MDExOlB1bGxSZXF1ZXN0NTYxODQzNjU1 | 9,807 | Partial local tokenizer load | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | This PR aims to allow partial loading of a cached tokenizer.
Fixes #9147 which explains the issue in a lot of detail.
Currently, if we download a tokenizer from the hub using the `from_pretrained` method:
```py
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("google/bert_uncased_L-2_H-128_A-2")
```
It caches the files to be reused later. Reloading the tokenizer while specifying `local_files_only=True`
```py
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("google/bert_uncased_L-2_H-128_A-2", local_files_only=True)
```
results in a failure, as it tries to fetch all of the tokenizer files, even those that are not necessary. It currently fails with a hard error.
This PR changes that error to an info log, and prints a single log containing all the files that were not loaded. I put it as an `info` and not as a `warning` or an `error`, because the situation where this is an actual issue is imo very rare; it is a real issue only when the initial `from_pretrained` managed to obtain only some of the necessary files, i.e., when the download was interrupted.
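As a rough illustration of that behaviour (this is a simplified sketch, not the code changed by the PR; `resolve_from_cache` and `file_ids_to_paths` are made-up names):
```py
import logging

logger = logging.getLogger(__name__)

# Illustrative sketch: resolve each tokenizer file from the local cache, collect
# the ones that cannot be found, and emit a single info log instead of raising
# on the first missing file.
def resolve_from_cache(file_ids_to_paths, resolve_fn):
    resolved, unresolved = {}, []
    for file_id, path in file_ids_to_paths.items():
        try:
            resolved[file_id] = resolve_fn(path)
        except OSError:  # e.g. the file was never downloaded into the cache
            resolved[file_id] = None
            unresolved.append(file_id)
    if unresolved:
        logger.info(
            "Can't load following files from cache: %s and cannot check if these "
            "files are necessary for the tokenizer to operate.",
            unresolved,
        )
    return resolved
```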
With this change, running the second `from_pretrained` snippet above results in the following log:
```
Can't load following files from cache: ['added_tokens_file', 'special_tokens_map_file', 'tokenizer_config_file', 'tokenizer_file'] and cannot check if these files are necessary for the tokenizer to operate.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9807/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9807",
"html_url": "https://github.com/huggingface/transformers/pull/9807",
"diff_url": "https://github.com/huggingface/transformers/pull/9807.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9807.patch",
"merged_at": 1611822554000
} |
https://api.github.com/repos/huggingface/transformers/issues/9806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9806/comments | https://api.github.com/repos/huggingface/transformers/issues/9806/events | https://github.com/huggingface/transformers/pull/9806 | 794,273,850 | MDExOlB1bGxSZXF1ZXN0NTYxODE4MTc3 | 9,806 | Add a test for TF mixed precision | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think that the tests that fails are related to this PR."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a test to check whether our TF models are float16 compliant. It also helps detect which ones still have to be fixed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9806/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9806",
"html_url": "https://github.com/huggingface/transformers/pull/9806",
"diff_url": "https://github.com/huggingface/transformers/pull/9806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9806.patch",
"merged_at": 1611736609000
} |
https://api.github.com/repos/huggingface/transformers/issues/9805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9805/comments | https://api.github.com/repos/huggingface/transformers/issues/9805/events | https://github.com/huggingface/transformers/pull/9805 | 794,261,700 | MDExOlB1bGxSZXF1ZXN0NTYxODA3OTY1 | 9,805 | Commit the last step on world_process_zero in WandbCallback | {
"login": "tristandeleu",
"id": 2018752,
"node_id": "MDQ6VXNlcjIwMTg3NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2018752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tristandeleu",
"html_url": "https://github.com/tristandeleu",
"followers_url": "https://api.github.com/users/tristandeleu/followers",
"following_url": "https://api.github.com/users/tristandeleu/following{/other_user}",
"gists_url": "https://api.github.com/users/tristandeleu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tristandeleu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tristandeleu/subscriptions",
"organizations_url": "https://api.github.com/users/tristandeleu/orgs",
"repos_url": "https://api.github.com/users/tristandeleu/repos",
"events_url": "https://api.github.com/users/tristandeleu/events{/privacy}",
"received_events_url": "https://api.github.com/users/tristandeleu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Would it make sense to move [those 2 lines](https://github.com/huggingface/transformers/blob/0dd939bf1e01594eadf21b40e5cdb07001233cbf/src/transformers/integrations.py#L573-L574) to the init method instead of setting `self._log_model` to False?",
"This should be equivalent indeed, and probably easier to read",
"Looks great!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
When running with DDP, only commit the last step on the first process (`is_world_process_zero == True`), to avoid calling `wandb.log()` without a prior call to `wandb.init()` (the latter is only called on the first process).
Fixes (at least partially) #9623
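A simplified sketch of the guard described above (the class below only loosely mimics the Trainer callback hook and is not the exact diff; `state.is_world_process_zero` follows the callback API):
```py
# Illustrative sketch: only the main process, which called `wandb.init()`,
# commits the final step; the other DDP workers skip the call entirely.
class WandbFinalCommitSketch:
    def __init__(self, wandb_module):
        self._wandb = wandb_module  # None when wandb is not available/initialized

    def on_train_end(self, args, state, control, **kwargs):
        if self._wandb is None:
            return
        if state.is_world_process_zero:
            self._wandb.log({})  # commit the last logged step
```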
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@borisdayma @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9805/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9805",
"html_url": "https://github.com/huggingface/transformers/pull/9805",
"diff_url": "https://github.com/huggingface/transformers/pull/9805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9805.patch",
"merged_at": 1611685287000
} |
https://api.github.com/repos/huggingface/transformers/issues/9804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9804/comments | https://api.github.com/repos/huggingface/transformers/issues/9804/events | https://github.com/huggingface/transformers/issues/9804 | 794,175,108 | MDU6SXNzdWU3OTQxNzUxMDg= | 9,804 | Finetuning ProphetNet with Seq2SeqTrainer fails. | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @alexvaca0,\r\n\r\nThanks for your issue. We have started to create a more general script called `run_seq2seq.py` with which fine-tuning ProphetNet should work rather easily. \r\n\r\nCould you try to pull current master and do:\r\n\r\n```\r\npython examples/seq2seq/run_seq2seq.py --learning_rate=3e-5 --task summarization --do_train --do_eval --evaluation_strategy steps --model_name_or_path microsoft/prophetnet-large-uncased --output_dir myoutputdir --per_device_train_batch_size 8 --per_device_eval_batch_size 16 --eval_accumulation_steps 8 --gradient_accumulation_steps 8 --num_train_epochs=20 --eval_beams=1 --load_best_model_at_end --save_steps 25 --logging_steps 25 --fp16 --overwrite_output_dir --dataset_name cnn_dailymail --dataset_config_name 3.0.0\r\n```\r\n\r\nfor the cnn/dailymail dataset *e.g.*.\r\n\r\nPlease let me know how it goes, I'm very interested in ProphetNet fine-tuning results.",
"Thank you very much for your quick response! @patrickvonplaten As soon as I can, I'll try that command to check if the new script run_seq2seq.py works fine with ProphetNet. When I have results/errors I'll let you know.\r\n",
"I've tried to run the script you said @patrickvonplaten , but it returns the following error when evaluating:\r\n\r\n```{python}\r\nAll the weights of ProphetNetForConditionalGeneration were initialized from the model checkpoint at microsoft/prophetnet-large-uncased.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use ProphetNetForConditionalGeneration for predictions without further training.\r\nLoading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-2def39d5bd2a9c76/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2/cache-7e4959c336c61e5a.arrow\r\nLoading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-2def39d5bd2a9c76/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2/cache-b898db3404de8043.arrow\r\nThe following columns in the training set don't have a corresponding argument in `ProphetNetForConditionalGeneration.forward` and have been ignored: token_type_ids.\r\nThe following columns in the evaluation set don't have a corresponding argument in `ProphetNetForConditionalGeneration.forward` and have been ignored: token_type_ids.\r\n***** Running training *****\r\n Num examples = 33451\r\n Num Epochs = 20\r\n Instantaneous batch size per device = 8\r\n Total train batch size (w. parallel, distributed & accumulation) = 128\r\n Gradient Accumulation steps = 16\r\n Total optimization steps = 5220\r\n{'loss': 5.5221, 'learning_rate': 4.760536398467433e-05, 'epoch': 0.96}\r\n 5% 250/5220 [16:57<5:41:20, 4.12s/it]***** Running Evaluation *****\r\n Num examples = 2697\r\n Batch size = 16\r\n\r\n 0% 0/169 [00:00<?, ?it/s]\r\n 1% 2/169 [00:00<00:10, 16.33it/s]\r\n 2% 3/169 [00:00<00:12, 13.45it/s]\r\n 3% 5/169 [00:00<00:14, 11.34it/s]\r\n 4% 6/169 [00:00<00:15, 10.62it/s]\r\n 4% 7/169 [00:00<00:21, 7.53it/s]\r\n 5% 8/169 [00:00<00:21, 7.48it/s]\r\n 5% 9/169 [00:01<00:21, 7.54it/s]\r\n 6% 10/169 [00:01<00:24, 6.46it/s]\r\n 7% 11/169 [00:01<00:22, 7.01it/s]\r\n 7% 12/169 [00:01<00:21, 7.40it/s]\r\n 8% 13/169 [00:01<00:20, 7.47it/s]\r\n 8% 14/169 [00:01<00:19, 8.06it/s]\r\n 9% 15/169 [00:01<00:18, 8.48it/s]\r\n 9% 16/169 [00:01<00:19, 7.72it/s]\r\n 10% 17/169 [00:02<00:18, 8.19it/s]\r\n 11% 18/169 [00:02<00:19, 7.60it/s]\r\n 11% 19/169 [00:02<00:19, 7.77it/s]\r\n 12% 20/169 [00:02<00:19, 7.60it/s]\r\n 12% 21/169 [00:02<00:20, 7.31it/s]\r\n 13% 22/169 [00:02<00:18, 7.79it/s]\r\n 14% 23/169 [00:02<00:19, 7.36it/s]\r\n 14% 24/169 [00:03<00:18, 7.76it/s]\r\n 15% 25/169 [00:03<00:18, 7.77it/s]\r\n 15% 26/169 [00:03<00:18, 7.93it/s]\r\n 16% 27/169 [00:03<00:17, 8.29it/s]\r\n 17% 28/169 [00:03<00:18, 7.82it/s]\r\n 17% 29/169 [00:03<00:22, 6.14it/s]\r\n 18% 30/169 [00:03<00:24, 5.79it/s]\r\n 18% 31/169 [00:04<00:22, 6.04it/s]\r\n 19% 32/169 [00:04<00:21, 6.28it/s]\r\n 20% 34/169 [00:04<00:18, 7.11it/s]\r\n 21% 35/169 [00:04<00:17, 7.67it/s]\r\n 21% 36/169 [00:04<00:16, 7.98it/s]\r\n 22% 37/169 [00:04<00:15, 8.27it/s]\r\n 22% 38/169 [00:04<00:17, 7.38it/s]\r\n 23% 39/169 [00:05<00:20, 6.40it/s]\r\n 24% 40/169 [00:05<00:18, 7.00it/s]\r\n 25% 42/169 [00:05<00:16, 7.65it/s]\r\n 26% 44/169 [00:05<00:15, 8.00it/s]\r\n 27% 45/169 [00:05<00:14, 8.46it/s]\r\n 28% 47/169 [00:06<00:14, 8.53it/s]\r\n 28% 48/169 [00:06<00:13, 8.65it/s]\r\n 29% 49/169 [00:06<00:15, 7.98it/s]\r\n 30% 50/169 [00:06<00:14, 8.33it/s]\r\n 30% 51/169 [00:06<00:15, 7.67it/s]\r\n 31% 52/169 [00:06<00:14, 7.95it/s]\r\n 31% 53/169 [00:06<00:16, 7.03it/s]\r\n 32% 54/169 [00:07<00:21, 5.43it/s]\r\n 33% 55/169 
[00:07<00:18, 6.27it/s]\r\n 33% 56/169 [00:07<00:16, 6.98it/s]\r\n 34% 57/169 [00:07<00:14, 7.61it/s]\r\n 34% 58/169 [00:07<00:13, 7.94it/s]\r\n 35% 59/169 [00:07<00:13, 8.30it/s]\r\n 36% 60/169 [00:07<00:12, 8.71it/s]\r\n 36% 61/169 [00:07<00:12, 8.69it/s]\r\n 37% 62/169 [00:08<00:12, 8.65it/s]\r\n 37% 63/169 [00:08<00:13, 7.87it/s]\r\n 38% 64/169 [00:08<00:13, 7.93it/s]\r\n 39% 66/169 [00:08<00:12, 8.46it/s]\r\n 40% 67/169 [00:08<00:13, 7.43it/s]\r\n 40% 68/169 [00:08<00:13, 7.74it/s]\r\n 41% 69/169 [00:08<00:12, 8.10it/s]\r\n 41% 70/169 [00:08<00:11, 8.41it/s]\r\n 42% 71/169 [00:09<00:11, 8.79it/s]\r\n 43% 72/169 [00:09<00:10, 9.06it/s]\r\n 43% 73/169 [00:09<00:10, 9.22it/s]\r\n 44% 74/169 [00:09<00:10, 9.02it/s]\r\n 45% 76/169 [00:09<00:10, 8.95it/s]\r\n 46% 77/169 [00:09<00:11, 8.09it/s]\r\n 46% 78/169 [00:09<00:10, 8.39it/s]\r\n 47% 79/169 [00:10<00:10, 8.45it/s]\r\n 47% 80/169 [00:10<00:10, 8.63it/s]\r\n 48% 81/169 [00:10<00:10, 8.44it/s]\r\n 49% 82/169 [00:10<00:12, 7.12it/s]\r\n 49% 83/169 [00:10<00:11, 7.58it/s]\r\n 50% 84/169 [00:10<00:13, 6.09it/s]\r\n 50% 85/169 [00:10<00:12, 6.88it/s]\r\n 51% 86/169 [00:11<00:11, 7.25it/s]\r\n 51% 87/169 [00:11<00:10, 7.80it/s]\r\n 52% 88/169 [00:11<00:09, 8.32it/s]\r\n 53% 89/169 [00:11<00:09, 8.67it/s]\r\n 53% 90/169 [00:11<00:08, 8.86it/s]\r\n 54% 91/169 [00:11<00:08, 8.92it/s]\r\n 54% 92/169 [00:11<00:08, 8.57it/s]\r\n 55% 93/169 [00:11<00:08, 8.56it/s]\r\n 56% 94/169 [00:11<00:08, 8.76it/s]\r\n 56% 95/169 [00:12<00:08, 8.68it/s]\r\n 57% 96/169 [00:12<00:08, 8.58it/s]\r\n 57% 97/169 [00:12<00:09, 7.23it/s]\r\n 58% 98/169 [00:12<00:09, 7.33it/s]\r\n 59% 99/169 [00:12<00:08, 7.89it/s]\r\n 59% 100/169 [00:12<00:08, 8.21it/s]\r\n 60% 101/169 [00:12<00:09, 7.45it/s]\r\n 60% 102/169 [00:12<00:08, 7.87it/s]\r\n 61% 103/169 [00:13<00:08, 7.92it/s]\r\n 62% 104/169 [00:13<00:08, 8.08it/s]\r\n 62% 105/169 [00:13<00:08, 7.98it/s]\r\n 63% 106/169 [00:13<00:08, 7.09it/s]/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [117,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [77,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [157,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` 
failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [37,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nTraceback (most recent call last):\r\n File \"transformers/examples/seq2seq/run_seq2seq.py\", line 541, in <module>\r\n main()\r\n File \"transformers/examples/seq2seq/run_seq2seq.py\", line 503, in main\r\n train_result = trainer.train(model_path=model_path)\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py\", line 924, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py\", line 999, in _maybe_log_save_evaluate\r\n metrics = self.evaluate()\r\n File 
\"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer_seq2seq.py\", line 96, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py\", line 1447, in evaluate\r\n metric_key_prefix=metric_key_prefix,\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py\", line 1564, in prediction_loop\r\n loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer_seq2seq.py\", line 175, in prediction_step\r\n model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/trainer.py\", line 1670, in prediction_step\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py\", line 1772, in forward\r\n return_dict=return_dict,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py\", line 1656, in forward\r\n return_dict=return_dict,\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/content/gdrive/MyDrive/GColab_folder/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py\", line 1223, in forward\r\n hidden_states = inputs_embeds + position_embeddings\r\nRuntimeError: CUDA error: device-side assert triggered\r\n 5% 250/5220 [17:12<5:42:05, 4.13s/it]\r\n```\r\nI've run it with --no_cuda and there are no errors, it works properly in that setting. Therefore it must be a cuda-related issue. I've tried disabling fp16 and the error persists.",
"I confirm that with t5 it works, therefore it's prophetnet-related.",
"Who is in charge of developing ProphetNet code? @patrickvonplaten @sgugger ",
"Hey @alexvaca0, thanks for trying out the script! I'm quite sure that this is an indexing error that occers because a data sample is too large for the model to handle. It should be easy to fix by simply adding:\r\n\r\n```\r\n--max_source_length 512\r\n```\r\n\r\nto the command above. Could you try this and let me know if it works? :-)",
"@patrickvonplaten Great! That was it, the sequence length! \r\n\r\nActually, I'm trying to fine-tune ProphetNet in a Summarization task, in which models like T5, BART etc achieve eval losses of around 0.5-0.6 (approx), but with ProphetNet I'm not able to go below 5, and the eval loss doesn't actually decrease over training, it seems like it's diverging. I've tried using the same parameters as with BART and T5, and also with the parameters of the paper (https://arxiv.org/pdf/2001.04063.pdf) for CNN/DailyMail, that is batch size 512, learning rate 1e-04 with warmup steps 1000 (in my case I use less due to training data size). \r\n\r\nAny recommendations/suggestions? ProphetNet was expected to work similarly to BART but its performance is much worse until now...",
"I don't know if this warning provides some extra info: The following columns in the training set don't have a corresponding argument in `ProphetNetForConditionalGeneration.forward` and have been ignored: token_type_ids.\r\n @patrickvonplaten ",
"> @patrickvonplaten Great! That was it, the sequence length!\r\n> \r\n> Actually, I'm trying to fine-tune ProphetNet in a Summarization task, in which models like T5, BART etc achieve eval losses of around 0.5-0.6 (approx), but with ProphetNet I'm not able to go below 5, and the eval loss doesn't actually decrease over training, it seems like it's diverging. I've tried using the same parameters as with BART and T5, and also with the parameters of the paper (https://arxiv.org/pdf/2001.04063.pdf) for CNN/DailyMail, that is batch size 512, learning rate 1e-04 with warmup steps 1000 (in my case I use less due to training data size).\r\n> \r\n> Any recommendations/suggestions? ProphetNet was expected to work similarly to BART but its performance is much worse until now...\r\n\r\nInteresting! Could you share the exact command you used here? Also pinging @qiweizhen - do you know what could be a problem for this? Are we sure that the n-gram loss is correctly implemented?",
"```{bash}\r\npython transformers/examples/seq2seq/run_seq2seq.py \\\r\n --model_name_or_path microsoft/prophetnet-large-uncased \\\r\n --do_eval --do_train \\\r\n --task summarization \\\r\n --train_file train_df.csv \\\r\n --validation_file val_df.csv \\\r\n --output_dir prophetnet_0201 \\\r\n --overwrite_output_dir \\\r\n --per_device_train_batch_size=8 \\\r\n --per_device_eval_batch_size=16 \\\r\n --eval_accumulation_steps=10 \\\r\n --text_column text \\\r\n --max_source_length 364 \\\r\n --summary_column summary \\\r\n --max_target_length 60 \\\r\n --val_max_target_length 60 --evaluation_strategy steps \\\r\n --gradient_accumulation_steps 64 --num_train_epochs=20 --eval_beams=1 \\\r\n --load_best_model_at_end --save_steps 75 --logging_steps 75 --learning_rate 1e-04 --warmup_steps 200 \r\n```\r\nThis is the command I'm using. After trying some modifications I observe the same: no progress is made in evaluation, and almost no progress in training (loss 5.1 after almost 7 epochs), so it seems there may be some issue with ProphetNet implementation...\r\n@patrickvonplaten @qiweizhen ",
"I got the same problem that the training crashed in run_seq2seq.py. BTW, I guess that it is related to sequence lengths in configuration and training sets.\r\n\r\n```bash\r\npython ./run_seq2seq.py \\\r\n --model_name_or_path sshleifer/student_marian_en_ro_6_1 \\\r\n --do_train \\\r\n --do_eval \\\r\n --task translation_en_to_ro \\\r\n --dataset_name wmt16 \\\r\n --dataset_config_name ro-en \\\r\n --source_lang en_XX \\\r\n --target_lang ro_RO\\\r\n --output_dir ~/tmp/tst-translation \\\r\n --per_device_train_batch_size=4 \\\r\n --per_device_eval_batch_size=4 \\\r\n --overwrite_output_dir \\\r\n --predict_with_generate\r\n```\r\n\r\ntransformers version: 4.4.0.dev0\r\nPlatform: Ubuntu 16.04.7\r\nPython version: 3.8.5\r\nPyTorch version (GPU?): 1.7.1 (YES)\r\nTensorflow version (GPU?):\r\nUsing GPU in script?: YES\r\nUsing distributed or parallel set-up in script?: Yes, it detects 6 GPUs.\r\n\r\nError:\r\n\r\n> /opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu/opt/conda/conda-bld/pytorch_1607369981906/work/at en/src/ATen/native/cuda/Indexing.cu:658/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: index SelectLargeIndex:658: indexSelectLargeIndex: block: [264,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [264,0,0], thr ead: [1,0: block: [267,0,0: indexSelectLargeIndex], thread: [32,0: block: [263,0,0,0,0], thread: [96,0,0] Assertion `srcIndex < srcSele ctDimSize` failed.\r\n] Assertion `srcIndex < srcSelectDimSize] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [263,0/opt/con da/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu` failed.\r\n,0:658/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu], thread: [97: indexSelectLargeIndex:658,0: block: [264: indexSelectLargeIndex,0,0: block: [267] Assertion `srcIndex < srcSelectDimSize,0,0` failed.\r\n], thread: [2,0/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu,0], thread: [33:658,0,0: indexSele ctLargeIndex] Assertion `srcIndex < srcSelectDimSize,0: block: [263` failed.\r\n] Assertion `srcIndex < srcSelectDimSize,0` failed.\r\n/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [267,0,0], thr ead: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [267,0,0], thr ead: [35,0,0/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu,0] Assertion `srcIndex < srcSelectDim Size:658], thread: [98` failed.\r\n: indexSelectLargeIndex,0/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu: block: [264,0:658,0] As sertion `srcIndex < srcSelectDimSize: indexSelectLargeIndex,0` failed.\r\n: block: [267], thread: [3/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIn dex: block: [263,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [263,0,0,0,0], thread: 
[100,0,0,0], thread: [36] Assertion `srcIndex < srcSelectDimSize,0,0` failed.\r\n/opt/conda/conda-bld/pytorch_1607369981906/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [264,0,0], thr ead: [4,0] Assertion `srcIndex < srcSelectDimSize,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n,0` failed.\r\n",
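The `srcIndex < srcSelectDimSize` assertion generally means an index ran past the end of an embedding table on the GPU — for example a token id that is >= the vocabulary size, or a sequence position that is >= `max_position_embeddings` — and the readable Python `IndexError` only appears when the same batch is run on CPU. A minimal sanity check along those lines, assuming the same checkpoint and `max_length` as the failing run (the values below are only placeholders):

```python
from transformers import AutoConfig, AutoTokenizer

# Placeholder values: reuse whatever checkpoint and max_length the failing run uses.
checkpoint = "microsoft/prophetnet-large-uncased"
config = AutoConfig.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

batch = tokenizer(
    ["some long training document ..."],  # ideally a sample from the crashing batch
    truncation=True,
    max_length=364,
    return_tensors="pt",
)

# Every id must index into the embedding matrix, and the sequence must fit into
# the learned position embeddings; otherwise the CUDA indexing kernel asserts.
assert int(batch["input_ids"].max()) < config.vocab_size
assert batch["input_ids"].shape[1] <= config.max_position_embeddings
```

Running the failing step once on CPU (e.g. with `CUDA_VISIBLE_DEVICES=""`) is another way to turn the opaque device-side assert into a normal Python traceback that points at the offending lookup.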
"Any updates on ProphetNet loss?? @patrickvonplaten ",
"I have some more information on this. After training for 20 epochs, it learns almost nothing. Most interestingly, its outputs doesn't change when inputs change, that is, it always predicts the same. Predictions are like a mix of different summaries, getting elements from different types of summarizable texts, but it's the same for all... This brings me to think that in some way the network is constructed so that the output layer must always output the same thing, as if it must improve on all batches at the same time, I don't know if I'm explaining myself. It's clear that it is learning \"something\", in the sense that the summaries are clearly taken from my corpus style, but it's kind of learning to make the same summary for all texts. Since I'm using the same script as for other models, I guess there is some error in the network implementation...",
"Hey @alexvaca0,\r\n\r\nI think I can reproduce your error. My training loss is also not improving after quite some time - will look into it!",
"Okay perfect! Please let me know when the issue is solved.",
"The original author @qiweizhen of the model was so nice to say he'll take a look. @qiweizhen - feel free to directly post any potential bugs in this PR.",
"Any updates on this? @qiweizhen @patrickvonplaten ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Ping",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"If prophetnet is not going to be fixed, then I think it should be removed from the library, as it is worthless having it here without being able to use it.",
"The model works fine in inference - it's the training that seems to be buggy. @qiweizhen - do you think we could take a look at ProphetNet together? ",
"> The model works fine in inference - it's the training that seems to be buggy. @qiweizhen - do you think we could take a look at ProphetNet together?\r\n\r\n\r\n\r\n> If prophetnet is not going to be fixed, then I think it should be removed from the library, as it is worthless having it here without being able to use it.\r\n\r\nSorry. Will fix it as soon as possible.\r\n",
"It's strange that I can get correct inference / forward results with beam search, but as you pointed out, the model has non-convergence problem. I try to load the pretrained checkpoint and finetuned checkpoint to carry out further fine-tuning, all of their loss is optimized to 7.x and keeps that loss. With the finetuned checkpoint plus further fine-tuning, the results are still reasonable but a bit worse. I suspect the most part of the model is frozen and only a small part is trainable but I failed to find this bug. I also tried overfitting experiments and the model still can not converge. I will try 1) old Transformers version and 2) fairseq model to compare the intermediate hidden states with the latest Transformers prophetnet model to localize the bug this weekend.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Has the problem of ProphetNet non-convergence been solved? I want to fine tune it based on its checkpoint.",
"I think the code to compute the loss may be wrong.\r\n\r\nThis is the code to compute the loss in [`ProphetNetForConditionalGeneration`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/prophetnet/modeling_prophetnet.py#L1968):\r\n```python\r\n predicting_streams = outputs[1].view(batch_size, self.config.ngram, sequence_length, -1)\r\n predict_logits = self.lm_head(predicting_streams)\r\n\r\n ...\r\n\r\n loss = None\r\n if labels is not None:\r\n loss = self._compute_loss(predict_logits, labels)\r\n```\r\nThe shape of `predicting_streams` is `(batch_size, ngram, sequence_length, hidden_size)`.\r\nThe shape of `predict_logits` is `(batch_size, ngram, sequence_length, vocab_size)`.\r\nThe shape of `labels` is `(batch_size, sequence_length)`.\r\n\r\nThen pass `predict_logits` and `labels` to `_compute_loss`, the code of [`_compute_loss`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/prophetnet/modeling_prophetnet.py#L2001) is: \r\n```python\r\n def _compute_loss(self, logits, labels, ignore_index=-100):\r\n expend_targets = labels.new_zeros(self.config.ngram, labels.size(0), labels.size(1)).fill_(ignore_index)\r\n\r\n for i in range(self.config.ngram):\r\n if i > 0 and self.disable_ngram_loss:\r\n break\r\n expend_targets[i, :, :] = labels\r\n\r\n lprobs = nn.functional.log_softmax(\r\n logits.view(-1, logits.size(-1)),\r\n dim=-1,\r\n dtype=torch.float32,\r\n )\r\n\r\n loss = nn.functional.nll_loss(lprobs, expend_targets.view(-1), reduction=\"mean\")\r\n\r\n ...\r\n\r\n return loss\r\n```\r\nThe shape of `expend_targets` is `(ngram, batch_size, sequence_length)`, the shape of `expend_targets.view(-1)` is `(ngram * batch_size * sequence_length)`, .\r\nThe shape of `lprobs` is `(batch_size * ngram * sequence_length, vocab_size)`.\r\nThen computing the `nll_loss` of `lprobs` and `expend_targets` leads to the mismatch.",
"@patrickvonplaten ",
"This is the code of the prophetnet [hub](https://github.com/microsoft/ProphetNet/blob/master/ProphetNet_En/prophetnet/ngram_criterions.py#L36).\r\nYou can see [line 62](https://github.com/microsoft/ProphetNet/blob/master/ProphetNet_En/prophetnet/ngram_criterions.py#L62) that the shape of `logits` is `(ngram * batch_size * sequence_length, vocab_size)`.",
"Hey @StevenTang1998,\r\n\r\nThanks a lot for taking a closer look here! Would you be interested in opening a PR to fix it?"
] | 1,611 | 1,632 | 1,632 | NONE | null | ## Environment info
- `transformers` version: 4.2.1
- Platform: Ubuntu 18
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1 (YES)
- Tensorflow version (GPU?):
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@LysandreJik @patrickvonplaten @sgugger
## Information
When trying to fine-tune ProphetNet on a summarization task (with transformers/examples/seq2seq/finetune_trainer.py), the model crashes just after performing the evaluation. The same script has worked fine with Bart, Pegasus and T5, the other three models I've tried. The error trace is the following:
```{python}
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.:24, 2.57it/s]
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [234,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [290,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [284,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [228,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same `srcIndex < srcSelectDimSize` assertion repeats for the remaining threads of block [228,0,0] and for threads [0-31,0,0] of block [284,0,0]; trimmed for brevity ...]
{'loss': 8.933700561523438, 'learning_rate': 2.992816091954023e-05, 'epoch': 0.04782400765184122}
Traceback (most recent call last):
File "finetune_trainer.py", line 498, in <module>
main()
File "finetune_trainer.py", line 426, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 853, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 923, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer_seq2seq.py", line 96, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 1352, in evaluate
metric_key_prefix=metric_key_prefix,
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 1469, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer_seq2seq.py", line 175, in prediction_step
model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/trainer.py", line 1574, in prediction_step
outputs = model(**inputs)
File "/home/alejandro.vaca/miniconda/envs/spainai_hackaton/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1769, in forward
return_dict=return_dict,
File "/home/alejandro.vaca/miniconda/envs/spainai_hackaton/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1667, in forward
return_dict=return_dict,
File "/home/alejandro.vaca/miniconda/envs/spainai_hackaton/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1365, in forward
) = self.compute_buffered_relative_buckets(position_ids)
File "/home/alejandro.vaca/SpainAI_Hackaton_2020/transformers/src/transformers/models/prophetnet/modeling_prophetnet.py", line 1496, in compute_buffered_relative_buckets
position_ids = torch.arange(1, self.max_target_positions).to(position_ids.device).repeat(1, 1)
RuntimeError: CUDA error: device-side assert triggered
0%| | 25/10440 [02:19<16:08:03, 5.58s/it]
```
Model I am using (Bert, XLNet ...): ProphetNet (microsoft/prophetnet-large-uncased)
The problem arises when using:
* [x] the official example scripts: (give details below)
It arises when using the official script for training Seq2Seq models.
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
A dataset with texts and their summaries.
## To reproduce
Steps to reproduce the behavior:
1. Run the script transformers/examples/seq2seq/finetune_trainer.py with any dataset you want, passing the ProphetNet model as the model argument. More concretely, call the script the following way:
```{bash}
python finetune_trainer.py --learning_rate=3e-5 --task summarization \
--do_train --do_eval --evaluation_strategy steps --model_name_or_path microsoft/prophetnet-large-uncased \
--data_dir mydatadir --output_dir myoutputdir \
--per_device_train_batch_size 8 --per_device_eval_batch_size 16 \
--eval_accumulation_steps 8 --gradient_accumulation_steps 8 --num_train_epochs=20 --eval_beams=1 \
--load_best_model_at_end --save_steps 25 --logging_steps 25 --fp16 \
--overwrite_output_dir
```
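(Side note, added for illustration and not part of the original reproduction: this particular `srcIndex < srcSelectDimSize` assertion is typically raised when an embedding lookup receives an out-of-range index, so one cheap sanity check is to compare tokenized lengths against the model's configured maximum positions. This is only a guess at the trigger, not a confirmed diagnosis, and the example text below is a placeholder.)

```python
# Hedged sanity check (added for illustration; the text below is a placeholder).
from transformers import AutoConfig, AutoTokenizer

model_name = "microsoft/prophetnet-large-uncased"
config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = ["an example document from the summarization dataset"]
for text in texts:
    ids = tokenizer(text, truncation=False)["input_ids"]
    if len(ids) > config.max_position_embeddings:
        print(
            f"sequence of {len(ids)} tokens exceeds "
            f"max_position_embeddings={config.max_position_embeddings}"
        )
```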
## Expected behavior
It should not crash when training ProphetNet, as it doesn't crash for Bart, Pegasus or T5... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9804/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9803/comments | https://api.github.com/repos/huggingface/transformers/issues/9803/events | https://github.com/huggingface/transformers/issues/9803 | 794,120,320 | MDU6SXNzdWU3OTQxMjAzMjA= | 9,803 | convert_graph_to_onnx.convert broken for model bart-large / wmt19-en-de | {
"login": "oborchers",
"id": 26734737,
"node_id": "MDQ6VXNlcjI2NzM0NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/26734737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oborchers",
"html_url": "https://github.com/oborchers",
"followers_url": "https://api.github.com/users/oborchers/followers",
"following_url": "https://api.github.com/users/oborchers/following{/other_user}",
"gists_url": "https://api.github.com/users/oborchers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oborchers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oborchers/subscriptions",
"organizations_url": "https://api.github.com/users/oborchers/orgs",
"repos_url": "https://api.github.com/users/oborchers/repos",
"events_url": "https://api.github.com/users/oborchers/events{/privacy}",
"received_events_url": "https://api.github.com/users/oborchers/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | null | [] | [
"Thank you very much, @oborchers for opening a new ticket and re-testing with other models and verifying that this problem is project-wide.\r\n\r\nI hope @mfuntowicz gets a chance to have a look at it, or tag someone else who understands this sub-domain.",
"Hi @mfuntowicz @stas00 , is this a known issue with GPT2 as well? Please let me know if there is a workaround.\r\n\r\nI was considering to convert ```gpt2``` or ```gpt2-medium``` to ONNX using the notebook provided [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb).\r\n\r\nOn executing the line of code below:\r\n```convert(framework=\"pt\", model=\"gpt2-medium\", output=Path(\"onnx/gpt2-medium.onnx\"), opset=11)```\r\n\r\nI get this error:\r\n```~/miniconda3/envs/onnx/lib/python3.9/site-packages/torch/onnx/utils.py in _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)\r\n 1115 for i, x in enumerate(value):\r\n 1116 if not isinstance(x, int):\r\n-> 1117 raise ValueError(\"The type of axis index is expected to be an integer\")\r\n 1118 if x in value_dict:\r\n 1119 warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'\r\n\r\nValueError: The type of axis index is expected to be an integer```",
"I recently stumbled upon this issue myself. Specifically case 2. The same error appears for `facebook/bart-large`, `facebook/bart-large-cnn`, `IlyaGusev/mbart_ru_sum_gazeta`. The main issue here is that for some outputs the tokenizer/model gives not a tensor, but rather a **tuple of tensors**, which is then converted into a list of shape dicts. \r\n\r\n`torch.onnx._validate_dynamic_axes` (line 1193 in the latest release) expects a dict (and does nothing) or a list of ints for `dynamic_axes` (and mocks up some axes names), however (for the reason above) it gets a __list of dicts ([map int -> string])__\r\n\r\n```python3\r\n for key, value in dynamic_axes.items():\r\n if key not in valid_names:\r\n warnings.warn(\"Provided key {} for dynamic axes is not a valid input/output name\".format(key))\r\n if isinstance(value, list):\r\n warnings.warn('No names were found for specified dynamic axes of provided input.'\r\n 'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))\r\n\r\n value_dict = {}\r\n for i, x in enumerate(value):\r\n if not isinstance(x, int):\r\n raise ValueError(\"The type of axis index is expected to be an integer\")\r\n if x in value_dict:\r\n warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'\r\n .format(x, key))\r\n else:\r\n value_dict[x] = str(key) + '_dynamic_axes_' + str(i + 1)\r\n dynamic_axes[key] = value_dict\r\n```\r\n\r\nI will keep digging into that, but the core question here is why Bart and related models return tuple of tensors (for outputs 1 to 12; outputs 0 and 13 are fine)? Although, I'm not an expert in either transformers, pytorch or onnx, so I might be missing something.\r\n\r\nOn a slight tangent here, is there a specific reason why `summarization` pipeline is not in the supported pipeline types for this script? ",
"any update?",
"any update?",
"any update? ",
"We're currently working on a rework of the ONNX implementation within Transformers, which is available here: https://github.com/huggingface/transformers/pull/11786\r\n\r\nInstead of offering a script to enable conversions for all models (which was not kept up to date with recent model releases), we're opting for a case-by-case approach, while offering the tools to convert models manually in a straightforward and simple manner; by creating `OnnxConfig` configuration objects to specify the input and output types of each model.\r\n\r\nPlease take a look at the PR and give us your feedback.",
"@LysandreJik: Thank you very much! I think this is an excellent way to go. Having converted a dozen models myself, we internally went for something similar, albeit not nearly as streamlined / sophisticated.\r\n\r\n```\r\[email protected](auto_attribs=True)\r\nclass TransformersONNXConfig(BaseConfig):\r\n \"\"\"Provides the basic configuration for all models.\"\"\"\r\n\r\n base_model: str\r\n trans_cfg: PretrainedConfig\r\n\r\n input_names: List[str]\r\n output_names: List[str]\r\n dynamic_axes: Dict\r\n model_args: Set[torch.tensor]\r\n tokenizer: PreTrainedTokenizerFast\r\n extra_args: Dict\r\n```\r\n\r\nand\r\n\r\n```\r\ndef create_and_export_onnx_model(self):\r\n \"\"\"Creates a new model if the current model does not exist and exports it.\"\"\"\r\n torch.onnx.export(\r\n self.create_torch_model(),\r\n self.cfg.model_args,\r\n f=self.onnx_posix_pth,\r\n input_names=self.cfg.input_names,\r\n output_names=self.cfg.output_names,\r\n dynamic_axes=self.cfg.dynamic_axes,\r\n do_constant_folding=True,\r\n use_external_data_format=False,\r\n enable_onnx_checker=True,\r\n opset_version=12,\r\n )\r\n```\r\n\r\nWhere the most important part is `self.create_torch_model`, as we regularly modify the basic torch model with custom layers down the line. Is support for such a feature planned? If not, is it considerable? As it would substantially easy conversion of custom models, such as the [sbert](https://www.sbert.net) ones.\r\n\r\nFurthermore, would it make sense to make `OnnxConfig` a part of the `PreTrainedModel` config to enable support from the get-go?\r\n\r\nAnd finally, I assume this leaves us with the export, so that for seq2seq models we need still need to re-write the `.generate` function? Or is it possible to add support for an ONNX model from your side (probably difficult, as it's a part of the pre-trained model already, which would require double loading the model)? ",
"Thanks @oborchers for your comments and use-cases.\r\n\r\nI will let @LysandreJik speak about a potential integration of the `OnnxConfig` within the `PreTrainedModel` config, my initial plan was to have 100% backward compatibility, this explain why I put this somewhere else _(currently)_.\r\n\r\nRegarding `generate`, this is something that might require some investigations but I'm seeing good opportunities to have something within the ONNX graph with [the recent knobs released by Microsoft](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2-OneStepSearch_OnnxRuntime_CPU.ipynb) folks on the ONNXRuntime project _(cc @tianleiwu for visibility on this)_.\r\n\r\nStill, for this initial rework of the ONNX exporting capabilities we focused on \"model only\", with the ability to extend to full pipelines in the future. Generation is definitively one of the hardest task to get within the graph, but also one where I can see the biggest benefits.",
"@mfuntowicz: Thank you for your feedback! Yes, I understand the point for the compatibility to the fullest. After all, it's not that difficult to get to the config if done once or twice. \r\n\r\nRegarding the `.generate` function. Thanks for the link! Will look into this more! Yes, absolutely!!",
"> Hi @mfuntowicz @stas00 , is this a known issue with GPT2 as well? Please let me know if there is a workaround.\r\n> \r\n> I was considering to convert `gpt2` or `gpt2-medium` to ONNX using the notebook provided [here](https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb).\r\n> \r\n> On executing the line of code below: `convert(framework=\"pt\", model=\"gpt2-medium\", output=Path(\"onnx/gpt2-medium.onnx\"), opset=11)`\r\n> \r\n> I get this error:\r\n> \r\n> ```python\r\n> 1115 for i, x in enumerate(value):\r\n> 1116 if not isinstance(x, int):\r\n> -> 1117 raise ValueError(\"The type of axis index is expected to be an integer\")\r\n> 1118 if x in value_dict:\r\n> 1119 warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'\r\n> \r\n> ValueError: The type of axis index is expected to be an integer```\r\n> ```\r\n\r\nHello @mriganktiwari \r\nAny update on this? I am still facing the issue with GPT-2. I have used the same code as yours. Please guide, thanks!",
"can i work on this?\r\nplease assign this to me\r\n",
"Hey! This is no longer something handled at the `transformers` level, will close it! Sorry for the inconvenience \r\n\r\nThe way to handle it now is through optimum! See this documentation page for more information: [ONNX exporter](https://huggingface.co/docs/optimum/exporters/onnx/overview)"
] | 1,611 | 1,698 | 1,698 | NONE | null | @stas00's edit on top:
I currently don't have the know-how in this domain, so if there are members of the community with ONNX experience and this issue resonates with you, please don't hesitate to comment if you'd like to work on resolving this. Thank you very much!
------------------------
## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
- ONNX Version: 1.5.2 (ONNX custom build w CUDA 11)
### Who can help
@stas00 (based on his suggestion to open a new issue in #9722 and run this with bart)
@patrickvonplaten (based on link of @stas00 in #9722)
@mfuntowicz (based on link of @stas00 in #9722)
@LysandreJik (based on link of @stas00 in #9722)
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large & facebook/wmt19-en-de
The problem arises when using:
* [X] the official example scripts: transformers.convert_graph_to_onnx.convert
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## Description
Initially, I was about to use the ONNX export of facebook/wmt19-en-de for our deployment. Yet, it turns out that the exported models do not work properly. It seems that several things are broken in the export for this model type.
## To reproduce
### 1. Testing facebook/wmt19-en-de
```
import torch
import transformers
import numpy as np
import onnxruntime as rt
from pathlib import Path
from transformers import convert_graph_to_onnx
print(rt.__version__)
opt = rt.SessionOptions()
model_name = "facebook/wmt19-en-de"
pipeline_name = "translation_en_to_de"
model_pth = Path("encoder/en_de_trans.onnx")
if model_pth.exists():
model_pth.unlink()
nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
convert_graph_to_onnx.convert(
framework="pt",
model=model_name,
output=model_pth,
opset=12,
tokenizer=model_name,
use_external_format= False,
pipeline_name= pipeline_name,
)
sess = rt.InferenceSession(str(model_pth), opt)
spans = [
"My name is Bert", # passes facebook/wmt19-en-de
"My name is Bert and" # fails facebook/wmt19-en-de
]
for span in spans:
model_input = nlp.tokenizer.encode_plus(span)
model_input = {name : np.atleast_2d(value) for name, value in model_input.items()}
out = nlp.model(**nlp.tokenizer(span, return_tensors="pt"))
trans_1 = out[0].detach().cpu().numpy()
trans_2 = out[1].detach().cpu().numpy()
onnx_1, onnx_2 = sess.run(None, model_input)
assert np.allclose(trans_1, onnx_1, atol=1e-5)
assert np.allclose(trans_2, onnx_2, atol=1e-5)
```
Will raise the following exception:
```
Some weights of FSMTModel were not initialized from the model checkpoint at facebook/wmt19-en-de and are newly initialized: ['model.encoder.embed_positions.weight', 'model.decoder.embed_positions.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
ONNX opset version set to: 12
Loading pipeline (model: facebook/wmt19-en-de, tokenizer: facebook/wmt19-en-de)
Using framework PyTorch: 1.7.1
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Found output output_1 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
decoder_input_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
**[skipped warnings for brevity...]**
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
<ipython-input-2-f4eec5b0ac5f> in <module>
51 trans_1 = out[0].detach().cpu().numpy()
52 trans_2 = out[1].detach().cpu().numpy()
---> 53 onnx_1, onnx_2 = sess.run(None, model_input)
54 assert np.allclose(trans_1, onnx_1, atol=1e-5)
55 assert np.allclose(trans_2, onnx_2, atol=1e-5)
~/anaconda3/envs/dev/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
122 output_names = [output.name for output in self._outputs_meta]
123 try:
--> 124 return self._sess.run(output_names, input_feed, run_options)
125 except C.EPFail as err:
126 if self._enable_fallback:
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_74' Status Message: /data/shared/packages/onnxruntime/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,6}, requested shape:{5}
```
As stated in #9722, I'd assume that some dynamic shape was not inferred properly or not passed to the dynamic_axes argument of torch.onnx.export. But that's just a quick guess, based on what I've seen when building my own ONNX models. Important: the first string passes the assertions, the second one doesn't.
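For illustration, a minimal sketch of a manual export with hand-specified dynamic axes follows. The wrapper, the axis names, and the restriction to the logits output are my own assumptions (convert_graph_to_onnx infers all of this automatically), and whether the resulting graph then handles variable sequence lengths correctly is exactly the open question of this issue:

```python
# Sketch only: manual export with hand-specified dynamic_axes.
# Assumptions (not from the original report): the LogitsOnly wrapper, the
# axis names, and exporting just the logits instead of all pipeline outputs.
import torch
from transformers import AutoTokenizer, FSMTForConditionalGeneration

model_name = "facebook/wmt19-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = FSMTForConditionalGeneration.from_pretrained(model_name).eval()


class LogitsOnly(torch.nn.Module):
    """Return only the logits so the exported graph has a single, well-defined output."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        return self.model(input_ids=input_ids, attention_mask=attention_mask, use_cache=False)[0]


sample = tokenizer("My name is Bert and", return_tensors="pt")

torch.onnx.export(
    LogitsOnly(model),
    (sample["input_ids"], sample["attention_mask"]),
    "wmt19_en_de_logits.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=12,
    do_constant_folding=True,
)
```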
### 2. Testing facebook/bart-large (feature extraction)
@stas00 suggested re-testing the behavior with the underlying BART model. Now, say we run the same script with the following parameters:
```
model_name = "facebook/bart-large"
pipeline_name = "feature-extraction"
model_pth = Path("generator/bart.onnx")
```
Raises
```
ONNX opset version set to: 12
Loading pipeline (model: facebook/bart-large, tokenizer: facebook/bart-large)
Using framework PyTorch: 1.7.1
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
**[skipped output axes for brevity...]**
Found output output_13 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
decoder_input_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py:1111: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output_1
warnings.warn('No names were found for specified dynamic axes of provided input.'
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-3362f5ef6ea8> in <module>
30 nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
31
---> 32 convert_graph_to_onnx.convert(
33 framework="pt",
34 model=model_name,
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name)
365 # Export the graph
366 if framework == "pt":
--> 367 convert_pytorch(nlp, opset, output, use_external_format)
368 else:
369 convert_tensorflow(nlp, opset, output)
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert_pytorch(nlp, opset, output, use_external_format)
277 ordered_input_names, model_args = ensure_valid_input(nlp.model, tokens, input_names)
278
--> 279 export(
280 nlp.model,
281 model_args,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
223
224 from torch.onnx import utils
--> 225 return utils.export(model, args, f, export_params, verbose, training,
226 input_names, output_names, aten, export_raw_ir,
227 operator_export_type, opset_version, _retain_param_name,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
83 else:
84 operator_export_type = OperatorExportTypes.ONNX
---> 85 _export(model, args, f, export_params, verbose, training, input_names, output_names,
86 operator_export_type=operator_export_type, opset_version=opset_version,
87 _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference, use_new_jit_passes)
627 if dynamic_axes is None:
628 dynamic_axes = {}
--> 629 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
630
631 graph, params_dict, torch_out = \
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
1115 for i, x in enumerate(value):
1116 if not isinstance(x, int):
-> 1117 raise ValueError("The type of axis index is expected to be an integer")
1118 if x in value_dict:
1119 warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'
ValueError: The type of axis index is expected to be an integer
```
### 3. Testing facebook/bart-large (text-generation)
```
model_name = "facebook/bart-large"
pipeline_name = "text-generation"
model_pth = Path("generator/bart.onnx")
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-d6fa1456dc0e> in <module>
28 model_pth.unlink()
29
---> 30 nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
31
32 convert_graph_to_onnx.convert(
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)
403 )
404
--> 405 model = model_class.from_pretrained(model, config=config, revision=revision, **model_kwargs)
406 if task == "translation" and model.config.task_specific_params:
407 for key in model.config.task_specific_params:
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1040 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1041 )
-> 1042 raise ValueError(
1043 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
1044 "Model type should be one of {}.".format(
ValueError: Unrecognized configuration class <class 'transformers.models.bart.configuration_bart.BartConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig.
```
### 4. Testing facebook/bart-large (fill-mask)
```
model_name = "facebook/bart-large"
pipeline_name = "fill-mask"
model_pth = Path("generator/bart.onnx")
```
```
ONNX opset version set to: 12
Loading pipeline (model: facebook/bart-large, tokenizer: facebook/bart-large)
Using framework PyTorch: 1.7.1
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
**[skipped for brevity]**
Found output output_13 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
decoder_input_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-d55ec01c8b87> in <module>
34 nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
35
---> 36 convert_graph_to_onnx.convert(
37 framework="pt",
38 model=model_name,
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name)
365 # Export the graph
366 if framework == "pt":
--> 367 convert_pytorch(nlp, opset, output, use_external_format)
368 else:
369 convert_tensorflow(nlp, opset, output)
~/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/convert_graph_to_onnx.py in convert_pytorch(nlp, opset, output, use_external_format)
277 ordered_input_names, model_args = ensure_valid_input(nlp.model, tokens, input_names)
278
--> 279 export(
280 nlp.model,
281 model_args,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
223
224 from torch.onnx import utils
--> 225 return utils.export(model, args, f, export_params, verbose, training,
226 input_names, output_names, aten, export_raw_ir,
227 operator_export_type, opset_version, _retain_param_name,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format)
83 else:
84 operator_export_type = OperatorExportTypes.ONNX
---> 85 _export(model, args, f, export_params, verbose, training, input_names, output_names,
86 operator_export_type=operator_export_type, opset_version=opset_version,
87 _retain_param_name=_retain_param_name, do_constant_folding=do_constant_folding,
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference, use_new_jit_passes)
627 if dynamic_axes is None:
628 dynamic_axes = {}
--> 629 _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
630
631 graph, params_dict, torch_out = \
~/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py in _validate_dynamic_axes(dynamic_axes, model, input_names, output_names)
1115 for i, x in enumerate(value):
1116 if not isinstance(x, int):
-> 1117 raise ValueError("The type of axis index is expected to be an integer")
1118 if x in value_dict:
1119 warnings.warn('Duplicate dynamic axis index {} was provided for input {}.'
ValueError: The type of axis index is expected to be an integer
```
## Expected behavior
Cases 1, 2 & 4 point in the direction that something is wrong with inferring the dynamic shapes, if I am right. Case 3 just popped up while I was testing the other pipelines.
In all cases, the export & usage should work properly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9803/reactions",
"total_count": 5,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9803/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9802/comments | https://api.github.com/repos/huggingface/transformers/issues/9802/events | https://github.com/huggingface/transformers/issues/9802 | 793,963,980 | MDU6SXNzdWU3OTM5NjM5ODA= | 9,802 | [trainer] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Edit: I am not sure why this param is always set to `True` except for gradient checkpointing. We can certainly make a training argument control its value to avoid hard-coding it. At least to experiment and benchmark whether it's best at True/False."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | When running DDP:
```
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \
run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200
```
I get:
> [W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
but it's not possible to turn it off from the trainer, i.e. it's hardwired.
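For reference, a minimal sketch of what making this flag configurable could look like: the dataclass, its field name, and the wrapping helper below are assumptions for illustration, not an existing Trainer option.

```python
# Sketch only: what making find_unused_parameters configurable could look like.
# Assumes torch.distributed has already been initialized by the launcher
# (torch.distributed.launch / init_process_group); the DDPArguments dataclass
# and its field name are hypothetical.
from dataclasses import dataclass

import torch
from torch.nn.parallel import DistributedDataParallel as DDP


@dataclass
class DDPArguments:
    ddp_find_unused_parameters: bool = False  # hypothetical knob


def wrap_model(model: torch.nn.Module, args: DDPArguments, local_rank: int) -> DDP:
    # Mirrors the wrapping the Trainer does internally, but with the flag
    # taken from the arguments rather than hard-coded to True.
    return DDP(
        model.to(local_rank),
        device_ids=[local_rank],
        output_device=local_rank,
        find_unused_parameters=args.ddp_find_unused_parameters,
    )
```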
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9802/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9801/comments | https://api.github.com/repos/huggingface/transformers/issues/9801/events | https://github.com/huggingface/transformers/issues/9801 | 793,953,344 | MDU6SXNzdWU3OTM5NTMzNDQ= | 9,801 | [trainer] a consistent way to limit the number of items | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Mmm, which scripts use `n_obs`? I don't remember seeing this one the official maintained examples.\r\n\r\n`--max_steps` is different from `n_train`/`n_val`/`n_test`: `--max_steps` runs training for `max_steps`, using the *full training set*. `--n_train` restrains the training set to its first `n_train` samples. The first has its place inside `Trainer` for obvious reason, the second is part of the processing of the training (or eval/test) dataset so I don't think this has its place in `Trainer`.\r\n\r\nAs for a consistent way to do this in all examples, it doesn't really matter in non seq2seq scripts as their evaluation runs quite fast. I imagine those arguments were introduces in the seq2seq script originally because its evaluation is super long. We can add them with a need-to basis on other datasets, but I haven't felt the need to do this.",
"> Mmm, which scripts use `n_obs`? I don't remember seeing this one the official maintained examples.\r\n\r\nall `seq2seq/run_*py`\r\n\r\n> `--max_steps` is different from `n_train`/`n_val`/`n_test`: `--max_steps` runs training for `max_steps`, using the _full training set_. `--n_train` restrains the training set to its first `n_train` samples. The first has its place inside `Trainer` for obvious reason, the second is part of the processing of the training (or eval/test) dataset so I don't think this has its place in `Trainer`.\r\n\r\nright, so this confusion leads to an incorrect benchmark. that's what I thought last night but it was too late to see.\r\nhttps://github.com/huggingface/transformers/issues/9371#issuecomment-767323420\r\n\r\nWe need a way to be able to truncate the dataset to an identical size and then compare say 1-gpu vs 2-gpu benchmark on the same total number of input objects.\r\n\r\nSo how do we currently do that with other scripts that aren't `finetune_trainer.py`?\r\n\r\n> As for a consistent way to do this in all examples, it doesn't really matter in non seq2seq scripts as their evaluation runs quite fast. I imagine those arguments were introduces in the seq2seq script originally because its evaluation is super long. We can add them with a need-to basis on other datasets, but I haven't felt the need to do this.\r\n\r\nfast? try `run_clm.py` on gpt2/wiki - it's multiple hours\r\ne.g. see: https://github.com/huggingface/transformers/issues/9371#issuecomment-759074475",
"> all seq2seq/run_*py\r\n\r\nThose are not official maintained examples except for the new `run_seq2seq`. No one has really touched them since Sam left and they are in need for cleanup ;-)\r\n\r\n> fast? try run_clm.py on gpt2/wiki - it's multiple hours e.g. see: #9371 (comment)\r\n\r\nYou are pointing to a comment that does not contain any evaluation. So I stand by what I say. Evaluation on wikitext-2 runs in a couple of seconds.\r\n\r\n> We need a way to be able to truncate the dataset to an identical size and then compare say 1-gpu vs 2-gpu benchmark on the same total number of input objects.\r\n\r\nLike I said, if it's needed it can be added.\r\n\r\n> So how do we currently do that with other scripts that aren't finetune_trainer.py?\r\n\r\nBy opening a PR adding this ;-)",
"Thank you for clarifying which is which, @sgugger \r\n\r\nOK, so what should we call a new flag in HF Trainer that would be an equivalent of --n_train? or use the same?\r\n\r\nDo you suggest it should be train-specific?",
"I think it should be in the scripts, not the Trainer, as it's part of the preprocessing. I don't think it should be train-specific, we can do eval/test like in the finetune_trainer script.",
"but then we have to change all the scripts. Why not have an option to truncate the dataset at trainer level and solve it at once for all scripts?",
"Because it doesn't have much to do with the Trainer itself IMO. It's like putting all the arguments of all the scripts about tokenization in the Trainer, it doesn't really make sense as the Trainer is supposed to take the lead after the data preprocessing.\r\n\r\nLet's see if @LysandreJik and @patrickvonplaten think differently maybe?",
"This makes sense, then perhaps having a Trainer-subclass that all scripts can tap into?\r\n\r\nAlso may I suggest that `--max_steps` is an ambiguous argument as it tells the user nothing about whether this is per gpu or per the whole thing?",
"The documentation says number of training steps. I don't see how the number GPU intervenes here as a training step is the full combination of forward, backward (perhaps multiple times if gradient accumulation is activated) and optimizer step.\r\n\r\nOne training step can have a different number of training samples depending on the number of GPUs, but also depending on the batch size, gradient accumulation steps etc. This information is logged at the beginning of training (`logger.info(f\" Total train batch size (w. parallel, distributed & accumulation) = {total_train_batch_size}\")` in Trainer.train)",
"Right, so what you're saying is that `--max_steps` is just the wrong tool for the truncating job and we need an explicit `--use-that-many-total-train-records`.\r\n\r\nHonestly, I have been staring at all these different trainer options for a long time now and I still get confused at which is which, and which are impacted by number of gpus and which aren't. Every time this happens I have to go through the source code to see how it's used and then I get it. To me some of these arg names are hard to make sense of in the multi-gpu vs single gpu env.\r\n\r\n* `--per_device_train_batch_size` is loud and clear.\r\n* `--max_steps` is not.\r\n\r\nI propose we use `total` and `per_device` prefix for any cl arg that behaves differently depending on the number of gpus.",
"The problem is that this then is a breaking change. I'm not necessarily super fond of the name `max_steps` myself but I'm not sure it's worth going through the trouble of a deprecation cycle for this one.",
"Do you think it's actually used a lot? \r\n\r\nI agree with avoiding break changes, but since we are trying to make the API intuitive, such changes in the long run will benefit a much larger community than the annoyance it'd cause to those who use it right now.\r\n\r\nI think the main issue we have here is that all these proposals to renames happen dynamically. But instead I think it'd make sense for a group of us to sit down, review all the cl args and do a single adjustment. Surely, this won't guarantee that in the future we won't find we missed something, but it's definitely better than doing it a little bit at a time, which is much more annoying.\r\n\r\nIn some previous projects for such things we also had a back-compat mode, which ones enabled supported a whole bunch of old ways until the user was ready to make the shift to the new code. Surely a rename of a cl arg could be easily supported by such feature. So here, instead of a deprecation cycle per item the approach is to keep anything old around but only if it's loaded from a helper module. So that the main code remains clean of deprecated things. This was in a different programming environment where it was developer, so I will have to think how to do the same here.",
"Note that this is not just a CI arg rename, since `TrainingArguments` is also a public class users may very well directly use in their code (you need to instantiate one each time you use a `Trainer`). We can certainly have a discussion around the arguments and decide which one we want to rename, though it should be in a separate issue. We're starting to derail this one ;-)\r\n\r\nAnd from the issues, I'd say that half the users use `num_train_epochs` and half use `max_steps` to control the length of their training, so it is used a lot.",
"Thank you for flagging that we are diverging from the topic at hand, @sgugger \r\nAs you suggested I opened a new one: https://github.com/huggingface/transformers/issues/9821\r\n\r\nAnd thank you for confirming that these are used a lot.",
"> Because it doesn't have much to do with the Trainer itself IMO. It's like putting all the arguments of all the scripts about tokenization in the Trainer, it doesn't really make sense as the Trainer is supposed to take the lead after the data preprocessing.\r\n> \r\n> Let's see if @LysandreJik and @patrickvonplaten think differently maybe?\r\n\r\nSo for the benefit of reviewers, and to bring us back to the focus of this Issue. I proposed to have a cl arg that will truncate the dataset (train, others?) (total!) across all example scripts.\r\n\r\n@sgugger, correctly suggested that perhaps this shouldn't belong to Trainer, and then I suggested that perhaps there should be a sub-class that does such nice little tweaks consistently across all example scripts, rather than manually replicating the same code and which often leads to scripts diverging.\r\n\r\nPlus, @sgugger points out that `examples/seq2seq/run*.py` haven't yet been converted to the new way.",
"I always thought that `max_steps` defines the total number of weight update steps (which is then not really influenced by other parameters such as number of GPUs or `gradient_accumalation_steps` or whatever). To me it defines: \"How often do I want to update my weights?\" or am I wrong here?. Think the name is clear and does not need to be changed, the documentation could be updated with a sentence that makes clear that `max_steps` = number of weight updates. Also, I use this arg quite often when training and think it's important to keep.\r\n\r\nI agree with @sgugger here that I think a `--max_num_train_samples` arg (or whatever the name) should not go into the trainer, but should be added to all examples scripts. It's actually incredibly easy to do this with `datasets`: \r\n\r\n```python\r\nds = load_dataset(\"crime_and_punish\", split=\"train\")\r\nds = ds.select(range(arg.max_num_train_samples))\r\n```\r\n\r\nI'm totally fine with having this as another cl arg for the scripts, but don't think it's the responsibility of the `trainer`.",
"I agree with Sylvain and Patrick about `max_steps`.\r\n\r\nAnd for controlling the number of examples, this should go in scripts than `Trainer`, as we do all the pre-processing in the scripts. We could add two arguments to `DataTrainingArguments` in every script.\r\n`--max_train_samples` = number of training examples\r\n`--max_val_samples` = number of validation examples\r\n\r\nThese args are already there in the new `run_seq2seq.py` script.\r\n",
"Thank you for your input, guys. Your suggestions work for me.\r\n\r\n> We could add two arguments to DataTrainingArguments in every script.\r\n> --max_train_samples = number of training examples\r\n> --max_val_samples = number of validation examples\r\n> \r\n> These args are already there in the new run_seq2seq.py script.\r\n\r\nbut not in other `run_*.py` scripts.\r\n\r\nand then we have `test` too - at least in `finetune_trainer.py`\r\n\r\nI proposed to have a Trainer subclass that implements this for all scripts vs repeating the same cl arg definition and code in every script a new (and forgetting to sync some) - could you please address that? \r\n\r\n---------------------------\r\n\r\nThe other slight confusion across some scripts is `val` vs `eval` - it's inconsistent - some reports say `val` others `eval` - train/val/test are splits and are orthogonal to train/evaluate/predict - and while they are the same for train, the rest are just confusing, since you can have predict for val split and evaluate for test split. Should we discuss this in a separate issue?",
"> I proposed to have a Trainer subclass that implements this for all scripts vs repeating the same cl arg definition and code in every script a new (and forgetting to sync some) - could you please address that?\r\n\r\nI don't think this is a good idea personally. The goal of the scripts is to provide examples for our users. Having examples that don't use the main object of the library is counterproductive. It's one other instance where we have to bear the burden of duplicate code to make the user experience easier IMO.\r\n\r\n> The other slight confusion across some scripts is val vs eval - it's inconsistent - some reports say val others eval - train/val/test are splits and are orthogonal to train/evaluate/predict - and while they are the same for train, the rest are just confusing, since you can have predict for val split and evaluate for test split. Should we discuss this in a separate issue?\r\n\r\nI think this is mostly `finetune_trainer` (and maybe `run_seq2seq2` since I may have copied some names) not using the same terminology as the other scripts in this instance. So those two scripts should get aligned with the rest on this matter. Again, let's keep the examples simple (I feel like I'm repeating this all day long but *they are just examples* we cannot have scripts that will solve every use case and trying to do so make them un-understandable for our users) and match train/eval/test with what is done (training/evaluation/predict).",
"> > I proposed to have a Trainer subclass that implements this for all scripts vs repeating the same cl arg definition and code in every script a new (and forgetting to sync some) - could you please address that?\r\n> \r\n> I don't think this is a good idea personally. The goal of the scripts is to provide examples for our users. Having examples that don't use the main object of the library is counterproductive. It's one other instance where we have to bear the burden of duplicate code to make the user experience easier IMO.\r\n\r\nYou're correct. I didn't think of that. \r\n\r\nSo we have a conflict here between example scripts and them being used for more than that.\r\n\r\nI, for one, need a solid set of scripts to do:\r\n\r\n1. integration validation\r\n2. benchmarking\r\n\r\nIn the absence of these I have been heavily relying on the example scripts. And this is probably where the conflict is.\r\n\r\nSo I keep on bringing this up - should we have a set of scripts that are not examples, but real production work horses and we treat them as such? Perhaps they can have much less functionality but do it consistently across different domains and simple?\r\n\r\nPerhaps, instead of `run_(foo|bar|tar).py` it's one script that can tap into any of these domains and then it can have a simple identical cl args. And all we change is model names and most other args are almost the same.\r\n\r\n> > The other slight confusion across some scripts is val vs eval - it's inconsistent - some reports say val others eval - train/val/test are splits and are orthogonal to train/evaluate/predict - and while they are the same for train, the rest are just confusing, since you can have predict for val split and evaluate for test split. Should we discuss this in a separate issue?\r\n> \r\n> I think this is mostly `finetune_trainer` (and maybe `run_seq2seq2` since I may have copied some names) not using the same terminology as the other scripts in this instance. So those two scripts should get aligned with the rest on this matter. Again, let's keep the examples simple (I feel like I'm repeating this all day long but _they are just examples_ we cannot have scripts that will solve every use case and trying to do so make them un-understandable for our users) and match train/eval/test with what is done (training/evaluation/predict).\r\n\r\nYou're absolutely correct, please see my response in the comment above.\r\n",
"> So I keep on bringing this up - should we have a set of scripts that are not examples, but real production work horses and we treat them as such? Perhaps they can have much less functionality but do it consistently across different domains and simple?\r\n\r\nIf the basic examples do not suffice, then yes, definitely.",
"But we are walking in circles. If these are examples and they are treated as examples, these aren't tools to be relied upon. I hope you can see the irony...\r\n\r\nI need a solid tool that will not change its API, start doing all the benchmarks in it so that we could go back to benchmarks from 6 months or a year ago and be able to run those and re-check.\r\n",
"I'm not sure why you say we are walking in circles. I just dais yes to having benchmark-specific scripts if the examples do not have all the functionality you need.",
"I see what you mean. But you asked a tricky question - can I figure out how to the use the example scripts to meet my needs - mostly yes - but then every time I ask for something that ensures consistency, you say - but the audience is wrong - it should be for users. And I say, yes, of course, you're right. And we end up nowhere. Do you see where the circle is?\r\n\r\nIdeally there should be just one benchmarking tool that can handle any model (or at least the majority of them) and support the different tasks and it probably won't need all the possible flags the various scripts have. If that makes sense.\r\n\r\nI was using `finetune_trainer.py` for many things, but then a user asks to validate/benchmark/integrate a model not supported by that script, so I go into that subdomain in examples and things aren't the same there. And I know we are trying to make the example scripts consistent, but the example of this Issue I know for a fact that when one manually copies the same feature across scripts they are bound to become inconsistent. At least that's the experience with transformers so far.\r\n\r\nComplaining and frustration expression aside - perhaps we could start with one best script that you think is a good model and then making it non-examples and to start transforming it to support a multitude of tasks/models/features? Would that be a good way to move forward?\r\n",
"The issue is derailing a bit as I think adding the `max_train_samples` etc to all scripts has been validated (and is useful to quickly test the example is running on the user data).\r\n\r\nIf you want to look at a benchkmarking script, I think a good starting point is `run_glue` for fine-tuning on text classification, `run_mlm` for language modeling. Those are more for BERT-like models than seq2seq models however. `finetune_trainer` is aimed at being deprecated and once `run_seq2seq` has all its features, it can be the one good script to be based on for all things seq2seq.",
"> The issue is derailing a bit as I think adding the `max_train_samples` etc to all scripts has been validated (and is useful to quickly test the example is running on the user data).\r\n\r\nExcellent!\r\n\r\n> If you want to look at a benchkmarking script, I think a good starting point is `run_glue` for fine-tuning on text classification, `run_mlm` for language modeling. Those are more for BERT-like models than seq2seq models however. `finetune_trainer` is aimed at being deprecated and once `run_seq2seq` has all its features, it can be the one good script to be based on for all things seq2seq.\r\n\r\nI feel I'm not managing to successfully communicate the need here. I will let it go for now.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"This is getting resolved by https://github.com/huggingface/transformers/pull/10551\r\n",
"> I always thought that `max_steps` defines the total number of weight update steps (which is then not really influenced by other parameters such as number of GPUs or `gradient_accumalation_steps` or whatever). To me it defines: \"How often do I want to update my weights?\" or am I wrong here?. Think the name is clear and does not need to be changed, the documentation could be updated with a sentence that makes clear that `max_steps` = number of weight updates. Also, I use this arg quite often when training and think it's important to keep.\r\n> \r\n> I agree with @sgugger here that I think a `--max_num_train_samples` arg (or whatever the name) should not go into the trainer, but should be added to all examples scripts. It's actually incredibly easy to do this with `datasets`:\r\n> \r\n> ```python\r\n> ds = load_dataset(\"crime_and_punish\", split=\"train\")\r\n> ds = ds.select(range(arg.max_num_train_samples))\r\n> ```\r\n> \r\n> I'm totally fine with having this as another cl arg for the scripts, but don't think it's the responsibility of the `trainer`.\r\n\r\nhi,I want to use the crime_and_punish dataset to do evaluation on model reformer,which task code should I use?",
"@LeopoldACC, it looks like you posted your question in a very unrelated discussion. Please try https://discuss.huggingface.co/. Thank you."
] | 1,611 | 1,618 | 1,614 | CONTRIBUTOR | null | # 🚀 Feature request
We have:
1. `finetune_trainer.py` has
```
n_train: Optional[int] = field(default=-1, metadata={"help": "# training examples. -1 means use all."})
n_val: Optional[int] = field(default=-1, metadata={"help": "# validation examples. -1 means use all."})
n_test: Optional[int] = field(default=-1, metadata={"help": "# test examples. -1 means use all."})
```
2. some other `run_` scripts use `--n_obs`
3. `--max_steps` in the main trainer - which works only on the train_dataset - no ability to limit items on eval_dataset
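For context, here is a rough sketch of what a consistent per-split cap could look like in an example script using `datasets` (the `max_train_samples`/`max_eval_samples` names and the glue/mrpc dataset below are placeholders for illustration, not existing options):
```python
from datasets import load_dataset

# hypothetical cl args, one per split
max_train_samples = 1000
max_eval_samples = 200

train_dataset = load_dataset("glue", "mrpc", split="train")
eval_dataset = load_dataset("glue", "mrpc", split="validation")

# cap each split independently, instead of going through --max_steps
if max_train_samples is not None:
    train_dataset = train_dataset.select(range(max_train_samples))
if max_eval_samples is not None:
    eval_dataset = eval_dataset.select(range(max_eval_samples))
```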
Requests/Questions:
1. How does one use `--max_steps` if one needs to use a different number of items for train and eval?
2. Can we have a consistent way across examples to do this same thing?
Thank you.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9801/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9800/comments | https://api.github.com/repos/huggingface/transformers/issues/9800/events | https://github.com/huggingface/transformers/pull/9800 | 793,949,722 | MDExOlB1bGxSZXF1ZXN0NTYxNTQ4NjE3 | 9,800 | [traner] fix --lr_scheduler_type choices | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This fix works, but only for this particular enum. I'm wondering if it shouldn't be better to just change [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L83) in `HFArgumentParser`? For instance\r\n```bash\r\n$ python examples/text-classification/run_glue.py -h | grep evaluation_strategy\r\n```\r\n\r\nreturns\r\n```\r\n--evaluation_strategy {EvaluationStrategy.NO,EvaluationStrategy.STEPS,EvaluationStrategy.EPOCH}\r\n```\r\n\r\nwhich probably does not work as well. (Or if does, we want to display \"no\"/\"steps\"/\"epoch\" here.)",
"That's an excellent suggestion, @sgugger - thank you! Please have a look at this variation.",
"Weird, somehow it broke `finetune_trainer.py`:\r\n\r\n```\r\nexamples/seq2seq/test_finetune_trainer.py:84: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\nexamples/seq2seq/test_finetune_trainer.py:76: in finetune_trainer_quick\r\n output_dir = self.run_trainer(1, \"12\", MBART_TINY, 1, distributed, deepspeed, extra_args_str)\r\nexamples/seq2seq/test_finetune_trainer.py:210: in run_trainer\r\n main()\r\nexamples/seq2seq/finetune_trainer.py:160: in main\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\nsrc/transformers/hf_argparser.py:150: in parse_args_into_dataclasses\r\n namespace, remaining_args = self.parse_known_args(args=args)\r\n/usr/local/lib/python3.6/argparse.py:1773: in parse_known_args\r\n self.error(str(err))\r\n/usr/local/lib/python3.6/argparse.py:2393: in error\r\n self.exit(2, _('%(prog)s: error: %(message)s\\n') % args)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = HfArgumentParser(prog='finetune_trainer.py', usage=None, description=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=True)\r\nstatus = 2\r\nmessage = \"finetune_trainer.py: error: argument --evaluation_strategy: invalid choice: <EvaluationStrategy.STEPS: 'steps'> (choose from 'no', 'steps', 'epoch')\\n\"\r\n\r\n```\r\n\r\nThe test is just passing `--evaluation_strategy steps`\r\n",
"Ok, tested locally and doing this does not work indeed (e.g. you can't launch any script with `--evaluation_strategy steps` or `--lr_scheduler_type linear`). To have it work, we have to change the type of the enum to `str`, but then the actual values of the dataclass are strings.\r\n\r\nSo there is no easy solution. I'm fine with leaving as is or also to remove the choices and expand the help to show the actual possible values, but it looks like it won't work as is.",
"Thank you for validating it.\r\n\r\nHow about moving my initial PR's `get_arg_names` to `ExplicitEnum` so any sub-class has access to it - perhaps a different name? And then we change just `meta[\"choices\"]` in the arg parser to get these values?",
"No the initial PR doesn't work either (this is not caught by the tests since the test do not use `--lr_scheduler_type` in any of the example scripts). The field ends up being a `str` if you try on your side (and not a `SchedulerType` despite the cast in the post_init so then all tests comparing `self.args.lr_scheduler_type` to `SchedulerType.XXX` will fail.",
"Ah, my bad! I was testing on the `parse` method of `HfArgumentParser`, not `parse_into_dataclasses`. There is a way to make this work :-)",
"I'm all ears.\r\n\r\n",
"The easy fix is to force `kwargs[\"type\"] = type(kwargs[\"choices\"][0])` for the Enum subclasses, after your line `kwargs[\"choices\"] = [x.value for x in field.type]`, and let the dataclass set them back to their proper enum types in the postinit (as is done right now).\r\n\r\nI even have a function that will automagically do the casting back after the init, which is the following:\r\n```\r\n for dtype in self.dataclass_types:\r\n keys = {f.name for f in dataclasses.fields(dtype) if f.init}\r\n keys_to_enum_types = {f.name: f.type for f in dataclasses.fields(dtype) if isinstance(f.type, type) and issubclass(f.type, Enum)}\r\n inputs = {k: v for k, v in vars(namespace).items() if k in keys}\r\n for k in keys:\r\n if k in keys_to_enum_types:\r\n inputs[k] = keys_to_enum_types[k](inputs[k])\r\n delattr(namespace, k)\r\n obj = dtype(**inputs)\r\n outputs.append(obj)\r\n```\r\nin the `parse_into_dataclasses` method. The same would need to be done in the other special parse methods for consistency.",
"Please feel free to take over, rather than me being the middle person as you know what you're doing and I will learn from your work when it's done. Thank you!",
"Fabulous! \r\n\r\nThis is not a user facing code, correct? There are 3 pretty big identical chunks of code which can be refactored then.",
"Not sure we actually need them since the dataclasses need to re-cast those args to the enum type anyway for when someone is using `TrainingArguments` not in CLI. I was hoping to remove the lines\r\n```\r\nself.evaluation_strategy = EvaluationStrategy(self.evaluation_strategy)\r\nself.lr_scheduler_type = SchedulerType(self.lr_scheduler_type)\r\n```\r\nby using those four lines, but they are still necessary. So we'll probably just remove those three blocks of identical code (let's see what @LysandreJik thinks!)"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR fixes:
```
$ python ./run_clm.py -h | grep lr_scheduler_type
[--lr_scheduler_type {SchedulerType.LINEAR,SchedulerType.COSINE,SchedulerType.COSINE_WITH_RESTARTS,SchedulerType.POLYNOMIAL,SchedulerType.CONSTANT,SchedulerType.CONSTANT_WITH_WARMUP}]
```
to:
```
[--lr_scheduler_type {linear,cosine,cosine_with_restarts,polynomial,constant,constant_with_warmup}]
```
I'm not sure what the original intention was since the current suggestions do not work:
```
run_clm.py: error: argument --lr_scheduler_type: invalid SchedulerType value: 'SchedulerType.LINEAR'
```
I couldn't find any readily-available methods to do the same in the `enum` superclass: https://docs.python.org/3/library/enum.html
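For illustration, here is a minimal standalone sketch of the approach (the `SchedulerType` enum below is a simplified stand-in, not the actual library code): the argparse choices are built from the enum values, so the help shows plain strings while parsing still yields enum members.
```python
from argparse import ArgumentParser
from enum import Enum


class SchedulerType(str, Enum):  # simplified stand-in for the real enum
    LINEAR = "linear"
    COSINE = "cosine"
    CONSTANT = "constant"


parser = ArgumentParser()
parser.add_argument(
    "--lr_scheduler_type",
    type=SchedulerType,                                  # casts the string back to the enum
    choices=[member.value for member in SchedulerType],  # help displays linear/cosine/constant
    default=SchedulerType.LINEAR,
)
args = parser.parse_args(["--lr_scheduler_type", "cosine"])
print(args.lr_scheduler_type)  # SchedulerType.COSINE
```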
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9800/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9800",
"html_url": "https://github.com/huggingface/transformers/pull/9800",
"diff_url": "https://github.com/huggingface/transformers/pull/9800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9800.patch",
"merged_at": 1611760335000
} |
https://api.github.com/repos/huggingface/transformers/issues/9799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9799/comments | https://api.github.com/repos/huggingface/transformers/issues/9799/events | https://github.com/huggingface/transformers/pull/9799 | 793,846,950 | MDExOlB1bGxSZXF1ZXN0NTYxNDY0MjIw | 9,799 | Authorize last version of tokenizer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah I forgot to add this in a comment -> Talked to @n1t0 about it and he says those are unwanted breaking changes, so we will pin to the next patch release he is going to make to tokenizers. Leaving the PR open in the meantime!",
"Added a few more things to this PR:\r\n- Use last tokenizers RC release\r\n- Update the conversion from slow to fast tokenizers as described in https://github.com/huggingface/transformers/issues/9637\r\n- Added a script to verify the conversion from slow to fast tokenizers looks good\r\n- Fix some links to the hub",
"Regarding the masks, should that be applied to all SentencePiece-based tokenizers? Should it be added to XLNet/ALBERT/T5 as well?",
"Can't approve since this is my PR originally, but this looks good to me. We just need to make sure all special masks tokens are taken into account.",
"Most of the tests are unrelated (a rebase on `master` will make them pass), but these two aren't:\r\n\r\n```\r\nFAILED tests/test_tokenization_pegasus.py::PegasusTokenizationTest::test_mask_tokens_rust_pegasus\r\nFAILED tests/test_tokenization_mbart.py::MBartTokenizationTest::test_embeded_special_tokens\r\n=== 2 failed, 3352 passed, 3280 skipped, 2512 warnings in 765.06s (0:12:45) ====\r\n```"
] | 1,611 | 1,612 | 1,612 | COLLABORATOR | null | # What does this PR do?
This PR bumps the version pinned in the setup to authorize the latest version of tokenizers (which in particular contains fixes for the `run_qa` script). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9799/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9799",
"html_url": "https://github.com/huggingface/transformers/pull/9799",
"diff_url": "https://github.com/huggingface/transformers/pull/9799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9799.patch",
"merged_at": 1612466314000
} |
https://api.github.com/repos/huggingface/transformers/issues/9798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9798/comments | https://api.github.com/repos/huggingface/transformers/issues/9798/events | https://github.com/huggingface/transformers/pull/9798 | 793,785,315 | MDExOlB1bGxSZXF1ZXN0NTYxNDE0Mjg4 | 9,798 | Smdistributed trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Very cool integration.🚀 🚀 🔥 🔥 \r\nI'm doing some tests over the day for single-GPU and multi-GPU and let you know if I find something strange"
] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR adds support for the variant of `torch.distributed` developed by AWS on SageMaker (`smdistributed`). It's been tested to work on the `run_glue` example.
The main steps are:
- to replace all operations from the `torch.distributed` module by their equivalent in `smdistributed.torch.distributed`.
- use their wrapper for the model instead of `DistributedDataParallel`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9798/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9798/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9798",
"html_url": "https://github.com/huggingface/transformers/pull/9798",
"diff_url": "https://github.com/huggingface/transformers/pull/9798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9798.patch",
"merged_at": 1611674902000
} |
https://api.github.com/repos/huggingface/transformers/issues/9797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9797/comments | https://api.github.com/repos/huggingface/transformers/issues/9797/events | https://github.com/huggingface/transformers/issues/9797 | 793,784,278 | MDU6SXNzdWU3OTM3ODQyNzg= | 9,797 | Conversion of Electra checkpoint from official repo TF (pretrained on custom dataset) | {
"login": "Shiro-LK",
"id": 26505641,
"node_id": "MDQ6VXNlcjI2NTA1NjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shiro-LK",
"html_url": "https://github.com/Shiro-LK",
"followers_url": "https://api.github.com/users/Shiro-LK/followers",
"following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions",
"organizations_url": "https://api.github.com/users/Shiro-LK/orgs",
"repos_url": "https://api.github.com/users/Shiro-LK/repos",
"events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shiro-LK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can see from the message that it's skipping the optimizer states. We're only saving the model, so it makes sense that the optimizer states are discarded :)",
"Thanks you for your reply ! \r\n\r\nI will close the issue then."
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
- `transformers` version: latest from pip install git+https://github.com/huggingface/transformers.git
- Platform: Colab
- Python version:
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): 1.15
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik if I am not wrong
## Information
Model I am using (Bert, XLNet ...):
Electra from google repo
The problem arises when using:
* [x] the official example scripts (give details below): Yes
The task I am working on is:
* converting an Electra TF checkpoint, pretrained with the official repository on a custom dataset, to PyTorch
## To reproduce
Steps to reproduce the behavior:
1. install latest version of huggingface : pip install git+https://github.com/huggingface/transformers.git
2. train Electra for a few steps on a dataset using the official repository
3. use the script to convert electra checkpoint : transformers/src/transformers/models/electra/convert_electra_original_tf_checkpoint_to_pytorch.py
```
Initialize PyTorch weight ['discriminator_predictions', 'dense', 'bias'] discriminator_predictions/dense/bias
Skipping discriminator_predictions/dense/bias/adam_m ['discriminator_predictions', 'dense', 'bias', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping discriminator_predictions/dense/bias/adam_v ['discriminator_predictions', 'dense', 'bias', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['discriminator_predictions', 'dense', 'kernel'] discriminator_predictions/dense/kernel
Skipping discriminator_predictions/dense/kernel/adam_m ['discriminator_predictions', 'dense', 'kernel', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping discriminator_predictions/dense/kernel/adam_v ['discriminator_predictions', 'dense', 'kernel', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'bias'] discriminator_predictions/dense_1/bias
Skipping discriminator_predictions/dense_1/bias/adam_m ['discriminator_predictions', 'dense_prediction', 'bias', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping discriminator_predictions/dense_1/bias/adam_v ['discriminator_predictions', 'dense_prediction', 'bias', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'kernel'] discriminator_predictions/dense_1/kernel
Skipping discriminator_predictions/dense_1/kernel/adam_m ['discriminator_predictions', 'dense_prediction', 'kernel', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping discriminator_predictions/dense_1/kernel/adam_v ['discriminator_predictions', 'dense_prediction', 'kernel', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'beta'] electra/embeddings/LayerNorm/beta
Skipping electra/embeddings/LayerNorm/beta/adam_m ['electra', 'embeddings', 'LayerNorm', 'beta', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping electra/embeddings/LayerNorm/beta/adam_v ['electra', 'embeddings', 'LayerNorm', 'beta', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'gamma'] electra/embeddings/LayerNorm/gamma
Skipping electra/embeddings/LayerNorm/gamma/adam_m ['electra', 'embeddings', 'LayerNorm', 'gamma', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping electra/embeddings/LayerNorm/gamma/adam_v ['electra', 'embeddings', 'LayerNorm', 'gamma', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'position_embeddings'] electra/embeddings/position_embeddings
Skipping electra/embeddings/position_embeddings/adam_m ['electra', 'embeddings', 'position_embeddings', 'adam_m'] 'Embedding' object has no attribute 'adam_m'
Skipping electra/embeddings/position_embeddings/adam_v ['electra', 'embeddings', 'position_embeddings', 'adam_v'] 'Embedding' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'token_type_embeddings'] electra/embeddings/token_type_embeddings
Skipping electra/embeddings/token_type_embeddings/adam_m ['electra', 'embeddings', 'token_type_embeddings', 'adam_m'] 'Embedding' object has no attribute 'adam_m'
Skipping electra/embeddings/token_type_embeddings/adam_v ['electra', 'embeddings', 'token_type_embeddings', 'adam_v'] 'Embedding' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings', 'word_embeddings'] electra/embeddings/word_embeddings
Skipping electra/embeddings/word_embeddings/adam_m ['electra', 'embeddings', 'word_embeddings', 'adam_m'] 'Embedding' object has no attribute 'adam_m'
Skipping electra/embeddings/word_embeddings/adam_v ['electra', 'embeddings', 'word_embeddings', 'adam_v'] 'Embedding' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings_project', 'bias'] electra/embeddings_project/bias
Skipping electra/embeddings_project/bias/adam_m ['electra', 'embeddings_project', 'bias', 'adam_m'] 'Parameter' object has no attribute 'adam_m'
Skipping electra/embeddings_project/bias/adam_v ['electra', 'embeddings_project', 'bias', 'adam_v'] 'Parameter' object has no attribute 'adam_v'
Initialize PyTorch weight ['electra', 'embeddings_project', 'kernel'] electra/embeddings_project/kernel
```
Is it a normal message ? I remembered few months ago these message were not present. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9797/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9796/comments | https://api.github.com/repos/huggingface/transformers/issues/9796/events | https://github.com/huggingface/transformers/pull/9796 | 793,760,074 | MDExOlB1bGxSZXF1ZXN0NTYxMzkzMjc4 | 9,796 | Improve pytorch examples for fp16 | {
"login": "ak314",
"id": 9784302,
"node_id": "MDQ6VXNlcjk3ODQzMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9784302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ak314",
"html_url": "https://github.com/ak314",
"followers_url": "https://api.github.com/users/ak314/followers",
"following_url": "https://api.github.com/users/ak314/following{/other_user}",
"gists_url": "https://api.github.com/users/ak314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ak314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ak314/subscriptions",
"organizations_url": "https://api.github.com/users/ak314/orgs",
"repos_url": "https://api.github.com/users/ak314/repos",
"events_url": "https://api.github.com/users/ak314/events{/privacy}",
"received_events_url": "https://api.github.com/users/ak314/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for addressing the comments, this is good to go IMO.\r\nPro-tip, if you edit your description to replace `Issue #9752` by `Fixes #9752`, the issue will be automatically closed when we merge this PR."
] | 1,611 | 1,612 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
When fp16 is True in the PyTorch training examples, pad to a multiple of 8 (when the data collator in use allows it) to speed up training. If no collator was used, add one with that padding option.
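As a rough sketch of the idea (not the exact script code; the checkpoint name and the bare `fp16` flag below are placeholders for what the examples read from their training arguments):
```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
fp16 = True  # stands in for training_args.fp16

# pad to a multiple of 8 only when fp16 is on, so tensor cores can be used efficiently
data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8 if fp16 else None)

features = [tokenizer("a short sentence"), tokenizer("a slightly longer example sentence")]
batch = data_collator(features)
print(batch["input_ids"].shape)  # sequence length rounded up to a multiple of 8
```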
Fixes #9752
## Who can review?
Trainer: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9796/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9796",
"html_url": "https://github.com/huggingface/transformers/pull/9796",
"diff_url": "https://github.com/huggingface/transformers/pull/9796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9796.patch",
"merged_at": 1611654428000
} |
https://api.github.com/repos/huggingface/transformers/issues/9795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9795/comments | https://api.github.com/repos/huggingface/transformers/issues/9795/events | https://github.com/huggingface/transformers/issues/9795 | 793,750,307 | MDU6SXNzdWU3OTM3NTAzMDc= | 9,795 | does LED use distributed training by default? | {
"login": "mmoya01",
"id": 17535683,
"node_id": "MDQ6VXNlcjE3NTM1Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/17535683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmoya01",
"html_url": "https://github.com/mmoya01",
"followers_url": "https://api.github.com/users/mmoya01/followers",
"following_url": "https://api.github.com/users/mmoya01/following{/other_user}",
"gists_url": "https://api.github.com/users/mmoya01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmoya01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmoya01/subscriptions",
"organizations_url": "https://api.github.com/users/mmoya01/orgs",
"repos_url": "https://api.github.com/users/mmoya01/repos",
"events_url": "https://api.github.com/users/mmoya01/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmoya01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ah, I just noticed `Seq2SeqTrainingArguments` sets `local_rank=-1` by default"
] | 1,611 | 1,611 | 1,611 | NONE | null | Hello, I'm currently fine tuning the `allenai/led-base-16384` model and following allow this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=jpUr9QeebZ-n) .The node that I'm using has a couple of V100-SXM2-16GB GPUs in it. That said, does `Seq2SeqTrainer` automatically use distributed training by default?
I noticed it says
```
the inner model is wrapped in ``DeepSpeed`` and then again in ``torch.nn.DistributedDataParallel``.
```
but I just wanted to triple check that the training job would distributed across GPUs. I'd greatly appreciate the feedback | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9795/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9794/comments | https://api.github.com/repos/huggingface/transformers/issues/9794/events | https://github.com/huggingface/transformers/pull/9794 | 793,727,383 | MDExOlB1bGxSZXF1ZXN0NTYxMzY2MjY0 | 9,794 | [Flaky Generation Tests] Make sure that no early stopping is happening for beam search | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
The PR fixes the flaky CI, which is probably due to early stopping in beam search, where the top `num_return_sequences` beams can end up shorter than the longest of the `num_beams` beams.
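For reference, a small sketch of the generation settings involved (this is not the actual test code, and the tiny checkpoint is used purely for illustration):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")

input_ids = tokenizer("A short example input.", return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    num_beams=4,
    num_return_sequences=2,  # fewer returned sequences than beams
    early_stopping=False,    # keep scoring until all beams are finished
    max_length=20,
)
print(outputs.shape)  # (2, <=20)
```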
This PR should (hopefully) fix flaky CI failures, such as:
- https://app.circleci.com/pipelines/github/huggingface/transformers/18912/workflows/70862bd9-bc94-4f2d-9b07-b85be146c867/jobs/155831
- https://app.circleci.com/pipelines/github/huggingface/transformers/18910/workflows/284df8e3-373b-4168-bc3b-f7079c5aa17d/jobs/155797
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9794/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9794",
"html_url": "https://github.com/huggingface/transformers/pull/9794",
"diff_url": "https://github.com/huggingface/transformers/pull/9794.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9794.patch",
"merged_at": 1611649304000
} |
https://api.github.com/repos/huggingface/transformers/issues/9793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9793/comments | https://api.github.com/repos/huggingface/transformers/issues/9793/events | https://github.com/huggingface/transformers/issues/9793 | 793,696,300 | MDU6SXNzdWU3OTM2OTYzMDA= | 9,793 | Add the ability to skip runtime version check | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If anyone else comes here, until this is fixed you can find the version in `transformers/dependency_versions_check.py` - e.g. change `\"tokenizers\": \"tokenizers==0.9.4\"` to `\"tokenizers\": \"tokenizers>=0.9.4\"`",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | # 🚀 Feature request
Add the ability to skip the version check, or to raise a warning instead of an error when the check fails.
## Motivation
Currently, Transformers performs a runtime version check of its dependencies and raises an error if any requirement has a version different from the one specified.
While a version check is a reasonable thing to do, an error seems like overkill. As [I mentioned](https://github.com/huggingface/transformers/pull/8073#issuecomment-765632175) in #8073, this, for example, prevents users from using the latest version of Tokenizers without modifying the source code.
## Your contribution
I suggest creating an environment variable (e.g. `TRANSFORMERS_VERSION_CHECK_STRICT`) that would control the reaction to a version mismatch. If it is set to `true`, the current behavior remains. If `false`, we log a warning instead of raising an error. Personally, I think that having this variable `false` by default will make it easier for the users to work in such cases.
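A minimal sketch of the proposed behavior (this is not how the library implements the check today; it only illustrates the suggested environment variable):
```python
import logging
import os

logger = logging.getLogger(__name__)


def check_version(pkg, found, required):
    # strict only when the proposed variable is explicitly set to "true"
    strict = os.environ.get("TRANSFORMERS_VERSION_CHECK_STRICT", "false").lower() == "true"
    if found != required:
        msg = f"{pkg}=={found} found, but {pkg}=={required} is required."
        if strict:
            raise ImportError(msg)
        logger.warning(msg)


check_version("tokenizers", "0.10.0", "0.9.4")  # warns instead of raising by default
```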
What's your opinion in that? Probably we can find an even better solution together. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9793/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9793/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9792/comments | https://api.github.com/repos/huggingface/transformers/issues/9792/events | https://github.com/huggingface/transformers/issues/9792 | 793,685,907 | MDU6SXNzdWU3OTM2ODU5MDc= | 9,792 | Strange start token in MT5 generation | {
"login": "tomdzh",
"id": 50083108,
"node_id": "MDQ6VXNlcjUwMDgzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/50083108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomdzh",
"html_url": "https://github.com/tomdzh",
"followers_url": "https://api.github.com/users/tomdzh/followers",
"following_url": "https://api.github.com/users/tomdzh/following{/other_user}",
"gists_url": "https://api.github.com/users/tomdzh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomdzh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomdzh/subscriptions",
"organizations_url": "https://api.github.com/users/tomdzh/orgs",
"repos_url": "https://api.github.com/users/tomdzh/repos",
"events_url": "https://api.github.com/users/tomdzh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomdzh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"One more thing: this behavior still persists after I fine tuned the model on my own dataset",
"Hi @tomdzh \r\n\r\nfirst of all, unlike the original T5, mT5 is not pre-trained on any supervised downstream task (like summarization, translation etc), so generation work without fine-tuning it.\r\n\r\nAlso, it would be hard to answer why it's happening in fine-tuned model without looking at any code.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?):1.7.1
### Who can help
Text Generation: @patrickvonplaten @TevenLeScao
T5: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): MT5
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
text = 'summarize: Bidirectional Encoder Representations from Transformers is a Transformer-based machine learning technique for natural language processing pre-training developed by Google'
inputs = tokenizer([text], max_length=512, truncation=True, return_tensors='pt')
summary_ids = model.generate(inputs['input_ids'])
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in summary_ids])
```
The output I got is ['<extra_id_0>.']
## Expected behavior
I tried a few input texts. The generated output always starts with <extra_id_0>, which doesn't happen in T5 generation. Does anyone know how to solve it?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9792/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9791/comments | https://api.github.com/repos/huggingface/transformers/issues/9791/events | https://github.com/huggingface/transformers/pull/9791 | 793,671,387 | MDExOlB1bGxSZXF1ZXN0NTYxMzIwMDE1 | 9,791 | Fix broken links in the converting tf ckpt document | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think the following comment of my own can be left as a PR for the future.\r\nThis change seems to require a modification of the `convert` code, and I think it may be necessary to separate it from this broken link issue.\r\n\r\nI'm sorry for saying such a thing, even though the following is what I wrote.\r\n\r\n> I think there are some outdated explanations.\r\n> As I discussed in issue #9657, I think it should be better to explain `from_pretrained()` instead of `torch.save()`.\r\n> Hence, I think the explanation below should be updated.\r\n> \r\n> ```\r\n> You can then disregard the TensorFlow\r\n> checkpoint (the three files starting with ``bert_model.ckpt``\\ ) but be sure to keep the configuration file (\\\r\n> ``bert_config.json``\\ ) and the vocabulary file (\\ ``vocab.txt``\\ ) as these are needed for the PyTorch model too.\r\n> ```\r\n> \r\n\r\nI'll remove the WIP for now, but if you think this matter should be worked in this PR, please let me know."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR is to fix broken links in ["Converting TensorFlow Checkpoints"](https://huggingface.co/transformers/converting_tensorflow_models.html).
Advised by @LysandreJik in issue #9656, I updated the links.
I also referred to issue #8720, and it seems that issue is solved by this PR.
I think there are some outdated explanations.
As I discussed in issue #9657, I think it should be better to explain `from_pretrained()` instead of `torch.save()`.
Hence, I think the explanation below should be updated.
```
You can then disregard the TensorFlow
checkpoint (the three files starting with ``bert_model.ckpt``\ ) but be sure to keep the configuration file (\
``bert_config.json``\ ) and the vocabulary file (\ ``vocab.txt``\ ) as these are needed for the PyTorch model too.
```
I would be happy to get some advice on how to proceed with this PR.
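As a small illustration of the `from_pretrained()`/`save_pretrained()` flow I have in mind (using a hub checkpoint here instead of a freshly converted TF checkpoint, purely to keep the example self-contained):
```python
from transformers import BertForPreTraining

# load the weights and config together, no manual torch.save/torch.load needed
model = BertForPreTraining.from_pretrained("bert-base-uncased")
model.save_pretrained("my_local_bert")                        # writes pytorch_model.bin + config.json
model = BertForPreTraining.from_pretrained("my_local_bert")   # reload from the saved directory
```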
Fixes #9656
Fixes #8720
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
albert, bert, XLM: @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9791/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9791",
"html_url": "https://github.com/huggingface/transformers/pull/9791",
"diff_url": "https://github.com/huggingface/transformers/pull/9791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9791.patch",
"merged_at": 1611650277000
} |
https://api.github.com/repos/huggingface/transformers/issues/9790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9790/comments | https://api.github.com/repos/huggingface/transformers/issues/9790/events | https://github.com/huggingface/transformers/pull/9790 | 793,665,244 | MDExOlB1bGxSZXF1ZXN0NTYxMzE1MDU4 | 9,790 | RagTokenForGeneration: Fixed parameter name for logits_processor | {
"login": "michaelrglass",
"id": 35044941,
"node_id": "MDQ6VXNlcjM1MDQ0OTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/35044941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelrglass",
"html_url": "https://github.com/michaelrglass",
"followers_url": "https://api.github.com/users/michaelrglass/followers",
"following_url": "https://api.github.com/users/michaelrglass/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelrglass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelrglass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelrglass/subscriptions",
"organizations_url": "https://api.github.com/users/michaelrglass/orgs",
"repos_url": "https://api.github.com/users/michaelrglass/repos",
"events_url": "https://api.github.com/users/michaelrglass/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelrglass/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
The parameter name for the beam_search and greedy_search functions of the GenerationMixin is (now?) logits_processor not pre_processor. This fix makes prefix_allowed_tokens_fn work (again?).
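For reference, a minimal sketch of how `prefix_allowed_tokens_fn` is used with `generate` (shown with a tiny causal LM rather than RAG just to keep it small; the checkpoint name is illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")

allowed_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("yes no"))


def prefix_allowed_tokens_fn(batch_id, input_ids):
    # restrict every generation step to a small set of token ids
    return allowed_ids


input_ids = tokenizer("Answer:", return_tensors="pt").input_ids
out = model.generate(
    input_ids,
    num_beams=2,
    max_length=10,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
)
print(tokenizer.decode(out[0]))
```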
## Who can review?
Rag: @patrickvonplaten, @lhoestq
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9790/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9790",
"html_url": "https://github.com/huggingface/transformers/pull/9790",
"diff_url": "https://github.com/huggingface/transformers/pull/9790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9790.patch",
"merged_at": 1611675842000
} |
https://api.github.com/repos/huggingface/transformers/issues/9789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9789/comments | https://api.github.com/repos/huggingface/transformers/issues/9789/events | https://github.com/huggingface/transformers/pull/9789 | 793,645,614 | MDExOlB1bGxSZXF1ZXN0NTYxMjk4NzI5 | 9,789 | Allow RAG to output decoder cross-attentions | {
"login": "dblakely",
"id": 20539855,
"node_id": "MDQ6VXNlcjIwNTM5ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/20539855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dblakely",
"html_url": "https://github.com/dblakely",
"followers_url": "https://api.github.com/users/dblakely/followers",
"following_url": "https://api.github.com/users/dblakely/following{/other_user}",
"gists_url": "https://api.github.com/users/dblakely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dblakely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dblakely/subscriptions",
"organizations_url": "https://api.github.com/users/dblakely/orgs",
"repos_url": "https://api.github.com/users/dblakely/repos",
"events_url": "https://api.github.com/users/dblakely/events{/privacy}",
"received_events_url": "https://api.github.com/users/dblakely/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq Thanks for the suggestions! All the CI checks pass now."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR makes RAG output the generator model's decoder cross-attentions when `output_attentions=True`.
Motivation and context: before this PR, RAG's output objects had attributes for the generator's encoder self-attentions and decoder self-attentions, but no option for the encoder-decoder cross-attentions. So this simply allows cross-attentions to be extracted, as well as fixing a small bug where `output_attentions` wasn't being passed into the generator.
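A rough usage sketch (the retriever/dummy-index setup follows the RAG docs of this era; the `generator_cross_attentions` attribute name is my assumption for the new field added here):
```python
from transformers import RagRetriever, RagTokenForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

inputs = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")
outputs = model(input_ids=inputs["input_ids"], output_attentions=True)

# with this PR, the generator's encoder-decoder cross-attentions should be exposed as well
# (attribute name assumed):
print(len(outputs.generator_cross_attentions))
```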
Fixes #9468
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Yes - #9468
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? - I don't believe any new tests are necessary. Existing tests pass.
## Who can review?
@patrickvonplaten, @lhoestq
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9789/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9789",
"html_url": "https://github.com/huggingface/transformers/pull/9789",
"diff_url": "https://github.com/huggingface/transformers/pull/9789.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9789.patch",
"merged_at": 1611682367000
} |
https://api.github.com/repos/huggingface/transformers/issues/9788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9788/comments | https://api.github.com/repos/huggingface/transformers/issues/9788/events | https://github.com/huggingface/transformers/pull/9788 | 793,602,284 | MDExOlB1bGxSZXF1ZXN0NTYxMjYyNzM0 | 9,788 | Clean TF Bert | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I fully rework the keywords addition part to keep only those that seemed the most meaningful."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR aims to clean up the code base of BERT and of the other models that depend on it through the `# Copied from...` statements. It also cleans the template according to the same changes applied to BERT.
The other models will receive the same type of cleaning, but each model will have its own PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9788/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9788",
"html_url": "https://github.com/huggingface/transformers/pull/9788",
"diff_url": "https://github.com/huggingface/transformers/pull/9788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9788.patch",
"merged_at": 1611743292000
} |
https://api.github.com/repos/huggingface/transformers/issues/9787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9787/comments | https://api.github.com/repos/huggingface/transformers/issues/9787/events | https://github.com/huggingface/transformers/pull/9787 | 793,382,879 | MDExOlB1bGxSZXF1ZXN0NTYxMDgwMjU3 | 9,787 | Fix model parallel definition in superclass | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | The `model_parallel` attribute should be obtainable from every class instance that has the `is_parallelized` class attribute. Otherwise the following line in the trainer crashes:
https://github.com/huggingface/transformers/blob/626116b7d76efef5137c3b4a92e64e3bb57a6882/src/transformers/trainer.py#L244
cc @stas00 @alexorona | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9787/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9787/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9787",
"html_url": "https://github.com/huggingface/transformers/pull/9787",
"diff_url": "https://github.com/huggingface/transformers/pull/9787.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9787.patch",
"merged_at": 1611591128000
} |
https://api.github.com/repos/huggingface/transformers/issues/9786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9786/comments | https://api.github.com/repos/huggingface/transformers/issues/9786/events | https://github.com/huggingface/transformers/issues/9786 | 793,336,279 | MDU6SXNzdWU3OTMzMzYyNzk= | 9,786 | Truncated Translations with mT5 model | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've read these similar issue\r\n\r\n[#5654](https://github.com/huggingface/transformers/issues/5656)\r\n[#7500](https://github.com/huggingface/transformers/issues/7500)\r\n\r\nbut couldn't get the information needed.\r\n\r\n@sshleifer @patil-suraj Any idea why this is happening? (Sorry for tagging everyone, I've been facing this issue for a while, looking for answers 🙂)",
"Hi @sumanthd17 ,\r\nI had this issue with standard T5 in which it was ignoring the `min_length` flag. I fixed this simply by upping the `num_beams` to 4.\r\n\r\nCould you test this and see if it's quick fix for you?",
"@FL33TW00D Thanks. Yeah the `min_length` flag is being ignored. \r\n\r\nThanks for the quick fix. I think we can close this issue for now, But it might be a good idea to know why the min_length is being ignored.",
"@sumanthd17 \r\nThis blog post from Patrick explains why min_length may not be satisfied by the beam_search:\r\nhttps://huggingface.co/blog/how-to-generate\r\n\r\nAlthough I do agree perhaps it should throw a warning message when it is unable to satisfy the min_length flag with the current generation parameters."
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2
- Platform: GCP
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Using GPU in script?: V100
- Using distributed or parallel set-up in script?: using pytorch-lightning `dp` setup
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): I'm trying to fine-tune mT5 for neural machine translation. I trained the model on 3M data points with a `max_seq_len` of `200`. But when I perform `model.generate` I'm getting truncated outputs.
The input sequence I'm running inference on is not very long and is nowhere close to 200 tokens.
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
I'm using my own dataset and training the model by wrapping the code in pytorch-lightning.
I played around with the parameters of the `generate` method, but all combinations still end up giving truncated outputs.
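For reference, this is roughly the kind of call I'm making; the checkpoint name and parameter values below are illustrative, not my exact setup:

```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

# "my-finetuned-mt5" is a placeholder for the fine-tuned checkpoint directory
tokenizer = T5Tokenizer.from_pretrained("my-finetuned-mt5")
model = MT5ForConditionalGeneration.from_pretrained("my-finetuned-mt5")

src_sentence = "..."  # a source-language sentence, well under 200 tokens
inputs = tokenizer(src_sentence, return_tensors="pt", truncation=True, max_length=200)

generated = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=200,  # same limit used during training
    num_beams=1,     # also tried other values here and for min_length
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```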
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behaviour
An example output generated by the model
Generated output:
`I hope that representatives from abroad will get some time to see Delhi's`
The quality of translation is good but the sentence ends abruptly.
Expected output:
`I hope that delegates from abroad will have some time to see the history and pride of Delhi.` (generated with google translate)
@patrickvonplaten tagging you for help as this is a T5 related model
Thanks in Advance
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9786/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9785/comments | https://api.github.com/repos/huggingface/transformers/issues/9785/events | https://github.com/huggingface/transformers/issues/9785 | 793,292,525 | MDU6SXNzdWU3OTMyOTI1MjU= | 9,785 | GPT2 MNLI training using run_glue.py | {
"login": "nlp-student",
"id": 76427077,
"node_id": "MDQ6VXNlcjc2NDI3MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/76427077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nlp-student",
"html_url": "https://github.com/nlp-student",
"followers_url": "https://api.github.com/users/nlp-student/followers",
"following_url": "https://api.github.com/users/nlp-student/following{/other_user}",
"gists_url": "https://api.github.com/users/nlp-student/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nlp-student/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nlp-student/subscriptions",
"organizations_url": "https://api.github.com/users/nlp-student/orgs",
"repos_url": "https://api.github.com/users/nlp-student/repos",
"events_url": "https://api.github.com/users/nlp-student/events{/privacy}",
"received_events_url": "https://api.github.com/users/nlp-student/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As explained in the documentation: \"`run_glue.py`: This script can fine-tune the following models: BERT, XLM, XLNet and RoBERTa.\"\r\n\r\n=> GPT-2 is a Transformer decoder, which can learn to generate text in an autoregressive way. It is not aimed at GLUE tasks, which are sequence classification tasks. ",
"Hi! Actually we've recently added `GPT2ForSequenceClassification` to enable support for sequence classification tasks (like GLUE). The support was added to enable some models such as EDIT: linked wrong model. Updated: [DialogRPT](https://huggingface.co/microsoft/DialogRPT-updown)!\r\n\r\nHowever, as you have seen @nlp-student, the GPT-2 model isn't trainable out of the box with batch size > 1, as it has no padding token defined. Furthermore, MNLI is a three-way classification so you would need to set the number of labels appropriately.\r\n\r\nI invite you to run the following script to create a model that you can then use with the `run_glue.py` script, initialized from the GPT-2 weights:\r\n```py\r\nfrom transformers import GPT2ForSequenceClassification, GPT2Tokenizer\r\n\r\nmodel = GPT2ForSequenceClassification.from_pretrained(\"gpt2\", num_labels=3)\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n\r\n# Define a padding token\r\ntokenizer.pad_token = tokenizer.eos_token\r\nmodel.config.pad_token_id = tokenizer.pad_token_id\r\n\r\n# Save the model and tokenizer in a directory\r\nmodel.save_pretrained(\"directory\")\r\ntokenizer.save_pretrained(\"directory\")\r\n```\r\n\r\nThen you can launch the `run_glue.py` script by specifying that checkpoint.\r\n\r\nHowever, as @NielsRogge pointed out, GPT-2 will not obtain as good results as a bi-directional encoders such as BERT.",
"Thank you so much for your responses. I will try that out.\r\n\r\nIs GPT-2 generally expected to perform worse than BERT on sequence classification ? \r\n\r\nI've seen a lot of examples of BERT finetuned for classification, but not really for GPT-2. Is it not common (or that useful) to finetune GPT-2 for classification?\r\n\r\nThank you for any help or direction!",
"> Is GPT-2 generally expected to perform worse than BERT on sequence classification ?\r\n\r\nNormally, yes. The reason for this is that GPT-2 is not really designed for such a task, whereas BERT is, as it has a special [CLS] token for classification tasks. GPT-2 is designed to process text autoregressively (i.e. left to right) in order to generate new text, whereas BERT is designed to process all tokens at once (hence creating a bidirectional representation of all input tokens), which is useful for tasks like sequence classification or extractive question answering for example. \r\n\r\nThat said, you can still use GPT-2 to perform sequence classification. You can simply let GPT-2 process a sentence word by word from left to right, and then train it to predict the class of the sentence by placing a linear layer on top of the hidden representation of the final token of the sentence, which is done by looking at the [code](https://github.com/huggingface/transformers/blob/285c6262a84490270d2f1a1c06ee9ccfc1b60e8f/src/transformers/models/gpt2/modeling_gpt2.py#L1233) of `GPT2ForSequenceClassification`.\r\n\r\n So maybe an interesting thing to do is compare `BERTForSequenceClassification` and `GPT2ForSequenceClassification` on the same dataset, and see which one performs best. ",
"Many thanks for your detailed response! It was very helpful for my understanding.",
"You're welcome, I'm closing this issue! Feel free to reopen if you have other issues down the road.",
"> Hi! Actually we've recently added `GPT2ForSequenceClassification` to enable support for sequence classification tasks (like GLUE). The support was added to enable some models such as EDIT: linked wrong model. Updated: [DialogRPT](https://huggingface.co/microsoft/DialogRPT-updown)!\r\n> \r\n> However, as you have seen @nlp-student, the GPT-2 model isn't trainable out of the box with batch size > 1, as it has no padding token defined. Furthermore, MNLI is a three-way classification so you would need to set the number of labels appropriately.\r\n> \r\n> I invite you to run the following script to create a model that you can then use with the `run_glue.py` script, initialized from the GPT-2 weights:\r\n> \r\n> ```python\r\n> from transformers import GPT2ForSequenceClassification, GPT2Tokenizer\r\n> \r\n> model = GPT2ForSequenceClassification.from_pretrained(\"gpt2\", num_labels=3)\r\n> tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n> \r\n> # Define a padding token\r\n> tokenizer.pad_token = tokenizer.eos_token\r\n> model.config.pad_token_id = tokenizer.pad_token_id\r\n> \r\n> # Save the model and tokenizer in a directory\r\n> model.save_pretrained(\"directory\")\r\n> tokenizer.save_pretrained(\"directory\")\r\n> ```\r\n> \r\n> Then you can launch the `run_glue.py` script by specifying that checkpoint.\r\n> \r\n> However, as @NielsRogge pointed out, GPT-2 will not obtain as good results as a bi-directional encoders such as BERT.\r\n\r\nHi, met the same error as the original post although but I've set the per_device_train_batch_size=1. \r\nI was running run_swag.py with GPT2 (I added a class GPT2ForMultipleChoice referring to BertForMultipleChoice), could you offer some help with this problem? Thanks a lot.",
"I follow the suggestion but got this error:\r\n\r\nRuntimeError: Error(s) in loading state\\_dict for GPT2ForSequenceClassification: size mismatch for score.weight: copying a param with shape torch.Size(\\[100, 768\\]) from checkpoint, the shape in current model is torch.Size(\\[2, 768\\]). You may consider adding \\`ignore\\_mismatched\\_sizes=True\\` in the model \\`from\\_pretrained\\` method.\r\n\r\nI then tried to flag `ignore_mismatched_sizes` as True like below but still, get the same error. \r\n\r\n```python\r\nfrom transformers import GPT2ForSequenceClassification, GPT2Tokenizer\r\n\r\nmodel = GPT2ForSequenceClassification.from_pretrained(\"gpt2\", num_labels=100, ignore_mismatched_sizes=True)\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n\r\n# Define a padding token\r\ntokenizer.pad_token = tokenizer.eos_token\r\nmodel.config.pad_token_id = tokenizer.pad_token_id\r\n\r\n# Save the model and tokenizer in a directory\r\nmodel.save_pretrained(\"directory\")\r\ntokenizer.save_pretrained(\"directory\")\r\n```"
] | 1,611 | 1,666 | 1,612 | NONE | null | Running this on Google Colab,
```
!python run_glue.py \
--model_name_or_path gpt2 \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_gpu_train_batch_size 10 \
--gradient_accumulation_steps 32\
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir models/gpt2/mnli/
```
I get the following error,
```
"Asking to pad but the tokenizer does not have a padding token. "
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
@LysandreJik : does the trainer need to be modified? Or am I supposed to be using additional commands for training using GPT2? (I have been following [this example](https://huggingface.co/transformers/v2.0.0/examples.html) using BERT)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9785/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9784/comments | https://api.github.com/repos/huggingface/transformers/issues/9784/events | https://github.com/huggingface/transformers/issues/9784 | 793,260,709 | MDU6SXNzdWU3OTMyNjA3MDk= | 9,784 | Translation Model in ONNX: Choosable Output Formats | {
"login": "oborchers",
"id": 26734737,
"node_id": "MDQ6VXNlcjI2NzM0NzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/26734737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oborchers",
"html_url": "https://github.com/oborchers",
"followers_url": "https://api.github.com/users/oborchers/followers",
"following_url": "https://api.github.com/users/oborchers/following{/other_user}",
"gists_url": "https://api.github.com/users/oborchers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oborchers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oborchers/subscriptions",
"organizations_url": "https://api.github.com/users/oborchers/orgs",
"repos_url": "https://api.github.com/users/oborchers/repos",
"events_url": "https://api.github.com/users/oborchers/events{/privacy}",
"received_events_url": "https://api.github.com/users/oborchers/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Hello, Thanks for you work on that @oborchers ! I also saw the notebook on SentenceTransformers and it helped a lot ! \r\n\r\nAny status about this feature ? I also need to run models in onnx but most of them need to call a `.generate` function which is for now not supported... (I could replicate all the generate code in nodejs but i'm sure there is a nicer solution)\r\nIs there any fix, status update or hack ? \r\n\r\nThanks a lot in advance,\r\nHave a great day. ",
"Hi @lerezell! Welcome! Glad the notebook helped.\r\n\r\nNot from my side, unfortunately. I had postponed the issue from our side, because we had more pressing stuff to work on. But lately, the need for this feature starts to become larger as well at our company. Does your solution of porting the `.generate` function work by placing it on top of the ONNX version?\r\n\r\n**Edit:**\r\nJust out of curiosity I went through the `.generate` code and it should be possible to place the existing `.generate` code on top of an `CausalLMOutput` model, very similar as done in the [notebook](https://github.com/oborchers/sentence-transformers/blob/master/examples/onnx_inference/onnx_inference.ipynb). This requires an extension of the forward method. \r\n\r\nIn an initial implementation, it should be perfectly sufficient to port just the `sample` section and see if it works. However, this does not necessarily apply to beam_search, which I haven't figured out how it works. And the raw implementation shouldn't be too complex, because one might strip away a set of the convenience functions/arguments.\r\n\r\nDownsides of this are, that, there needs to be some way of defining the arguments of `.generate` at runtime for inference. For example, the `min_length` and `max_length` and `eos_token_id` parameter should be included in the `forward` method arguments, because otherwise they would be static and defined via configuration at runtime. This may be sensible for some applications, but requires re-exporting the model every-time those change, which isn't really a nice way of doing this. Or at least if I didn't miss something completely\r\n\r\nBest regards and have a nice eastern",
"Hi @oborchers, \r\n\r\nI still haven't implemented \"my\" solution as I wanted to know if there was any other solution than writing all the logic again. \r\nI would rather not and exporting the logic in the forward (and then in the onnx model) seems to be the best solution. \r\n\r\nFor the `x_length` arguments, that a downside, passing them as optional in the forward method could do ? \r\n\r\nI need to focus on other things right now but I definitely keep an eye open for that ! \r\n\r\nHave a great day \r\n\r\n",
"Hi, any update on how to export full pipelines to onnx? \r\n\r\nFor now, we're still obliged to keep a custom/hugging face lib code to handle the \"post output embeddings\" logic.... \r\n\r\nThanks in advance, \r\nHave a great day",
"Hi @Ierezell!\r\n\r\nSorry for not coming back on the issue. To be honest, for our use case there are quite a few problems we've encountered in exporting full pipelines to ONNX:\r\n\r\n- How to best deal with caching (`past_key_values`)\r\n- Less than optimal performance when used with some generative models (https://github.com/microsoft/onnxruntime/issues/7238)\r\n- The problem of batching requests on inference servers which is very difficult due to the dynamic dimensions of `past_key_values`\r\n- Similar gains in inference time by using custom kernels (e.g. deepspeed inference) + regular pytorch\r\n\r\nThis blog post from Microsoft may help though:\r\n- https://cloudblogs.microsoft.com/opensource/2021/06/30/journey-to-optimize-large-scale-transformer-model-inference-with-onnx-runtime/\r\n- https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2-OneStepSearch_OnnxRuntime_CPU.ipynb",
"Hi @oborchers, thanks a lot for the feedback! \r\n\r\nOnnx is nice to be able to change stack for me (javascript etc...) but in the light of what you're saying it will be better to keep my GPU inference server. \r\n\r\nThanks a lot, \r\nHave a great day ! \r\n\r\n",
"Hi,\r\n\r\nIs there an alternative to onnx that you'd recommend? The able to keep and manipulate past_key_values is the most crucial part that I cannot find for many inference optimizations.\r\n\r\nThank you!"
] | 1,611 | 1,635 | null | NONE | null | # 🚀 Feature request
I am requesting an option to specify the output format for the `translation_xx_to_yy` export to ONNX models. Currently, [convert_graph_to_onnx.convert](https://github.com/huggingface/transformers/blob/6a346f0358a40f89ec384d441233bf54cac44f6a/src/transformers/convert_graph_to_onnx.py#L330) provides the raw tensors as output (working prototype code under #9722).
## Motivation
When putting the models into production, it would be great if one could choose whether the actual tensors or the output tokens are returned when exporting a translation pipeline to ONNX. That way, one is not forced to do a custom re-implementation of the [model.generate](https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101) function that uses the ONNX model instead of the torch one.
As of now, the part that could be replaced by an ONNX inference session lives under the [model.generate](https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L385) function. Using this in production would mean keeping a TranslationPipeline object with all the corresponding model information and config, plus an ONNX inference session.
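To make the current situation concrete, here is a minimal sketch of what inference against the exported graph looks like today: only raw tensors come back, so all of the decoding/search logic still has to be re-implemented around the session. The model path and the input/output names are assumptions here and depend on how the graph was exported:

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
session = ort.InferenceSession("translation_model.onnx")  # illustrative path

encoded = tokenizer("Hello world", return_tensors="np")
# A seq2seq export may also require e.g. "decoder_input_ids"; omitted for brevity.
raw_outputs = session.run(None, {
    "input_ids": encoded["input_ids"].astype(np.int64),
    "attention_mask": encoded["attention_mask"].astype(np.int64),
})
# raw_outputs holds raw tensors (logits/hidden states), not token ids or text;
# the generate()-style loop and the tokenizer's decode step still live outside ONNX.
```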
## Your contribution
There may be multiple solutions to this problem:
1. User-specific re-implementation of model.generate (This is what I'll try to accomplish in the future).
2. Is it possible to rewrite the code under model.generate in pure torch? Then it should be possible to create a custom model for all translation models that just places this "generate layer" on top. I have provided an example [here](https://github.com/oborchers/sentence-transformers/blob/master/examples/onnx_inference/onnx_inference.ipynb) which adds a simple pooling layer on top of an existing transformers model. (That would require more study from my side to develop a prototype, and follows step 1.)
3. Provide support for the [ort-customops](https://github.com/microsoft/ort-customops) library by Microsoft. Essentially, this enables ONNX to handle strings (but introduces dependency to a very experimental extension). For example, that way one can export the universal sentence encoder (including tokenizer) to ONNX. Example [here](https://github.com/onnx/tensorflow-onnx/issues/1260). I cannot provide anything useful here. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9784/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9784/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9783/comments | https://api.github.com/repos/huggingface/transformers/issues/9783/events | https://github.com/huggingface/transformers/pull/9783 | 793,237,962 | MDExOlB1bGxSZXF1ZXN0NTYwOTU5NTky | 9,783 | Adding `skip_special_tokens=True` to FillMaskPipeline | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If it's a bug users relied on, then it's a feature, not a bug. \r\n\r\nAnyway I'll merge this.",
"Good point, but I don't think that's the case here. Do you think users rely on the previous behavior? If that's the case, then we should revert this PR until we change of major version."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
- It's backward incompatible.
- It makes more sense for pipelines to remove references to special_tokens
(all of the other pipelines do that).
- Keeping special tokens makes it hard for users to actually remove them
because all models have different tokens (`<s>`, `<cls>`,` [CLS]`, ....)
- It's actually closer to the docs spec:
```
- **sequence** (:obj:`str`) -- The corresponding input with the mask token prediction.
```
as the input does not include the special tokens.
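To illustrate the effect on the returned `sequence` field (the model choice and the predicted word here are only illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")
result = fill_mask("The goal of life is <mask>.")

# Before this change the sequence still carried model-specific special tokens,
# e.g. "<s>The goal of life is happiness.</s>"; with skip_special_tokens=True
# it becomes "The goal of life is happiness."
print(result[0]["sequence"])
```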
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Linked to : https://github.com/huggingface/transformers/issues/9518
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik @patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9783/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9783",
"html_url": "https://github.com/huggingface/transformers/pull/9783",
"diff_url": "https://github.com/huggingface/transformers/pull/9783.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9783.patch",
"merged_at": 1611651988000
} |
https://api.github.com/repos/huggingface/transformers/issues/9782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9782/comments | https://api.github.com/repos/huggingface/transformers/issues/9782/events | https://github.com/huggingface/transformers/pull/9782 | 793,207,932 | MDExOlB1bGxSZXF1ZXN0NTYwOTM0ODg3 | 9,782 | Add BlenderbotSmallForCausalLM for EncoderDecoder | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.\r\n\r\nAlso we need a decoder-only test for this."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Implementing BlenderbotSmallForCausalLM
Issue #9066
PR #9128
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9782/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9782",
"html_url": "https://github.com/huggingface/transformers/pull/9782",
"diff_url": "https://github.com/huggingface/transformers/pull/9782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9782.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9781/comments | https://api.github.com/repos/huggingface/transformers/issues/9781/events | https://github.com/huggingface/transformers/pull/9781 | 793,204,830 | MDExOlB1bGxSZXF1ZXN0NTYwOTMyMzE0 | 9,781 | Implementation of BlenderbotForCausalLM | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.\r\n\r\nAlso we need a decoder-only test for this."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Implementing BlenderbotForCausalLM for EncoderDecoder use, like ProphetNetForCausalLM.
Fixes # (issue)
#9066
#9128
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9781/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9781",
"html_url": "https://github.com/huggingface/transformers/pull/9781",
"diff_url": "https://github.com/huggingface/transformers/pull/9781.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9781.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9780/comments | https://api.github.com/repos/huggingface/transformers/issues/9780/events | https://github.com/huggingface/transformers/issues/9780 | 793,196,334 | MDU6SXNzdWU3OTMxOTYzMzQ= | 9,780 | Calculating Confidence score for Question Answering Models | {
"login": "UmerTariq1",
"id": 32323864,
"node_id": "MDQ6VXNlcjMyMzIzODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32323864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/UmerTariq1",
"html_url": "https://github.com/UmerTariq1",
"followers_url": "https://api.github.com/users/UmerTariq1/followers",
"following_url": "https://api.github.com/users/UmerTariq1/following{/other_user}",
"gists_url": "https://api.github.com/users/UmerTariq1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/UmerTariq1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/UmerTariq1/subscriptions",
"organizations_url": "https://api.github.com/users/UmerTariq1/orgs",
"repos_url": "https://api.github.com/users/UmerTariq1/repos",
"events_url": "https://api.github.com/users/UmerTariq1/events{/privacy}",
"received_events_url": "https://api.github.com/users/UmerTariq1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"model_name = \"distilbert-base-uncased-finetuned-sst-2-english\"\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\n\r\ntrain_encoding = tokenizer(X_train, truncation=True, padding=True, max_length=512, return_tensors=\"pt\")\r\n\r\n\r\n# Training the model with PyTorch\r\n\r\n\r\nwith torch.no_grad():\r\n\r\n outputs = model(**train_encoding)\r\n \r\n # Normalize logits and spans \r\n start = F.softmax(outputs.start_logits, dim=1)\r\n end = F.softmax(outputs.end_logits, dim=1)\r\n\r\n\r\n # Getting the start and end index of the tensor to retrieve the answer\r\n start_index = torch.argmax((start), dim=1)\r\n end_index = torch.argmax((end), dim=1)\r\n\r\n\r\n\r\n# Computing score\r\n\r\nouter = np.matmul(np.expand_dims(start, -1), np.expand_dims(end, 1))\r\nmax_answer_len = 512\r\ncandidates = np.tril(np.triu(outer), max_answer_len - 1)\r\nidx_sorts = [np.argmax(candidates[i].flatten()) for i in range(len(candidates))]\r\nscores = [candidates[[i], start_indices[i], end_indices[i]][0] for i in range(len(candidates))]\r\n\r\n\r\n# Extracting answer\r\ntokenizer.decode(train_encoding[\"input_ids\"][start_index:end_index], skip_special_tokens=True)\r\n\r\n\r\n\r\n \r\n"
] | 1,611 | 1,663 | 1,614 | NONE | null | For the QA task (extractive QA), the pipeline provides/returns 4 values:
1. a probability score
2. start index
3. end index
4. extracted answer
(https://huggingface.co/transformers/main_classes/pipelines.html#transformers.QuestionAnsweringPipeline)
But if I am using the model class directly (not using the pipeline), like the code below, then I am unable to find the probability score:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME, return_dict=True)

encoding = tokenizer.encode_plus(question, docText, return_tensors="pt", max_length=4096)
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]

start_scores, end_scores = model(input_ids, attention_mask=attention_mask).values()
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores): torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
```
This answer object has only 3 values: the extracted answer, the start scores, and the end scores. I can't find any way or function to calculate the probability score. I want it because I want to sort multiple answers according to their probability/confidence score.
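For what it's worth, here is a rough sketch of how a score could be derived from the start/end logits directly. This only approximates what the pipeline does internally (the pipeline additionally masks out invalid spans before scoring), so treat it as an illustration rather than the exact pipeline formula:

```python
import torch
import torch.nn.functional as F

# start_scores / end_scores have shape (1, seq_len)
start_probs = F.softmax(start_scores, dim=-1)
end_probs = F.softmax(end_scores, dim=-1)

start_idx = torch.argmax(start_probs)
end_idx = torch.argmax(end_probs)

# Confidence of the extracted span: P(start) * P(end)
score = (start_probs[0, start_idx] * end_probs[0, end_idx]).item()
```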
This is a duplicate of Issue #5768, but that issue has been marked closed without any answer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9780/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9779/comments | https://api.github.com/repos/huggingface/transformers/issues/9779/events | https://github.com/huggingface/transformers/issues/9779 | 793,186,407 | MDU6SXNzdWU3OTMxODY0MDc= | 9,779 | I want to train a BART model for conditional text generation. I want to train the encoder and the decoder separately for a specific task. Can anyone help with the code? I am new to this. @patrickvonplaten | {
"login": "Sai-Ashish",
"id": 30151156,
"node_id": "MDQ6VXNlcjMwMTUxMTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/30151156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sai-Ashish",
"html_url": "https://github.com/Sai-Ashish",
"followers_url": "https://api.github.com/users/Sai-Ashish/followers",
"following_url": "https://api.github.com/users/Sai-Ashish/following{/other_user}",
"gists_url": "https://api.github.com/users/Sai-Ashish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sai-Ashish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sai-Ashish/subscriptions",
"organizations_url": "https://api.github.com/users/Sai-Ashish/orgs",
"repos_url": "https://api.github.com/users/Sai-Ashish/repos",
"events_url": "https://api.github.com/users/Sai-Ashish/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sai-Ashish/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Sai-Ashish \r\n\r\nPlease use the forum https://discuss.huggingface.co/ for asking such questions. Use issues to report bugs, feature requests, etc. Thanks!\r\n\r\nClosing it",
"have you came up with solution?"
] | 1,611 | 1,631 | 1,611 | NONE | null | @## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9779/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9778/comments | https://api.github.com/repos/huggingface/transformers/issues/9778/events | https://github.com/huggingface/transformers/issues/9778 | 793,181,782 | MDU6SXNzdWU3OTMxODE3ODI= | 9,778 | Link Not Working | {
"login": "akshat311",
"id": 54106538,
"node_id": "MDQ6VXNlcjU0MTA2NTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/54106538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akshat311",
"html_url": "https://github.com/akshat311",
"followers_url": "https://api.github.com/users/akshat311/followers",
"following_url": "https://api.github.com/users/akshat311/following{/other_user}",
"gists_url": "https://api.github.com/users/akshat311/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akshat311/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akshat311/subscriptions",
"organizations_url": "https://api.github.com/users/akshat311/orgs",
"repos_url": "https://api.github.com/users/akshat311/repos",
"events_url": "https://api.github.com/users/akshat311/events{/privacy}",
"received_events_url": "https://api.github.com/users/akshat311/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for pointing it out, the files are now renames to `distil_marian_enro_teacher.sh`, `distil_marian_no_teacher.sh` and are available here \r\nhttps://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/distil_marian_enro_teacher.sh\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/distil_marian_no_teacher.sh\r\n\r\nFeel free to open a PR to fix the links if you want to contribute :) \r\n\r\nAlso, we now recommend using the new `Seq2SeqTrainer` for fine-tuning seq2seq models, https://github.com/huggingface/transformers/tree/master/examples/seq2seq",
"Thanks a lot. Also, there a way to train the MarianMT model on my dataset ?",
"Sure, the script let's you pass your own dataset as well, have a look at the readme.",
"Thanks a lot for the help."
] | 1,611 | 1,611 | 1,611 | NONE | null |
@patrickvonplaten
I was looking at the MarianMT docs, and in the "Examples" section, the links for "Fine-Tune on GPU" and "Fine-Tune on GPU with pytorch-lightning" are broken.
Kindly look into this issue.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9778/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9777/comments | https://api.github.com/repos/huggingface/transformers/issues/9777/events | https://github.com/huggingface/transformers/issues/9777 | 793,108,910 | MDU6SXNzdWU3OTMxMDg5MTA= | 9,777 | padding='max_length' allowing more than max length | {
"login": "SimplyLucKey",
"id": 35954092,
"node_id": "MDQ6VXNlcjM1OTU0MDky",
"avatar_url": "https://avatars.githubusercontent.com/u/35954092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SimplyLucKey",
"html_url": "https://github.com/SimplyLucKey",
"followers_url": "https://api.github.com/users/SimplyLucKey/followers",
"following_url": "https://api.github.com/users/SimplyLucKey/following{/other_user}",
"gists_url": "https://api.github.com/users/SimplyLucKey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SimplyLucKey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SimplyLucKey/subscriptions",
"organizations_url": "https://api.github.com/users/SimplyLucKey/orgs",
"repos_url": "https://api.github.com/users/SimplyLucKey/repos",
"events_url": "https://api.github.com/users/SimplyLucKey/events{/privacy}",
"received_events_url": "https://api.github.com/users/SimplyLucKey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you post a short code snippet so we can reproduce ?",
"> Could you post a short code snippet so we can reproduce ?\r\n\r\nThis is my code.\r\n\r\n```\r\nclass GPReviewDataset(Dataset):\r\n def __init__(self, reviews, targets, tokenizer, max_len):\r\n self.reviews = reviews\r\n self.targets = targets\r\n self.tokenizer = tokenizer\r\n self.max_len = max_len\r\n \r\n def __len__(self):\r\n return len(self.reviews)\r\n \r\n def __getitem__(self, item):\r\n review = str(self.reviews[item])\r\n target = self.targets[item]\r\n \r\n encoding = self.tokenizer.encode_plus(text=review, max_length=self.max_len,\r\n add_special_tokens=True, padding='max_length', \r\n return_attention_mask=True, \r\n return_token_type_ids=False, return_tensors='pt')\r\n \r\n return {'review': review,\r\n 'input_ids': encoding['input_ids'].flatten(), \r\n 'attention_mask': encoding['attention_mask'].flatten(),\r\n 'targets': torch.tensor(target, dtype=torch.long)}\r\n\r\n\r\nerror_list = []\r\n\r\nfor i in range(len(free_df)):\r\n if len(GPReviewDataset(free_df['content'].to_numpy(), free_df['score'].to_numpy(), tokenizer, 160).__getitem__(i)['attention_mask']) != 160:\r\n error_list.append((i, len(GPReviewDataset(free_df['content'].to_numpy(), free_df['score'].to_numpy(), tokenizer, 160).__getitem__(i)['input_ids'])))\r\n \r\nerror_list\r\n```\r\n\r\nand the results\r\n```\r\n[(95, 184),\r\n (948, 218),\r\n (1025, 162),\r\n (3679, 204),\r\n (3680, 164),\r\n (4150, 220),\r\n (6139, 185),\r\n (7139, 165),\r\n (7201, 166),\r\n (7237, 256),\r\n (7381, 181),\r\n (7599, 254),\r\n (7600, 204),\r\n (7679, 170),\r\n (8111, 202),\r\n (8378, 193),\r\n (8773, 583),\r\n (9041, 583),\r\n (9321, 161),\r\n (10466, 279)]\r\n```",
"You should set your truncation parameter as well, otherwise it won't truncate the texts that are too long. See the docs about the tokenizer methods [here (look for `truncation`)](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=truncation#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__)",
"> You should set your truncation parameter as well, otherwise it won't truncate the texts that are too long. See the docs about the tokenizer methods [here (look for `truncation`)](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=truncation#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__)\r\n\r\nThank you! That worked."
] | 1,611 | 1,611 | 1,611 | NONE | null | I compared my tokenized data with `pad_to_max_length=True` and `padding='max_length'`. My `max_length` was set to 160. However, I noticed that a few examples were tokenized to more than 160 tokens when I used `padding='max_length'` versus `pad_to_max_length=True`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9777/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9776/comments | https://api.github.com/repos/huggingface/transformers/issues/9776/events | https://github.com/huggingface/transformers/pull/9776 | 792,961,434 | MDExOlB1bGxSZXF1ZXN0NTYwNzI3NzE1 | 9,776 | Auto-resume training from checkpoint | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I was having a problem getting this to actually resume training on my system, and I had to make three small changes to the new code in trainer_utils.py:\r\n\r\n1. `checkpoints = [path for path in content if _re_checkpoint.search(path) is not None and os.path.isdir(path)]` was returning empty. I changed `os.path.isdir(path)` to `os.path.isdir(os.path.join(folder, path))` and now it returns a list of the checkpoint folders as expected.\r\n2. Similarly, the `get_last_checkpoint` function was returning the basename of the checkpoint folder, not the full path, which seems to be expected based on the updates to the example scripts. I changed the last line of the function to `return os.path.join(folder, max(checkpoints, key=lambda x: int(_re_checkpoint.search(x).groups()[0])))`\r\n3. After I made those update, it was resuming from the oldest checkpoint, not the newest. I noticed the checkpoint regex was only capturing the final digit in the directory name. I changed it to `_re_checkpoint = re.compile(r\"^\" + PREFIX_CHECKPOINT_DIR + r\"\\-(\\d+)$\")` with the `+` inside the capture group, and now `get_last_checkpoint` is giving me the newest checkpoint as expected.\r\n\r\nI'm just a novice, so I'm not sure if those tweaks would break anything other systems. Does `os.listdir()` return full paths instead of basenames under some OS/python combinations?\r\n\r\nBut with these changes I'm able to resume an aborted training from within the same folder. ",
"Oh, I went a bit too fast and all the issues you list are completely valid! You should make a PR with all your changes since you were the one to find the problems and fix them :-)",
"Got sucked back into work for my day job, but I'll try getting to that soonish.\r\n\r\nUnrelated, do you guys have an office in DUMBO? I think we're in the same building (at least until we all started working from home)",
"DUMBO offices are at 20 Jay street, you should come and say hi for a coffee when the pandemic is over if you're around :-) ",
"Yeah, I run a small animation studio up on the 10th floor. Turns out the fancy GPUs I have for animation are also good for messing around with machine learning :) And so my pandemic side project became teaching my computer to write poetry. \r\n\r\nSomeday when it's safe again I'll definitely stop by.",
"Haha small world @jncasey! I'm missing 20 Jay right now. What is your company's name? ",
"We're called Mixtape Club. We moved into the building from Manhattan last January, and were only there for about 10 weeks before everyone started working from home. Now my business partner is the only one who goes in, since he can't bring our whole sound studio back to his apartment. "
] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
Feature suggested by @madlag.
In the example scripts, change the current behavior so that if checkpoints are present in the `output_dir` passed to the script, training resumes from there. If the `output_dir` exists and is nonempty but has no checkpoint, the same behavior as before is applied (error). If it exists with checkpoints inside, the last checkpoint is grabbed and training resumes from there. If `--overwrite_output_dir` is passed, the folder is destroyed as before.
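Concretely, the detection logic added to the scripts looks roughly like this (just a sketch; `training_args` and `trainer` are the usual objects in the example scripts, and the exact helper/keyword names may still change):

```python
import os
from transformers.trainer_utils import get_last_checkpoint  # helper added for this feature

last_checkpoint = None
if os.path.isdir(training_args.output_dir) and not training_args.overwrite_output_dir:
    last_checkpoint = get_last_checkpoint(training_args.output_dir)
    if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
        raise ValueError(
            f"Output directory ({training_args.output_dir}) already exists and is not empty. "
            "Use --overwrite_output_dir to train from scratch."
        )

# later on, training resumes from the detected checkpoint if one was found
# (the exact keyword accepted by `trainer.train` depends on the Trainer version)
train_result = trainer.train(model_path=last_checkpoint)
```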
This avoids users having to pass `output_dir/checkpoint-xxx` as their model name or path to resume training from a checkpoint, which is a nice improvement. The bad surprise would be setting that `output_dir` to a folder that already contains a trained model you like by mistake; but at the same time, training resumes from the last checkpoint so it shouldn't take too long (and will converge to the same model), and interrupting before the end will not erase the model inside the folder, so I think the pros outweigh the cons.
Tweak the `run_glue` example script for now, will expand it to all scripts if accepted. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9776",
"html_url": "https://github.com/huggingface/transformers/pull/9776",
"diff_url": "https://github.com/huggingface/transformers/pull/9776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9776.patch",
"merged_at": 1611594232000
} |
https://api.github.com/repos/huggingface/transformers/issues/9775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9775/comments | https://api.github.com/repos/huggingface/transformers/issues/9775/events | https://github.com/huggingface/transformers/issues/9775 | 792,910,031 | MDU6SXNzdWU3OTI5MTAwMzE= | 9,775 | New API for TensorFlow saved models not compatible with T5 and MarianMT | {
"login": "maziyarpanahi",
"id": 5762953,
"node_id": "MDQ6VXNlcjU3NjI5NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5762953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maziyarpanahi",
"html_url": "https://github.com/maziyarpanahi",
"followers_url": "https://api.github.com/users/maziyarpanahi/followers",
"following_url": "https://api.github.com/users/maziyarpanahi/following{/other_user}",
"gists_url": "https://api.github.com/users/maziyarpanahi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maziyarpanahi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maziyarpanahi/subscriptions",
"organizations_url": "https://api.github.com/users/maziyarpanahi/orgs",
"repos_url": "https://api.github.com/users/maziyarpanahi/repos",
"events_url": "https://api.github.com/users/maziyarpanahi/events{/privacy}",
"received_events_url": "https://api.github.com/users/maziyarpanahi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @jplu ",
"Hello!!\r\n\r\nYes this is a known issue for the Seq2Seq models, that is already fixed in master.",
"Hi @jplu \r\n\r\nI'll close the issue as the fix will be in the next release for sure. \r\n\r\nThanks again. "
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.2 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@jplu @patrickvonplaten
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): MarianMT and T5
I was trying to use the new Serving feature introduced in `4.2.0` (https://github.com/huggingface/transformers/pull/9419)
## To reproduce
Steps to reproduce the behavior:
1. `pip install tensorflow transformers sentencepiece`
2. Try to use `.save_pretrained("./serve_tf", saved_model=True)` with T5 or MarianMT
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Code to reproduce the error with T5:
```python
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration
TFT5ForConditionalGeneration.from_pretrained('t5-small')\
.save_pretrained("./serve_tf_t5", saved_model=True)
```
The error:
```
All the layers of TFT5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-c3e4b575f41e> in <module>()
4 from transformers import T5Tokenizer, TFT5ForConditionalGeneration
5
----> 6 TFT5ForConditionalGeneration.from_pretrained('t5-small').save_pretrained("./serve_tf_t5", saved_model=True)
14 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py:126 signature_wrapper *
structured_outputs, signature_function.name, signature_key)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py:174 _normalize_outputs **
.format(value, key, compat.as_str_any(function_name)))
ValueError: Got a dictionary containing non-Tensor value (<tf.Tensor 'StatefulPartitionedCall:2' shape=(1, 6, 4, None, 8, None, 64) dtype=float32>,) for key past_key_values in the output of the function __inference_serving_25314 used to generate a SavedModel signature. Dictionaries outputs for functions used as signatures should have one Tensor output per string key.
```
Code to reproduce the error with MarianMT:
```python
import tensorflow as tf
from transformers import TFMarianModel, TFMarianMTModel
# It seems `opus-mt-mt-en`, `opus-mt-en-zh` and `opus-mt-en-ROMANCE` are the only
# TF compatible models
TFMarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-mt-en')\
.save_pretrained("./serve_tf_marian", saved_model=True)
```
The error:
```
All the layers of TFMarianMTModel were initialized from the model checkpoint at Helsinki-NLP/opus-mt-mt-en.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMarianMTModel for predictions without further training.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-28-0fe7915512e9> in <module>()
5 from transformers import TFMarianModel, TFMarianMTModel, MarianMTModel
6
----> 7 TFMarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-mt-en', use_cache=False).save_pretrained("./serve_tf_marian", saved_model=True)
14 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py:126 signature_wrapper *
structured_outputs, signature_function.name, signature_key)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py:174 _normalize_outputs **
.format(value, key, compat.as_str_any(function_name)))
ValueError: Got a dictionary containing non-Tensor value (None,) for key past_key_values in the output of the function __inference_serving_109486 used to generate a SavedModel signature. Dictionaries outputs for functions used as signatures should have one Tensor output per string key.
```
I have tried to set `use_cache` to `False`, hoping `past_key_values` would not be used as an output, but it didn't help.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Like other models such as BERT or ALBERT, these two should be saved to be served later via the TensorFlow Serving environment. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9775/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9774/comments | https://api.github.com/repos/huggingface/transformers/issues/9774/events | https://github.com/huggingface/transformers/pull/9774 | 792,904,964 | MDExOlB1bGxSZXF1ZXN0NTYwNjg0OTQy | 9,774 | adding MarianForCausalLM for EncoderDecoder use | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.\r\n\r\nAlso we need a test for this.",
"Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.\r\n\r\nAlso we need a decoder-only test for this."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
adding MarianForCausalLM for EncoderDecoder use
Fixes #9066
PR #9128
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9774/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9774",
"html_url": "https://github.com/huggingface/transformers/pull/9774",
"diff_url": "https://github.com/huggingface/transformers/pull/9774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9774.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9773/comments | https://api.github.com/repos/huggingface/transformers/issues/9773/events | https://github.com/huggingface/transformers/issues/9773 | 792,881,080 | MDU6SXNzdWU3OTI4ODEwODA= | 9,773 | RagRetriever question_hidden_states shape | {
"login": "aqweteddy",
"id": 31977842,
"node_id": "MDQ6VXNlcjMxOTc3ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/31977842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aqweteddy",
"html_url": "https://github.com/aqweteddy",
"followers_url": "https://api.github.com/users/aqweteddy/followers",
"following_url": "https://api.github.com/users/aqweteddy/following{/other_user}",
"gists_url": "https://api.github.com/users/aqweteddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aqweteddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aqweteddy/subscriptions",
"organizations_url": "https://api.github.com/users/aqweteddy/orgs",
"repos_url": "https://api.github.com/users/aqweteddy/repos",
"events_url": "https://api.github.com/users/aqweteddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/aqweteddy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm pretty sure it's just a bad naming of that variable. It should be question_encoder_pooler_output.\r\n\r\nIndeed this variable is computed as the first output of a `DPRQuestionEncoder`. The DPR encoders don't return the hidden states (they basically skip it) to return the pooler output (aka DPR embeddings) of shape `(batch_size, vector_size)` instead.\r\n\r\nIf you want to contribute, feel free to open a PR to rename this variable :) ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-5.4.0-60-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
RAG: @patrickvonplaten, @lhoestq
## Information
The shape of RagRetriever's argument question_hidden_states mentioned in the documentation is `(batch_size, vector_size)`.
However, when RagModel calls self.retriever in its forward function, it passes question_encoder_last_hidden_state, which has shape `(batch_size, seq_len, vector_size)`, to RagRetriever. Maybe question_encoder_last_hidden_state should be replaced with question_encoder_pooler_output?
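For reference, the pooler output of the DPR question encoder already has the `(batch_size, vector_size)` shape the retriever expects (quick check below; the model name is just an example):

```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

inputs = tokenizer(["who wrote hamlet?"], return_tensors="pt")
outputs = model(**inputs)
print(outputs.pooler_output.shape)  # (batch_size, vector_size), e.g. torch.Size([1, 768])
```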
The problem arises when using:
* [v] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [v] my own task or dataset: (give details below)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
maybe a small bug? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9773/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9772/comments | https://api.github.com/repos/huggingface/transformers/issues/9772/events | https://github.com/huggingface/transformers/issues/9772 | 792,854,192 | MDU6SXNzdWU3OTI4NTQxOTI= | 9,772 | Add classes for Multi-label classification models? | {
"login": "gsnidero",
"id": 4365415,
"node_id": "MDQ6VXNlcjQzNjU0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4365415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsnidero",
"html_url": "https://github.com/gsnidero",
"followers_url": "https://api.github.com/users/gsnidero/followers",
"following_url": "https://api.github.com/users/gsnidero/following{/other_user}",
"gists_url": "https://api.github.com/users/gsnidero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsnidero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsnidero/subscriptions",
"organizations_url": "https://api.github.com/users/gsnidero/orgs",
"repos_url": "https://api.github.com/users/gsnidero/repos",
"events_url": "https://api.github.com/users/gsnidero/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsnidero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"Hi @LysandreJik is this feature request being worked on by someone? If not, I would love to take it up. Thanks!",
"Hello! \r\n\r\nThere is existing support for this thanks to the `problem_type` configuration attribute for some Sequence classification models, see here: https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig\r\n\r\n(search for `problem_type`)\r\n\r\nIt could be better documented though by having an example for each model that supports this/more complete documentation on the model themselves. Would you like to try your hand at it?",
"Hi,\r\n\r\nIt seems like the documentation for this feature has already been merged so I think this issue should be closed. \r\n",
"Indeed! Thanks @astern21 "
] | 1,611 | 1,696 | 1,696 | CONTRIBUTOR | null | # 🚀 Feature request
It would be nice to add classes for Multi-label classification models to the library.
## Motivation
In my projects, I need to perform Multi-label classification. This problem setting is quite common in real-life modelling.
## Your contribution
As there isn't a class implemented for this in the library, I have implemented my own. I have modified, for example, BertForSequenceClassification to make it usable for multi-label modelling. Then, I train the model with the Trainer class as usual.
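For reference, my implementation is roughly along the following lines (a simplified sketch rather than my exact code; the class name is my own, and the key change is swapping the cross-entropy loss for `BCEWithLogitsLoss`):

```python
from torch import nn
from transformers import BertModel, BertPreTrainedModel

class BertForMultiLabelSequenceClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        outputs = self.bert(input_ids, attention_mask=attention_mask, **kwargs)
        pooled_output = self.dropout(outputs[1])  # pooler output
        logits = self.classifier(pooled_output)
        loss = None
        if labels is not None:
            # multi-label: one independent sigmoid per class instead of a softmax over classes
            loss = nn.BCEWithLogitsLoss()(logits, labels.float())
        return (loss, logits) if loss is not None else (logits,)
```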
Is there any interest or plan to add this from the library maintainers? If so, I would be happy to collaborate or start working on a PR.
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9772/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9771/comments | https://api.github.com/repos/huggingface/transformers/issues/9771/events | https://github.com/huggingface/transformers/issues/9771 | 792,844,542 | MDU6SXNzdWU3OTI4NDQ1NDI= | 9,771 | TF loss function output inconsistent with Pytorch one for multiple tasks | {
"login": "janjitse",
"id": 16238701,
"node_id": "MDQ6VXNlcjE2MjM4NzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/16238701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janjitse",
"html_url": "https://github.com/janjitse",
"followers_url": "https://api.github.com/users/janjitse/followers",
"following_url": "https://api.github.com/users/janjitse/following{/other_user}",
"gists_url": "https://api.github.com/users/janjitse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janjitse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janjitse/subscriptions",
"organizations_url": "https://api.github.com/users/janjitse/orgs",
"repos_url": "https://api.github.com/users/janjitse/repos",
"events_url": "https://api.github.com/users/janjitse/events{/privacy}",
"received_events_url": "https://api.github.com/users/janjitse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nThis is the expected behavior, if you want any reduction on the loss, you have to do it yourself on your side, not inside the respective compute_loss function.",
"Hi, thanks for the explanation. \r\nI realized it was probably by design, it's just odd that it differs so much in behavior from the Pytorch version. Is there any plan to bring those more inline in this regards? Probably a breaking change, I don't have a clear overview of how much would break, even internally within the Transformers library.",
"Nothing planed to align this with Python and we won't. The reason is because when training with a distribute strategy, TensorFlow doesn't allow a reduction other than `None` or `Sum`. Knowing that we have our own custom trainer and we cannot apply the change you would like as it will make it fails for such cases.",
"Makes sense, didn't think about the incompatibility of the AUTO reduction with the distribute strategies and the custom trainer.\r\n \r\nI'll try to make a small patch over the weekend with an update to the documentation in the docstrings, as it's currently not in line with the actual (and intended) output.",
"That would be awesome! Thanks!"
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-5.10.7-gentoo-x86_64-AMD_Ryzen_9_3950X_16-Core_Processor-with-glibc2.2.5
- Python version: 3.8.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.4.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@jplu,
## Information
Model I am using (Bert, XLNet ...): TFGPT2LMHeadModel
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I was converting the example of [perplexity calculation of fixed-length models](https://huggingface.co/transformers/perplexity.html) to TensorFlow, and ran into an inconsistency in the implementation of compute_loss compared to the PyTorch version of the model.
For TensorFlow, when calling a model with inputs and labels (`model(input_ids=input_ids, labels=labels)`), no reduction is applied to the output of the SparseCategoricalCrossentropy loss function (i.e. it is called explicitly with `reduction=tf.keras.losses.Reduction.NONE` for all tasks), as defined in modeling_tf_utils.py. For PyTorch, on the other hand, the loss function CrossEntropyLoss() is called with the standard reduction (just the mean), which seems a bit unexpected to me.
After modifying the code to do an explicit tf.math.reduce_mean on the outcome of the model, I was able to reproduce the Pytorch outcome exactly.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Tensorflow version:
`outputs = model(input_ids, labels = target_ids)`
`log_likelihood = tf.math.reduce_mean(outputs[0] * trg_len)`
Pytorch version:
`outputs = model(input_ids, labels=target_ids)`
`log_likelihood = outputs[0] * trg_len`
## Expected behavior
Outcome of TFGPT2LMHeadModel.call(input_ids=input_ids,labels=labels) to have same tensor shapes as outcome of GPT2LMHeadModel.call(input_ids=input_ids,labels=labels)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9771/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9770/comments | https://api.github.com/repos/huggingface/transformers/issues/9770/events | https://github.com/huggingface/transformers/issues/9770 | 792,828,371 | MDU6SXNzdWU3OTI4MjgzNzE= | 9,770 | TFBartForConditionalGeneration with labels padded with -100 gives Nan loss. | {
"login": "kiyoungkim1",
"id": 37245002,
"node_id": "MDQ6VXNlcjM3MjQ1MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/37245002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiyoungkim1",
"html_url": "https://github.com/kiyoungkim1",
"followers_url": "https://api.github.com/users/kiyoungkim1/followers",
"following_url": "https://api.github.com/users/kiyoungkim1/following{/other_user}",
"gists_url": "https://api.github.com/users/kiyoungkim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiyoungkim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiyoungkim1/subscriptions",
"organizations_url": "https://api.github.com/users/kiyoungkim1/orgs",
"repos_url": "https://api.github.com/users/kiyoungkim1/repos",
"events_url": "https://api.github.com/users/kiyoungkim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiyoungkim1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found that TFBart models use ```padding token``` for masking ```decoder_input_ids``` instead of using ```-100 token```, which is different from T5 models. So this is not a bug, but a little confusing because some of the bart code and documents talk about ```-100 token```.\r\n\r\nAlso, losses from ```torch``` and ```tensorflow``` are different with the same dataset (text shown above).\r\nI also directly convert ```pytorch_model.bin``` to ```tf_model.h5``` instead of using uploaded model, but they shows different losses.",
"Hi @kiyoungkim1 \r\n\r\nIn both T5 and BART `decoder_input_values` can never contain -100, for both models the padding is done using pad token.\r\nIn labels, however, we usually replace pad tokens with -100 so as to not include them while computing the loss.\r\n\r\nBut what you pointed out is correct, TFBart uses pad token as the ignore index \r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/models/bart/modeling_tf_bart.py#L1340-L1348\r\n\r\nWhere as TFT5 ignores -100\r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/modeling_tf_utils.py#L146-L152\r\n\r\nnot an expert on TF side, so @jplu , @patrickvonplaten will know more",
"What says @patil-suraj is correct, in T5 we expect the pad token id to always be -100 while in TF BART it can be any digit assigned to `config.pad_token_id`.",
"Great catch @kiyoungkim1!\r\n\r\nIt's not very consistent what we are doing here...TFBart should have never ignored the `pad_token_id` as a default setting, but -100 as all other models do.\r\n\r\nTo fix the problem, I think we should add a couple of lines that check if -100 are in the labels and if yes replaces them with the `pad_token_id` to have consistency with PyTorch's Bart. It would be a pretty big breaking change to just replace `pad_token_id` with -100 so I think the first option is the better one. @kiyoungkim1 if you feel like opening a PR to correct this behavior we would be more than happy :-) ",
"We also plan to turn all the loss computation, not anymore as a method but as a layer, so it will be much easier to use, to configure and TensorFlow workflow compliant."
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | I am pretraining T5 and Bart.
I noticed that the padding token in the ```labels``` (i.e. the ```decoder_input_ids``` targets) of these models should be -100.
I changed the padding token in the labels for T5 (PyTorch, TensorFlow) and Bart (PyTorch), and it works well.
But Bart (TensorFlow) gives a NaN loss.
Because of this, I also get an error message during pretraining:
```tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of -100 which is outside the valid range of [0, 50265). Label values: 0 2387 2335 16 11962 2 -100 -100 -100 -100 -100 ...........```
## Environment info
- `transformers` version: 4.2.2
- Platform: ubuntu 18.04
- Python version: 3.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: yes (colab)
- Using distributed or parallel set-up in script?: no
Bart: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFBartForConditionalGeneration
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```
import tensorflow as tf
from transformers import BartTokenizer, TFBartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-base")
inputs = tokenizer("My dog is <mask>", return_tensors='tf', truncation=True, max_length=16, padding="max_length")
labels_ids = tokenizer("My dog is cute", return_tensors='tf', truncation=True, max_length=16, padding="max_length").input_ids
## labels padding_token = 1
loss = model(inputs, labels=labels_ids)[0]
print(labels_ids)
print(loss)
## labels padding_token = -100
labels_ids = tf.where(
labels_ids == 1, tf.fill(tf.shape(labels_ids), tf.constant(-100, dtype='int32')), labels_ids
)
loss = model(inputs, labels=labels_ids)[0]
print(labels_ids)
print(loss)
```
Resurts:
```
tf.Tensor(
[[ 0 2387 2335 16 11962 2 1 1 1 1 1 1
1 1 1 1]], shape=(1, 16), dtype=int32)
tf.Tensor(
[2.2291888e-05 4.8874615e-05 3.7073401e-05 7.9230859e-04 6.1941872e+00
1.1058841e+00], shape=(6,), dtype=float32)
tf.Tensor(
[[ 0 2387 2335 16 11962 2 -100 -100 -100 -100 -100 -100
-100 -100 -100 -100]], shape=(1, 16), dtype=int32)
tf.Tensor(
[2.2291888e-05 4.8755410e-05 3.7073401e-05 7.9242775e-04 6.1941872e+00
1.1058841e+00 nan nan nan nan
nan nan nan nan nan
nan], shape=(16,), dtype=float32)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9770/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9769/comments | https://api.github.com/repos/huggingface/transformers/issues/9769/events | https://github.com/huggingface/transformers/issues/9769 | 792,814,321 | MDU6SXNzdWU3OTI4MTQzMjE= | 9,769 | saving best model only using modelcheckpoint keras | {
"login": "rohanshingade",
"id": 18469762,
"node_id": "MDQ6VXNlcjE4NDY5NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/18469762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohanshingade",
"html_url": "https://github.com/rohanshingade",
"followers_url": "https://api.github.com/users/rohanshingade/followers",
"following_url": "https://api.github.com/users/rohanshingade/following{/other_user}",
"gists_url": "https://api.github.com/users/rohanshingade/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohanshingade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohanshingade/subscriptions",
"organizations_url": "https://api.github.com/users/rohanshingade/orgs",
"repos_url": "https://api.github.com/users/rohanshingade/repos",
"events_url": "https://api.github.com/users/rohanshingade/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohanshingade/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have you taken a look at the [Keras docs](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint)?",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"@LysandreJik Yes i looked the following doc and implemented it.\r\n\r\n```\r\nconfig = AutoConfig.from_pretrained(\r\n PRETRAINED_MODEL_VERSION, num_labels=len(classes), label2id=label2id, id2label=id2label,\r\n finetuning_task=\"text-classification\")\r\nmodel = TFAutoModelForSequenceClassification.from_pretrained(PRETRAINED_MODEL_VERSION, config=config)\r\n\r\nloss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)\r\nmetric = tf.keras.metrics.Accuracy()\r\noptimizer = tf.keras.optimizers.Adam(learning_rate=2e-6, epsilon=1e-08)\r\nmodel.compile(loss=loss, optimizer=optimizer, metrics=[metric])\r\nprint(model.summary())\r\n\r\nes_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, verbose=1)\r\nchkpt_calllback = tf.keras.callbacks.ModelCheckpoint(MODEL_DIRECTORY+\"{epoch:02d}\",\r\n monitor='val_loss', verbose=2,\r\n save_best_only=True, save_weights_only=False)\r\n\r\nmodel.fit([input_ids_train, attention_mask_train], train_labels,\r\n validation_data=([input_ids_valid, attention_mask_valid], valid_labels),\r\n epochs=1, batch_size=32, callbacks=[es_callback, chkpt_calllback])\r\n```\r\nbut doing so saves the model checkpoint only and I'm not sure how to load these weights again into the **model**. As it only accepts **.h5** format. Facing difficulty in loading these weights\r\n```\r\ncheckpoint\r\n02.index\r\n02.data-00000-of-00001\r\n```"
] | 1,611 | 1,615 | 1,614 | NONE | null | While using model.fit to train a Transformer model, how can I use `tf.keras.callbacks.ModelCheckpoint` with `save_best_only=True` to save only the best model?
```python
model.fit([input_ids_train, attention_mask_train], train_labels,
          validation_data=([input_ids_valid, attention_mask_valid], valid_labels),
          epochs=50, batch_size=32)
```
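What I am roughly after is something like the following (a sketch based on the standard Keras callback, reusing the variables from the snippet above; the file path and monitored metric are placeholders):

```python
import tensorflow as tf

# keep only the checkpoint with the best validation loss seen so far
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="best_model.h5",   # placeholder path
    monitor="val_loss",
    save_best_only=True,
    save_weights_only=True,
)

model.fit([input_ids_train, attention_mask_train], train_labels,
          validation_data=([input_ids_valid, attention_mask_valid], valid_labels),
          epochs=50, batch_size=32,
          callbacks=[checkpoint_cb])
```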
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9769/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9768/comments | https://api.github.com/repos/huggingface/transformers/issues/9768/events | https://github.com/huggingface/transformers/pull/9768 | 792,806,031 | MDExOlB1bGxSZXF1ZXN0NTYwNjA4NjM5 | 9,768 | PegasusForCausalLM, analog to `ProphetNetForCausalLM` | {
"login": "sadakmed",
"id": 18331629,
"node_id": "MDQ6VXNlcjE4MzMxNjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadakmed",
"html_url": "https://github.com/sadakmed",
"followers_url": "https://api.github.com/users/sadakmed/followers",
"following_url": "https://api.github.com/users/sadakmed/following{/other_user}",
"gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions",
"organizations_url": "https://api.github.com/users/sadakmed/orgs",
"repos_url": "https://api.github.com/users/sadakmed/repos",
"events_url": "https://api.github.com/users/sadakmed/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadakmed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @sadakmed, sorry I might have been a bit unclear here. Could you instead of opening new PRs add your added code directly to your PR here: https://github.com/huggingface/transformers/pull/9128? The `#Copy from` statements force us to have all the code in the same PR.\r\n\r\nAlso we need a decoder-only test for this."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR implements PegasusForCausalLM for EncoderDecoder use, analogous to ProphetNetForCausalLM, and it follows the work on BartForCausalLM in #9128.
issue #9066
For organisation purposes, each one is in its own PR. @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9768/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9768",
"html_url": "https://github.com/huggingface/transformers/pull/9768",
"diff_url": "https://github.com/huggingface/transformers/pull/9768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9768.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9767/comments | https://api.github.com/repos/huggingface/transformers/issues/9767/events | https://github.com/huggingface/transformers/issues/9767 | 792,732,014 | MDU6SXNzdWU3OTI3MzIwMTQ= | 9,767 | tensorflow training problem | {
"login": "tang-ed",
"id": 61105590,
"node_id": "MDQ6VXNlcjYxMTA1NTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/61105590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tang-ed",
"html_url": "https://github.com/tang-ed",
"followers_url": "https://api.github.com/users/tang-ed/followers",
"following_url": "https://api.github.com/users/tang-ed/following{/other_user}",
"gists_url": "https://api.github.com/users/tang-ed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tang-ed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tang-ed/subscriptions",
"organizations_url": "https://api.github.com/users/tang-ed/orgs",
"repos_url": "https://api.github.com/users/tang-ed/repos",
"events_url": "https://api.github.com/users/tang-ed/events{/privacy}",
"received_events_url": "https://api.github.com/users/tang-ed/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,612 | 1,612 | NONE | null | ```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from transformers import TFBartModel, BartConfig
from tensorflow import keras

npfile = np.load("train_dataset.npz")
inp_ids = npfile["arr_0"][:1000]
out_ids = npfile["arr_1"][:1000]
out_ids = pad_sequences(out_ids, padding="post", truncating="post", value=1, maxlen=inp_ids.shape[1])

config = BartConfig.from_json_file("config.json")
b_model = TFBartModel(config=config)

def models():
    inp = keras.layers.Input(shape=[inp_ids.shape[1]], dtype="int32")
    outputs = b_model(inp, training=True, use_cache=False)
    logits = keras.layers.Dense(config.vocab_size, activation="softmax")(outputs[0])
    return keras.models.Model(inp, logits)

model = models()
model.summary()

dataset = tf.data.Dataset.from_tensor_slices((tf.constant(inp_ids), tf.constant(out_ids))).shuffle(2000).batch(4)
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=["acc"])
model.fit(dataset, epochs=100)
model.save_weights("tf_model.h5")
```
I don't think there is a problem with the code, but the loss just doesn't go down. I even reduced the training set to 1000 samples, but it still has no effect. Could it be that the Bart network's weights are not being updated at all, and only the custom Dense layer I defined is being trained? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9767/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9766/comments | https://api.github.com/repos/huggingface/transformers/issues/9766/events | https://github.com/huggingface/transformers/issues/9766 | 792,729,259 | MDU6SXNzdWU3OTI3MjkyNTk= | 9,766 | [wip] [doc] Parallelism notes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
},
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
},
{
"id": 2682576896,
"node_id": "MDU6TGFiZWwyNjgyNTc2ODk2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Pipeline%20Parallel",
"name": "Pipeline Parallel",
"color": "1F75CB",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,611 | 1,625 | 1,625 | CONTRIBUTOR | null | Perhaps this will end up in a blog post and/or a new document, for now collecting notes. This is a work in progress. Please give me some time to write the bulk of it and then you'll be welcome to ask questions, add contributions, etc.
------------------------
## Parallelism overview
In modern machine learning, various approaches to parallelism are used to:
1. fit very large models onto limited hardware - e.g. t5-11b is 45GB in just model params
2. significantly speed up training - finish training that would take a year in hours
We will first discuss various 1D parallelism techniques and their pros and cons in depth, and then look at how they can be combined into 2D and 3D parallelism to enable even faster training and to support even bigger models.
While the main concepts most likely apply to any other framework as well, this article focuses on PyTorch-based implementations.
## Data Parallel
Most users with just 2 GPUs already enjoy the increased training speed provided by DataParallel (DP) and DistributedDataParallel (DDP), which are almost trivial to use.
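For reference, this is roughly what the two flavors look like in bare PyTorch (a minimal sketch; the tiny `nn.Linear` stands in for a real model and `LOCAL_RANK` is assumed to be exported by the process launcher):

```python
import os
import torch
import torch.distributed as dist

model = torch.nn.Linear(1024, 1024)  # stand-in for a real model

# DataParallel: a single process, each batch is scattered across all visible GPUs
dp_model = torch.nn.DataParallel(model.cuda())

# DistributedDataParallel: one process per GPU, started by a launcher
# (e.g. `python -m torch.distributed.launch`); each process pins itself to one device
dist.init_process_group(backend="nccl")
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # assumption: the launcher exports LOCAL_RANK
torch.cuda.set_device(local_rank)
ddp_model = torch.nn.parallel.DistributedDataParallel(model.cuda(), device_ids=[local_rank])
```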
## ZeRO Data Parallel
ZeRO-powered data parallelism (ZeRO-DP) is described on the following diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)

It can be difficult to wrap one's head around it, but in reality the concept is quite simple. This is just the usual DataParallel (DP), except, instead of replicating the full model params, gradients and optimizer states, each GPU stores only a slice of it. And then at run-time when the full layer params are needed just for the given layer, all GPUs synchronize to give each other parts that they miss - this is it.
Consider this simple model with 3 layers, where each layer has 3 params:
```
La | Lb | Lc
---|----|---
a0 | b0 | c0
a1 | b1 | c1
a2 | b2 | c2
```
Here Lx is the layer (we have 3 layers) and ax are its weights (3 weights per layer).
If we have 3 GPUs, the Sharded DDP (= Zero DP) splits the model onto 3 GPUs like so:
```
GPU0:
La | Lb | Lc
---|----|---
a0 | b0 | c0
GPU1:
La | Lb | Lc
---|----|---
a1 | b1 | c1
GPU2:
La | Lb | Lc
---|----|---
a2 | b2 | c2
```
In a way this is horizontal slicing, if you imagine the typical DNN diagram. Vertical slicing is where one puts whole layer-groups on different GPUs. But it's just the starting point.
Now each of these GPUs will get the usual mini-batch as it works in DP:
```
x0 => GPU0
x1 => GPU1
x2 => GPU2
```
The inputs are unmodified - they think they are going to be processed by the normal model.
So the inputs first hit the first layer La.
Let's focus just on GPU0: x0 needs the a0, a1, a2 params to do its forward pass, but GPU0 has only a0 - so it gets sent a1 from GPU1 and a2 from GPU2. Now the forward step can happen.
In parallel GPU1 gets mini-batch x1 and it only has a1, but needs a0 and a2 params, so it gets those from GPU0 and GPU2.
Same happens to GPU2 that gets input x2. It gets a0 and a1 from GPU0 and GPU1.
As soon as the calculation is done, the data that is no longer needed gets dropped - it's only used during the calculation.
The same is repeated at every other stage.
And the whole larger thing is repeated for layer Lb, then Lc forward-wise, and then backward Lc -> Lb -> La.
To me this sounds like an efficient group backpacking weight distribution strategy:
1. person A carries the tent
2. person B carries the stove
3. person C carries the entertainment system
Now each night they all share what they have with others and get from others what they don't have, and in the morning they pack up their allocated type of gear and continue on their way. This is Sharded DDP / Zero DP.
Compare this strategy to the simple one where each person has to carry their own tent, stove and entertainment system, which would be far more inefficient. This is DataParallel in pytorch.
And I think pretty much everywhere I read Sharded == Partitioned, so I think those are synonyms in the context of distributed models.
If you pay close attention to the way ZeRO partitions the model's data, it looks very similar to horizontal model parallelism, which will be discussed later. This is because it partitions/shards each layer's data, unlike vertical model parallelism, which is discussed next.
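To make the "gather the missing slices just-in-time" idea above concrete, here is a minimal sketch - not the DeepSpeed/Fairscale code, the helper names are made up purely for illustration - of how a sharded linear layer could reassemble its full weight right before the forward pass and drop it right after:
```python
import torch
import torch.distributed as dist

def gather_full_weight(local_shard: torch.Tensor) -> torch.Tensor:
    # every rank holds 1/world_size of the weight; all_gather reassembles the full tensor
    shards = [torch.empty_like(local_shard) for _ in range(dist.get_world_size())]
    dist.all_gather(shards, local_shard)
    return torch.cat(shards, dim=0)

def sharded_linear_forward(x, local_weight_shard, bias=None):
    full_weight = gather_full_weight(local_weight_shard)   # "get the parts you miss"
    out = torch.nn.functional.linear(x, full_weight, bias)
    del full_weight                                         # only needed during the calculation
    return out
```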
Implementations:
- [DeepSpeed](https://www.deepspeed.ai/features/#the-zero-redundancy-optimizer) ZeRO-DP stages 1+2+3
- [Fairscale](https://github.com/facebookresearch/fairscale/#optimizer-state-sharding-zero) ZeRO-DP stages 1+2+3
- [`transformers` integration](https://huggingface.co/transformers/master/main_classes/trainer.html#trainer-integrations)
## Naive Model Parallel (Vertical) and Pipeline Parallel
Naive Model Parallel (MP) is where one spreads groups of model layers across multiple GPUs. The mechanism is relatively simple - switch the desired layers `.to()` the desired devices, and whenever data goes in and out of those layers, switch the data to the same device as the layer and leave the rest unmodified.
We refer to it as Vertical MP, because if you remember how most models are drawn, we slice the layers vertically. For example, if the following diagram shows an 8-layer model:
```
=================== ===================
| 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 |
=================== ===================
gpu0 gpu1
```
we just sliced it in 2 vertically, placing layers 0-3 onto gpu 0 and 4-7 to gpu 1.
Now while data travels from layer 0 to 1, 1 to 2 and 2 to 3 this is just the normal model. But when data needs to pass from layer 3 to layer 4 it needs to travel from gpu0 to gpu1 which introduces a communication overhead. If the participating GPUs are on the same node (e.g. same PC) this copying is pretty fast, but if the other gpus are on different nodes (e.g. another PC) the communication overhead could be significantly larger.
Then layers 4 to 5 to 6 to 7 are as a normal model would have and when the 7th layer completes we often need to send the data back to layer 0 where the labels are (or alternatively send the labels to the last layer).
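As a minimal sketch of this naive approach (toy 8-layer model, device names assumed to be `cuda:0`/`cuda:1`):
```python
import torch
import torch.nn as nn

class NaiveMP(nn.Module):
    def __init__(self):
        super().__init__()
        # layers 0-3 on gpu0, layers 4-7 on gpu1
        self.part0 = nn.Sequential(*[nn.Linear(512, 512) for _ in range(4)]).to("cuda:0")
        self.part1 = nn.Sequential(*[nn.Linear(512, 512) for _ in range(4)]).to("cuda:1")

    def forward(self, x):
        x = self.part0(x.to("cuda:0"))
        x = self.part1(x.to("cuda:1"))   # this is the gpu0 -> gpu1 copy with its overhead
        return x
```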
Problems:
- the main deficiency, and why this one is called "naive", is that all but one GPU are idle at any given moment. So if 4 gpus are used, it's almost identical to quadrupling the amount of memory of a single GPU, and ignoring the rest of the hardware. Plus there is the overhead of copying the data between devices. So 4x 6GB cards will be able to accommodate the same size as 1x 24GB card using naive MP, except the latter will complete the training faster, since it doesn't have the data copying overhead. But, say, if you have 40GB cards and need to fit a 45GB model, you can with 4x 40GB cards (barely, because of the scheduler and optimizer data)
- shared embeddings may need to get copied back and forth between GPUs.
Pipeline Parallel (PP) is almost identical to a naive MP, but it solves the idling problem to a degree, by chunking the incoming batch into micro-batches and artificially creating a pipeline, which allows different GPUs to concurrently participate in the computation process.
The following illustration from the [GPipe paper](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html) shows first the naive MP, then PP:

It's easy to see how PP has less dead zones where GPUs are idle.
PP introduces a new hyper-parameter to tune, `chunks`, which defines how many chunks the incoming batch is split into, i.e. how many micro-batches flow through the pipeline. E.g. in the 2nd diagram of the image above you can see that `chunks=4`.
With `chunks=1` you end up with the naive MP. With a very large value you will find that the overhead of slicing the tensors slows everything down. So one has to experiment to find the best value. It's also important to remember that to take advantage of the GPU, you need largish batches, ideally in multiples of 8.
So if the normal batch size is `bs=64` and `chunks=8`, then each stage will receive a micro-batch of `8`. However, if you're tight on memory in the first place, you may end up with the normal `bs=8`, and then if you choose `chunks=4`, you will end up with `4` pipeline segments with a micro-batch of just `2` - which would be very inefficient. Also `bs=8` and `chunks=3` won't go too well together either, as you will end up with uneven micro-batches of `[3,3,2]`.
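A quick way to sanity-check the `bs`/`chunks` combinations discussed above (illustrative helper only):
```python
def micro_batch_sizes(bs, chunks):
    base, rem = divmod(bs, chunks)
    return [base + 1 if i < rem else base for i in range(chunks)]

print(micro_batch_sizes(64, 8))  # [8, 8, 8, 8, 8, 8, 8, 8]
print(micro_batch_sizes(8, 4))   # [2, 2, 2, 2] - tiny micro-batches, inefficient
print(micro_batch_sizes(8, 3))   # [3, 3, 2]    - uneven, also not great
```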
While the diagram shows that there is a bubble of "dead" time that can't be parallelized because the last `forward` stage has to wait for `backward` to complete the pipeline, the purpose of finding the best value for `chunks` is to enable a high concurrent GPU utilization across all participating GPUs.
Problems:
- have to modify the model quite heavily, because Pipeline requires one to rewrite the normal flow of modules into a `nn.Sequential` sequence of the same, which may require changes to the design of the model.
- currently the Pipeline API is very restricted. If you had a bunch of python variables being passed in the very first stage of the Pipeline, you will have to find a way around it. Currently, the pipeline interface requires either a single Tensor or a tuple of Tensors as the only input and output. These tensors must have batch size as the very first dimension, since pipeline is going to chunk the normal batch into micro-batches. Possible improvements are being discussed here https://github.com/pytorch/pytorch/pull/50693
- have to arrange each layer so that the output of one model becomes an input to the other model
Implementations:
- pytorch-1.8-to-be - no docs yet, but see [this](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py) (a rough usage sketch follows after this list)
- [fairscale](https://fairscale.readthedocs.io/en/latest/tutorials/pipe.html)
- [deepspeed](https://www.deepspeed.ai/tutorials/pipeline/)
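For the pytorch implementation referenced in the list above, a rough usage sketch looks something like the following. This assumes 2 GPUs in a single process and is purely illustrative - the API is still in flux, so see the linked benchmark script for the authoritative version:
```python
import os
import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe

# Pipe currently requires the RPC framework to be initialized, even for a single process
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

stage0 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:0")
stage1 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:1")

model = Pipe(nn.Sequential(stage0, stage1), chunks=4)   # chunks = number of micro-batches
out = model(torch.randn(64, 512, device="cuda:0"))      # returns an RRef; .local_value() gets the tensor
```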
Other approaches:
SageMaker introduces the concept of an [Interleaved Pipeline](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html)

Here the bubble (idle time) is further minimized by prioritizing backward passes.
According to [the same document](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html), it might be able to automate the conversion to pipeline.
The only problem is that this is currently only available at AWS, so you can't run it on your own hardware.
## Model Parallel (Horizontal)
Megatron-LM
## 2D Parallelism
The following diagram from the DeepSpeed [pipeline tutorial](https://www.deepspeed.ai/tutorials/pipeline/) demonstrates how one combines DP with PP.

Here it's important to see how DP rank 0 doesn't see gpu2 and DP rank 1 doesn't see gpu3. To DP there are just gpus 0 and 1, to which it feeds data as if there were just 2 gpus. gpu 0 "secretly" offloads some of its load to gpu 2 using PP, and gpu 1 does the same by enlisting gpu 3 to its aid.
XXX: will update this section once I get it working
## 3D Parallelism
## FlexFlow
[FlexFlow](https://github.com/flexflow/FlexFlow) is also solving the parallelization problem in a slightly different approach.
Paper: ["Beyond Data and Model Parallelism for Deep Neural Networks" by Zhihao Jia, Matei Zaharia, Alex Aiken](https://arxiv.org/abs/1807.05358)
It performs a sort of 4D Parallelism over Sample-Operator-Attribute-Parameter.
1. Sample = Data Parallelism
2. Operator = part vertical Layer Parallelism, but it can split the layer too - more refined level
3. Attribute = horizontal Model Parallelism (Megatron-LM style)
4. Parameter = Sharded model params
and they are working on Pipeline Parallelism. I guess ZeRO-DP is Sample+Parameter in this context.

The significance of this framework is that it takes resources like (1) GPU/TPU/CPU vs. (2) RAM/DRAM vs. (3) fast-intra-connect/slow-inter-connect and automatically optimizes all of these, algorithmically deciding which parallelisation to use where.
One very important aspect is that FlexFlow is designed for optimizing DNN parallelizations for models with static and fixed workloads, since models with dynamic behavior may prefer different parallelization strategies across iterations.
So the promise is very attractive - it runs say a 30min simulation on the cluster of choice and it comes up with the best strategy to utilise this specific environment. If you add/remove/replace any parts it'll run and re-optimize the plan for that. And then you can train. A different setup will have its own custom optimization.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9766/reactions",
"total_count": 11,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9766/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9765/comments | https://api.github.com/repos/huggingface/transformers/issues/9765/events | https://github.com/huggingface/transformers/pull/9765 | 792,710,668 | MDExOlB1bGxSZXF1ZXN0NTYwNTM3Mjk2 | 9,765 | [wip] [pipeline parallel] t5 - experiment | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2682576896,
"node_id": "MDU6TGFiZWwyNjgyNTc2ODk2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Pipeline%20Parallel",
"name": "Pipeline Parallel",
"color": "1F75CB",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"Thanks @stas00 , I am getting what looks like a torch error when I run this (I'm not sure if the \"Failed to look up the IP address for the hostname\" error is related -- I'm not able to find much on this except for an issue from a few days ago that mentions this: https://github.com/pytorch/pytorch/issues/50700 ):\r\n\r\n```\r\ngit clone https://www.github.com/huggingface/transformers.git\r\ncd transformers/\r\ngh pr checkout 9765\r\n\r\nconda create -y -n py38-pt18 python=3.8\r\nconda activate py38-pt18\r\npip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U\r\npip install -e .[dev]\r\npip install -r examples/_tests_requirements.txt\r\n\r\ncd examples/seq2seq/\r\nln -s ~/github/transformers/examples/seq2seq/wmt_en_ro wmt_en_ro\r\n\r\nexport BS=160 MODEL=t5-base; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 50 --n_train 1000 --n_val 1000 --pipeline \"chunks=4 device_map=0:0-3,1:3-12\" --dataloader_num_workers 4 \r\n```\r\n\r\n\r\nOutput:\r\n```\r\nexport BS=160 MODEL=t5-base; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 50 --n_train 1000 --n_val 1000 --pipeline \"chunks=4 device_map=0:0-3,1:3-12\" --dataloader_num_workers 4 \r\n\r\n01/25/2021 11:39:51 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4, distributed training: False, 16-bits training: False\r\n01/25/2021 11:39:51 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='output_dir', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=160, per_device_eval_batch_size=160, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-06, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_steps=50, logging_dir='runs/Jan25_11-39-51_seahorse', logging_first_step=True, logging_steps=1000, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, pipeline='chunks=4 device_map=0:0-3,1:3-12', dataloader_drop_last=False, eval_steps=25000, dataloader_num_workers=4, 
past_index=-1, run_name='output_dir', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.1, adafactor=False, group_by_length=False, report_to=['tensorboard'], sortish_sampler=True, predict_with_generate=False)\r\n[INFO|configuration_utils.py:445] 2021-01-25 11:39:51,546 >> loading configuration file https://huggingface.co/t5-base/resolve/main/config.json from cache at /home/pajansen/.cache/huggingface/transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637\r\n[INFO|configuration_utils.py:481] 2021-01-25 11:39:51,547 >> Model config T5Config {\r\n \"architectures\": [\r\n \"T5WithLMHeadModel\"\r\n ],\r\n \"d_ff\": 3072,\r\n \"d_kv\": 64,\r\n \"d_model\": 768,\r\n \"decoder_start_token_id\": 0,\r\n \"dropout_rate\": 0.1,\r\n \"eos_token_id\": 1,\r\n \"feed_forward_proj\": \"relu\",\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"n_positions\": 512,\r\n \"num_decoder_layers\": 12,\r\n \"num_heads\": 12,\r\n \"num_layers\": 12,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_num_buckets\": 32,\r\n \"task_specific_params\": {\r\n \"summarization\": {\r\n \"early_stopping\": true,\r\n \"length_penalty\": 2.0,\r\n \"max_length\": 200,\r\n \"min_length\": 30,\r\n \"no_repeat_ngram_size\": 3,\r\n \"num_beams\": 4,\r\n \"prefix\": \"summarize: \"\r\n },\r\n \"translation_en_to_de\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to German: \"\r\n },\r\n \"translation_en_to_fr\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to French: \"\r\n },\r\n \"translation_en_to_ro\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to Romanian: \"\r\n }\r\n },\r\n \"transformers_version\": \"4.3.0.dev0\",\r\n \"use_cache\": true,\r\n \"vocab_size\": 32128\r\n}\r\n\r\n[INFO|configuration_utils.py:445] 2021-01-25 11:39:51,740 >> loading configuration file https://huggingface.co/t5-base/resolve/main/config.json from cache at /home/pajansen/.cache/huggingface/transformers/91e9fe874e06c44883b535d6c950b8b89d6eaa3298d8e7fb3b2c78039e9f8b7b.66b9637a52aa11e9285cdd6e668cc0df14b3bcf0b6674cf3ba5353c542649637\r\n[INFO|configuration_utils.py:481] 2021-01-25 11:39:51,741 >> Model config T5Config {\r\n \"architectures\": [\r\n \"T5WithLMHeadModel\"\r\n ],\r\n \"d_ff\": 3072,\r\n \"d_kv\": 64,\r\n \"d_model\": 768,\r\n \"decoder_start_token_id\": 0,\r\n \"dropout_rate\": 0.1,\r\n \"eos_token_id\": 1,\r\n \"feed_forward_proj\": \"relu\",\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"n_positions\": 512,\r\n \"num_decoder_layers\": 12,\r\n \"num_heads\": 12,\r\n \"num_layers\": 12,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_num_buckets\": 32,\r\n \"task_specific_params\": {\r\n \"summarization\": {\r\n \"early_stopping\": true,\r\n \"length_penalty\": 2.0,\r\n \"max_length\": 200,\r\n \"min_length\": 30,\r\n \"no_repeat_ngram_size\": 3,\r\n \"num_beams\": 4,\r\n \"prefix\": \"summarize: \"\r\n },\r\n \"translation_en_to_de\": {\r\n 
\"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to German: \"\r\n },\r\n \"translation_en_to_fr\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to French: \"\r\n },\r\n \"translation_en_to_ro\": {\r\n \"early_stopping\": true,\r\n \"max_length\": 300,\r\n \"num_beams\": 4,\r\n \"prefix\": \"translate English to Romanian: \"\r\n }\r\n },\r\n \"transformers_version\": \"4.3.0.dev0\",\r\n \"use_cache\": true,\r\n \"vocab_size\": 32128\r\n}\r\n\r\n[INFO|tokenization_utils_base.py:1766] 2021-01-25 11:39:52,522 >> loading file https://huggingface.co/t5-base/resolve/main/spiece.model from cache at /home/pajansen/.cache/huggingface/transformers/684a47ca6257e4ca71f0037771464c5b323e945fbc58697d2fad8a7dd1a2f8ba.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d\r\n[INFO|tokenization_utils_base.py:1766] 2021-01-25 11:39:52,523 >> loading file https://huggingface.co/t5-base/resolve/main/tokenizer.json from cache at /home/pajansen/.cache/huggingface/transformers/90de37880b5ff5ac7ab70ff0bd369f207e9b74133fa153c163d14c5bb0116207.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529\r\n[INFO|modeling_utils.py:1027] 2021-01-25 11:39:52,809 >> loading weights file https://huggingface.co/t5-base/resolve/main/pytorch_model.bin from cache at /home/pajansen/.cache/huggingface/transformers/ab4e948915b067f5cb6e5105f6f85044fd717b133f43240db67899a8fc7b29a2.26934c75adf19ceac3c268b721ba353356b7609c45f5627550326f275a2163b4\r\n[INFO|modeling_utils.py:1143] 2021-01-25 11:39:58,232 >> All model checkpoint weights were used when initializing T5ForConditionalGeneration.\r\n\r\n[INFO|modeling_utils.py:1151] 2021-01-25 11:39:58,233 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-base.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.\r\n01/25/2021 11:39:58 - INFO - utils - setting model.config to task specific params for translation_en_to_ro:\r\n {'early_stopping': True, 'max_length': 300, 'num_beams': 4, 'prefix': 'translate English to Romanian: '}\r\n01/25/2021 11:39:58 - INFO - utils - note: command line args may override some of these\r\n[INFO|modeling_t5.py:1536] 2021-01-25 11:39:58,479 >> enabling pipeline with chunks=4\r\n[INFO|modeling_t5.py:1545] 2021-01-25 11:39:58,479 >> using user-provided device_map\r\n[INFO|modeling_t5.py:1563] 2021-01-25 11:39:58,479 >> using pipeline partitioning: {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}\r\n[W ProcessGroupGloo.cpp:532] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. 
(function operator())\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n[W tensorpipe_agent.cpp:63] Failed to look up the IP address for the hostname (EADDRNOTAVAIL: address not available), defaulting to 127.0.0.1\r\n01/25/2021 11:40:01 - INFO - __main__ - *** Train ***\r\n[INFO|trainer.py:807] 2021-01-25 11:40:01,659 >> ***** Running training *****\r\n[INFO|trainer.py:808] 2021-01-25 11:40:01,659 >> Num examples = 1000\r\n[INFO|trainer.py:809] 2021-01-25 11:40:01,659 >> Num Epochs = 1\r\n[INFO|trainer.py:810] 2021-01-25 11:40:01,659 >> Instantaneous batch size per device = 160\r\n[INFO|trainer.py:811] 2021-01-25 11:40:01,659 >> Total train batch size (w. 
parallel, distributed & accumulation) = 160\r\n[INFO|trainer.py:812] 2021-01-25 11:40:01,659 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:813] 2021-01-25 11:40:01,659 >> Total optimization steps = 7\r\n2021-01-25 11:40:01.766436: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\r\n 0%| | 0/7 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"./finetune_trainer.py\", line 373, in <module>\r\n main()\r\n File \"./finetune_trainer.py\", line 303, in main\r\n train_result = trainer.train(\r\n File \"/home/pajansen/stass-test1/transformers/src/transformers/trainer.py\", line 904, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/home/pajansen/stass-test1/transformers/src/transformers/trainer.py\", line 1271, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/home/pajansen/stass-test1/transformers/src/transformers/trainer.py\", line 1301, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/pajansen/stass-test1/transformers/src/transformers/models/t5/modeling_t5.py\", line 1704, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/pajansen/stass-test1/transformers/src/transformers/models/t5/modeling_t5.py\", line 1088, in forward\r\n outputs = block_pipe(inputs)\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipe.py\", line 362, in forward\r\n self.pipeline.run(batches)\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipeline.py\", line 117, in run\r\n self.compute(batches, schedule, skip_trackers)\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipeline.py\", line 257, in compute\r\n raise exc_info[0].with_traceback(exc_info[1], exc_info[2])\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/worker.py\", line 79, in worker\r\n batch = task.compute()\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/worker.py\", line 60, in compute\r\n return self._compute()\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/pipeline.py\", line 222, in compute\r\n return batch.call(partition)\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/distributed/pipeline/sync/microbatch.py\", line 70, in call\r\n return Batch(function(self.value))\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/container.py\", line 119, in forward\r\n input = module(input)\r\n File 
\"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/pajansen/stass-test1/transformers/src/transformers/models/t5/modeling_t5.py\", line 838, in forward\r\n layer_outputs = self.layer_module(hidden_states,\r\n File \"/home/pajansen/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'head_mask'\r\n```\r\n\r\n\r\n",
"It looks more like a warning as it recovers with a fallback, make sure you have:\r\n\r\n```\r\n$ cat /etc/hosts\r\n127.0.0.1 localhost\r\n```\r\n\r\nIt looks like I forgot to commit the last change. My apologies. Could you please update and try again?",
"Thanks -- appears to be working -- on t5-3B it spreads it evenly across the 4 A100s (13.0-13.3GB each with a batch size of 1). For t5-11B there's an out of memory error -- I suppose (naively) if 11b is ~3.7x larger than 3B then it would require ~49gb per card without some form of offloading?\r\n",
"Thank you for confirming that you were able to use it with t5-3b on your 4 gpus. \r\n\r\nWere you able to get a decent gpu utilization across the board? Or were they all under 25%?\r\n\r\n--------------\r\n\r\nPlease make sure you read my notes on the balancing in OP and experiment with the device map so that all gpus get a balanced GPU memory usage. gpu0 is already busy with many things, so I'd try a spread of 2/4/4/4 parts or perhaps 1/3/3/3 in your definition of:\r\n\r\n`--pipeline \"chunks=4 device_map=0:0-3,1:3-12\"`\r\n\r\nin this example we have 1/3 parts balance between gpu 0 and 1. i.e. 3 times more layers for gpu 1.\r\n\r\nOf course, it needs to be adjusted to 4 gpus and I don't remember how many encoder blocks t5-11b has, but as I mentioned if you look at the logs you will find a ready map there, just re-adjust it to balance things better. Please let me know if I communicated clearly what I'm trying to say - we want all 4 gpus to have about the same memory usage - then we maximize the chance to fit t5-11b on those 4 gpus.\r\n\r\n--------------\r\n\r\nNext we need to try to bolt DeepSpeed on it. So we will try to use 2 gpus for pipeline and 2 gpus for ZeRO-DP and perhaps some ZeRO-Offload too. I should get access to 4 gpus soon and I will start working on figuring that step out. I will post back once I have something practical to share.",
"Thanks -- the autobalancing (just \"chunks=4\") actually seemed to give nearly entirely even results on -3B (the ~13.0-3GB each), so I tried that with 11B instead of manually supplying the device map (since it seemed a bit uneven when I tested on -base) -- but I'll tinker on 11B and report back. ",
"> the autobalancing \r\n\r\nFYI, currently the automatic device map just tries to split `n_layers/n_gpus` per gpu, and not taking into an account gpu0's extra load. Once everything else is working we will come up with much better heuristics based on actual gpu capacity and each layer's real memory demands.\r\n\r\n",
"What's interesting is that I'm not generally observing GPU0 to have a higher load. Here's an example with unifiedqa-t5-3b (essentially just a further pre-trained t5-3b, not relevant here), chunks=4, autobalancing (with a different visualization tool). They all tend to show about the same RAM usage over time. The graph also shows the utilization (also generally under 30% most of the time): \r\n\r\n\r\n\r\nBTW -- I tinkered with different manual device_map settings for t5-11b, but it always quickly gave out of memory errors.",
"Oh, what tool is that? I want it too!\r\n\r\nIt looks like different GPUs behave differently, it will take some experimentation to make sense of it all.\r\n\r\nBut clearly you're also not seeing much benefit from the pipeline over the native MP. Same as I. Either my workaround to make it work slow everything down or there is another problem elsewhere. As I mentioned I'd like to redesign my implementation in hope \r\nto reduce the unnecessary logic and data-copying.\r\n\r\n> BTW -- I tinkered with different manual device_map settings for t5-11b, but it always quickly gave out of memory errors.\r\n\r\nThank you for the experimentation. I'm still waiting to get access to a 4-gpu setup and when it happens will immediately start experimenting with bolting DeepSpeed on it and then will get back to you.",
"Thanks -- this handy cool visualization tool is nvtop -- I just found it to plot the relative changes rather than stare at nvidia-smi and hope to keep it all in my brain. It's available with apt ( sudo apt-get install nvtop ). \r\n\r\nHappy to offer my rig for some testing if you need a 4 GPU setup sooner. :) ",
"Oh, yes, I had it and forgot about its usefulness. Thank you!\r\n\r\nI typically use \r\n```\r\nalias wn='watch -n 1 nvidia-smi'\r\n```\r\nbut this is a way better.\r\n\r\n> Happy to offer my rig for some testing if you need a 4 GPU setup sooner. :)\r\n\r\nIf don't find access by tomorrow I will gladly accept your generous offer, @PeterAJansen. Thank you!",
"hmm, how do you get a split screen per card in nvtop? for some reason my version reports both cards as one card. I don't see any command line options to configure that.",
"hmmm, it actually worked out-of-the-box for me (but looks very different depending on the dimensions of the terminal). Does it show only 1 GPU (with memory for both?), or two separate GPUs? ",
"It reports 2 gpus but shows the report only for gpu 0. could be a bug. I just saw that for you it showed all 4 gpus.\r\n\r\n",
"What happens if you make the window really tall/wide? It changes the display for me if I resize the terminal -- if I make it really tiny, it looks something like yours: \r\n\r\n\r\n",
"Sorry, I forgot to mentioned I tried this already to no avail. I give it a huge console.\r\n\r\nI even tried various terminals - same.\r\n\r\nI think it may have to do with my 2nd card being rtx-3090 - and it doesn't work with cuda < 11.1 - most likely nvtop was built against cuda-10, so while it replicates the nvidia-smi stats, it can't access nvml for that card and thus doesn't show the graph.\r\n\r\nYup, installed nvtop on a machine with 2 normal gpus and it shows them both in the same-size terminal. So it just can't handle rtx-30* unless it's rebuilt from source against cuda-11.1+\r\n\r\nBut even then when it works it gives no way to separate the 2 gpu other than colors and 4 lines often around the same magnitude for different things are impossible to make sense of. This is an odd design. ",
":-/ That's unfortunate (though I suppose the cost of using bleeding-edge hardware). The A100s are supported with CUDA 11.0, so they must just squeak in on the current version available with apt. \r\n\r\n(And, the usability is a little unusual, but with ASCII graphics there are strong limits... :) )",
"pytorch w/ cuda-11.2 nightly should be available any day now. cuda-11.2 has been out for a month now.\r\n\r\n>(And, the usability is a little unusual, but with ASCII graphics there are strong limits... :) )\r\n\r\nThis is a good point. But at least one could use a double line or asterisks or something to differentiate 4 different things. Perhaps some people can track 4 similar colors and remember which is which. Not me. I guess the source code is there, if I really need to I could probably hack it to do be more user-friendly.",
"Update: this overload of the term MP to mean totally different things is a big problem.\r\n\r\nI was sure I could easily combine non-DeepSpeed pipeline with Deepspeed after reading \r\nhttps://www.deepspeed.ai/features/#support-for-custom-model-parallelism\r\nExcept, I have just now realized that it's not PP but the super-confusing-mean-different-things-in-different-contexts abbreviation MP, which in this particular context means horizontal MP and not vertical MP/PP. And there are no instructions on how to integrate non-DeepSpeed PP. So I have been trying to fix the wrong thing. https://github.com/microsoft/DeepSpeed/issues/710\r\n\r\nSo this particular branch takes us nowhere closer to integration of PP with DeepSpeed.\r\n\r\nBack to the drawing board.",
"too long. closing.",
"We will test this branch soon.",
"There are probably some things that can be salvaged from this PR, but the main utility of it is to see the difficulties I run into. And of course, this is not a good solution not only because the code is absolutely nuts, but because it's very inefficient.\r\n\r\nAs I mentioned in the other thread, pytorch now has a better API, so some of the encoding/decoding of non-tensor inputs/outputs I did won't be needed anymore as it now supports non-tensor inputs/output."
] | 1,611 | 1,626 | 1,622 | CONTRIBUTOR | null | This PR is not ready for reviews.
I'm putting it up primarily for those who want an early preview of a possible Pipeline solution. @PeterAJansen, you wanted to see if you could get it working with 4x 40GB rig and t5-11b. Please give it a try.
-------------------
## Intention
We want to replace the naive model parallel (MP) implementation with a more efficient pipeline parallel (PP) implementation, which takes advantage of all participating gpus, instead of having one gpu run while the rest idle, which is the case with the naive MP.
To give you a visual from the [GPipe paper](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html),

You will find a new argument `chunks`, which is how many chunks (micro-batches) the incoming batch is split into; in the 2nd diagram of the image above you can see that `chunks=4`.
So with `chunks=1` you get the naive mp, but it'd be even slower than the naive MP because of the RPC overhead.
## Overview
Porting t5 to Pipeline Parallelism proved to be a study in hacking, due to the very restrictive original pipeline interface which only allows tensors or tuples of tensors as `input`/`output` arguments in `forward`, and in `transformers` we have a ton of very complex variables to pass to `forward` and return from it.
We are trying to change the Pipeline design to be much more user-friendly: https://github.com/pytorch/pytorch/pull/50693
This implementation tries to take advantage of 2 natural stacks, so I implemented it as 2 pipes:
```
T5ForConditionalGeneration->
T5Stack(encoder)->Pipe(Sequential([T5StackPipeSegment * 6])
T5Stack(decoder)->Pipe(Sequential([T5StackPipeSegment * 6])
```
6 for `t5-small`.
Please don't even bother looking at the code, it is one big hack which took many hours to come up with to make the pipeline work, so clearly it is not something very portable or readable.
## Setup
**important: you need pytorch-nightly to be able to use this.**
```
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
```
Just create another conda env so as not to mess up your normal env; pt-nightly is a solid piece of software, I use it all the time. Here is a quick copy-n-paste of what you will need - just edit the location of the transformers checkout dir.
```
conda create -y -n py38-pt18 python=3.8
conda activate py38-pt18
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
git clone https://github.com/huggingface/transformers
cd transformers
gh pr checkout 9765 # or use whatever other method to checkout this PR
pip install -e .[dev]
pip install -r examples/_tests_requirements.txt
```
Down the road I will look at using also fairscale/deepspeed but for now pytorch is just more accessible and hopefully will be more flexible soon.
## Deployment: script
You can deploy PP directly via your own trainer/script, e.g. this is what I have been using while developing it:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
import transformers.models.t5.modeling_t5
import transformers.utils.logging
transformers.models.t5.modeling_t5.logger.setLevel(transformers.logging.INFO)
mname = "t5-large"
tokenizer = T5Tokenizer.from_pretrained(mname)
model = T5ForConditionalGeneration.from_pretrained(mname, return_dict=True)
model.to("cuda:0")
model.pipeline_enable(chunks=2, device_map=None)
texts = ["This is good", "This is bad", "This is really bad", "This is fantastic",]
texts = ["translate English to French: "+x for x in texts]
batch = tokenizer.prepare_seq2seq_batch(texts, return_tensors="pt")
batch.to("cuda:0")
outputs = model.generate(**batch)
for x in outputs:
decoded = tokenizer.decode(x, skip_special_tokens=True)
print(decoded)
model.pipeline_finalize()
```
## Deployment: HF Trainer
But you can also use HF trainer. I tweaked the trainer to activate PP with:
```
--pipeline "chunks=4"
```
This will let the program do the partitioning for you. But you can control the partitioning manually by passing:
```
--pipeline "chunks=4 device_map=0:0-3,1:3-12"
```
Here we basically pass the equivalent of a dict `{0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}` which btw, you can pass in your script as:
```
device_map = {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}
model.pipeline_enable(chunks=30, device_map=device_map)
```
The syntax is what you'd pass to `range`, so `device_map=0:0-3,1:3-12` is the same as:
```
device_map = {0: list(range(0, 3)), 1: list(range(3, 12))}
```
the keys are the gpu ids.
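To illustrate the mapping between the string form and the dict form, here is a hypothetical parser written just for this description (not necessarily what the PR's code does):
```python
def parse_device_map(spec: str) -> dict:
    device_map = {}
    for entry in spec.split(","):
        gpu, layers = entry.split(":")
        start, end = layers.split("-")
        device_map[int(gpu)] = list(range(int(start), int(end)))
    return device_map

print(parse_device_map("0:0-3,1:3-12"))
# -> {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}
```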
The number of layers is at the moment just the depth of the encoder stack, so 12 for t5-base, 6 for t5-small, etc.
Later we should have a different way as well, where we define the desired balance, rather than the specific layers.
Since each `t5` model has a different number of blocks, the easiest way is to first run without the device map and then check the logger output, which will show you which device map it's using. Then I recommend re-balancing it so that gpu0 has fewer layers than the remaining gpus.
## Benchmarks
example for 2x 24GB cards
```
export BS=160 MODEL=t5-base; rm -r output_dir; PYTHONPATH=src USE_TF=0 examples/seq2seq/run_seq2seq.py \
--model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps \
--label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 \
--max_target_length 128 --val_max_target_length 128 --num_train_epochs 1 --overwrite_output_dir \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler \
--warmup_steps 50 \
--task translation_en_to_ro --dataset_name wmt16 --dataset_config ro-en --source_prefix "translate English to Romanian: " \
--max_train_samples 10 --max_val_samples 10 \
--pipeline "chunks=4 device_map=0:0-3,1:3-12" --dataloader_num_workers 4
```
Performance-wise:
- prediction speed is terrible - just as bad as the naive MP we have in t5 and others
- training/eval w/o prediction is slightly slower (20-40%) than the baseline with just one process - this is primarily due to data copying and the currently quite inefficient implementation forced by the Pipeline API restrictions.
- the key is to find the value for `chunks` so that there is enough in the pipe that the gpus don't idle, but not so big that performance goes down. I wasn't able to get above 50%/gpu utilization, so it's not much different from the naive implementation - don't know yet why - probably data copying takes most of the overhead.
- I think on 4 gpus it'd be good to try an experiment and put the encoder stack on gpu 0+1 and decoder on gpu 2+3, instead of copying data between 4 devices as it's happening now - this will require a more complex device map, that I designed for the Bart MP, which has separate encoder and decoder sub-maps. But then it'd affect the pipeline as half the gpus will surely idle while encoder is running - so not great either. We will have to experiment with real data once I have access to a rig with 4 gpus and see. That's why I don't think this is urgent to work on. But such change would be easy to do. We will have to do it anyway for other models whose stacks aren't necessarily symmetrical.
Here are some stats on 2x 24GB Titan RTX:
Baseline: (1gpu)
```
export BS=64 MODEL=t5-base; rm -r output_dir; CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src USE_TF=0 \
examples/seq2seq/run_seq2seq.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --do_eval \
--do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 \
--max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler \
--val_max_target_length 128 --warmup_steps 50 --max_train_samples 1000 --max_val_samples 1000 \
--task translation_en_to_ro --dataset_name wmt16 --dataset_config ro-en --source_prefix "translate English to Romanian: "
train_runtime = 6.9149
eval_loss = 3.5492
eval_runtime = 3.2802
```
XXX: need to re-test with rebased code-base
Now with pipeline:
- can run much higher batch-size
- note that I'm using a user-provided device map that has more layers on gpu 1, since gpu 0 needs much more RAM
```
# device_map=0:0-3,1:3-12 - so splitting 1:4
# {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9, 10, 11]}
```
```
export BS=160 MODEL=t5-base; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=src USE_TF=0 \
examples/seq2seq/run_seq2seq.py --model_name_or_path $MODEL --output_dir output_dir --adam_eps 1e-06 --do_eval \
--do_train --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 \
--max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS --eval_steps 25000 --sortish_sampler \
--val_max_target_length 128 --warmup_steps 50 --max_train_samples 1000 --max_val_samples 1000 \
--task translation_en_to_ro --dataset_name wmt16 --dataset_config ro-en --source_prefix "translate English to Romanian: " \
--pipeline "chunks=4 device_map=0:0-3,1:3-12" --dataloader_num_workers 4
```
XXX: need to re-test with rebased code-base
## Future
I'm also trying to instrument this feature with reporting that will help users to finetune chunks/device_map
This is the `model.pipeline_finalize()` call. Things I'm thinking that would be useful:
* [ ] gpu utilization stats (average/peak) - probably need to fire off a thread that samples pynvml gpu utilization, then calculates average + peak (a rough sketch follows after this list)
* [ ] peak memory usage per device report that I added seems to be too low - I think it has to do with pipeline threads - need to sort it out
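For the first item, a rough sketch of the sampling-thread idea (assumes `pynvml` is installed; purely illustrative, not part of this PR):
```python
import threading
import time
import pynvml

def sample_gpu_util(device_index, samples, stop_event, interval=0.5):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    while not stop_event.is_set():
        samples.append(pynvml.nvmlDeviceGetUtilizationRates(handle).gpu)
        time.sleep(interval)
    pynvml.nvmlShutdown()

samples, stop = [], threading.Event()
t = threading.Thread(target=sample_gpu_util, args=(0, samples, stop), daemon=True)
t.start()
# ... training happens here ...
stop.set(); t.join()
if samples:
    print(f"gpu0 utilization: avg={sum(samples)/len(samples):.1f}% peak={max(samples)}%")
```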
Any other ideas/requests/needs?
@PeterAJansen, please let me know if you managed to run this on your 4x gpu setup.
Next, I think I'm going to scratch the current implementation and try a new one afresh.
Also this PR should be good enough to try to figure out how to use with DeepSpeed, once I get access to 4 gpus (need at least 4 gpus to do 2D parallelism).
I did warn you not to look at the code.
I also removed big chunks of MP code for now as it was getting in the way with the noise, will restore it when I sorted this all out. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9765/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9765/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9765",
"html_url": "https://github.com/huggingface/transformers/pull/9765",
"diff_url": "https://github.com/huggingface/transformers/pull/9765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9765.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9764/comments | https://api.github.com/repos/huggingface/transformers/issues/9764/events | https://github.com/huggingface/transformers/issues/9764 | 792,684,056 | MDU6SXNzdWU3OTI2ODQwNTY= | 9,764 | index mismatch in "offset_mapping" with TokenizerFast and pre-tokenized input | {
"login": "simoneorlando",
"id": 32615042,
"node_id": "MDQ6VXNlcjMyNjE1MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/32615042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simoneorlando",
"html_url": "https://github.com/simoneorlando",
"followers_url": "https://api.github.com/users/simoneorlando/followers",
"following_url": "https://api.github.com/users/simoneorlando/following{/other_user}",
"gists_url": "https://api.github.com/users/simoneorlando/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simoneorlando/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simoneorlando/subscriptions",
"organizations_url": "https://api.github.com/users/simoneorlando/orgs",
"repos_url": "https://api.github.com/users/simoneorlando/repos",
"events_url": "https://api.github.com/users/simoneorlando/events{/privacy}",
"received_events_url": "https://api.github.com/users/simoneorlando/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @simoneorlando\r\n\r\nThis is indeed intended behavior. The values in `offset_mapping` return a mapping to the original input, and when you provide pre-tokenized input, each of them is treated individually. In this case, you can use the word mapping to know where you should do your extraction:\r\n```python\r\nfrom transformers import BertTokenizerFast\r\n\r\ntokenizer = BertTokenizerFast.from_pretrained(\"bert-base-cased\")\r\n\r\n# Pre-tokenized input\r\nlabels = [\"This\", \"is\", \"not\", \"what\", \"i\", \"was\", \"expecting\"]\r\ntokenized_with_pre_tokenized_input = tokenizer(\r\n labels,\r\n is_split_into_words=True,\r\n return_offsets_mapping=True,\r\n)\r\nprint(tokenized_with_pre_tokenized_input[\"offset_mapping\"])\r\nfor token_id, offsets in enumerate(tokenized_with_pre_tokenized_input[\"offset_mapping\"]):\r\n word_id = tokenized_with_pre_tokenized_input.token_to_word(token_id)\r\n if word_id is not None:\r\n print(offsets, labels[word_id][offsets[0] : offsets[1]])\r\n```\r\nGives the following output:\r\n```python\r\n[(0, 0), (0, 4), (0, 2), (0, 3), (0, 4), (0, 1), (0, 3), (0, 9), (0, 0)]\r\n(0, 4) This\r\n(0, 2) is\r\n(0, 3) not\r\n(0, 4) what\r\n(0, 1) i\r\n(0, 3) was\r\n(0, 9) expecting\r\n```"
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## Information
I am not sure if this is the expected behavior or not, but when I use BertTokenizerFast with pre-tokenized input (so I set the parameter "is_split_into_words" to True) I have a mismatch in the offset_mapping. It considers every token standalone and restarts the start index from zero.
## To reproduce
Steps to reproduce the behavior:
```
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
tokenized_one_string = tokenizer(
["This is not what i was expecting"],
return_offsets_mapping=True,
)
tokenized_with_pre_tokenized_input = tokenizer(
[["This", "is", "not", "what", "i", "was", "expecting"]],
is_split_into_words=True,
return_offsets_mapping=True,
)
print(tokenized_one_string["offset_mapping"])
print(tokenized_with_pre_tokenized_input["offset_mapping"])
```
and this is the output:
```
[[(0, 0), (0, 4), (5, 7), (8, 11), (12, 16), (17, 18), (19, 22), (23, 32), (0, 0)]]
[[(0, 0), (0, 4), (0, 2), (0, 3), (0, 4), (0, 1), (0, 3), (0, 9), (0, 0)]]
```
## Expected behavior
I expected to get the same "offset_mapping" even from the pre-tokenized input.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9764/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9763/comments | https://api.github.com/repos/huggingface/transformers/issues/9763/events | https://github.com/huggingface/transformers/issues/9763 | 792,635,809 | MDU6SXNzdWU3OTI2MzU4MDk= | 9,763 | squad_v2 crashes during evaluation | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It might be because I didn't provide the version_2_with_negative argument. Sorry! :) ",
"https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering mentions it. Although it it likely to be ignored. : )"
] | 1,611 | 1,668 | 1,611 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.9.5-64-nvidia-418.43-x86_64-with-debian-jessie-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* [ ] the official example scripts: examples/question-answering/run_qa.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: SQuAD **v2**
## To reproduce
Steps to reproduce the behavior:
Run:
```
python run_qa.py --model_name_or_path bert-base-uncased --dataset_name squad_v2 --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/ --evaluation_strategy steps --fp16
```
```
***** Running Evaluation *****
Num examples = 12134
Batch size = 8
01/23/2021 21:32:15 - INFO - utils_qa - Post-processing 11873 example predictions split into 12134 features.#######################################################6| 1514/1517 [01:04<00:00, 23.34it/s]
100%|##############################################################################################################################################################| 11873/11873 [00:40<00:00, 291.51it/s]
01/23/2021 21:32:56 - INFO - utils_qa - Saving predictions to /tmp/debug_squad/predictions.json.####################################################################| 1517/1517 [01:23<00:00, 23.34it/s]
01/23/2021 21:32:56 - INFO - utils_qa - Saving nbest_preds to /tmp/debug_squad/nbest_predictions.json.##########################################################9| 11868/11873 [00:40<00:00, 303.46it/s]
Traceback (most recent call last):
File "run_qa.py", line 495, in <module>
main()
File "run_qa.py", line 457, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/olab/kirstain/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 929, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/olab/kirstain/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1004, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/specific/netapp5_3/rent_public/olab-01-08-2021/kirstain/transformers/examples/question-answering/trainer_qa.py", line 63, in evaluate
metrics = self.compute_metrics(eval_preds)
File "run_qa.py", line 439, in compute_metrics
return metric.compute(predictions=p.predictions, references=p.label_ids)
File "/home/olab/kirstain/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/metric.py", line 398, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/specific/netapp5_3/rent_public/olab-01-08-2021/kirstain/.cache/huggingface/modules/datasets_modules/metrics/squad/4791a1e1b37b2b0b8d8d4b7d4793349432fe03a61be5b08c8b30c6b4d86363f1/squad.py", li$e 100, in _compute
score = evaluate(dataset=dataset, predictions=pred_dict)
File "/specific/netapp5_3/rent_public/olab-01-08-2021/kirstain/.cache/huggingface/modules/datasets_modules/metrics/squad/4791a1e1b37b2b0b8d8d4b7d4793349432fe03a61be5b08c8b30c6b4d86363f1/evaluate.py", line 68, in evaluate
exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths)
File "/specific/netapp5_3/rent_public/olab-01-08-2021/kirstain/.cache/huggingface/modules/datasets_modules/metrics/squad/4791a1e1b37b2b0b8d8d4b7d4793349432fe03a61be5b08c8b30c6b4d86363f1/evaluate.py", line 53, in metric_max_over_ground_truths
return max(scores_for_ground_truths)
ValueError: max() arg is an empty sequence
```
## Expected behavior
completing the evaluation without an exception
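For reference, the crash should go away once the script is told it is evaluating SQuAD v2 (it then loads the `squad_v2` metric, which handles unanswerable questions). A sketch of the assumed invocation, identical to the command above plus the extra flag:
```
python run_qa.py --model_name_or_path bert-base-uncased --dataset_name squad_v2 --version_2_with_negative --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/ --evaluation_strategy steps --fp16
```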
Thank you! :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9763/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9762/comments | https://api.github.com/repos/huggingface/transformers/issues/9762/events | https://github.com/huggingface/transformers/pull/9762 | 792,635,238 | MDExOlB1bGxSZXF1ZXN0NTYwNDc5MjE5 | 9,762 | Fix a typo in `Trainer.hyperparameter_search` docstring | {
"login": "sorami",
"id": 595008,
"node_id": "MDQ6VXNlcjU5NTAwOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/595008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sorami",
"html_url": "https://github.com/sorami",
"followers_url": "https://api.github.com/users/sorami/followers",
"following_url": "https://api.github.com/users/sorami/following{/other_user}",
"gists_url": "https://api.github.com/users/sorami/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sorami/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sorami/subscriptions",
"organizations_url": "https://api.github.com/users/sorami/orgs",
"repos_url": "https://api.github.com/users/sorami/repos",
"events_url": "https://api.github.com/users/sorami/events{/privacy}",
"received_events_url": "https://api.github.com/users/sorami/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | `compute_objectie` => `compute_objective`
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a typo in `Trainer.hyperparameter_search` docstring: `"compute_objectie"` => `"compute_objective"`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9762/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9762",
"html_url": "https://github.com/huggingface/transformers/pull/9762",
"diff_url": "https://github.com/huggingface/transformers/pull/9762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9762.patch",
"merged_at": 1611574804000
} |
https://api.github.com/repos/huggingface/transformers/issues/9761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9761/comments | https://api.github.com/repos/huggingface/transformers/issues/9761/events | https://github.com/huggingface/transformers/pull/9761 | 792,516,370 | MDExOlB1bGxSZXF1ZXN0NTYwMzg5NzM3 | 9,761 | Fix broken [Open in Colab] links (#9688) | {
"login": "wilcoln",
"id": 24209192,
"node_id": "MDQ6VXNlcjI0MjA5MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/24209192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilcoln",
"html_url": "https://github.com/wilcoln",
"followers_url": "https://api.github.com/users/wilcoln/followers",
"following_url": "https://api.github.com/users/wilcoln/following{/other_user}",
"gists_url": "https://api.github.com/users/wilcoln/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilcoln/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilcoln/subscriptions",
"organizations_url": "https://api.github.com/users/wilcoln/orgs",
"repos_url": "https://api.github.com/users/wilcoln/repos",
"events_url": "https://api.github.com/users/wilcoln/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilcoln/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"ping @patil-suraj ",
"Thank you for fixing this!"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | Resolves https://github.com/huggingface/transformers/issues/9688 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9761/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9761",
"html_url": "https://github.com/huggingface/transformers/pull/9761",
"diff_url": "https://github.com/huggingface/transformers/pull/9761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9761.patch",
"merged_at": 1611394907000
} |
https://api.github.com/repos/huggingface/transformers/issues/9760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9760/comments | https://api.github.com/repos/huggingface/transformers/issues/9760/events | https://github.com/huggingface/transformers/pull/9760 | 792,413,521 | MDExOlB1bGxSZXF1ZXN0NTYwMzAwODk2 | 9,760 | fix text summarization evaluation bugs when calculate rouge | {
"login": "ShichaoSun",
"id": 13548568,
"node_id": "MDQ6VXNlcjEzNTQ4NTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/13548568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShichaoSun",
"html_url": "https://github.com/ShichaoSun",
"followers_url": "https://api.github.com/users/ShichaoSun/followers",
"following_url": "https://api.github.com/users/ShichaoSun/following{/other_user}",
"gists_url": "https://api.github.com/users/ShichaoSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShichaoSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShichaoSun/subscriptions",
"organizations_url": "https://api.github.com/users/ShichaoSun/orgs",
"repos_url": "https://api.github.com/users/ShichaoSun/repos",
"events_url": "https://api.github.com/users/ShichaoSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShichaoSun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"only two lines changed but it take much effort to pass the check!\r\nI think it may be important for the evaluation result.\r\nThey are simple but true bugs",
"Great catch @ShichaoSun and thank you for working on this!\r\nWe are in the process of finishing the [new standalone seq2seq script](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) with the same functionality but without depending on the utils or other helpers. The utils and other helpers will probably be removed once we completely test the new script.",
"Great job! Looking forward to seeing that."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null |
# What does this PR do?
Fix text summarization evaluation bugs when calculating ROUGE.
This is important for computing ROUGE scores correctly.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patil-suraj @sshleifer @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9760/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9760",
"html_url": "https://github.com/huggingface/transformers/pull/9760",
"diff_url": "https://github.com/huggingface/transformers/pull/9760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9760.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9759/comments | https://api.github.com/repos/huggingface/transformers/issues/9759/events | https://github.com/huggingface/transformers/pull/9759 | 792,411,386 | MDExOlB1bGxSZXF1ZXN0NTYwMjk5MjIw | 9,759 | fix a small bug | {
"login": "ShichaoSun",
"id": 13548568,
"node_id": "MDQ6VXNlcjEzNTQ4NTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/13548568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShichaoSun",
"html_url": "https://github.com/ShichaoSun",
"followers_url": "https://api.github.com/users/ShichaoSun/followers",
"following_url": "https://api.github.com/users/ShichaoSun/following{/other_user}",
"gists_url": "https://api.github.com/users/ShichaoSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShichaoSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShichaoSun/subscriptions",
"organizations_url": "https://api.github.com/users/ShichaoSun/orgs",
"repos_url": "https://api.github.com/users/ShichaoSun/repos",
"events_url": "https://api.github.com/users/ShichaoSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShichaoSun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | swap pred_lins with tgt_lns
just a typo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9759/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9759",
"html_url": "https://github.com/huggingface/transformers/pull/9759",
"diff_url": "https://github.com/huggingface/transformers/pull/9759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9759.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9758/comments | https://api.github.com/repos/huggingface/transformers/issues/9758/events | https://github.com/huggingface/transformers/issues/9758 | 792,280,084 | MDU6SXNzdWU3OTIyODAwODQ= | 9,758 | save tokenizer and model from fine tuned LED model | {
"login": "mmoya01",
"id": 17535683,
"node_id": "MDQ6VXNlcjE3NTM1Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/17535683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmoya01",
"html_url": "https://github.com/mmoya01",
"followers_url": "https://api.github.com/users/mmoya01/followers",
"following_url": "https://api.github.com/users/mmoya01/following{/other_user}",
"gists_url": "https://api.github.com/users/mmoya01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmoya01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmoya01/subscriptions",
"organizations_url": "https://api.github.com/users/mmoya01/orgs",
"repos_url": "https://api.github.com/users/mmoya01/repos",
"events_url": "https://api.github.com/users/mmoya01/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmoya01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | NONE | null | Hello, I have been following this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=jpUr9QeebZ-n) to fine-tune the `patrickvonplaten/led-large-16384-pubmed` model on my own data. However, after training, when I tried doing:
```python
model.save_pretrained("new_model")
tokenizer.save_pretrained("new_model")
```
to save the model and tokenizer, I noticed when I check out the `config.json` for it, it says
`"_name_or_path": "patrickvonplaten/led-large-16384-pubmed",`
that said, is it actually saving the fine-tuned model or just resaving `patrickvonplaten/led-large-16384-pubmed`? I'd greatly appreciate any feedback on this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9758/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9757/comments | https://api.github.com/repos/huggingface/transformers/issues/9757/events | https://github.com/huggingface/transformers/issues/9757 | 792,220,856 | MDU6SXNzdWU3OTIyMjA4NTY= | 9,757 | Extra indicators for BPE for Unicode Characters | {
"login": "lolipopshock",
"id": 22512825,
"node_id": "MDQ6VXNlcjIyNTEyODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22512825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lolipopshock",
"html_url": "https://github.com/lolipopshock",
"followers_url": "https://api.github.com/users/lolipopshock/followers",
"following_url": "https://api.github.com/users/lolipopshock/following{/other_user}",
"gists_url": "https://api.github.com/users/lolipopshock/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lolipopshock/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lolipopshock/subscriptions",
"organizations_url": "https://api.github.com/users/lolipopshock/orgs",
"repos_url": "https://api.github.com/users/lolipopshock/repos",
"events_url": "https://api.github.com/users/lolipopshock/events{/privacy}",
"received_events_url": "https://api.github.com/users/lolipopshock/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @n1t0 ",
"Thanks! I think it's something related to the Byte-Level Subwords trick (https://arxiv.org/pdf/1909.03341.pdf)? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,618 | 1,618 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-45-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
tokenizers: @mfuntowicz
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* my own modified scripts: (give details below)
The tasks I am working on is:
* my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. The code
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
words = ['(cid:3)', '하셨습니까', '하다']
tokenizer.batch_encode_plus(
[words],
max_length=512,
truncation=True,
padding=True,
is_split_into_words=True,
return_offsets_mapping=True,
return_special_tokens_mask=True,
return_tensors="pt",
)
```
2. The `offset_mapping` in the output is
```python
tensor([[[0, 0], # [CLS]
[0, 1], # for '('
[1, 4], # for 'cid'
[4, 5], # for ':'
[5, 6], # for '3'
[6, 7], # for ')'
[0, 5], # for '하셨습니까'
[0, 1], # for '하'
[0, 1], # for '하'
[1, 2], # for '다'
[1, 2], # for '다'
[0, 0]]])
```
3. As you can see, it generates four tokens for `하다`. The output is correct according to byte-pair encoding. However, it generates duplicated `[0,1]` and `[1,2]` entries, which changes the structure of the outputs (for regular tokens there can only be one `[0,x]`, which can be used to project the encoded tokens back to their original positions). Therefore, we need extra indicators for positions where byte-pair encoding is used.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
1. An additional output showing the mapping for input_ids -> original_token_ids . In this case, it should be something like:
```
[0, 1, 1, 1, 1, 1, 2, 3, 3, 3, 3, 0]
```
Therefore, we could use this map to figure out that byte-level encoding is used for the 3rd input word.
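If I'm not mistaken, fast tokenizers already expose something close to this: the returned `BatchEncoding` has a `word_ids()` method that maps each token position to the index of the input word it came from (with `None` for special tokens). A minimal sketch with the same inputs as above, assuming a fast tokenizer (which `AutoTokenizer` returns by default here):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
words = ['(cid:3)', '하셨습니까', '하다']
enc = tokenizer(words, is_split_into_words=True)

# One entry per token: the index of the originating word,
# or None for special tokens such as [CLS]/[SEP].
print(enc.word_ids())
```
Any word that is split into several tokens (like `하다` above) then shows up as a run of repeated indices, which gives the extra indicator described above.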
Updated - @n1t0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9757/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9756/comments | https://api.github.com/repos/huggingface/transformers/issues/9756/events | https://github.com/huggingface/transformers/pull/9756 | 792,182,791 | MDExOlB1bGxSZXF1ZXN0NTYwMTA4NzQ2 | 9,756 | Remove a TF usage warning and rework the documentation | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not in favor of this to be honest...Warnings / Print statements are super important IMO. It's weird that `tf.Print` is not supported in XLA -> how are logs / warnings / print statements then produced in XLA?",
"XLA/Graph/TFlite execution are not made to have things to be printed, this includes some of the assert usage as well.",
"@patrickvonplaten TF XLA has few other known issues https://www.tensorflow.org/xla/known_issues",
"> @patrickvonplaten TF XLA has few other known issues https://www.tensorflow.org/xla/known_issues\r\n\r\nThanks for sharing! I'm still very surprised by your message:\r\n\r\n> Nevertheless, the usage of tf.Print prevent all our TF models to be compiled and executed with XLA and quantized with TFLite\r\n\r\nTo me, this means that every TF Repo that wants to be executable with XLA has no `tf.Print(...)` ops. That's a pretty hard requirement no? ",
"> To me, this means that every TF Repo that wants to be executable with XLA has no tf.Print(...) ops. That's a pretty hard requirement no?\r\n\r\nI agree, it is, but I see very rarely `tf.print(..)` to be used. As far as I know I never seen it implemented in official TF models during runtime (you can easily check with a quick search on https://github.com/tensorflow/models), usually it is used when evaluating a model, which is a use case that is not XLA related.",
"Can we use\r\n```\r\nimport tensorflow as tf\r\ntf_logger = tf.get_logger()\r\ntf_logger.warn(xxx)\r\n```\r\nintead?",
"@sgugger yes, using the internal TF logger should work as expected, but I'm not sure if it will bring any conflict with the actual transformers logger in terms of configuration.\r\n\r\nDo you want me to use it instead, and we will see later if there is indeed any conflict?",
"Yeah I think it would be best to have a warning the user can't silence with our centralized logging that none.",
"Ok done! I restored the warning but with the internal TF logger."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Recently we moved the warning for misused boolean arguments in TF models from `warnings.warn(...)` to `tf.Print` to avoid overwhelming the output with messages. Nevertheless, the usage of `tf.Print` prevents all our TF models from being compiled and executed with XLA and from being quantized with TFLite, because the `Print` operator is not supported in XLA and TFLite.
As a solution for both issues (log overload + XLA compilation/execution), I propose to simply remove the logs and state the use case directly in the documentation for all the TF models.
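Following the discussion in the comments above, a sketch of the variant that was eventually kept: a warning emitted through TensorFlow's own logger instead of a `Print` op (the message text below is only a placeholder):
```python
import tensorflow as tf

tf_logger = tf.get_logger()
# A plain Python logging call: it is not traced into the graph,
# so XLA compilation and TFLite conversion are unaffected.
tf_logger.warning("placeholder warning about boolean arguments passed at call time")
```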
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9756/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9756",
"html_url": "https://github.com/huggingface/transformers/pull/9756",
"diff_url": "https://github.com/huggingface/transformers/pull/9756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9756.patch",
"merged_at": 1611740743000
} |
https://api.github.com/repos/huggingface/transformers/issues/9755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9755/comments | https://api.github.com/repos/huggingface/transformers/issues/9755/events | https://github.com/huggingface/transformers/pull/9755 | 792,141,673 | MDExOlB1bGxSZXF1ZXN0NTYwMDc1MzM5 | 9,755 | Fix a TF test | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Fix a miss changed test.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9755/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9755",
"html_url": "https://github.com/huggingface/transformers/pull/9755",
"diff_url": "https://github.com/huggingface/transformers/pull/9755.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9755.patch",
"merged_at": 1611333616000
} |
https://api.github.com/repos/huggingface/transformers/issues/9754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9754/comments | https://api.github.com/repos/huggingface/transformers/issues/9754/events | https://github.com/huggingface/transformers/issues/9754 | 792,109,208 | MDU6SXNzdWU3OTIxMDkyMDg= | 9,754 | Improve the `run_xlni` example to use the Datasets library | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | null | [] | [
"Really good idea! I was looking at the `run_xnli` script the last days - because I wanted to test the \"Language-Agnostic\" models from [here](https://github.com/AIPHES/Language-Agnostic-Contextualized-Encoders) - and datasets integration would make experiments a lot more easier. "
] | 1,611 | 1,613 | 1,613 | COLLABORATOR | null | The [`run_xlni`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_xnli.py) example should be improved (following the model of [`run_glue`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py)) to use the Datasets library to download and preprocess the datasets.
Ideally, copying the `run_glue` example and adapting the relevant parts should be the way to go. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9754/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9754/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9753/comments | https://api.github.com/repos/huggingface/transformers/issues/9753/events | https://github.com/huggingface/transformers/issues/9753 | 792,108,973 | MDU6SXNzdWU3OTIxMDg5NzM= | 9,753 | named_parameters not showing embedding matrix of RobertaLMHead (more a question than a bug) | {
"login": "ClaartjeBarkhof",
"id": 25668035,
"node_id": "MDQ6VXNlcjI1NjY4MDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25668035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ClaartjeBarkhof",
"html_url": "https://github.com/ClaartjeBarkhof",
"followers_url": "https://api.github.com/users/ClaartjeBarkhof/followers",
"following_url": "https://api.github.com/users/ClaartjeBarkhof/following{/other_user}",
"gists_url": "https://api.github.com/users/ClaartjeBarkhof/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ClaartjeBarkhof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ClaartjeBarkhof/subscriptions",
"organizations_url": "https://api.github.com/users/ClaartjeBarkhof/orgs",
"repos_url": "https://api.github.com/users/ClaartjeBarkhof/repos",
"events_url": "https://api.github.com/users/ClaartjeBarkhof/events{/privacy}",
"received_events_url": "https://api.github.com/users/ClaartjeBarkhof/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found out that the input embedding and output embedding weights are tied by default, which is why the output embedding weights end up in state_dict, but not in the named_parameters (they are overcomplete).",
"I've found this issue too when trying to log parameters/gradients to Weights & Biases. It doesn't log `roberta.lm_head.decoder`.\r\n\r\nI can't quite work out the logic of [this code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py#L1143-L1145) in `RobertaLMHead`:\r\n```py\r\n self.decoder = nn.Linear(config.hidden_size, config.vocab_size)\r\n self.bias = nn.Parameter(torch.zeros(config.vocab_size))\r\n self.decoder.bias = self.bias\r\n```\r\n\r\nIf I understand what @ClaartjeBarkhof is saying, this is the reason for `decoder` not showing up in `named_parameters()`. Is that right? I can't quite make the connection between tying weights and then the thing no longer being considered a named parameter. And follow up question, what does that code actually do? How does it differ from just having the first line?\r\n"
] | 1,611 | 1,682 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): RobertaForCausalLM
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load standard RobertaForCausalLM
2. Record the difference between .named_parameters() and .state_dict() of the model
```
from transformers import RobertaForCausalLM, RobertaConfig
import torch
config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaForCausalLM.from_pretrained('roberta-base', config=config)
named_params = [n for n, _ in model.named_parameters()]
print("Difference: ", [n for n in list(model.state_dict().keys()) if n not in named_params])
```
This outputs:
```
Difference: ['roberta.embeddings.position_ids', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
```
## Expected behavior
I would expect the parameters: `'lm_head.decoder.weight', 'lm_head.decoder.bias'` to show up in `.named_parameters()`, why do they not? That `'roberta.embeddings.position_ids'` does not show up in `.named_parameters()` is expected as they are not learned parameters, but just of help with getting the position embedding, but this is always the same I believe.
I would like to tie my input embedding matrix weights to my output embedding matrix. But now I am not really sure how to go about this. I thought of just doing:
`model.lm_head.decoder.weight = model.roberta.embeddings.word_embeddings.weight` but because of this thing with the named_parameters, I am not sure if this will work as expected. Also, the output embedding has a bias `lm_head.decoder.bias`, while the input embeddings don't. My goal is to have the initial hidden state space be the same space as the last hidden state space.
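Note: this tying is what `transformers` does by default when `config.tie_word_embeddings` is `True`, and shared parameters are de-duplicated by `named_parameters()`, which would explain the difference above. A small sanity check, reusing the setup from the reproduction snippet:
```python
from transformers import RobertaConfig, RobertaForCausalLM

config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True
model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)

# With the default tie_word_embeddings=True these should both print True:
print(config.tie_word_embeddings)
print(model.get_input_embeddings().weight is model.get_output_embeddings().weight)
```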
Thanks in advance,
Claartje
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9753/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9752/comments | https://api.github.com/repos/huggingface/transformers/issues/9752/events | https://github.com/huggingface/transformers/issues/9752 | 792,107,285 | MDU6SXNzdWU3OTIxMDcyODU= | 9,752 | Improve PyTorch examples for FP16 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @sgugger \r\nThis is not done, although closed, please have a look into https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py \r\nfor line-by-line padding is not multiple of 8, thanks "
] | 1,611 | 1,616 | 1,611 | COLLABORATOR | null | To get the full speed-up of FP16 training, every tensor passed through the model should have all its dimensions be a multiple of 8. In the new PyTorch examples, when using dynamic padding, the tensors are padded to the length of the biggest sentence of the batch, but that number is not necessarily a multiple of 8.
The examples should be improved to pass along the option `pad_to_multiple_of=8` when `fp16` is True, if using a data collator that applies padding (or replace the `None` passed along to `Trainer` for `data_collator` by a `DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9752/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9752/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9751/comments | https://api.github.com/repos/huggingface/transformers/issues/9751/events | https://github.com/huggingface/transformers/pull/9751 | 792,101,108 | MDExOlB1bGxSZXF1ZXN0NTYwMDQyODQ5 | 9,751 | AdaFactor: avoid updating group["lr"] attributes | {
"login": "ceshine",
"id": 674501,
"node_id": "MDQ6VXNlcjY3NDUwMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/674501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ceshine",
"html_url": "https://github.com/ceshine",
"followers_url": "https://api.github.com/users/ceshine/followers",
"following_url": "https://api.github.com/users/ceshine/following{/other_user}",
"gists_url": "https://api.github.com/users/ceshine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ceshine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceshine/subscriptions",
"organizations_url": "https://api.github.com/users/ceshine/orgs",
"repos_url": "https://api.github.com/users/ceshine/repos",
"events_url": "https://api.github.com/users/ceshine/events{/privacy}",
"received_events_url": "https://api.github.com/users/ceshine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you provide evidence that supports the following:\r\n\r\n> Updating group[\"lr\"] makes the result of ._get_lr() depends on the previous call, i.e., on the scale of other parameters. **This isn't supposed to happen.**\r\n\r\nThanks!\r\n\r\n\r\n",
"> Can you provide evidence that supports the following:\r\n> \r\n> > Updating group[\"lr\"] makes the result of ._get_lr() depends on the previous call, i.e., on the scale of other parameters. **This isn't supposed to happen.**\r\n> \r\n> Thanks!\r\n\r\nHi, \r\n\r\nThanks for the quick reply. \r\n\r\nThis is taken from the AdaFactor paper:\r\n\r\n\r\n\r\n\r\n\r\nAs you can see, ρ only depends on the step number if we use relative steps. And if we switch to any other learning rate schedules (in my case, linear warmup + cosine decay), it doesn't make sense to make the ρ part depends on the scale of the other parameters, nor can I find any reference of this approach in the paper.\r\n\r\nIf we (loosely) factor the α<sub>t</sub> in the original implementation to α<sub>i,t</sub>, where `i` indicate the set of parameters corresponding to the `for p in group[\"params\"]` loop. The original implementation essentially made α<sub>i,t</sub> depended on α<sub>i-1,t</sub> (i.e., making ρ<sub>i,t</sub> = α<sub>i-1,t</sub>). \r\n",
"> I've observed weird behaviors when using Adafactor with relative_step=False and scale_parameter=True and an LR scheduler.\r\n\r\nI should probably clarify what I meant by \"weird behaviors.\" The model (T5 v1.1) never converged when trained Adafactor with `relative_step=False` and `scale_parameter=True`. After this patch, I managed to get convergence and even better results than the built-in LR schedule in the `relative_step=True` mode (with `warmup_init=True`).",
"cc @patrickvonplaten @patil-suraj \r\nThis looks like a reasonable change to me!",
"Thank you all for your time and for accepting the patch! Glad to have made a tiny contribution to this great library.\r\n\r\n> BTW, if you have some working code for how to train a `google/t5v1_1` model I think it would be super helpful to post it here, on the forum or as a community notebook! Many people have been asking for good t5v1_1 training scripts :-)\r\n\r\nI don't have anything that is sufficiently readable yet. Nonetheless, I have these notebooks published on Kaggle that use the patched Adafactor: one for [T5 v1.1](https://www.kaggle.com/ceshine/preprocess-and-finetune-t5-1-1-full/) and one for [mT5](https://www.kaggle.com/ceshine/preprocess-and-finetune-mt5). They are based on this [Github repo](https://github.com/ceshine/finetuning-t5/tree/mt5-classifier-trim-lm-head/mnli), which is quite messy at this moment. The part that set up the optimizer is located [here](https://github.com/ceshine/finetuning-t5/blob/de34e0c735568d00f9244e0b6f019c3f5cb64576/mnli/train.py#L314).\r\n"
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | This affects Adafactor with `relative_step=False` and `scale_parameter=True`.
Updating `group["lr"]` makes the result of ._get_lr() depends on the previous call, i.e., on the scale of other parameters. This isn't supposed to happen.
# What does this PR do?
I've observed weird behaviors when using Adafactor with `relative_step=False` and `scale_parameter=True` and an LR scheduler. I think the problem is that the code [updates the `lr` attribute of the current parameter group](https://github.com/huggingface/transformers/blob/490b39e6142ca8f2ccb84c5436402899ae54e44f/src/transformers/optimization.py#L549), and then uses the updated attribute to [calculate the next attribute](https://github.com/huggingface/transformers/blob/490b39e6142ca8f2ccb84c5436402899ae54e44f/src/transformers/optimization.py#L469). I don't think this is supposed to happen.
A simple fix would be replacing the update operation with an assignment to a local variable.
I'm not entirely sure if I understand the problem correctly, so I apologize in advance if this is a stupid PR. I'd appreciate it if someone could point out where I am wrong. Thanks!
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@moscow25 @sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9751/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9751/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9751",
"html_url": "https://github.com/huggingface/transformers/pull/9751",
"diff_url": "https://github.com/huggingface/transformers/pull/9751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9751.patch",
"merged_at": 1612184854000
} |
https://api.github.com/repos/huggingface/transformers/issues/9750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9750/comments | https://api.github.com/repos/huggingface/transformers/issues/9750/events | https://github.com/huggingface/transformers/issues/9750 | 791,987,957 | MDU6SXNzdWU3OTE5ODc5NTc= | 9,750 | ValueError: Couldn't instantiate the backend tokenizer while loading model tokenizer | {
"login": "rsanjaykamath",
"id": 18527321,
"node_id": "MDQ6VXNlcjE4NTI3MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsanjaykamath",
"html_url": "https://github.com/rsanjaykamath",
"followers_url": "https://api.github.com/users/rsanjaykamath/followers",
"following_url": "https://api.github.com/users/rsanjaykamath/following{/other_user}",
"gists_url": "https://api.github.com/users/rsanjaykamath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsanjaykamath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsanjaykamath/subscriptions",
"organizations_url": "https://api.github.com/users/rsanjaykamath/orgs",
"repos_url": "https://api.github.com/users/rsanjaykamath/repos",
"events_url": "https://api.github.com/users/rsanjaykamath/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsanjaykamath/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @rsanjaykamath, \r\n\r\nI cannot reproduce the error on `master`. When running:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, T5ForConditionalGeneration\r\n\r\nmodel_name = \"allenai/unifiedqa-t5-small\" # you can specify the model size here\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = T5ForConditionalGeneration.from_pretrained(model_name)\r\n```\r\n\r\nI don't encounter any errors...could you try to update transformers to the newest version and try again?",
"Hi @patrickvonplaten ,\r\n\r\nThat's strange. I just tried it on Colab with the version 4.2.2 of transformers and the same error occurs again. \r\nHave you tried it on colab? or local machine? \r\n",
"I see it's the classic sentencepiece error - I should have better read your error message ;-)\r\n\r\nHere the colab to show how it works: https://colab.research.google.com/drive/1QybYdj-1bW0MHD0cutWBPWas5IFEhSjC?usp=sharing",
"Also see: https://github.com/huggingface/transformers/issues/8963",
"Ok got it. Installing sentencepiece and restarting the kernel did the trick for me. \r\n\r\nThanks for your help :) Closing the issue. ",
"I think the error message should be more clear ",
"> I see it's the classic sentencepiece error - I should have better read your error message ;-)\r\n> \r\n> Here the colab to show how it works: https://colab.research.google.com/drive/1QybYdj-1bW0MHD0cutWBPWas5IFEhSjC?usp=sharing\r\n\r\n\r\n\r\n\r\n"
] | 1,611 | 1,698 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Colab
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@mfuntowicz @patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
T5
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Loading the tokenizer for the model described at https://github.com/allenai/unifiedqa does not work.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Follow the instructions here https://github.com/allenai/unifiedqa to get the sample code
2. Copy paste it in Colab to run it.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "allenai/unifiedqa-t5-small" # you can specify the model size here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
return tokenizer.batch_decode(res, skip_special_tokens=True)
```
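(Once loading works, a hypothetical call would look like the line below; if I read the UnifiedQA README correctly, the question and the answer options are passed as a single string separated by `\n`.)
```python
print(run_model("which is the most conductive? \n (a) iron (b) feather"))
```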
## Expected behavior
The following code should load the model without errors.
## Error
But the following error is obtained:
```
ValueError Traceback (most recent call last)
<ipython-input-4-ee10e1c1c77e> in <module>()
2
3 model_name = "allenai/unifiedqa-t5-small" # you can specify the model size here
----> 4 tokenizer = AutoTokenizer.from_pretrained(model_name)
5 model = T5ForConditionalGeneration.from_pretrained(model_name)
6
4 frames
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
94 else:
95 raise ValueError(
---> 96 "Couldn't instantiate the backend tokenizer from one of: "
97 "(1) a `tokenizers` library serialization file, "
98 "(2) a slow tokenizer instance to convert or "
ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
```
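As the last line of the error hints, the slow-to-fast tokenizer conversion needs the `sentencepiece` package, so a likely fix (confirmed in the comments above) is to install it and then restart the Colab runtime:
```
pip install sentencepiece
```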
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9750/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9750/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9749/comments | https://api.github.com/repos/huggingface/transformers/issues/9749/events | https://github.com/huggingface/transformers/pull/9749 | 791,934,362 | MDExOlB1bGxSZXF1ZXN0NTU5OTA1MzY2 | 9,749 | Use object store to pass trainer object to Ray Tune (makes it work with large models) | {
"login": "krfricke",
"id": 14904111,
"node_id": "MDQ6VXNlcjE0OTA0MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krfricke",
"html_url": "https://github.com/krfricke",
"followers_url": "https://api.github.com/users/krfricke/followers",
"following_url": "https://api.github.com/users/krfricke/following{/other_user}",
"gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krfricke/subscriptions",
"organizations_url": "https://api.github.com/users/krfricke/orgs",
"repos_url": "https://api.github.com/users/krfricke/repos",
"events_url": "https://api.github.com/users/krfricke/events{/privacy}",
"received_events_url": "https://api.github.com/users/krfricke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Tuning large models with Ray Tune did not work recently. By passing the trainer object via the object store we avoid serialization of the global object, fixing these issues.
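To illustrate the mechanism in isolation (a toy sketch, not the exact PR code): `ray.put` stores an object once in Ray's shared object store, and workers receive it through a lightweight reference instead of pickling it as a captured global.
```python
import ray

ray.init(ignore_reinit_error=True)

big_object = {"weights": list(range(100_000))}  # stand-in for the Trainer
obj_ref = ray.put(big_object)                   # stored once in the object store

@ray.remote
def trainable(obj):  # Ray resolves the reference before calling the worker function
    return len(obj["weights"])

print(ray.get(trainable.remote(obj_ref)))
```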
I could reproduce the issue in #9146 on an AWS p2.xlarge node and could confirm it is resolved by these changes.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9146
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9749/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9749",
"html_url": "https://github.com/huggingface/transformers/pull/9749",
"diff_url": "https://github.com/huggingface/transformers/pull/9749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9749.patch",
"merged_at": 1611568916000
} |
https://api.github.com/repos/huggingface/transformers/issues/9748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9748/comments | https://api.github.com/repos/huggingface/transformers/issues/9748/events | https://github.com/huggingface/transformers/issues/9748 | 791,881,936 | MDU6SXNzdWU3OTE4ODE5MzY= | 9,748 | Trainer object empties dataset | {
"login": "AlexBella365",
"id": 22292468,
"node_id": "MDQ6VXNlcjIyMjkyNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/22292468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexBella365",
"html_url": "https://github.com/AlexBella365",
"followers_url": "https://api.github.com/users/AlexBella365/followers",
"following_url": "https://api.github.com/users/AlexBella365/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexBella365/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexBella365/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexBella365/subscriptions",
"organizations_url": "https://api.github.com/users/AlexBella365/orgs",
"repos_url": "https://api.github.com/users/AlexBella365/repos",
"events_url": "https://api.github.com/users/AlexBella365/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexBella365/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Your dataset is not really emptied (I agree it looks like this however). It's just viewed without the columns the model can accept. You can restore all your columns with\r\n```\r\ndataset[\"train\"].set_format(columns=list(dataset[\"train\"].features.keys())\r\n``` \r\nor more simply\r\n```\r\ndataset[\"train\"].reset_format()\r\n```\r\nI agree that this is strange and we're seeing how we can have the same behavior without changing the dataset you pass to Trainer.\r\n\r\nNote that you didn't preprocess your data, so you won't be able to train in this example.\r\n",
"Hi Sylvain,\r\nThanks so much for prompt response.\r\n\r\nI was not aware of this behaviour so was left very puzzled. I tried what you suggested and indeed got back all my data.\r\n\r\nYes, I agree with you, the data was not yet ready for training. I was just trying to pinpoint at which stage in my script the dataset became \"empty\".\r\n\r\nThanks again :)"
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.14.81.bm.20-amd64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...): XLM-Roberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [X] an official GLUE/SQUaD task: NER
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a multilingual dataset by concatenating a handful of languages from wikiann
2. Instantiate the model
3. Instantiate the Trainer object

## Expected behavior
The dataset object should not have been modified by the Trainer object | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9748/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9747/comments | https://api.github.com/repos/huggingface/transformers/issues/9747/events | https://github.com/huggingface/transformers/issues/9747 | 791,856,802 | MDU6SXNzdWU3OTE4NTY4MDI= | 9,747 | mT5 additional_special_tokens seems not work | {
"login": "PiggyFan",
"id": 26171212,
"node_id": "MDQ6VXNlcjI2MTcxMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/26171212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PiggyFan",
"html_url": "https://github.com/PiggyFan",
"followers_url": "https://api.github.com/users/PiggyFan/followers",
"following_url": "https://api.github.com/users/PiggyFan/following{/other_user}",
"gists_url": "https://api.github.com/users/PiggyFan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PiggyFan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PiggyFan/subscriptions",
"organizations_url": "https://api.github.com/users/PiggyFan/orgs",
"repos_url": "https://api.github.com/users/PiggyFan/repos",
"events_url": "https://api.github.com/users/PiggyFan/events{/privacy}",
"received_events_url": "https://api.github.com/users/PiggyFan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you either post a link to your notebook or your code as actual code? Images makes it impossible to copy/paste or for others to search with similar issues. Thanks.",
"> Hi! Could you either post a link to your notebook or your code as actual code? Images makes it impossible to copy/paste or for others to search with similar issues. Thanks.\r\n\r\nHere is colab notebook link. Thanks for your reply.\r\n\r\nhttps://colab.research.google.com/drive/1fbp7VvnUvbf5r8CSitOg2pDZyM47y9xj?usp=sharing",
"Hi! Indeed, this is a bit misleading. Special tokens are considered as those that were in the pre-training, that is: unknown tokens, bos tokens, eos tokens, etc.\r\n\r\nIf you want to use special tokens that you use as special tokens, I would argue it is better to define them as simple tokens. Therefore doing the following:\r\n\r\n```py\r\n>>> from transformers import MT5Tokenizer, MT5ForConditionalGeneration\r\n... import torch\r\n... special_tokens = ['<POS>', '<NEG>','<CON_START>','<START>','<END>'] # Set the special tokens\r\n... mt5_add_tokenizer = MT5Tokenizer.from_pretrained(\"google/mt5-small\")\r\n... mt5_add_tokenizer.add_tokens(special_tokens)\r\n... print(mt5_add_tokenizer.tokenize(\"<POS> <CON_START> the biscuits and gravy were . <START>\"))\r\n```\r\n\r\nYou'll get the following output:\r\n```out\r\n['<POS>', '<CON_START>', '▁the', '▁b', 'iscuit', 's', '▁and', '▁grav', 'y', '▁were', '▁', '.', '<START>']\r\n````\r\n\r\nLet me know if that makes sense.",
"Thank you very much, it works well.\r\nI was confused by [issue5940](https://github.com/huggingface/transformers/issues/5940) which mentioned \"special tokens are carefully handled by the tokenizer (they are never split)\". If use add_token() method, the additional simple tokens may still split?\r\nIn addition, the code in Colab notebook as above link, OpenaiTokenizer should use add_tokens method rather than add_special_tokens (define them as a simple tokens) ?",
"For tokens that cannot be identified as being either:\r\n- `eos_token`\r\n- `bos_token`\r\n- `cls_token`\r\n- `unk_token`\r\n- `pad_token`\r\n- `sep_token`\r\n- `mask_token`\r\n\r\nThen I would recommend using the `add_token` method. These tokens shouldn't split either:\r\n```py\r\n>>> from transformers import BertTokenizer\r\n>>> tokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n>>> tokenizer.add_tokens([\"<CON_START>\", \"<CON_ST\"])\r\n2\r\n>>> tokenizer.tokenize(\"<CON_START>\")\r\n['<CON_START>']\r\n>>> tokenizer.tokenize(\"<CON_STAR>\")\r\n['<CON_ST', 'AR', '>']\r\n```",
"Thank for your help and recommend.",
"> For tokens that cannot be identified as being either:\r\n> \r\n> * `eos_token`\r\n> * `bos_token`\r\n> * `cls_token`\r\n> * `unk_token`\r\n> * `pad_token`\r\n> * `sep_token`\r\n> * `mask_token`\r\n> \r\n> Then I would recommend using the `add_token` method. These tokens shouldn't split either:\r\n> \r\n> ```python\r\n> >>> from transformers import BertTokenizer\r\n> >>> tokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n> >>> tokenizer.add_tokens([\"<CON_START>\", \"<CON_ST\"])\r\n> 2\r\n> >>> tokenizer.tokenize(\"<CON_START>\")\r\n> ['<CON_START>']\r\n> >>> tokenizer.tokenize(\"<CON_STAR>\")\r\n> ['<CON_ST', 'AR', '>']\r\n> ```\r\n\r\nHi it seems `add_tokens` not working with `AutoTokenizer` , but work with specific-defined tokenizer like `BertTokenizer`, \r\n```python\r\nspecial_tokens = [\"-Title-\"]\r\n#tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\r\ntokenizer.add_tokens(special_tokens)\r\ntokenizer.tokenize(\"-Title-\")\r\n# ['-', 'title', '-']\r\n```",
"He! Thanks for reporting. This should be fixed by #23909 "
] | 1,611 | 1,685 | 1,612 | NONE | null | I want to add some special tokens such as `<POS>` and `<CON_START>`. But neither T5Tokenizer nor MT5Tokenizer tokenizes them correctly after setting the additional_special_tokens parameter; both still split these special tokens into subwords.
<img width="1015" alt="截圖 2021-01-22 下午5 18 00" src="https://user-images.githubusercontent.com/26171212/105473581-3d88d100-5cd8-11eb-8568-6fedd19513e2.png">
It works when using the OpenAIGPTTokenizer additional_special_tokens parameter. It's clear that after declaring the additional_special_tokens parameter, OpenAIGPTTokenizer tokenizes `<POS>` as one word rather than splitting it.
<img width="979" alt="截圖 2021-01-22 下午5 54 57" src="https://user-images.githubusercontent.com/26171212/105475970-049e2b80-5cdb-11eb-8470-576fd8f38999.png">
<img width="697" alt="截圖 2021-01-22 下午5 55 10" src="https://user-images.githubusercontent.com/26171212/105475992-0962df80-5cdb-11eb-9cae-205b57818e95.png">
The version of transformers is 4.2.2
And I'm not sure whether this problem is related to [issue624](https://github.com/google-research/text-to-text-transfer-transformer/issues/624) in T5, which talks about the SentencePiece extra vocab.
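For reference, a minimal sketch of the workaround suggested in the discussion, registering the markers as regular added tokens with `add_tokens` instead of `additional_special_tokens`:
```python
from transformers import MT5Tokenizer

special_tokens = ['<POS>', '<NEG>', '<CON_START>', '<START>', '<END>']
tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
tokenizer.add_tokens(special_tokens)  # added tokens are never split
print(tokenizer.tokenize("<POS> <CON_START> the biscuits and gravy were . <START>"))
# expected (per the discussion): ['<POS>', '<CON_START>', '▁the', ...]
```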
Thank you for your feedback | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9747/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9746/comments | https://api.github.com/repos/huggingface/transformers/issues/9746/events | https://github.com/huggingface/transformers/pull/9746 | 791,847,620 | MDExOlB1bGxSZXF1ZXN0NTU5ODM1MjM3 | 9,746 | Fix an efficiency related bug the "prediction_loop" of trainer_tf.py | {
"login": "zhangzhenyu13",
"id": 22976165,
"node_id": "MDQ6VXNlcjIyOTc2MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22976165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangzhenyu13",
"html_url": "https://github.com/zhangzhenyu13",
"followers_url": "https://api.github.com/users/zhangzhenyu13/followers",
"following_url": "https://api.github.com/users/zhangzhenyu13/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangzhenyu13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangzhenyu13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangzhenyu13/subscriptions",
"organizations_url": "https://api.github.com/users/zhangzhenyu13/orgs",
"repos_url": "https://api.github.com/users/zhangzhenyu13/repos",
"events_url": "https://api.github.com/users/zhangzhenyu13/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangzhenyu13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | # The "numpy.append" method is not suitable for large evaluation/test datasets
It causes a "memory ops blocked" issue (i.e. the system waits for a large contiguous region of memory in order to expand the array), because a NumPy array requires contiguous memory and will try to re-allocate RAM whenever the current capacity is not enough.
### This procedure is quite slow (as if the prediction loop were blocked): tested when the dataset size is larger than 10K
### We need to use a concatenation strategy (for efficiency) or a batch-generator strategy (when the data size is very large).
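A minimal sketch of the difference (with hypothetical shapes, just to illustrate the point):
```python
import numpy as np

batches = [np.random.rand(32, 10) for _ in range(100)]  # hypothetical per-batch logits

# Slow: np.append re-allocates and copies the whole accumulated array on every step
preds = np.empty((0, 10))
for batch in batches:
    preds = np.append(preds, batch, axis=0)

# Faster: collect the batches in a Python list and concatenate once at the end
preds_fast = np.concatenate(batches, axis=0)
assert preds.shape == preds_fast.shape == (3200, 10)
```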
### The method I provided here will significantly boost the prediction speed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9746",
"html_url": "https://github.com/huggingface/transformers/pull/9746",
"diff_url": "https://github.com/huggingface/transformers/pull/9746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9746.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9745/comments | https://api.github.com/repos/huggingface/transformers/issues/9745/events | https://github.com/huggingface/transformers/issues/9745 | 791,688,069 | MDU6SXNzdWU3OTE2ODgwNjk= | 9,745 | fine tune patrickvonplaten/longformer2roberta-cnn_dailymail-fp16 using LED updates | {
"login": "mmoya01",
"id": 17535683,
"node_id": "MDQ6VXNlcjE3NTM1Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/17535683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmoya01",
"html_url": "https://github.com/mmoya01",
"followers_url": "https://api.github.com/users/mmoya01/followers",
"following_url": "https://api.github.com/users/mmoya01/following{/other_user}",
"gists_url": "https://api.github.com/users/mmoya01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmoya01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmoya01/subscriptions",
"organizations_url": "https://api.github.com/users/mmoya01/orgs",
"repos_url": "https://api.github.com/users/mmoya01/repos",
"events_url": "https://api.github.com/users/mmoya01/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmoya01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @mmoya01 \r\n\r\nYou could fine-tune `longformer2roberta` model using the `EncoderDecoder` model class. `patrickvonplaten/longformer2roberta-cnn_dailymail-fp16` is already fine-tuned on but as the model card says it was fine-tuned for just demo, so you should fine-tune a new `longformer2roberta`. You could follow the training script given the [model card ](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16) or you can refer to this [notebook](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)\r\n\r\nAlso in your example, you are loading the `longformer2roberta` model using `LEDForConditionalGeneration` which doesn't seem right. It should be loaded using `EncoderDecoderModel`",
"hi @patil-suraj , thank you for the reply! So if I'm understanding this correctly, I would have to train a new `longformer2roberta` from scratch? I was trying to avoid that because the model card mentions how it took 90 hours to fine tune roberta on cnn-daily news\r\n\r\n\r\n\r\nThe reason I was trying to use `LEDForConditionalGeneration` is because I wanted to fine tune it where the pretrained `model` was `longformer2roberta` instead of `allenai/longformer-base-4096` \r\n\r\nso, to fine tune `longformer2roberta` model in the past, I tried pip installing the [more_general_trainer_metric](https://github.com/huggingface/transformers/archive/more_general_trainer_metric.zip) branch given the note about the trainer and then running\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport nlp\r\nimport logging\r\nfrom nlp import arrow_dataset\r\nfrom transformers import LongformerTokenizer, EncoderDecoderModel, Trainer, TrainingArguments\r\n\r\n\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\n\r\n\r\nmodel = EncoderDecoderModel.from_pretrained(\"patrickvonplaten/longformer2roberta-cnn_dailymail-fp16\")\r\ntokenizer = LongformerTokenizer.from_pretrained(\"allenai/longformer-base-4096\") \r\n\r\n\r\n#load dataset\r\ntrain_bytes = s3_client.get_object(train_uri)\r\ntrain = pq.read_table(BytesIO(train_bytes),columns=['reference_summary','extractive_summary'])\r\n\r\ntest_bytes = s3_client.get_object(test_uri)\r\ntest = pq.read_table(BytesIO(test_bytes),columns=['reference_summary','extractive_summary'])\r\n\r\n\r\ntrain_dataset = arrow_dataset.Dataset(train)\r\nval_dataset = arrow_dataset.Dataset(test)\r\n\r\n\r\n\r\n# enable gradient checkpointing for longformer encoder\r\nmodel.encoder.config.gradient_checkpointing = True\r\n\r\n# set decoding params\r\nmodel.config.decoder_start_token_id = tokenizer.bos_token_id\r\nmodel.config.eos_token_id = tokenizer.eos_token_id\r\nmodel.config.max_length = 142\r\nmodel.config.min_length = 56\r\nmodel.config.no_repeat_ngram_size = 3\r\nmodel.early_stopping = True\r\nmodel.length_penalty = 2.0\r\nmodel.num_beams = 4\r\n\r\nencoder_length = 2048\r\ndecoder_length = 128*2\r\nbatch_size = 16\r\n\r\n\r\n# map data correctly\r\ndef map_to_encoder_decoder_inputs(batch):\r\n # Tokenizer will automatically set [BOS] <text> [EOS]\r\n # cut off at Longformer at 2048\r\n inputs = tokenizer(batch[\"extractive_summary\"], padding=\"max_length\", truncation=True, max_length=encoder_length)\r\n # force summarization <= 256\r\n outputs = tokenizer(batch[\"reference_summary\"], padding=\"max_length\", truncation=True, max_length=decoder_length)\r\n\r\n batch[\"input_ids\"] = inputs.input_ids\r\n batch[\"attention_mask\"] = inputs.attention_mask\r\n\r\n # set 128 tokens to global attention\r\n batch[\"global_attention_mask\"] = [[1 if i < 128*2 else 0 for i in range(sequence_length)] for sequence_length in len(inputs.input_ids) * [encoder_length]]\r\n batch[\"decoder_input_ids\"] = outputs.input_ids\r\n batch[\"labels\"] = outputs.input_ids.copy()\r\n # mask loss for padding\r\n batch[\"labels\"] = [\r\n [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch[\"labels\"]\r\n ]\r\n batch[\"decoder_attention_mask\"] = outputs.attention_mask\r\n\r\n assert all([len(x) == encoder_length for x in inputs.input_ids])\r\n assert all([len(x) == decoder_length for x in outputs.input_ids])\r\n\r\n return batch\r\n\r\n\r\ndef compute_metrics(pred):\r\n labels_ids = pred.label_ids\r\n pred_ids = pred.predictions\r\n\r\n # all unnecessary tokens are 
removed\r\n pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)\r\n labels_ids[labels_ids == -100] = tokenizer.eos_token_id\r\n label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)\r\n\r\n rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=[\"rouge2\"])[\"rouge2\"].mid\r\n\r\n return {\r\n \"rouge2_precision\": round(rouge_output.precision, 4),\r\n \"rouge2_recall\": round(rouge_output.recall, 4),\r\n \"rouge2_fmeasure\": round(rouge_output.fmeasure, 4),\r\n }\r\n\r\n\r\n return {\r\n \"rouge2_precision\": round(rouge_output.precision, 4),\r\n \"rouge2_recall\": round(rouge_output.recall, 4),\r\n \"rouge2_fmeasure\": round(rouge_output.fmeasure, 4),\r\n }\r\n\r\n\r\n# make train dataset ready\r\ntrain_dataset = train_dataset.map(\r\n map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=[\"extractive_summary\", \"reference_summary\"],\r\n)\r\ntrain_dataset.set_format(\r\n type=\"torch\", columns=[\"input_ids\", \"attention_mask\", \"global_attention_mask\", \"decoder_input_ids\", \"decoder_attention_mask\", \"labels\"],\r\n)\r\n\r\n# same for validation dataset\r\nval_dataset = val_dataset.map(\r\n map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=[\"extractive_summary\", \"reference_summary\"],\r\n)\r\nval_dataset.set_format(\r\n type=\"torch\", columns=[\"input_ids\", \"global_attention_mask\", \"attention_mask\", \"decoder_input_ids\", \"decoder_attention_mask\", \"labels\"],\r\n)\r\n\r\n# set training arguments - these params are not really tuned, feel free to change\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./\",\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n predict_from_generate=True,\r\n evaluate_during_training=True,\r\n do_train=True,\r\n do_eval=True,\r\n logging_steps=100,\r\n save_steps=100,\r\n eval_steps=100,\r\n overwrite_output_dir=True,\r\n warmup_steps=200,\r\n save_total_limit=3,\r\n fp16=False,\r\n \r\n)\r\n\r\n# instantiate trainer\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n)\r\n\r\n# start training\r\ntrainer.train()\r\n```\r\n\r\n^but that gave me `TypeError: forward() got an unexpected keyword argument 'head_mask'` because The `EncoderDecoderModel` did not work with longformer whereas `LEDForConditionalGeneration` does\r\n\r\n\r\nbut I'm gathering, it is not possible to fine tune the `longfomer2roberta` like I can with `patrickvonplaten/led-large-16384-pubmed` [here](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) right? I would have to fine tune/create my own `longfomer2roberta` trained on cnn daily, then fine tune further with my `train` data listed above right? If so, should I stay away from using a tokenizer/model that uses `roberta-base` and instead use `\"allenai/led-base-16384\"`(which I think uses BART as the base model)\r\n\r\nThank you for your feedback either way, I greatly appreciate it",
"Hey @mmoya01, you don't have to train it from scratch - you can \"warm-start\" the model from the pretrained checkpoints. This blog post gives an in-detail explanation on how to do so: https://huggingface.co/blog/warm-starting-encoder-decoder ",
"Hi @patrickvonplaten thank you for your reply and the blog post. I was following your [notebook](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) and trying to adapt it to the [longformer2roberta-cnn_dailymail-fp16](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16) work using my own `train_data` and `val_data`. wondering, how could I warm-start from `patrickvonplaten/longformer2roberta-cnn_dailymail-fp16`?\r\n\r\nI noticed I was able to do\r\n```python\r\nroberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained(\"allenai/longformer-base-4096\", \"roberta-base\")\r\n```\r\n\r\nBut I would love to do something like\r\n```python \r\nroberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained(\"patrickvonplaten/longformer2roberta-cnn_dailymail-fp16\")\r\n```\r\nor warm-start the `longformer2roberta-cnn_dailymail-fp16` checkpoint if possible rather than warm-start from `allenai/longformer-base-4096`? I'd greatly appreciate your feedback",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | I was wondering if there is any way to fine-tune the
`patrickvonplaten/longformer2roberta-cnn_dailymail-fp16` model instead of `patrickvonplaten/led-large-16384-pubmed`? When I tried fine-tuning it in the past I ran into the
`TypeError: forward() got an unexpected keyword argument 'head_mask'` issue, since `EncoderDecoderModel` was not intended for Longformer. So I'm now trying to see whether I can use `LEDForConditionalGeneration` for it, but I noticed that when I try the following:
```python
from transformers import LEDTokenizer, LEDForConditionalGeneration
article = """(CNN)James Holmes made his introduction to the world in a Colorado cinema filled with spectators watching a midnight showing of the new Batman movie, "The Dark Knight Rises," in June 2012. The moment became one of the deadliest shootings in U.S. history. Holmes is accused of opening fire on the crowd, killing 12 people and injuring or maiming 70 others in Aurora, a suburb of Denver. Holmes appeared like a comic book character: He resembled the Joker, with red-orange hair, similar to the late actor Heath Ledger\'s portrayal of the villain in an earlier Batman movie, authorities said. But Holmes was hardly a cartoon. Authorities said he wore body armor and carried several guns, including an AR-15 rifle, with lots of ammo. He also wore a gas mask. Holmes says he was insane at the time of the shootings, and that is his legal defense and court plea: not guilty by reason of insanity. Prosecutors aren\'t swayed and will seek the death penalty. Opening statements in his trial are scheduled to begin Monday. Holmes admits to the shootings but says he was suffering "a psychotic episode" at the time, according to court papers filed in July 2013 by the state public defenders, Daniel King and Tamara A. Brady. Evidence "revealed thus far in the case supports the defense\'s position that Mr. Holmes suffers from a severe mental illness and was in the throes of a psychotic episode when he committed the acts that resulted in the tragic loss of life and injuries sustained by moviegoers on July 20, 2012," the public defenders wrote. Holmes no longer looks like a dazed Joker, as he did in his first appearance before a judge in 2012. He appeared dramatically different in January when jury selection began for his trial: 9,000 potential jurors were summoned for duty, described as one of the nation\'s largest jury calls. Holmes now has a cleaner look, with a mustache, button-down shirt and khaki pants. In January, he had a beard and eyeglasses. If this new image sounds like one of an academician, it may be because Holmes, now 27, once was one. Just before the shooting, Holmes was a doctoral student in neuroscience, and he was studying how the brain works, with his schooling funded by a U.S. government grant. Yet for all his learning, Holmes apparently lacked the capacity to command his own mind, according to the case against him. A jury will ultimately decide Holmes\' fate. That panel is made up of 12 jurors and 12 alternates. They are 19 women and five men, and almost all are white and middle-aged. The trial could last until autumn. When jury summonses were issued in January, each potential juror stood a 0.2% chance of being selected, District Attorney George Brauchler told the final jury this month. He described the approaching trial as "four to five months of a horrible roller coaster through the worst haunted house you can imagine." The jury will have to render verdicts on each of the 165 counts against Holmes, including murder and attempted murder charges. Meanwhile, victims and their relatives are challenging all media outlets "to stop the gratuitous use of the name and likeness of mass killers, thereby depriving violent individuals the media celebrity and media spotlight they so crave," the No Notoriety group says. They are joined by victims from eight other mass shootings in recent U.S. history. 
Raised in central coastal California and in San Diego, James Eagan Holmes is the son of a mathematician father noted for his work at the FICO firm that provides credit scores and a registered nurse mother, according to the U-T San Diego newspaper. Holmes also has a sister, Chris, a musician, who\'s five years younger, the newspaper said. His childhood classmates remember him as a clean-cut, bespectacled boy with an "exemplary" character who "never gave any trouble, and never got in trouble himself," The Salinas Californian reported. His family then moved down the California coast, where Holmes grew up in the San Diego-area neighborhood of Rancho Peñasquitos, which a neighbor described as "kind of like Mayberry," the San Diego newspaper said. Holmes attended Westview High School, which says its school district sits in "a primarily middle- to upper-middle-income residential community." There, Holmes ran cross-country, played soccer and later worked at a biotechnology internship at the Salk Institute and Miramar College, which attracts academically talented students. By then, his peers described him as standoffish and a bit of a wiseacre, the San Diego newspaper said. Holmes attended college fairly close to home, in a neighboring area known as Southern California\'s "inland empire" because it\'s more than an hour\'s drive from the coast, in a warm, low-desert climate. He entered the University of California, Riverside, in 2006 as a scholarship student. In 2008 he was a summer camp counselor for disadvantaged children, age 7 to 14, at Camp Max Straus, run by Jewish Big Brothers Big Sisters of Los Angeles. He graduated from UC Riverside in 2010 with the highest honors and a bachelor\'s degree in neuroscience. "Academically, he was at the top of the top," Chancellor Timothy P. White said. He seemed destined for even higher achievement. By 2011, he had enrolled as a doctoral student in the neuroscience program at the University of Colorado Anschutz Medical Campus in Aurora, the largest academic health center in the Rocky Mountain region. The doctoral in neuroscience program attended by Holmes focuses on how the brain works, with an emphasis on processing of information, behavior, learning and memory. Holmes was one of six pre-thesis Ph.D. students in the program who were awarded a neuroscience training grant from the National Institutes of Health. The grant rewards outstanding neuroscientists who will make major contributions to neurobiology. A syllabus that listed Holmes as a student at the medical school shows he was to have delivered a presentation about microRNA biomarkers. But Holmes struggled, and his own mental health took an ominous turn. In March 2012, he told a classmate he wanted to kill people, and that he would do so "when his life was over," court documents said. Holmes was "denied access to the school after June 12, 2012, after he made threats to a professor," according to court documents. About that time, Holmes was a patient of University of Colorado psychiatrist Lynne Fenton. Fenton was so concerned about Holmes\' behavior that she mentioned it to her colleagues, saying he could be a danger to others, CNN affiliate KMGH-TV reported, citing sources with knowledge of the investigation. Fenton\'s concerns surfaced in early June, sources told the Denver station. Holmes began to fantasize about killing "a lot of people" in early June, nearly six weeks before the shootings, the station reported, citing unidentified sources familiar with the investigation. 
Holmes\' psychiatrist contacted several members of a "behavioral evaluation and threat assessment" team to say Holmes could be a danger to others, the station reported. At issue was whether to order Holmes held for 72 hours to be evaluated by mental health professionals, the station reported. "Fenton made initial phone calls about engaging the BETA team" in "the first 10 days" of June, but it "never came together" because in the period Fenton was having conversations with team members, Holmes began the process of dropping out of school, a source told KMGH. Defense attorneys have rejected the prosecution\'s assertions that Holmes was barred from campus. Citing statements from the university, Holmes\' attorneys have argued that his access was revoked because that\'s normal procedure when a student drops enrollment. What caused this turn for the worse for Holmes has yet to be clearly detailed. In the months before the shooting, he bought four weapons and more than 6,000 rounds of ammunition, authorities said. Police said he also booby-trapped his third-floor apartment with explosives, but police weren\'t fooled. After Holmes was caught in the cinema parking lot immediately after the shooting, bomb technicians went to the apartment and neutralized the explosives. No one was injured at the apartment building. Nine minutes before Holmes went into the movie theater, he called a University of Colorado switchboard, public defender Brady has said in court. The number he called can be used to get in contact with faculty members during off hours, Brady said. Court documents have also revealed that investigators have obtained text messages that Holmes exchanged with someone before the shooting. That person was not named, and the content of the texts has not been made public. According to The New York Times, Holmes sent a text message to a fellow graduate student, a woman, about two weeks before the shooting. She asked if he had left Aurora yet, reported the newspaper, which didn\'t identify her. No, he had two months left on his lease, Holmes wrote back, according to the Times. He asked if she had heard of "dysphoric mania," a form of bipolar disorder marked by the highs of mania and the dark and sometimes paranoid delusions of major depression. The woman asked if the disorder could be managed with treatment. "It was," Holmes wrote her, according to the Times. But he warned she should stay away from him "because I am bad news," the newspaper reported. It was her last contact with Holmes. After the shooting, Holmes\' family issued a brief statement: "Our hearts go out to those who were involved in this tragedy and to the families and friends of those involved," they said, without giving any information about their son. Since then, prosecutors have refused to offer a plea deal to Holmes. For Holmes, "justice is death," said Brauchler, the district attorney. In December, Holmes\' parents, who will be attending the trial, issued another statement: They asked that their son\'s life be spared and that he be sent to an institution for mentally ill people for the rest of his life, if he\'s found not guilty by reason of insanity. "He is not a monster," Robert and Arlene Holmes wrote, saying the death penalty is "morally wrong, especially when the condemned is mentally ill." "He is a human being gripped by a severe mental illness," the parents said. The matter will be settled by the jury. CNN\'s Ana Cabrera and Sara Weisfeldt contributed to this report from Denver."""
tokenizer = LEDTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LEDForConditionalGeneration.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16")#.to("cuda").half()
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
I get strange results for that pretrained model
```
considerations considerations considerations lag lag lag Sith Sith Sith miracle miracle miracle Sith Sith Metropolitan Metropolitan Metropolitan Sith SithHERHERHER miracle miracle Hurt Hurt Hurt miracle miracle Joey Joey Joey Sith Sith ticking ticking ticking memorial memorial memorial tee tee tee miracle miracle Holder Holder Holder miracle miracle raspberry raspberry raspberry Sith Sithamoamoamo Sith Sith dominate dominate dominate miracle miracleDashDashDash miracle miracle scored scored scored dominate dominate Sith Sith (* (* (* dominate dominate Joey Joey miracle miracle hide hide hide miracle miracle characteristics characteristics characteristics miracle miracletighttighttight raspberry raspberry hal hal halomeveromeveromever miracle miracle ticking ticking dominate dominate Metropolitan Metropolitan dominate dominate Dek dominate dominate AWS AWS AWS sentencing sentencing sentencingCasCasCas customer customer customer Joey Joey dominate dominatetighttight miracle miracle AWS
```
if I try using `LEDForConditionalGeneration` instead of `EncoderDecoderModel` for the `patrickvonplaten/longformer2roberta-cnn_dailymail-fp16` checkpoint. Is there something I'm missing? I'd greatly appreciate any feedback/help with this.
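For reference, a minimal sketch of the warm-starting approach discussed in the comments (building a fresh Longformer-encoder/RoBERTa-decoder model from pretrained checkpoints, rather than reloading the demo checkpoint with an LED class):
```python
from transformers import EncoderDecoderModel, LongformerTokenizer

# Warm-start a new encoder-decoder model from pretrained checkpoints
longformer2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-base-4096", "roberta-base"
)
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
```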
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9745/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9744/comments | https://api.github.com/repos/huggingface/transformers/issues/9744/events | https://github.com/huggingface/transformers/issues/9744 | 791,666,023 | MDU6SXNzdWU3OTE2NjYwMjM= | 9,744 | Wrong offsets mapping in XLMRobertaTokenizerFast | {
"login": "nikitakit",
"id": 252225,
"node_id": "MDQ6VXNlcjI1MjIyNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/252225?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikitakit",
"html_url": "https://github.com/nikitakit",
"followers_url": "https://api.github.com/users/nikitakit/followers",
"following_url": "https://api.github.com/users/nikitakit/following{/other_user}",
"gists_url": "https://api.github.com/users/nikitakit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikitakit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikitakit/subscriptions",
"organizations_url": "https://api.github.com/users/nikitakit/orgs",
"repos_url": "https://api.github.com/users/nikitakit/repos",
"events_url": "https://api.github.com/users/nikitakit/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikitakit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The wrong alignments are caused by the `Precompiled` normalizer in `tokenizers`. This will be fixed in version 0.10.1 of the library.",
"This is fixed for any version of `transformers>=4.3.0`"
] | 1,611 | 1,612 | 1,612 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-124-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@mfuntowicz @thomwolf
## Information
Model I am using (Bert, XLNet ...): XLMRobertaTokenizerFast
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import XLMRobertaTokenizerFast
tokenizer = XLMRobertaTokenizerFast.from_pretrained('xlm-roberta-large')
text = "?……"
tokenized = tokenizer(text, return_offsets_mapping=True)
print('Text:', text)
print('Tokens:', tokenizer.convert_ids_to_tokens(tokenized.input_ids))
print('Mapped to:', [text[start:end] for start, end in tokenized.offset_mapping])
```
Observed behavior:
```
Text: ?……
Tokens: ['<s>', '▁?', '......', '</s>']
Mapped to: ['', '?', '?', '']
```
Expected behavior:
```
Text: ?……
Tokens: ['<s>', '▁?', '......', '</s>']
Mapped to: ['', '?', '……', '']
```
## Expected behavior
I'm using XLM-R for Chinese text, and I would expect offset mappings to work correctly even in the presence of various Unicode punctuation symbols. It looks like XLM-R tokenizes "?……" as two tokens ('▁?' and '......'), which I would expect to map back to the appropriate locations in the input. Instead, the offset mapping from these tokens is identical.
This example is an ending of an actual sentence in the Chinese Treebank -- I removed the sentence itself because it doesn't matter for reproducing the bug.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9744/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9743/comments | https://api.github.com/repos/huggingface/transformers/issues/9743/events | https://github.com/huggingface/transformers/issues/9743 | 791,624,172 | MDU6SXNzdWU3OTE2MjQxNzI= | 9,743 | DistilGPT2 extremely strange model behaviour | {
"login": "Octopirate1",
"id": 35666310,
"node_id": "MDQ6VXNlcjM1NjY2MzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/35666310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Octopirate1",
"html_url": "https://github.com/Octopirate1",
"followers_url": "https://api.github.com/users/Octopirate1/followers",
"following_url": "https://api.github.com/users/Octopirate1/following{/other_user}",
"gists_url": "https://api.github.com/users/Octopirate1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Octopirate1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Octopirate1/subscriptions",
"organizations_url": "https://api.github.com/users/Octopirate1/orgs",
"repos_url": "https://api.github.com/users/Octopirate1/repos",
"events_url": "https://api.github.com/users/Octopirate1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Octopirate1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Extra info:\r\n\r\ncommand: ``python3 -m torch.distributed.launch \\\r\n --nproc_per_node=$N_GPU_NODE \\\r\n --nnodes=$N_NODES \\\r\n --node_rank $NODE_RANK \\\r\n train.py \\\r\n --force \\\r\n --fp16 \\\r\n --n_epoch 3 \\\r\n --checkpoint_interval 2000 \\\r\n --batch_size 12 \\\r\n --n_gpu $WORLD_SIZE \\\r\n --student_type gpt2 \\\r\n --student_config training_configs/distilgpt2.json \\\r\n --teacher_type gpt2 \\\r\n --teacher_name gpt2-large \\\r\n --freeze_pos_embs \\\r\n --dump_path /root/distil/ \\\r\n --data_file data/binarized_text.gpt2-large.pickle \\\r\n --token_counts data/token_counts.gpt2-large.pickle``\r\n\r\nNot using a custom tokenizer, just the gpt2-large tokenizer",
"When using the original `distilgpt2` checkpoint, can you generate coherent text?",
"> When using the original `distilgpt2` checkpoint, can you generate coherent text?\r\n\r\nYep, works perfectly. Maybe worth it to note that I was using 2x 3090 GPUs (I know how amperes can be sometimes)\r\n\r\nSince my dataset is made up of a lot of `\\n`, my first thought was that it might have just copied that to attempt to minimize loss, so I set the `lm_seq_dataset.py` to remove any lines under 4 tokens. It did change the string, but it makes absolutely no sense to me that it would learn to copy a string like this.\r\n\r\nIMO (and I haven't really had the time to look into this so take it with a whole chunk of salt) this is probably a training issue, since the phrase is different on different data and the generation works on the default model. If it's a training issue, my best guess is that it might be iterating over the same line? But this is flawed because the line contains `[Nov-Nov`, which is incorrect.\r\n\r\nSuper strange bug all in all.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-91-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0a0+1606899 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, 2
- Using distributed or parallel set-up in script?: Yes
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
examples/distillation: @VictorSanh
## Information
Model I am using (Bert, XLNet ...): DistilGPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X ] my own modified scripts: (give details below)
Had to slightly modify the official scripts to even get it to run - other than that, effectively no difference
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X ] my own task or dataset: (give details below)
Chat logs. In the format
```
[22-Jul-20 04:15 AM] User
Message
[22-Jul-20 04:15 AM] Another user
Message
[22-Jul-20 04:16 AM] Etc.
You get the point
```
## To reproduce
Steps to reproduce the behavior:
1. (Fix and) run example scripts for distillation with a custom GPT2 model.
When trying to generate, the model produces the same text with no change, regardless of whether ``pipeline`` or ``model.generate`` is used. If input texts are provided, the network outputs those texts immediately followed by the phrase. In the runs of this experiment, that phrase was the space character twice and ``[Nov-Nov-20] User`` once (after I tried removing lines that were under a certain token count).
## Expected behavior
The model should work as a normal transformer.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9743/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9742/comments | https://api.github.com/repos/huggingface/transformers/issues/9742/events | https://github.com/huggingface/transformers/issues/9742 | 791,601,431 | MDU6SXNzdWU3OTE2MDE0MzE= | 9,742 | --fp16 fine-tuning appears to be taking more memory (4.3.0). | {
"login": "PeterAJansen",
"id": 3813268,
"node_id": "MDQ6VXNlcjM4MTMyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3813268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeterAJansen",
"html_url": "https://github.com/PeterAJansen",
"followers_url": "https://api.github.com/users/PeterAJansen/followers",
"following_url": "https://api.github.com/users/PeterAJansen/following{/other_user}",
"gists_url": "https://api.github.com/users/PeterAJansen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeterAJansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeterAJansen/subscriptions",
"organizations_url": "https://api.github.com/users/PeterAJansen/orgs",
"repos_url": "https://api.github.com/users/PeterAJansen/repos",
"events_url": "https://api.github.com/users/PeterAJansen/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeterAJansen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Note that the memory as reported by nvidia-smi is not a perfectly reliable source (@stas00 could explain better than me why), the reliable metric is the batch size at which you get OOM. It is known on our side that FP16 does not save any memory for this particular script (I was investigating a general memory regression for this script that I fixed today and could see the memory being the same with or without FP16) but only for that script (memory is indeed saved on `run_glue` or other maintained examples). It seems to have been there for at least three months, so it's not a recent regression, I think it was always like this.\r\n\r\nDidn't have time to investigate the source yet. Maybe it's something in the seq2seq models (this is on T5 here, on my side I noticed the usage being the same for mBART) or something in the script itself. Nevertheless, it's an issue to tackle indeed. I wonder if it only appears in a distributed setting or one GPU already, should run some tests to check that tomorrow.",
"> Note that the memory as reported by nvidia-smi is not a perfectly reliable source\r\n\r\n## How to reliably use nvidia-smi/pynvml to measure memory used by your application.\r\n\r\n1. when you just started the program and do `torch.ones(1)` you will see your `nvidia-smi` reporting a usage of 0.5-1.5GB depending on the card - this is the memory that CUDA allocates for its kernels. So this is not the memory used by your application. Therefore in any memory benchmark I first do `torch.ones(1)` to force the kernel preloading. And then you can somewhat rely on `nvidia-smi`\r\n2. but it won't show you any cached by pytorch memory, so these reports can be quite meaningless. Therefore you have to call `gc.collect(); torch.cuda.empty_cache()` before you take a snapshot using `nvidia-smi`. `gc.collect()` forces garbage collection - it's not immediate in python when you deleted some variable or existed a function.\r\n3. it's totally unreliable if you have multiple processes using the same gpu\r\n\r\nIn general for memory benchmarking it's better to use pynvml, since it's easier to use it. But it has the exact same issues as `nvidia-smi`.\r\n\r\nIf you were to follow the above 3 rules exactly, then you can use `nvidia-smi`/`pynvml` to reliably measure memory. \r\n\r\nOtherwise, use torch.cuda memory functions which give you exact numbers in any situation. https://pytorch.org/docs/stable/cuda.html#memory-management",
"I have been seeing this fp16 behavior for many months, but we blamed it on my hardware. Since I have one old 1070 card and one new but not fully supported 3090. Waiting for cuda-11.2 support.\r\n\r\nDo you notice any difference if you use apex instead of the native amp?\r\n\r\nDeepSpeed implements their own fp16.\r\n\r\n> Less memory usage with --fp16 (should it be about half? suggested from https://github.com/huggingface/transformers/issues/8403#issuecomment-725562117\r\n\r\nWell, that was a huge bug in pytorch, related to autocast/fp16 but it has been fixed in pt-nightly and pt-1.7.1 - though I won't be surprised if there are still some bugs there - this is new in pytorch. That's why I suggest to test with apex.\r\n\r\nAlso you might want to try to disable `with autocast()`, perhaps you are hitting another caching bug - it will be slower since now fp16 will have to be reconverted many times, but you will be able to see if perhaps the memory overhead is due to caching still. \r\n\r\nIt's in 3 places, grep for `autocast`:\r\n```\r\nsrc/transformers/trainer.py: with autocast():\r\nsrc/transformers/trainer.py: with autocast():\r\nsrc/transformers/trainer_seq2seq.py: with autocast():\r\n```",
"Thanks both -- \r\n\r\n> I have been seeing this fp16 behavior for many months, but we blamed it on my hardware. Since I have one old 1070 card and one new but not fully supported 3090. Waiting for cuda-11.2 support.\r\n> \r\n> Do you notice any difference if you use apex instead of the native amp?\r\n\r\nThe original numbers (above) are with apex -- single-GPU --fp16 gives 15.0GB. It looks like if I uninstall nvidia-apex it reduces to 13.8GB. \r\n\r\n> > Less memory usage with --fp16 (should it be about half? suggested from [#8403 (comment)](https://github.com/huggingface/transformers/issues/8403#issuecomment-725562117)\r\n> \r\n> Well, that was a huge bug in pytorch, related to autocast/fp16 but it has been fixed in pt-nightly and pt-1.7.1 - though I won't be surprised if there are still some bugs there - this is new in pytorch. That's why I suggest to test with apex.\r\n> \r\n> Also you might want to try to disable `with autocast()`, perhaps you are hitting another caching bug - it will be slower since now fp16 will have to be reconverted many times, but you will be able to see if perhaps the memory overhead is due to caching still.\r\n> \r\n> It's in 3 places, grep for `autocast`:\r\n> \r\n> ```\r\n> src/transformers/trainer.py: with autocast():\r\n> src/transformers/trainer.py: with autocast():\r\n> src/transformers/trainer_seq2seq.py: with autocast():\r\n> ```\r\n\r\nIt looks like disabling 'with autocast()' in those 3 places also brings the single-GPU --fp16 memory on T5-Large from 15.0gb down to 13.8gb (the same as without the --fp16 option) -- so they're at parity in terms of memory in that case. \r\n\r\n> DeepSpeed implements their own fp16.\r\n\r\nJust for completeness I tried it with DeepSpeed, and in the single-GPU settling I'm seeing 10.8GB with CPU-offloading enabled (about 3gb savings), and ~21.5GB without offloading (significantly higher... I'm not sure I know enough about DeepSpeed to know whether that would be expected, or whether it may be a config error on my part). (EDIT: And DeepSpeed, CPU offload, with 4 GPUs, appears to use 12.1-12.4GB per GPU). \r\n\r\n\r\nOn my last (optional) bit in the original post -- just as a sanity check, any sense of whether the memory allocation vs sequence length looks as expected, or whether there should be much larger differences as sequence length increases/decreases? (I'm trying to get a sense of whether it's likely for me to fit T5-11b in these 4x40gb cards in the near term, or whether I'll need to wait for the full model parallelism/fp16/deep speed offloading, but these new models are new territory for me) ",
"Sleeping on it I would like to amend my first statement. The components on GPU memory are the following:\r\n- the model weights\r\n- the forward activations saved for gradient computation\r\n- the gradients\r\n- the optimizer state\r\n\r\nIf we look at what's happening with FP16 training (mixed precision) we have:\r\n- the model in full precision so no memory saved there\r\n- the forward activations saved for gradient computation are in mixed precision\r\n- the gradients are computed in mixed precision *but* converted to full precision for the update, so no saving there\r\n- the optimizer state is in full precision as all the updates are done in full precision\r\n\r\nSo the saving only happen for the forward activations saved for the backward computation, and there is a slight overhead because the gradients are properly stored both in half and full precision. (This is probably over-simplified but I think it's enough to explain what follows.)\r\n\r\nNow let's look at a simple text-classification fine-tuning on 2 GPUs (I'm giving the command for reference):\r\n```\r\nexport BS=16\r\npython -m torch.distributed.launch \\\r\n --nproc_per_node 2 examples/text-classification/run_glue.py \\\r\n --model_name_or_path bert-base-cased \\\r\n --task_name mrpc \\\r\n --do_train \\\r\n --do_eval \\\r\n --max_seq_length 128 \\\r\n --per_device_train_batch_size $BS \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir /tmp/mrpc \\\r\n --overwrite_output_dir \\\r\n --fp16\r\n```\r\nSince the only savings we get are in the model activations saved for the backward passed, it's logical that the bigger those activations are, the bigger the saving will be. If we try different batch sizes, I indeed get (this is with nvidia-smi so not completely reliable as said above but it will be a fair comparison):\r\n| batch size | without --fp16 | with --fp16 | FP16 savings |\r\n|:-:|:-:|:-:|:-:|\r\n| 8 | 4247 | 4163 | 84 |\r\n| 16 | 4971 | 4793 | 178 |\r\n| 32 | 6827 | 6207 | 620 |\r\n| 64 | 10037 | 8061 | 1976 |\r\n\r\nSo there is only a real memory saving if we train at a high batch size (and it's not half) and at batch sizes lower than 8, you actually get a bigger memory footprint (because of the overhead mentioned above). The gain for FP16 training is that in each of those cases, the training with the flag `--fp16` is twice as fast, which does require every tensor to have every dimension be a multiple of 8 (so if your batch size is not a multiple of 8, you won't get that speed-up, and the script `finetune_trainer.py` does not pad the tensors to a sequence length that is a multiple of 8).\r\n\r\nTL;DR: FP16 with apex or AMP will only give you some memory savings with a reasonably high batch size.",
"Ahhh, very helpful, thanks. So right now --fp16 is mostly for speed, and since most things are stored in full precision, there are essentially no expected memory savings at BS=1. \r\n\r\nIs storing everything at fp16 in the long term plans, or are there technical reasons why that's not a good idea? (e.g. significant degradation in task performance?) ",
"You can't do the full training in FP16, it would not converge, which is why there is this mixed precision approach. Maybe DeepSpeed integrates further optimizations and helps to save more memory.",
"@sgugger, this is such a clear and well done foray into fp16 performance, let's not get it lost in the sea of Issues. \r\n\r\nI was thinking we should start a new doc `performance.md` (markdown pretty please) where we discuss each of these issues. And you have just written a perfect entry on fp16.",
"> The original numbers (above) are with apex -- single-GPU --fp16 gives 15.0GB. It looks like if I uninstall nvidia-apex it reduces to 13.8GB.\r\n\r\nAs you are on pt-1.7.1 it will always use the native amp, unless you specifically request to use `apex` with `--fp16_backend apex`\r\n \r\n> > Also you might want to try to disable `with autocast()`, perhaps you are hitting another caching bug - it will be slower since now fp16 will have to be reconverted many times, but you will be able to see if perhaps the memory overhead is due to caching still.\r\n\r\nMy apologies, this wasn't a good suggestion at all since it basically disabled the mixed-precision. I was trying to think how to disable just the caching to rule any leaks there, but I don't think there is a way. \r\n\r\nIf you want to read about autocast caching, this comment from its creator is excellent:\r\nhttps://discuss.pytorch.org/t/autocast-and-torch-no-grad-unexpected-behaviour/93475/3\r\n\r\n> On my last (optional) bit in the original post -- just as a sanity check, any sense of whether the memory allocation vs sequence length looks as expected, or whether there should be much larger differences as sequence length increases/decreases? (I'm trying to get a sense of whether it's likely for me to fit T5-11b in these 4x40gb cards in the near term, or whether I'll need to wait for the full model parallelism/fp16/deep speed offloading, but these new models are new territory for me)\r\n\r\nI'd have been nice to know when either DeepSpeed or fairscale get a chance to release ZeRO stage 3 support, but I'm not sure how to find it out - perhaps ask them if they have some possible projections? That would be the most desired news since then you will probably be able to fit a 45GB model over 4x40GB.\r\n\r\nI think it'd be great to calculate all the different components and their memory requirements - then we can do the math easier. That is calculating how many bytes each component takes and how many of those we need.\r\n\r\nOtherwise, I hope to have some Pipeline Parallelism working soon and perhaps we could try it on your 4x rig. \r\n\r\nAlso has anybody attempted to distill t5-11b? If you could shave off some weight from it w/o losing much quality, perhaps it'd have been much easier to fit.\r\n",
"> I was thinking we should start a new doc performance.md (markdown pretty please) where we discuss each of these issues. And you have just written a perfect entry on fp16.\r\n\r\nI agree, and there is the table in the text-classification example that summarizes the speed gains. I have no time to do this this week, so if you want to go ahead and start a PR, feel free to do so!",
"Done: https://github.com/huggingface/transformers/issues/9824",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-5.4.0-62-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: 4x A100-SXM4-40GB
- Using distributed or parallel set-up in script?: Yes
### Who can help
@alexorona @stas00 @sgugger
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] my own modified scripts: (give details below)
Official trainer, optionally modified by adding "model.parallelize()" after loading. (Results shown with and without).
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Regular seq2seq on data.
Run script:
```
export BS=1; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1,2,3 ./finetune_trainer.py --model_name_or_path t5-large --output_dir output_dir --adam_eps 1e-06 --data_dir xsum --fp16 \
--do_train --learning_rate 3e-5 \
--logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 \
--overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--warmup_steps 5 \
```
## Brief summary
1. When fine-tuning T5, I'm observing that memory usage increases when using --fp16, though not by as much as previously reported in #8403.
2. (Optional) Possibly related: I'm trying to squeeze T5-11B into 4x 40GB A100s using model parallelism. I seemed to be able to do it yesterday on 4.1.1 with a sequence length of 128, and I remember observing a fairly moderate sequence-length vs. memory-usage dependence (as expected from the comment here ( https://github.com/huggingface/transformers/issues/8771#issuecomment-764058315 ), though I'm not an expert and I'm not sure whether that increase only applies beyond 512 tokens, or whether what I saw yesterday was a fluke/error on my part somewhere). Today on a fresh env/pull I'm not observing this dependence (though I'm not sure why -- it might be my issue -- the data is reported at the bottom).
## To reproduce
Steps to reproduce the behavior:
1. 4.3.0, runscript as above, run with and without --fp16 option. Different model sizes (and with/without model.parallelize() added, since I wasn't sure if that was the issue)
## Data
Below are three cases of memory usage with/without --fp16:
1. with model.parallelize()
2. without model.parallelize() (but GPUs still visible -- extra info, I thought it was interesting it still takes up extra memory on the other GPUs)
3. without model.parallelize() (only 1 GPU visible)
-
```
*** WITH MODEL.PARALLELIZE() ***
t5-3b, --max_source_length 128 --max_target_length 128
WITHOUT --fp16
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 28C P0 77W / 400W | 13598MiB / 40537MiB | 33% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 29C P0 80W / 400W | 12874MiB / 40537MiB | 25% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 24C P0 81W / 400W | 12874MiB / 40537MiB | 4% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 25C P0 80W / 400W | 12874MiB / 40537MiB | 23% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
WITH --fp16: (takes more memory)
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 27C P0 108W / 400W | 15138MiB / 40537MiB | 6% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 28C P0 99W / 400W | 14214MiB / 40537MiB | 9% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 23C P0 85W / 400W | 14214MiB / 40537MiB | 12% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 25C P0 92W / 400W | 14216MiB / 40537MiB | 11% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
*** WITHOUT MODEL.PARALLELIZE, but all GPUs still visible ( CUDA_VISIBLE_DEVICES=0,1,2,3 ) ***
t5-large, --max_source_length 128 --max_target_length 128
WITHOUT --fp16
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 28C P0 93W / 400W | 20362MiB / 40537MiB | 1% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 27C P0 78W / 400W | 6046MiB / 40537MiB | 3% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 22C P0 78W / 400W | 6046MiB / 40537MiB | 3% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 24C P0 79W / 400W | 6022MiB / 40537MiB | 7% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
t5-large, --max_source_length 128 --max_target_length 128
WITH --fp16
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 28C P0 91W / 400W | 20318MiB / 40537MiB | 2% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 27C P0 80W / 400W | 7304MiB / 40537MiB | 4% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 23C P0 78W / 400W | 7304MiB / 40537MiB | 5% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 24C P0 79W / 400W | 7280MiB / 40537MiB | 5% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
*** WITHOUT MODEL.PARALLELIZE, ONLY 1 GPU VISIBLE ( CUDA_VISIBLE_DEVICES=0 ) ***
t5-large, --max_source_length 128 --max_target_length 128
WITHOUT --fp16
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 29C P0 101W / 400W | 13790MiB / 40537MiB | 32% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 26C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 21C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 23C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
t5-large, --max_source_length 128 --max_target_length 128
WITH --fp16 (more memory)
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 A100-SXM4-40GB On | 00000000:05:00.0 Off | 0 |
| N/A 28C P0 101W / 400W | 15012MiB / 40537MiB | 42% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 A100-SXM4-40GB On | 00000000:06:00.0 Off | 0 |
| N/A 26C P0 70W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
| N/A 21C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 A100-SXM4-40GB On | 00000000:08:00.0 Off | 0 |
| N/A 23C P0 71W / 400W | 0MiB / 40537MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
```
With regard to the sequence length vs. memory dependence, in a model-parallel setting I am observing today:
(varying --max_source_length and --max_target_length)
| model | seq length | gpu0 | gpu1 | gpu2 | gpu3 |
|:-:|:-:|:-:|:-:|:-:|:-:|
| t5-large | 32 | 5.7GB | 4.7GB | 4.7GB | 4.7GB |
| t5-large | 64 | 5.7GB | 4.7GB | 4.7GB | 4.7GB |
| t5-large | 128 | 5.8GB | 4.8GB | 4.8GB | 4.8GB |
| t5-large | 512 | 6.0GB | 5.2GB | 5.2GB | 5.2GB |
| t5-3b | 64 | 15.2GB | 14.3GB | 14.3GB | 14.3GB |
| t5-3b | 128 | 15.2GB | 14.3GB | 14.3GB | 14.3GB |
| t5-3b | 256 | 15.5GB | 14.7GB | 14.7GB | 14.7GB |
| t5-3b | 512 | 16.2GB | 15.2GB | 15.2GB | 15.2GB |
Essentially very minimal change in RAM requirements vs sequence length. Though perhaps I have misconfigured something here.
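For what it's worth, a sketch of how I could cross-check these numbers with torch.cuda peak statistics instead of nvidia-smi (not run for the table above yet; nvidia-smi also counts the CUDA context, so the two won't match exactly):
```
import torch

def report_peak_memory():
    # peak memory actually allocated by tensors on each visible GPU
    for i in range(torch.cuda.device_count()):
        peak_gb = torch.cuda.max_memory_allocated(i) / 2 ** 30
        print(f"cuda:{i} peak allocated: {peak_gb:.2f} GB")

for i in range(torch.cuda.device_count()):
    torch.cuda.reset_peak_memory_stats(i)
# ... run a few training steps here ...
report_peak_memory()
```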
## Expected behavior
1. Less memory usage with --fp16 (should it be about half? suggested from https://github.com/huggingface/transformers/issues/8403#issuecomment-725562117 )
2. (Optional) Nominally, smaller sequence length models taking up significantly less memory?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9742/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9741/comments | https://api.github.com/repos/huggingface/transformers/issues/9741/events | https://github.com/huggingface/transformers/pull/9741 | 791,595,977 | MDExOlB1bGxSZXF1ZXN0NTU5NjI4ODg5 | 9,741 | examples: fix XNLI url | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for fixing this!"
] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | Hi,
this PR fixes the URL for the XNLI dataset in the text classification example.
(/cc @sleepinyourhat :hugs: ) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9741/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9741/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9741",
"html_url": "https://github.com/huggingface/transformers/pull/9741",
"diff_url": "https://github.com/huggingface/transformers/pull/9741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9741.patch",
"merged_at": 1611319433000
} |
https://api.github.com/repos/huggingface/transformers/issues/9740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9740/comments | https://api.github.com/repos/huggingface/transformers/issues/9740/events | https://github.com/huggingface/transformers/issues/9740 | 791,531,295 | MDU6SXNzdWU3OTE1MzEyOTU= | 9,740 | RAG Model without DPR | {
"login": "krishanudb",
"id": 11831343,
"node_id": "MDQ6VXNlcjExODMxMzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/11831343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishanudb",
"html_url": "https://github.com/krishanudb",
"followers_url": "https://api.github.com/users/krishanudb/followers",
"following_url": "https://api.github.com/users/krishanudb/following{/other_user}",
"gists_url": "https://api.github.com/users/krishanudb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishanudb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishanudb/subscriptions",
"organizations_url": "https://api.github.com/users/krishanudb/orgs",
"repos_url": "https://api.github.com/users/krishanudb/repos",
"events_url": "https://api.github.com/users/krishanudb/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishanudb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead? Feel free to tag @patrickvonplaten or @lhoestq on the forum, they'll probably be able to answer your quesitons :).\r\n\r\nThanks!"
] | 1,611 | 1,611 | 1,611 | NONE | null | Hello everyone,
I am interested in studying how the RAG generator answers questions without the DPR retriever, using passages/contexts identified by other methods instead.
For example, in the code below:
```
from transformers import RagRetriever
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
retriever = RagRetriever.from_pretrained('./rag-token-nq', indexed_dataset=dataset)
tokenizer = RagTokenizer.from_pretrained("./rag-token-nq")
model = RagTokenForGeneration.from_pretrained("./rag-token-nq", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt")
input_ids = input_dict["input_ids"]
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
generated_ids = model.generate(input_ids=input_ids, labels=input_dict["labels"])
generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_string)
```
In the line
```input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt") ```
, I want to use "How many people live in Paris?" as the question and "In Paris, there are 10 million people." as the passage/context that should be used to generate the answer.
Kindly let me know how to do this.
Is my understanding of the code correct, and if not, how should I go about it?
Thanks,
Krishanu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9740/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9739/comments | https://api.github.com/repos/huggingface/transformers/issues/9739/events | https://github.com/huggingface/transformers/issues/9739 | 791,459,025 | MDU6SXNzdWU3OTE0NTkwMjU= | 9,739 | Error using TFAutoModelForSequenceClassification with Tensorflow 2.2.0 | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jplu could answer here",
"Hello!\n\nThe last version os Transformers needs TensorFlow 2.3 as the min version.",
"Got it. I switched to Pytorch for testing it, so I have not faced the same issue. Out of curiousity, if I wanted to run huggingface transformers on TF 2.2.0, what version of transformers do I need to use? Thanks!",
"Not more than 4.1",
"Thanks."
] | 1,611 | 1,611 | 1,611 | NONE | null | Hello.
I am trying to implement `TFAutoModelForSequenceClassification` in my code following the example for sequence classification as shown [here](https://huggingface.co/transformers/task_summary.html#sequence-classification)
The code is as follows:
```
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")
classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="tf")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="tf")
paraphrase_classification_logits = model(paraphrase)[0]
not_paraphrase_classification_logits = model(not_paraphrase)[0]
paraphrase_results = tf.nn.softmax(paraphrase_classification_logits, axis=1).numpy()[0]
not_paraphrase_results = tf.nn.softmax(not_paraphrase_classification_logits, axis=1).numpy()[0]
# Should be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
# Should not be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
```
Here is the error readout:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-20-c48d70c01597> in <module>
2 import tensorflow as tf
3 tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
----> 4 model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")
5 classes = ["not paraphrase", "is paraphrase"]
6 sequence_0 = "The company HuggingFace is based in New York City"
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1189
1190 if type(config) in TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING.keys():
-> 1191 return TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING[type(config)].from_pretrained(
1192 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1193 )
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1216
1217 # Instantiate model.
-> 1218 model = cls(config, *model_args, **model_kwargs)
1219
1220 if from_pt:
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, *inputs, **kwargs)
1369
1370 self.num_labels = config.num_labels
-> 1371 self.bert = TFBertMainLayer(config, name="bert")
1372 self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
1373 self.classifier = tf.keras.layers.Dense(
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/modeling_tf_utils.py in wrapped_init(self, *args, **kwargs)
105 elif isinstance(config, PretrainedConfig):
106 if len(args) > 0:
--> 107 initializer(self, *args, **kwargs)
108 else:
109 initializer(self, config, *args, **kwargs)
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, add_pooling_layer, **kwargs)
590 self.return_dict = config.use_return_dict
591 self.embeddings = TFBertEmbeddings(config, name="embeddings")
--> 592 self.encoder = TFBertEncoder(config, name="encoder")
593 self.pooler = TFBertPooler(config, name="pooler") if add_pooling_layer else None
594
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, **kwargs)
430 super().__init__(**kwargs)
431
--> 432 self.layer = [TFBertLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)]
433
434 def call(
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in <listcomp>(.0)
430 super().__init__(**kwargs)
431
--> 432 self.layer = [TFBertLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)]
433
434 def call(
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, **kwargs)
410 super().__init__(**kwargs)
411
--> 412 self.attention = TFBertAttention(config, name="attention")
413 self.intermediate = TFBertIntermediate(config, name="intermediate")
414 self.bert_output = TFBertOutput(config, name="output")
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, **kwargs)
344 super().__init__(**kwargs)
345
--> 346 self.self_attention = TFBertSelfAttention(config, name="self")
347 self.dense_output = TFBertSelfOutput(config, name="output")
348
~/.conda/envs/myenv/lib/python3.8/site-packages/transformers-4.2.2-py3.8.egg/transformers/models/bert/modeling_tf_bert.py in __init__(self, config, **kwargs)
254 self.num_attention_heads = config.num_attention_heads
255 self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
--> 256 self.query = tf.keras.layers.experimental.EinsumDense(
257 equation="abc,cde->abde",
258 output_shape=(None, config.num_attention_heads, self.attention_head_size),
AttributeError: module 'tensorflow.keras.layers.experimental' has no attribute 'EinsumDense'
```
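For completeness, isolating the failing attribute lookup outside of transformers (same container, to confirm it is the TF build rather than my code):
```
import tensorflow as tf

print(tf.__version__)  # 2.2.0 in this container
# the attribute the traceback complains about:
print(hasattr(tf.keras.layers.experimental, "EinsumDense"))  # False here
```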
I cannot seem to find a lot of information on solving this on Google. Any ideas? I am using a dockerized version of tensorflow 2.2.0 with Jupyter. This is a fresh install of transformers.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9739/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9738/comments | https://api.github.com/repos/huggingface/transformers/issues/9738/events | https://github.com/huggingface/transformers/pull/9738 | 791,392,486 | MDExOlB1bGxSZXF1ZXN0NTU5NDU5MzY5 | 9,738 | [fsmt] onnx triu workaround | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm a bit worried that this fix will blow up the memory for very long sequences, *e.g.* applying this to a max_length of 16384 (which we already have for LED) would create a 16384 ** 2 = 1GB tensor",
"Wouldn't this work-around be better in terms of memory? https://github.com/pytorch/pytorch/issues/32968#issuecomment-733240232",
"Glad you have the deep understanding of this one, @patrickvonplaten \r\n\r\nwrt, https://github.com/pytorch/pytorch/issues/32968#issuecomment-733240232 - I tried it originally and while onnx was happy, everything else failed with it.\r\n\r\nPerhaps it can be re-written not to use `triu`? some suggest using `np.triu`",
"> Wouldn't this work-around be better in terms of memory? [pytorch/pytorch#32968 (comment)](https://github.com/pytorch/pytorch/issues/32968#issuecomment-733240232)\r\n\r\nI managed to fix this more efficient version to work with `-inf`. It's in the PR now.",
"> Did you run the FSMT integration tests to ensure that it doesn't diverge?\r\n\r\nYes, the function produces the same output as `triu` for the inputs we use.",
"This workaround seems to only work for squared-sized tensors, while `torch.triu` works for any shape (PyTorch 1.9.0). For example:\r\n\r\n```\r\na = torch.randn(8, 4)\r\ntriu_onnx(a)\r\n```\r\nraises an `RuntimeError: The size of tensor a (8) must match the size of tensor b (4) at non-singleton dimension 1`.\r\n\r\nIs there any additional workaround when working with non-squared shape tensors?\r\n\r\nThank you and best regards,\r\nGustavo.",
"Indeed, the workaround was written for that sort of inputs.\r\n\r\nAre you running into this problem with `transformers`? Could you please file a new issue then including the way to reproduce the problem?\r\n\r\nBut I also see that perhaps we can switch back to pytorch implementation as reported here:\r\nhttps://github.com/pytorch/pytorch/issues/32968#issuecomment-827054124\r\nit's called `trilu`. I think it should be in pt-1.9.0.\r\n\r\nhttps://github.com/onnx/onnx/blob/29e7aa7048809784465d06e897f043a4600642b2/docs/Operators.md#Trilu\r\n\r\nWould you like to experiment with it and see if it solves the problem? If it works you may consider creating a PR that switches to that version instead and we can work together on polishing out the details (as we need to support older pytorch as well).\r\n\r\nTo clarify: what I'm trying to propose is to try pytorch-1.9.0 and use its built-in `triu` and see if it now works. One way you could test is by reverting my original PR that introduced the workaround and see if it just works. alternatively, if you have the right know-how you can write some test code that tests torch's `triu` directly with onnx export.\r\n",
"update: \r\n\r\n> 'triu' support added in PT-ONNX exporter in opset14 https://github.com/pytorch/pytorch/pull/59486 \r\n\r\nwhich I suppose should be available in pytorch-1.10 when it comes out as it was merged on july-14. So once it's released we could re-do this workaround and fallback to it for pt<1.10."
] | 1,611 | 1,634 | 1,611 | CONTRIBUTOR | null | This PR
* solves
```
RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator.
```
as reported in https://github.com/huggingface/transformers/issues/9737.
It adds a workaround for `triu` not being supported by pytorch's ONNX opset, as proposed here: https://github.com/pytorch/pytorch/issues/32968#issuecomment-733240232, with some modifications to make it work with transformers. The original workaround couldn't handle a matrix with `-inf`.
* adds an onnx export test
The workaround fix is localized to fsmt, but transformers has a handful of those:
```
src/transformers/pipelines/question_answering.py: candidates = np.tril(np.triu(outer), max_answer_len - 1)
src/transformers/models/fsmt/modeling_fsmt.py: causal_mask = torch.triu(fill_with_neg_inf(torch.zeros(tgt_len, tgt_len)), 1).to(
src/transformers/models/transfo_xl/modeling_transfo_xl.py: dec_attn_mask = (torch.triu(all_ones, 1 + mlen) + torch.tril(all_ones, -mask_shift_len))[:, :, None] # -1
src/transformers/models/transfo_xl/modeling_transfo_xl.py: dec_attn_mask = torch.triu(word_emb.new_ones((qlen, klen), dtype=torch.uint8), diagonal=1 + mlen)[
src/transformers/models/xlnet/modeling_xlnet.py: mask_up = torch.triu(attn_mask, diagonal=1)
src/transformers/models/ctrl/modeling_ctrl.py: mask = torch.triu(torch.ones(seq_len + past_length, seq_len + past_length), 1).to(inputs_embeds.device)
src/transformers/models/prophetnet/modeling_prophetnet.py: left_block[stream_idx].triu_(-stream_idx + 1)
src/transformers/models/prophetnet/modeling_prophetnet.py: causal_mask = torch.triu(causal_mask, 1)
```
Perhaps the workaround wrapper should be somewhere in the common tools and used in other places?
Or merge this to let the user who needed it in the first place move forward, and then refactor the other occurrences later. But actually, if you look at https://github.com/pytorch/pytorch/issues/32968, many of the me-too comments talk about `transformers`.
https://github.com/pytorch/pytorch/issues/32968 proposes other solutions too. I tried them and either they didn't work, or were inefficient as spotted by @patrickvonplaten
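For illustration, here is a minimal sketch of this comparison-based approach (not the exact function added in this PR -- see the diff for that -- but it shows why `-inf` values are fine, since `torch.where` selects entries instead of multiplying by a mask):
```
import torch

def triu_onnx_sketch(x, diagonal=0):
    # same semantics as torch.triu on a 2D tensor, built only from
    # arange/comparison/where, which the ONNX exporter does support
    rows = torch.arange(x.shape[0], device=x.device).unsqueeze(-1)
    cols = torch.arange(x.shape[1], device=x.device)
    return torch.where((cols - rows) >= diagonal, x, torch.zeros_like(x))

m = torch.full((4, 4), float("-inf"))
assert torch.equal(torch.triu(m, 1), triu_onnx_sketch(m, 1))
```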
Fixes: https://github.com/huggingface/transformers/issues/9737
@LysandreJik, @mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9738/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9738",
"html_url": "https://github.com/huggingface/transformers/pull/9738",
"diff_url": "https://github.com/huggingface/transformers/pull/9738.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9738.patch",
"merged_at": 1611583057000
} |
https://api.github.com/repos/huggingface/transformers/issues/9737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9737/comments | https://api.github.com/repos/huggingface/transformers/issues/9737/events | https://github.com/huggingface/transformers/issues/9737 | 791,380,873 | MDU6SXNzdWU3OTEzODA4NzM= | 9,737 | [fsmt] Exporting the operator triu to ONNX opset version 12 is not supported | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2357479466,
"node_id": "MDU6TGFiZWwyMzU3NDc5NDY2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/fsmt",
"name": "fsmt",
"color": "d0e884",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Applied a workaround in https://github.com/huggingface/transformers/pull/9738"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | As a follow up to https://github.com/huggingface/transformers/issues/9722
```
import torch
import transformers
from transformers import convert_graph_to_onnx
from pathlib import Path
convert_graph_to_onnx.convert(
framework="pt",
model="facebook/wmt19-en-de",
output=Path("encoder/en_de_trans.onnx"),
opset=12,
tokenizer="facebook/wmt19-en-de",
use_external_format= False,
pipeline_name= "translation_en_to_de",
)
```
after applying the fix from https://github.com/huggingface/transformers/pull/9736
it then crashes with:
```
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/transformers-master/porting/onnx2.py", line 26, in <module>
convert_graph_to_onnx.convert(
File "/hf/transformers-master/src/transformers/convert_graph_to_onnx.py", line 367, in convert
convert_pytorch(nlp, opset, output, use_external_format)
File "/hf/transformers-master/src/transformers/convert_graph_to_onnx.py", line 279, in convert_pytorch
export(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/__init__.py", line 271, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 86, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 671, in _export
_model_to_graph(model, args, verbose, input_names,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 450, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 204, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/__init__.py", line 309, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 970, in _run_symbolic_function
symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/utils.py", line 927, in _find_symbolic_in_registry
return sym_registry.get_registered_op(op_name, domain, opset_version)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/onnx/symbolic_registry.py", line 112, in get_registered_op
raise RuntimeError(msg)
RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator.
Process finished with exit code 1
```
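For reference, the operator limitation reproduces standalone, outside of transformers (same error message), e.g.:
```
import torch

class UsesTriu(torch.nn.Module):
    def forward(self, x):
        return torch.triu(x, 1)

# raises: RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported
torch.onnx.export(UsesTriu(), torch.zeros(4, 4), "triu.onnx", opset_version=12)
```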
Need to look at workarounds proposed at https://github.com/pytorch/pytorch/issues/32968 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9737/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9736/comments | https://api.github.com/repos/huggingface/transformers/issues/9736/events | https://github.com/huggingface/transformers/pull/9736 | 791,378,964 | MDExOlB1bGxSZXF1ZXN0NTU5NDQ4NDEx | 9,736 | [fsmt] token_type_ids isn't used | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2357479466,
"node_id": "MDU6TGFiZWwyMzU3NDc5NDY2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/fsmt",
"name": "fsmt",
"color": "d0e884",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR fixes a bug discovered in https://github.com/huggingface/transformers/issues/9722
`token_type_ids` was returned by default by the tokenizer, but it isn't used by the model.
The original fsmt port was a frankenstein of bart for the model and xlm for the tokenizer, hence the discrepancy.
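A quick way to see the effect of the fix (sketch only; exact keys depend on the tokenizer config):
```
from transformers import FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")
encoded = tokenizer("Machine learning is great", return_tensors="pt")
# with this PR the encoding should only carry what the model consumes:
print(encoded.keys())  # expected: input_ids and attention_mask, no token_type_ids
```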
I thought the common tests already covered onnx export, but that doesn't seem to be the case. I added a local test in the related PR: https://github.com/huggingface/transformers/pull/9738
With this fix `convert_graph_to_onnx.convert` still fails with:
```
RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator
```
but that's a totally different issue. Fixed in https://github.com/huggingface/transformers/pull/9738
Fixes: https://github.com/huggingface/transformers/issues/9722
@LysandreJik, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9736/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9736",
"html_url": "https://github.com/huggingface/transformers/pull/9736",
"diff_url": "https://github.com/huggingface/transformers/pull/9736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9736.patch",
"merged_at": 1611376734000
} |
https://api.github.com/repos/huggingface/transformers/issues/9735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9735/comments | https://api.github.com/repos/huggingface/transformers/issues/9735/events | https://github.com/huggingface/transformers/pull/9735 | 791,342,737 | MDExOlB1bGxSZXF1ZXN0NTU5NDE4OTA3 | 9,735 | Add `report_to` training arguments to control the integrations used | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR introduces a new `report_to` training argument that controls which of the multiple reporting tools to use in a training round. Currently, `Trainer` automatically uses everything installed, which can cause trouble when:
- one platform is installed but not properly set up.
- one platform is installed but the user doesn't want to use it today.
In my opinion the current behavior is too magical and does not fit our philosophy. To avoid any breaking change, the current default for this `report_to` argument is to use everything installed, but I would like to switch this to an empty list at the next major release, so the user has to opt in to the platforms they want to use.
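A usage sketch (the values are just an example; the argument takes a list of integration names):
```
from transformers import TrainingArguments

# opt in to a single platform instead of "everything installed"
args = TrainingArguments(output_dir="test-run", report_to=["wandb"])

# or explicitly disable all reporting integrations
args = TrainingArguments(output_dir="test-run", report_to=[])
```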
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9735/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9735",
"html_url": "https://github.com/huggingface/transformers/pull/9735",
"diff_url": "https://github.com/huggingface/transformers/pull/9735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9735.patch",
"merged_at": 1611329674000
} |
https://api.github.com/repos/huggingface/transformers/issues/9734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9734/comments | https://api.github.com/repos/huggingface/transformers/issues/9734/events | https://github.com/huggingface/transformers/pull/9734 | 791,308,599 | MDExOlB1bGxSZXF1ZXN0NTU5MzkwOTUx | 9,734 | Fixes to run_seq2seq and instructions | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR fixes two issues in the new `run_seq2seq` script and adds instructions on how to run it. The fixes are:
- default `val_max_target_length` to `max_target_length` (I had forgotten to do this in the initial PR)
- add an optional prefix to the source text (for T5 models) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9734",
"html_url": "https://github.com/huggingface/transformers/pull/9734",
"diff_url": "https://github.com/huggingface/transformers/pull/9734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9734.patch",
"merged_at": 1611327838000
} |
https://api.github.com/repos/huggingface/transformers/issues/9733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9733/comments | https://api.github.com/repos/huggingface/transformers/issues/9733/events | https://github.com/huggingface/transformers/pull/9733 | 791,301,672 | MDExOlB1bGxSZXF1ZXN0NTU5Mzg1MjEx | 9,733 | [WIP] Small improvement in shape manipulation in t5, makes exporting | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
(torchscript + ONNX) easier because shape inference is not static.
- Still requires a test that would make sure there's no regression there.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9733/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9733",
"html_url": "https://github.com/huggingface/transformers/pull/9733",
"diff_url": "https://github.com/huggingface/transformers/pull/9733.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9733.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9732/comments | https://api.github.com/repos/huggingface/transformers/issues/9732/events | https://github.com/huggingface/transformers/issues/9732 | 791,225,529 | MDU6SXNzdWU3OTEyMjU1Mjk= | 9,732 | OSError: [Errno 116] Stale file handle | {
"login": "jimkim3",
"id": 57313992,
"node_id": "MDQ6VXNlcjU3MzEzOTky",
"avatar_url": "https://avatars.githubusercontent.com/u/57313992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimkim3",
"html_url": "https://github.com/jimkim3",
"followers_url": "https://api.github.com/users/jimkim3/followers",
"following_url": "https://api.github.com/users/jimkim3/following{/other_user}",
"gists_url": "https://api.github.com/users/jimkim3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimkim3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimkim3/subscriptions",
"organizations_url": "https://api.github.com/users/jimkim3/orgs",
"repos_url": "https://api.github.com/users/jimkim3/repos",
"events_url": "https://api.github.com/users/jimkim3/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimkim3/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This probably seems more related to your system rather than `Trainer` or `finetune_trainer.py` ",
"thanks, but I just do not see why this happens, any idea is appreciated\n\nOn Thu, Jan 21, 2021 at 5:01 PM Suraj Patil <[email protected]>\nwrote:\n\n> This probably seems more related to your system rather than Trainer or\n> finetune_trainer.py\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/9732#issuecomment-764746041>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ANVIVSDVFWQUXCYIYLAOA23S3BFVFANCNFSM4WNBWFSQ>\n> .\n>\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,611 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: - only 1 GPU
### Who can help
Trainer: @sgugger
examples/seq2seq: @patil-suraj
## Information
Hi, I am training finetune_trainer.py on the WMT dataset. I sometimes get the following error; do you have any idea what might cause it? Thanks for any suggestions.
```
File "finetune_trainer.py", line 342, in <module>
main()
File "finetune_trainer.py", line 256, in main
if (os.path.isdir(training_args.output_dir) and not training_args.optimize_from_scratch) else None,
File "/home/jim/trainer.py", line 814, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/jim/trainer.py", line 885, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/jim/trainer.py", line 916, in _save_checkpoint
torch.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
File "/home/jim/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/serialization.py", line 374, in save
_legacy_save(obj, opened_file, pickle_module, pickle_protocol)
File "/home/jim/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/serialization.py", line 214, in __exit__
self.file_like.close()
OSError: [Errno 116] Stale file handle
```
Model I am using: T5.
## To reproduce
this is not happening all the time, but it does happen
## Expected behavior
to run the code
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9732/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9731/comments | https://api.github.com/repos/huggingface/transformers/issues/9731/events | https://github.com/huggingface/transformers/issues/9731 | 791,119,669 | MDU6SXNzdWU3OTExMTk2Njk= | 9,731 | Mismatch of the mask token id of BART between fairseq and huggingface | {
"login": "twadada",
"id": 26453664,
"node_id": "MDQ6VXNlcjI2NDUzNjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26453664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/twadada",
"html_url": "https://github.com/twadada",
"followers_url": "https://api.github.com/users/twadada/followers",
"following_url": "https://api.github.com/users/twadada/following{/other_user}",
"gists_url": "https://api.github.com/users/twadada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/twadada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/twadada/subscriptions",
"organizations_url": "https://api.github.com/users/twadada/orgs",
"repos_url": "https://api.github.com/users/twadada/repos",
"events_url": "https://api.github.com/users/twadada/events{/privacy}",
"received_events_url": "https://api.github.com/users/twadada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"hi @twadada \r\n\r\n> Somehow the huggingface model has a smaller vocab size, and 51200 is out of index\r\n\r\n50,265 is the actual vocab size, the rest of the tokens are dummy tokens as you can see in this issue pytorch/fairseq#2242\r\n\r\nSo I don't think 51200 can be the `mask_token_id`",
"Hi @patil-suraj \r\n\r\nThank you for your quick reply!\r\n\r\n> 50,265 is the actual vocab size, the rest of the tokens are dummy tokens as you can see in this issue pytorch/fairseq#2242\r\n\r\nYes, I am aware of that. But, the embedding of the mask token in huggingface-BART is exactly the same as that of the dummy token \"madeupword0003\" in torch.hub-BART, as confirmed in the following line\r\n```\r\nassert bart.task.source_dictionary.indices[\"madeupword0003\"] == tokenizer.mask_token_id\r\nassert all(bart.model.encoder.embed_tokens.weight[tokenizer.mask_token_id] == model.model.encoder.embed_tokens.weight[tokenizer.mask_token_id])\r\n```\r\nHere, \"bart\" and \"model\" are the torch.hub and huggingface models, resp. \r\n\r\nAnd this embedding looks very similar to that of the other dummy tokens, and the \\<mask> token embedding in torch.hub-BART looks more accurate.\r\n\r\n```\r\nfor dummy in [\"madeupword0003\", \"madeupword0030\",\"madeupword0130\", \"madeupword0230\", \"<mask>\"]:\r\n tokenid = bart.task.source_dictionary.indices[dummy]\r\n emb_mean = bart.model.encoder.embed_tokens.weight[tokenid].mean().data\r\n emb_norm = bart.model.encoder.embed_tokens.weight[tokenid].norm().data\r\n print(emb_mean, emb_norm)\r\n\r\n# tensor(-0.0083) tensor(0.9653) madeupword0003 (= huggingface mask embedding)\r\n# tensor(-0.0087) tensor(0.9633) madeupword0030\r\n# tensor(-0.0085) tensor(0.9688) madeupword0130\r\n# tensor(-0.0084) tensor(0.9645) madeupword0230\r\n# tensor(-0.0010) tensor(1.6455) torch.hub <mask> embedding\r\n\r\n# torch.hub <mask> embedding is similar to that of other frequent words in terms of the norm.\r\n\r\nfor word in [\".\", \",\" , \"Ġthe\"]:\r\n tokenid = tokenizer.convert_tokens_to_ids(word)\r\n emb_mean = bart.model.encoder.embed_tokens.weight[tokenid].mean().data\r\n emb_norm = bart.model.encoder.embed_tokens.weight[tokenid].norm().data\r\n print(emb_mean, emb_norm)\r\n\r\n# tensor(0.0391) tensor(1.6430)\r\n# tensor(0.0512) tensor(1.8719)\r\n# tensor(0.0483) tensor(1.8041)\r\n```\r\n\r\n\r\n",
"Hi, any update on this? I've also tried loading BART from fairseq repository and confirmed that the model is identical to the one at torch.hub. Given that the original model is at fairseq, I assume there would be something wrong with the huggingface model. \r\n\r\nI think that registering a wrong embedding for the mask token is rather a serious bug and it needs fixing asap.",
"This issue has been stale for 1 month.",
"Has it been fixed?",
"I checked the HF Bart checkpoint and I agree with @twadada that the mismatch is a potential bug. Seems the HF model directly uses the first 50264 embeddings from fairseq. It should be the first 50263 embeddings concatenated with the last embedding (the embedding of `<mask>`). \r\n\r\nBut I guess it's not a severe bug unless `<mask>` token exists in the input text. ",
"@zhaochaocs Thanks for confirming that. I think it can be a critical bug when you use BART as a masked language model, such as when you use it as a mask-filling model (e.g. a cloze test \r\nfor probing), or when you fine-tune it on additional monolingual data using the MLM objective.\r\n",
"@twadada Yeah, I entirely agree. I tried some prompt ideas in the last project and found BART performed not as well as other PTMs even after post-training. Maybe the mismatch of the mask token is one of the reasons.\r\n\r\nHope HF can fix that someday, or we can temporarily replace the embedding of `<mask>` with the fairseq parameter.",
"Yea, we've gotta replace it with the correct embedding (assuming the fairseq parameter would be the correct one). Honestly, it was a little disappointing that this issue was sort of ignored and closed. Not sure whether this closed issue will draw attention from HF anymore.",
"Seems this issue was automatically closed by the Github bot. Maybe @patil-suraj or @patrickvonplaten can help us re-check if the embedding of `<mask>` token in BART should be fixed.",
"Leaving it to @patil-suraj as he has already looked at the PR, but I'm not so sure whether we have the wrong <mask> token simple because this example works quite well:\r\n\r\n```python\r\nfrom transformers import BartForConditionalGeneration, BartTokenizer\r\nmodel = BartForConditionalGeneration.from_pretrained(\"facebook/bart-large\", force_bos_token_to_be_generated=True)\r\ntok = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\nexample_english_phrase = \"UN Chief Says There Is No <mask> in Syria\"\r\nbatch = tok(example_english_phrase, return_tensors='pt')\r\ngenerated_ids = model.generate(batch['input_ids'])\r\nassert tok.batch_decode(generated_ids, skip_special_tokens=True) == ['UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria']\r\n```",
"Taken from: https://huggingface.co/transformers/model_doc/bart.html#mask-filling",
"I think that's why this potential bug is overlooked. \r\n\r\nThe example is a good demo but not a reliable test case for this issue. If you replace the `<mask>` token of `example_english_phrase` as a random token (say `refin`), the generator will return the same sentence as using the `<mask>` token. In my machine it returns `UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria`\r\n\r\n",
"@patil-suraj I think we actually indeed use the wrong token for <mask> as can be verified by running the following code:\r\n\r\n```python \r\nimport torch\r\nfrom transformers import BartModel, BartTokenizer\r\n\r\n# fsq bart base\r\n\r\nbart = torch.hub.load('pytorch/fairseq', 'bart.base')\r\nmask_token_id = bart.task.source_dictionary.indices[\"<mask>\"]\r\n\r\nmask_token_weight_fairseq = bart.model.encoder.embed_tokens.weight[mask_token_id].detach()\r\n\r\n## Hf bart-base\r\n\r\nhf_tok = BartTokenizer.from_pretrained(\"facebook/bart-base\")\r\nmask_token_id_hf = hf_tok.mask_token_id\r\n\r\nhf_model = BartModel.from_pretrained(\"facebook/bart-base\")\r\nmask_token_weight_hf = hf_model.encoder.embed_tokens.weight[mask_token_id_hf].detach()\r\n\r\n(mask_token_weight_hf - mask_token_weight_fairseq).abs().max() # => gives value > 1.0\r\n```\r\n\r\n=> @zhaochaocs I'm afraid however that we can't do anything about it really...the original fairseq model weigths have a length of 51201 where as the HF model weights have only 50265 -> so we can't even access the \"real\" mask_token_id in HF. We could go into the official model weights and change the mask_token weights to the correct values, but this could lead to some ugly backward compatibility problems. And given how much bart is used in HF I'm not really in favor of doing this...The mask token is also only really relevant for pretraining and the mask-filling task IMO whereas for pretraining it's initialized from scratch anyways so this bug doesn't affect many use cases.\r\n\r\nKeen to hear your opinion here @patil-suraj \r\n\r\nAlso gently pinging @sshleifer - do you remember why we have different embedding shapes in HF vs. Fairseq?\r\n",
"@patrickvonplaten \r\n> given how much bart is used in HF I'm not really in favor of doing this...\r\n\r\nI don't think it is a good idea to leave the bug unfixed just because many people have already used it. Rather, I believe it should be the reason for the bug to be fixed asap.\r\n\r\n> The mask token is also only really relevant for pretraining and the mask-filling task\r\n\r\nI think it can be a critical bug because nowadays more people are using cloze tests\r\nto probe the linguistic knowledge stored in LM models. Some people also use MLM models for lexical substitution. It can also be a problem when you fine-tune BART on additional monolingual data using the MLM objective (only the mask token embedding is trained from scratch, with a small learning rate used for fine-tuning). ",
"Thanks a lot, @zhaochaocs and @twadada for bringing this to our attention :) \r\n\r\n@patrickvonplaten I tend to agree with what @twadada said. Leaving this unfixed might cause issues when fine-tuning BART for MLM and this is a very sneaky bug so would be hard to detect.\r\n\r\n\r\nIMO we could update the weights of the official models since BART is primarily used for downstream tasks mostly summarization and zero-shot classification which does not involve the mask token, so it wouldn't cause any issues for such models. Models which are already fine-tuned won't also be affected since we will only update the official pre-trained weights.\r\n\r\nThis should only probably break the mask-filling task, but will actually give better results as the current mask embeddings are incorrect. ",
"Here is a temporary solution.\r\nI replace HF's <mask> embedding with fairseq's <mask> embedding. \r\nHere is the model\r\nhttps://huggingface.co/liangtaiwan/bart-base-correct-mask-embedding\r\n\r\nYou can verify the new weight is corrected by the following script.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import BartModel, BartTokenizer\r\n\r\n# fsq bart=base\r\n\r\nbart = torch.hub.load('pytorch/fairseq', 'bart.base')\r\nmask_token_id = bart.task.source_dictionary.indices[\"<mask>\"]\r\n\r\nmask_token_weight_fairseq = bart.model.encoder.embed_tokens.weight[mask_token_id].detach()\r\n\r\n# my bart-base\r\n\r\nhf_tok = BartTokenizer.from_pretrained(\"liangtaiwan/bart-base-correct-mask-embedding\")\r\nmask_token_id_hf = hf_tok.mask_token_id\r\nhf_model = BartModel.from_pretrained(\"liangtaiwan/bart-base-correct-mask-embedding\")\r\n\r\nmask_token_weight_hf = hf_model.encoder.embed_tokens.weight[mask_token_id_hf]\r\nassert torch.equal(mask_token_weight_hf - mask_token_weight_fairseq)\r\n\r\n# HF bart-base\r\nhf_original_model = BartModel.from_pretrained(\"facebook/bart-base\")\r\n\r\nhf_original_model_state_dict = hf_original_model.state_dict()\r\nhf_model_state_dict = hf_model.state_dict()\r\nembeddings = [\"shared.weight\", \"encoder.embed_tokens.weight\", \"decoder.embed_tokens.weight\"]\r\n\r\n# check weight \r\nfor k in hf_model_state_dict.keys():\r\n if k in embeddings:\r\n continue\r\n assert torch.equal(hf_model_state_dict[k], hf_original_model_state_dict[k])\r\n\r\n# check embedding\r\nfor k in embeddings:\r\n assert torch.equal(hf_model_state_dict[k][:-1], hf_original_model_state_dict[k][:-1])\r\n```\r\n\r\nHowever, I did some prompt language model experiments. The results are almost identical. The result of HF's one is even better sometimes. \r\n",
"> Thanks a lot, @zhaochaocs and @twadada for bringing this to our attention :)\r\n> \r\n> @patrickvonplaten I tend to agree with what @twadada said. Leaving this unfixed might cause issues when fine-tuning BART for MLM and this is a very sneaky bug so would be hard to detect.\r\n> \r\n> IMO we could update the weights of the official models since BART is primarily used for downstream tasks mostly summarization and zero-shot classification which does not involve the mask token, so it wouldn't cause any issues for such models. Models which are already fine-tuned won't also be affected since we will only update the official pre-trained weights.\r\n> \r\n> This should only probably break the mask-filling task, but will actually give better results as the current mask embeddings are incorrect.\r\n\r\nOk - I think I'm happy to go forward with this solution. Actually would be nice to get another opinion here. @sgugger, @LysandreJik - would it be ok for you to updated existing pre-trained checkpoints?",
"I'm okay with updating the weights in a new commit since this fixes issues, as people can still revert to the previous commit if they really need to.",
"@patil-suraj - let me know if you want to handle it or if I should do it (I have some time tomorrow or next week if you're very busy ;-)) ",
"Ok for me too!",
"Great! I will update the weights today :) ",
"I've updated the weights for all 3 checkpoints pt, tf, flax https://huggingface.co/facebook/bart-base/tree/main\r\n\r\nthis issue is only associated with `bart-base`, `bart-large` does not have this problem, so no need to change the weights there :)",
"read the thread. i am the one who pushed the bug. the fix sounds good!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Closing this issue now."
] | 1,611 | 1,635 | 1,635 | NONE | null | ## 🐛 Bug
The mask token id of BART is different between fairseq (torch.hub) and huggingface, and this discrepancy leads to different results in mask_filling. So I wonder which token id is actually correct.
(After checking the norm of the embedding at each mask token id, I feel that torch.hub might be correct. I have posted the same issue on the fairseq GitHub and am waiting for a reply.)
### To Reproduce
#### Code sample
```
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base", force_bos_token_to_be_generated=True)
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
assert tokenizer.mask_token_id == 50264
example_english_phrase = "<mask> cat is <mask>."
batch = tokenizer(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'],return_dict_in_generate = True, num_beams=10, num_return_sequences=1, output_scores = True)
print(" ".join(tokenizer.convert_ids_to_tokens(generated_ids[0][0])))
# </s> <s> This Ġcat Ġis Ġadorable . </s>
import torch
bart = torch.hub.load('pytorch/fairseq', 'bart.base')
bart.eval()
assert bart.task.source_dictionary.indices["<mask>"] == 51200
assert bart.task.source_dictionary.indices["madeupword0003"] == tokenizer.mask_token_id
# Somehow the huggingface model has a smaller vocab size, and 51200 is out of index
assert len(model.model.encoder.embed_tokens.weight) == 50265
assert len(bart.model.encoder.embed_tokens.weight) == 51201
# But the embedding at tokenizer.mask_token_id is the same between the two models
assert all(bart.model.encoder.embed_tokens.weight[tokenizer.mask_token_id] == model.model.encoder.embed_tokens.weight[tokenizer.mask_token_id])
def fill_mask(
model,
masked_inputs,
topk = 1,
match_source_len = False,
masked_token = '<mask>',
**generate_kwargs
):
batch_tokens = []
for masked_input in masked_inputs:
assert masked_token in masked_input, \
"please add one {} token for the input".format(masked_token)
text_spans = masked_input.split(masked_token)
text_spans_bpe = (' {0} '.format(masked_token)).join(
[model.bpe.encode(text_span.rstrip()) for text_span in text_spans]
).strip()
tokens = model.task.source_dictionary.encode_line(
'<s> ' + text_spans_bpe + ' </s>',
append_eos=False,
add_if_not_exist=False,
).long()
batch_tokens.append(tokens)
generate_kwargs['beam'] = max(
topk,
generate_kwargs.get('beam', -1),
)
generate_kwargs['match_source_len'] = match_source_len
batch_hypos = model.generate(batch_tokens, **generate_kwargs)
return batch_hypos
masked_inputs=[example_english_phrase]
generate_kwargs = {}
generate_kwargs['beam'] = 10
generate_kwargs['match_source_len'] = False
batch_hypos = fill_mask(bart,masked_inputs, **generate_kwargs)
print(" ".join(tokenizer.convert_ids_to_tokens(batch_hypos[0][0]["tokens"])))
# <s> The Ġcat Ġis Ġdead . </s>
#### replace <mask> with madeupword0003 ####
example_english_phrase = "madeupword0003 cat is madeupword0003."
masked_inputs=[example_english_phrase]
batch_hypos = fill_mask(bart,masked_inputs, masked_token = "madeupword0003", **generate_kwargs)
print(" ".join(tokenizer.convert_ids_to_tokens(batch_hypos[0][0]["tokens"])))
# <s> This Ġcat Ġis Ġadorable . </s>
```
### Environment
- PyTorch Version: 1.5.1+cu101
- OS (e.g., Linux): Linux
- Python version: 3.6.10
- transformers version: 4.2.1
- CUDA version: 10.1
### Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9731/timeline | completed | null | null |