url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/12344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12344/comments | https://api.github.com/repos/huggingface/transformers/issues/12344/events | https://github.com/huggingface/transformers/pull/12344 | 929,565,854 | MDExOlB1bGxSZXF1ZXN0Njc3Mzk3NDI2 | 12,344 | Update run_mlm.py | {
"login": "TahaAslani",
"id": 47432410,
"node_id": "MDQ6VXNlcjQ3NDMyNDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/47432410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TahaAslani",
"html_url": "https://github.com/TahaAslani",
"followers_url": "https://api.github.com/users/TahaAslani/followers",
"following_url": "https://api.github.com/users/TahaAslani/following{/other_user}",
"gists_url": "https://api.github.com/users/TahaAslani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TahaAslani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TahaAslani/subscriptions",
"organizations_url": "https://api.github.com/users/TahaAslani/orgs",
"repos_url": "https://api.github.com/users/TahaAslani/repos",
"events_url": "https://api.github.com/users/TahaAslani/events{/privacy}",
"received_events_url": "https://api.github.com/users/TahaAslani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Anytime! Thanks for accepting it!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | Previously, the code could not be used for validation only, because of this line:
extension = data_args.train_file.split(".")[-1]
which assumed that the extension must be extracted from the training dataset. This line ran regardless of the user's training or validation options, so it raised an error when the user wanted to run evaluation only, without training (because the training file does not exist in that case). I modified it to extract the extension from the training file when the user wants to train, and from the validation file when the user wants to run evaluation. This way the code can be used for training and validation separately.
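A minimal sketch of that logic (hedged: the names follow run_mlm.py's `data_args`/`training_args` conventions, and the actual merged diff may differ in detail):
```
# Sketch of the fix described above: choose which file's extension to read
# based on whether training was requested. Names follow run_mlm.py's
# data_args/training_args; the actual merged diff may differ.
if training_args.do_train:
    extension = data_args.train_file.split(".")[-1]
else:
    extension = data_args.validation_file.split(".")[-1]
```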
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12344/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12344",
"html_url": "https://github.com/huggingface/transformers/pull/12344",
"diff_url": "https://github.com/huggingface/transformers/pull/12344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12344.patch",
"merged_at": 1624880962000
} |
https://api.github.com/repos/huggingface/transformers/issues/12343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12343/comments | https://api.github.com/repos/huggingface/transformers/issues/12343/events | https://github.com/huggingface/transformers/pull/12343 | 929,155,016 | MDExOlB1bGxSZXF1ZXN0Njc3MDQ5MTM4 | 12,343 | [trainer] fix label smoothing for default compute_loss | {
"login": "kolakows",
"id": 34172905,
"node_id": "MDQ6VXNlcjM0MTcyOTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/34172905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolakows",
"html_url": "https://github.com/kolakows",
"followers_url": "https://api.github.com/users/kolakows/followers",
"following_url": "https://api.github.com/users/kolakows/following{/other_user}",
"gists_url": "https://api.github.com/users/kolakows/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolakows/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolakows/subscriptions",
"organizations_url": "https://api.github.com/users/kolakows/orgs",
"repos_url": "https://api.github.com/users/kolakows/repos",
"events_url": "https://api.github.com/users/kolakows/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolakows/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, if you leave the labels in the outputs, the model will then compute the loss without label smoothing which is inefficient (since it will re-compute the proper loss afterwards).",
"Oh you are right, I guess I was overeager with the PR, sorry.\r\n\r\nAfter looking more into the way arguments are passed to forward, I'll just need to shift tokens in 'labels' and add them to inputs as 'decoder_input_ids' (my confusion was from the fact that, for the model I'm using it is done automatically, but inside forward)\r\n\r\nThanks for the quick answer!",
"No worries! And let us know if there is a problem with the generation of `decoder_input_ids` for Pegasus as we would need to fix it. :-)"
] | 1,624 | 1,624 | 1,624 | NONE | null | # What does this PR do?
Keeps 'labels' in the inputs that are passed to the model.
Without this change, the model I'm using (PegasusForConditionalGeneration) can't calculate the loss and generate outputs. The change assumes that all other models also need 'labels' in their inputs.
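As the review comments above conclude, the intended alternative is to build `decoder_input_ids` by shifting the labels right rather than leaving `labels` in the inputs. A minimal sketch of that approach (the helper lives in `transformers.models.pegasus.modeling_pegasus` in recent versions; the token ids below are made up for illustration):
```
import torch
from transformers.models.pegasus.modeling_pegasus import shift_tokens_right

pad_token_id, decoder_start_token_id = 0, 0  # Pegasus uses pad as decoder start
labels = torch.tensor([[42, 17, 5, pad_token_id]])  # made-up ids for illustration

# Right shift: prepend the decoder start token and drop the last position,
# so the decoder learns to predict each label from the previous ones.
decoder_input_ids = shift_tokens_right(labels, pad_token_id, decoder_start_token_id)
```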
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Who can review?
trainer: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12343/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12343",
"html_url": "https://github.com/huggingface/transformers/pull/12343",
"diff_url": "https://github.com/huggingface/transformers/pull/12343.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12343.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12342/comments | https://api.github.com/repos/huggingface/transformers/issues/12342/events | https://github.com/huggingface/transformers/pull/12342 | 929,092,108 | MDExOlB1bGxSZXF1ZXN0Njc2OTk2NTg5 | 12,342 | Add flax/jax quickstart | {
"login": "marcvanzee",
"id": 180100,
"node_id": "MDQ6VXNlcjE4MDEwMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/180100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcvanzee",
"html_url": "https://github.com/marcvanzee",
"followers_url": "https://api.github.com/users/marcvanzee/followers",
"following_url": "https://api.github.com/users/marcvanzee/following{/other_user}",
"gists_url": "https://api.github.com/users/marcvanzee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcvanzee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcvanzee/subscriptions",
"organizations_url": "https://api.github.com/users/marcvanzee/orgs",
"repos_url": "https://api.github.com/users/marcvanzee/repos",
"events_url": "https://api.github.com/users/marcvanzee/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcvanzee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12342/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12342",
"html_url": "https://github.com/huggingface/transformers/pull/12342",
"diff_url": "https://github.com/huggingface/transformers/pull/12342.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12342.patch",
"merged_at": 1624550658000
} |
https://api.github.com/repos/huggingface/transformers/issues/12341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12341/comments | https://api.github.com/repos/huggingface/transformers/issues/12341/events | https://github.com/huggingface/transformers/pull/12341 | 929,086,780 | MDExOlB1bGxSZXF1ZXN0Njc2OTkyMDEz | 12,341 | [examples/Flax] move the examples table up | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
Move the examples table up in the README. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12341/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12341",
"html_url": "https://github.com/huggingface/transformers/pull/12341",
"diff_url": "https://github.com/huggingface/transformers/pull/12341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12341.patch",
"merged_at": 1624530817000
} |
https://api.github.com/repos/huggingface/transformers/issues/12340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12340/comments | https://api.github.com/repos/huggingface/transformers/issues/12340/events | https://github.com/huggingface/transformers/pull/12340 | 929,086,206 | MDExOlB1bGxSZXF1ZXN0Njc2OTkxNTEz | 12,340 | [Flax] Move up examples | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing - I was too slow :-/"
] | 1,624 | 1,651 | 1,624 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Move up examples for better visibility. @marcvanzee @avital
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12340/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12340",
"html_url": "https://github.com/huggingface/transformers/pull/12340",
"diff_url": "https://github.com/huggingface/transformers/pull/12340.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12340.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12339/comments | https://api.github.com/repos/huggingface/transformers/issues/12339/events | https://github.com/huggingface/transformers/issues/12339 | 929,072,405 | MDU6SXNzdWU5MjkwNzI0MDU= | 12,339 | How to get offset mapping when decoding wav2vec? | {
"login": "hadaev8",
"id": 20247085,
"node_id": "MDQ6VXNlcjIwMjQ3MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/20247085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadaev8",
"html_url": "https://github.com/hadaev8",
"followers_url": "https://api.github.com/users/hadaev8/followers",
"following_url": "https://api.github.com/users/hadaev8/following{/other_user}",
"gists_url": "https://api.github.com/users/hadaev8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadaev8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadaev8/subscriptions",
"organizations_url": "https://api.github.com/users/hadaev8/orgs",
"repos_url": "https://api.github.com/users/hadaev8/repos",
"events_url": "https://api.github.com/users/hadaev8/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadaev8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12339/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12338/comments | https://api.github.com/repos/huggingface/transformers/issues/12338/events | https://github.com/huggingface/transformers/pull/12338 | 928,948,927 | MDExOlB1bGxSZXF1ZXN0Njc2ODc1ODEy | 12,338 | [ray] try fixing import error | {
"login": "richardliaw",
"id": 4529381,
"node_id": "MDQ6VXNlcjQ1MjkzODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richardliaw",
"html_url": "https://github.com/richardliaw",
"followers_url": "https://api.github.com/users/richardliaw/followers",
"following_url": "https://api.github.com/users/richardliaw/following{/other_user}",
"gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions",
"organizations_url": "https://api.github.com/users/richardliaw/orgs",
"repos_url": "https://api.github.com/users/richardliaw/repos",
"events_url": "https://api.github.com/users/richardliaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/richardliaw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | COLLABORATOR | null | # What does this PR do?
Addresses a tabulate import error for Ray Tune integration.
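The diff itself is not shown in this body, so purely for illustration, here is one common pattern for guarding an optional dependency like `tabulate` so that importing the integration module cannot fail (hypothetical; not necessarily this PR's actual change):
```
# Hypothetical illustration, not this PR's actual diff: guard an optional
# dependency so the integration module can always be imported.
import importlib.util

if importlib.util.find_spec("tabulate") is not None:
    from tabulate import tabulate
else:
    tabulate = None  # callers must check for None before formatting tables
```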
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12338/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12338",
"html_url": "https://github.com/huggingface/transformers/pull/12338",
"diff_url": "https://github.com/huggingface/transformers/pull/12338.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12338.patch",
"merged_at": 1624522398000
} |
https://api.github.com/repos/huggingface/transformers/issues/12337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12337/comments | https://api.github.com/repos/huggingface/transformers/issues/12337/events | https://github.com/huggingface/transformers/issues/12337 | 928,907,417 | MDU6SXNzdWU5Mjg5MDc0MTc= | 12,337 | ValueError: expected sequence of length 133 at dim 1 (got 80) | {
"login": "keloemma",
"id": 40454218,
"node_id": "MDQ6VXNlcjQwNDU0MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keloemma",
"html_url": "https://github.com/keloemma",
"followers_url": "https://api.github.com/users/keloemma/followers",
"following_url": "https://api.github.com/users/keloemma/following{/other_user}",
"gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keloemma/subscriptions",
"organizations_url": "https://api.github.com/users/keloemma/orgs",
"repos_url": "https://api.github.com/users/keloemma/repos",
"events_url": "https://api.github.com/users/keloemma/events{/privacy}",
"received_events_url": "https://api.github.com/users/keloemma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.0
- Platform:
- Python version:
- PyTorch version (GPU?): 1.5.0
- Tensorflow version (GPU?):
- Using GPU in script?: non
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
Library:
- tokenizers: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
Models:
albert, bert, xlm: @LysandreJik
Library:
tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): FlauBert
The problem arises when using:
* [x] my own modified scripts: (give details below)
```
input_ids = []
attention_masks = []
for sent in texte:
    encoded_sent = flaubert_tokenizer.encode_plus(sent, add_special_tokens=True, truncation=True, padding=True, return_attention_mask=True)
    # Add the outputs to the lists
    input_ids.append(encoded_sent.get('input_ids'))
    attention_masks.append(encoded_sent.get('attention_mask'))
# Convert lists to tensors
print("len", len(input_ids))
input_ids = torch.tensor(input_ids)
attention_mask = torch.tensor(attention_masks)
hidden_state = flaubert(input_ids=input_ids, attention_mask=attention_mask)
# Extract the last hidden state of the token `[CLS]` for classification task
last_hidden_state_cls = outputs[0][:, 0, :]
print(last_hidden_state_cls)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below) extracting the first hidden state/embeddings produced by the model and giving them to a classical classifier (SVM)
## To reproduce
Steps to reproduce the behavior:
1. install transformers, pandas, numpy and torch (1.5.0 or others )
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
stacktrace error :
```
---Filename in processed................ corpus_ix_originel_FMC_train
etiquette : [2 1 0]
Embeddings bert model used.................... : small_cased
Some weights of the model checkpoint at flaubert/flaubert_small_cased were not used when initializing FlaubertModel: ['pred_layer.proj.weight', 'pred_layer.proj.bias']
- This IS expected if you are initializing FlaubertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing FlaubertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
<class 'numpy.ndarray'>
len 34
Traceback (most recent call last):
File "/16uw/test/expe_5/train/test.py", line 63, in <module>
main()
File "/16uw/test/expe_5/train/test.py", line 46, in main
dic_acc, dic_report, dic_cm, s = cross_validation(data_train, data_label_train, models_list, name, language_model_dir)
File "/16uw/test/expe_5/train/../traitements/processin_test.py", line 197, in cross_validation
features, s = get_flaubert_layer(features, lge_model)
File "16uw/test/expe_5/train/../traitements/processin_test.py", line 107, in get_flaubert_layer
input_ids = torch.tensor(input_ids)
ValueError: expected sequence of length 133 at dim 1 (got 80)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect to get the input_ids and attention_mask so that I can pass them to the model and obtain the CLS embedding.
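For context, the error occurs because `encode_plus` is called on one sentence at a time, so `padding=True` cannot pad across sentences; the resulting per-sentence lists have different lengths, which `torch.tensor` rejects. A minimal sketch of a fix, assuming the same `flaubert_tokenizer` and sentence list `texte` from the script above (batch encoding pads everything to a single length):
```
# Sketch (assumes `flaubert_tokenizer` and the sentence list `texte` above):
# encode the whole batch at once so every row is padded to the same length.
encoded = flaubert_tokenizer(
    texte,
    add_special_tokens=True,
    truncation=True,
    padding=True,          # pads to the longest sequence in the batch
    return_tensors="pt",   # rectangular torch tensors, no manual conversion
)
input_ids = encoded["input_ids"]
attention_mask = encoded["attention_mask"]
```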
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12337/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12336/comments | https://api.github.com/repos/huggingface/transformers/issues/12336/events | https://github.com/huggingface/transformers/pull/12336 | 928,862,237 | MDExOlB1bGxSZXF1ZXN0Njc2ODAyNjY1 | 12,336 | Fix torchscript tests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | With the new non-persistent buffers, the TorchScript tests fail. This PR updates the TorchScript tests to allow for non-persistent buffers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12336/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12336",
"html_url": "https://github.com/huggingface/transformers/pull/12336",
"diff_url": "https://github.com/huggingface/transformers/pull/12336.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12336.patch",
"merged_at": 1624542748000
} |
https://api.github.com/repos/huggingface/transformers/issues/12335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12335/comments | https://api.github.com/repos/huggingface/transformers/issues/12335/events | https://github.com/huggingface/transformers/pull/12335 | 928,807,677 | MDExOlB1bGxSZXF1ZXN0Njc2NzU2NTY4 | 12,335 | [WIP] FNet | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am still working on this PR in a branch and will create another PR when it's somewhat ready."
] | 1,624 | 1,628 | 1,628 | CONTRIBUTOR | null | This PR adds FNet in PyTorch.
- Paper: https://arxiv.org/pdf/2105.03824v2.pdf
- Code and Checkpoints: https://github.com/google-research/google-research/tree/master/f_net
- Authors: @jtainslie @ilyaeck @santiaontanon-google
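For readers skimming this WIP thread, a minimal sketch of the paper's core token-mixing idea, as shown below (drawn from the linked paper; not this PR's actual implementation):
```
import torch

def fourier_mixing(hidden_states: torch.Tensor) -> torch.Tensor:
    """FNet's parameter-free mixing sublayer as described in the paper:
    a 2D Fourier transform over the hidden and sequence dimensions,
    keeping only the real part of the result."""
    # hidden_states: (batch, seq_len, hidden_size)
    return torch.fft.fft(torch.fft.fft(hidden_states, dim=-1), dim=-2).real
```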
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12335/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12335/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12335",
"html_url": "https://github.com/huggingface/transformers/pull/12335",
"diff_url": "https://github.com/huggingface/transformers/pull/12335.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12335.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12334/comments | https://api.github.com/repos/huggingface/transformers/issues/12334/events | https://github.com/huggingface/transformers/pull/12334 | 928,752,454 | MDExOlB1bGxSZXF1ZXN0Njc2NzExNjcy | 12,334 | Add additional variables without shape | {
"login": "MadhumitaSushil",
"id": 8028802,
"node_id": "MDQ6VXNlcjgwMjg4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8028802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MadhumitaSushil",
"html_url": "https://github.com/MadhumitaSushil",
"followers_url": "https://api.github.com/users/MadhumitaSushil/followers",
"following_url": "https://api.github.com/users/MadhumitaSushil/following{/other_user}",
"gists_url": "https://api.github.com/users/MadhumitaSushil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MadhumitaSushil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MadhumitaSushil/subscriptions",
"organizations_url": "https://api.github.com/users/MadhumitaSushil/orgs",
"repos_url": "https://api.github.com/users/MadhumitaSushil/repos",
"events_url": "https://api.github.com/users/MadhumitaSushil/events{/privacy}",
"received_events_url": "https://api.github.com/users/MadhumitaSushil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for opening a PR!\r\n\r\nCould you run `make fixup` at the root of your clone to apply to code quality fixes?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | These additional training variables without shape are present in the NVIDIA implementation for training BERT models: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT . The conversion works as expected after this change.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12334/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12334",
"html_url": "https://github.com/huggingface/transformers/pull/12334",
"diff_url": "https://github.com/huggingface/transformers/pull/12334.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12334.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12333/comments | https://api.github.com/repos/huggingface/transformers/issues/12333/events | https://github.com/huggingface/transformers/issues/12333 | 928,746,062 | MDU6SXNzdWU5Mjg3NDYwNjI= | 12,333 | Missing tokenizer_class for `mbart-large-50-many-to-one-mmt` model | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Good catch @Mehrad0711 ! Thank you for reporting this. I just fixed it https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt/blob/main/config.json"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | Hi @patil-suraj,
I noticed all `mbart-50` models have their tokenizer_class set to "MBart50Tokenizer" in their config file except for `mbart-large-50-many-to-one-mmt`. This causes the wrong tokenizer to be loaded for this model (`tokenization_mbart` instead of `tokenization_mbart50`). Please check. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12333/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12332/comments | https://api.github.com/repos/huggingface/transformers/issues/12332/events | https://github.com/huggingface/transformers/pull/12332 | 928,677,675 | MDExOlB1bGxSZXF1ZXN0Njc2NjQ4MTU0 | 12,332 | Cast logits from bf16 to fp32 at the end of TF_T5 | {
"login": "szutenberg",
"id": 37601244,
"node_id": "MDQ6VXNlcjM3NjAxMjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/37601244?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szutenberg",
"html_url": "https://github.com/szutenberg",
"followers_url": "https://api.github.com/users/szutenberg/followers",
"following_url": "https://api.github.com/users/szutenberg/following{/other_user}",
"gists_url": "https://api.github.com/users/szutenberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szutenberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szutenberg/subscriptions",
"organizations_url": "https://api.github.com/users/szutenberg/orgs",
"repos_url": "https://api.github.com/users/szutenberg/repos",
"events_url": "https://api.github.com/users/szutenberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/szutenberg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Rocketknight1 ",
"Hey! Firstly, good catch with this issue - this seems like a good PR. Two questions before we merge it, though:\r\n\r\n- You tested with tensorflow-cpu - I'm guessing this means you were running on TPU, correct? (It's not like most CPUs support `bfloat16`, after all)\r\n- The code only checks for the dtype `bfloat16` and not `float16`. I'm guessing the same issue might occur on GPUs with float16 dtype, so we should probably cast to float32 in either case. If you don't have access to a GPU or GPU instance, would you like to send me your code or a notebook so I can test it?",
"Hi @Rocketknight1 ,\r\n\r\nI used Kaggle Code to run my script on TPU and:\r\n- performance was the same on fp32 and bf16 (_TPU uses bf16 under the hood_)\r\n- accuracy issue did not occur (I suspect that TPU modifies SparseSoftmaxCrossEntropyWithLogits precision: bf16->fp32)\r\n- inference issue was reproduced with mixed_bfloat16\r\n\r\nI used `os.environ['TF_XLA_FLAGS'] = '--tf_xla_enable_xla_devices'` in order to run bfloat16 on CPU. The loss curve in my PR comes from such execution.\r\n\r\nI guess that adding cast to `float32` is required for `float16` too but I was getting `loss = nan` while attempting to run my script with `mixed_float16`. Maybe something is still broken or I do the conversion to fp16 incorrectly.\r\n\r\nYou can find my script in https://gist.github.com/szutenberg/80f30b980c15e384200d86ae242a1067\r\n\r\nOutput on TPU:\r\n```\r\n 1/10 [==>...........................] - ETA: 14:11 - accuracy: 0.2245 - loss: 12.9219\r\nstep 1: 94563.1 ms\r\n 2/10 [=====>........................] - ETA: 0s - accuracy: 0.3733 - loss: 7.3379 \r\nstep 2: 104.5 ms\r\n 3/10 [========>.....................] - ETA: 0s - accuracy: 0.4637 - loss: 5.2188\r\nstep 3: 84.6 ms\r\n 4/10 [===========>..................] - ETA: 0s - accuracy: 0.5263 - loss: 4.1115\r\nstep 4: 86.1 ms\r\n 5/10 [==============>...............] - ETA: 0s - accuracy: 0.5731 - loss: 3.4062\r\nstep 5: 84.6 ms\r\n 6/10 [=================>............] - ETA: 0s - accuracy: 0.6100 - loss: 2.9387\r\nstep 6: 85.4 ms\r\n 7/10 [====================>.........] - ETA: 0s - accuracy: 0.6399 - loss: 2.5959\r\nstep 7: 85.9 ms\r\n 8/10 [=======================>......] - ETA: 0s - accuracy: 0.6644 - loss: 2.3573\r\nstep 8: 85.7 ms\r\n 9/10 [==========================>...] - ETA: 0s - accuracy: 0.6849 - loss: 2.1528\r\nstep 9: 85.7 ms\r\n10/10 [==============================] - 95s 88ms/step - accuracy: 0.7167 - loss: 1.9842\r\n...\r\nInvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor [Op:Sub]\r\n```\r\n\r\nOutput on XLA_CPU:\r\n```\r\n 1/10 [==>...........................] - ETA: 10:50 - accuracy: 0.2245 - loss: 15.4375\r\nstep 1: 72235.4 ms\r\n 2/10 [=====>........................] - ETA: 5:28 - accuracy: 0.3718 - loss: 9.4609\r\nstep 2: 41093.1 ms\r\n 3/10 [========>.....................] - ETA: 4:43 - accuracy: 0.4576 - loss: 7.0469\r\nstep 3: 40026.9 ms\r\n 4/10 [===========>..................] - ETA: 4:02 - accuracy: 0.5160 - loss: 5.7227\r\nstep 4: 39995.5 ms\r\n 5/10 [==============>...............] - ETA: 3:20 - accuracy: 0.5595 - loss: 4.8281\r\nstep 5: 38984.9 ms\r\n 6/10 [=================>............] - ETA: 2:39 - accuracy: 0.5933 - loss: 4.3073\r\nstep 6: 38997.0 ms\r\n 7/10 [====================>.........] - ETA: 1:59 - accuracy: 0.6205 - loss: 3.9721\r\nstep 7: 39454.5 ms\r\n 8/10 [=======================>......] - ETA: 1:19 - accuracy: 0.6420 - loss: 3.7764\r\nstep 8: 38780.1 ms\r\n 9/10 [==========================>...] - ETA: 39s - accuracy: 0.6598 - loss: 3.5469\r\nstep 9: 39538.0 ms\r\n10/10 [==============================] - 428s 40s/step - accuracy: 0.6877 - loss: 3.3266\r\n...\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor [Op:Mul]\r\n```\r\n\r\nYou can see that loss behaves differently. After applying my patch everything is ok:\r\n```\r\n 1/10 [==>...........................] 
- ETA: 10:00 - accuracy: 0.2245 - loss: 12.3656\r\nstep 1: 66688.5 ms\r\n 2/10 [=====>........................] - ETA: 5:13 - accuracy: 0.3773 - loss: 7.0029 \r\nstep 2: 39146.4 ms\r\n 3/10 [========>.....................] - ETA: 4:32 - accuracy: 0.4685 - loss: 4.9963\r\nstep 3: 38585.8 ms\r\n 4/10 [===========>..................] - ETA: 3:51 - accuracy: 0.5314 - loss: 3.9416\r\nstep 4: 38006.6 ms\r\n 5/10 [==============>...............] - ETA: 3:12 - accuracy: 0.5789 - loss: 3.2643\r\nstep 5: 37900.7 ms\r\n 6/10 [=================>............] - ETA: 2:33 - accuracy: 0.6164 - loss: 2.8159\r\nstep 6: 38225.8 ms\r\n 7/10 [====================>.........] - ETA: 1:55 - accuracy: 0.6465 - loss: 2.4948\r\nstep 7: 38605.7 ms\r\n 8/10 [=======================>......] - ETA: 1:16 - accuracy: 0.6711 - loss: 2.2820\r\nstep 8: 38495.9 ms\r\n 9/10 [==========================>...] - ETA: 38s - accuracy: 0.6914 - loss: 2.0957 \r\nstep 9: 38342.7 ms\r\n10/10 [==============================] - 413s 38s/step - accuracy: 0.7229 - loss: 1.9338\r\n...\r\nWe went on a trip to Europe. We had our breakfast at 7 am in the morning at the nearby coffee shop. Wore a dark blue over coat for our first visit to Louvre Museum to experience history and art.\r\nAt what time did we had breakfast?\r\nAnswer: <pad> 7 am in the morning</s>\r\n```",
"Thanks for the detailed log! I have a 30-series GPU here, so I'll try mixed_float16 and mixed_bfloat16 with your script when I get a chance and see if I get the same issues.",
"Hi @Rocketknight1,\r\n\r\nAny updates?\r\n\r\nI managed to get rid of nans on mixed_float16 by adding fp32 casts in:\r\n* TFT5MainLayer for calculating extended_attention_mask:\r\n```\r\nif extended_attention_mask.dtype == tf.float16:\r\n extended_attention_mask = tf.cast(extended_attention_mask, tf.float32)\r\n\r\nextended_attention_mask = (1.0 - extended_attention_mask) * -1e9\r\n```\r\n* TFT5Attention before softmax:\r\n```\r\nif scores.dtype == tf.float16:\r\n scores = tf.cast(scores, tf.float32)\r\n\r\nscores += position_bias\r\nweights = tf.nn.softmax(scores, axis=-1)\r\n```\r\n\r\nbut it's still not training (accuracy is 1.0 and loss around 10 - they are the same in each step, it seems that forward part is broken).\r\n\r\nWhat do you think about merging my fix for bf16 and fixing fp16 later, by a separate PR?",
"I tried testing this. 'mixed_bfloat16' doesn't actually get run on the GPU on Keras, even though I believe 30-series GPUs support it. Very few CPUs support bfloat16 arithmetic, so I presume that no bfloat16 operations are used on CPU either, and that 'mixed_bfloat16' only actually runs bfloat16 operations on TPUs. As such, I'm confused about what's going on here - I suspect the differences you see with the 'mixed_bfloat16' policy on CPU are caused by some other side effect rather than true bfloat16 computation.\r\n\r\nI'd like to resolve this before approving the bfloat16 PR - if it turns out that TPUs already handle this issue and no other hardware actually runs bfloat16, then this PR isn't necessary, although your effort is appreciated anyway!\r\n\r\nAlso the float16 fix might still be very useful - if you get it working, and you notice a difference on GPU with and without the cast to float32, please let us know!",
"@Rocketknight1, bfloat16 is not supported by GPU in TF. Ampere supports bfloat16 but the support for this datatype wasn't added to TensorFlow. For example [this article](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/) says only about adding TensorFloat32 and using bf16 directly from CUDA, not TF. \r\n\r\nI'm working on a custom accelerator which does support bf16 and I have exactly the same issue as on CPU (broken loss and broken inference). Inference is broken on a graph level (the model does not follow the rule _make sure the (model) output is float32_).\r\n\r\nDid you run my reproducer on CPU? It should work on tensorflow-cpu==2.4.1. You can decrease batch_size to 1 and still be able to see the difference. In order to run on TPU (you can use free Kaggle code) set \"use_tpu=True\" in the script.",
"Hi, I'm sorry for the delay here! I'm probably explaining myself badly, though - what I want to know before I merge anything here is what exactly your reproducer is doing on CPU. The reason I'm asking is that bfloat16 is not supported except by a few rare CPUs, and I don't think bfloat16 on CPU is supported by TF/Keras at all. So I don't really understand what's happening when you run that code on CPU - I realize something changes, but I don't know what or why!",
"Hi @Rocketknight1 \r\n\r\nSorry for the delay caused by the summer season ;) \r\n\r\nMy reproducer allows using bfloat16 by enabling XLA_CPU device which registers kernels for bfloat16 too. In the current TF version, bfloat16 kernels are not being registered for the CPU device. This is just to show that something is wrong also with the training accuracy. In my opinion, proof that something is wrong with inference is enough to accept this change.\r\n\r\nOther models and templates should be reviewed as well. What do you think?",
"Hi @szutenberg - after the conversation in #12898 I think I'm happy to accept this PR. Could you change it to check if the dtype is either `float16` or `bfloat16`? Alternatively, you could just run the casts without an `if` statement - it will do nothing if the dtype is already `float32`.",
"Hi @Rocketknight1 , thanks! This PR is ready for merge.",
"Done! Thank you for this, and thanks for your patience with the review process too!"
] | 1,624 | 1,628 | 1,628 | CONTRIBUTOR | null | # What does this PR do?
This change enables tf.keras.mixed_precision with bf16.
I found that the T5 model does not follow the [official TF guidelines regarding mixed precision](https://www.tensorflow.org/guide/mixed_precision). Therefore it's impossible to use
`tf.keras.mixed_precision.set_global_policy('mixed_bfloat16')`, which is the recommended way of training with bfloat16.
I took a notebook [snapthat/TF-T5-text-to-text](https://github.com/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-%20Training.ipynb) and added `tf.keras.mixed_precision.set_global_policy('mixed_bfloat16')`.
Experiments were done on tensorflow-cpu == 2.4.1, datasets == 1.8.0, transformers == 4.6.1.
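For context, a minimal sketch of this setup (the checkpoint and the dummy forward pass are illustrative, not the notebook's exact code):
```python
import tensorflow as tf
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

# Enable the recommended bfloat16 mixed-precision policy *before* building the model.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello", return_tensors="tf")
# With the policy above, intermediate computations run in bfloat16.
outputs = model(inputs.input_ids, decoder_input_ids=inputs.input_ids)
```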
The first issue is with the loss curve:

And the second is with inference (also included in the notebook):
```
File "bf16_experiment.py", line 136, in <module>
max_length=decoder_max_len, top_p=0.95, top_k=50, repetition_penalty=2)
File "/home/mszutenberg/venv24/lib/python3.6/site-packages/transformers/generation_tf_utils.py", line 417, in generate
use_cache=use_cache,
File "/home/mszutenberg/venv24/lib/python3.6/site-packages/transformers/generation_tf_utils.py", line 472, in _generate_no_beam_search
next_token_logits = tf.math.multiply(next_token_logits, next_token_logits_penalties)
File "/home/mszutenberg/venv24/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/home/mszutenberg/venv24/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 518, in multiply
return gen_math_ops.mul(x, y, name)
File "/home/mszutenberg/venv24/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6068, in mul
_ops.raise_from_not_ok_status(e, name)
File "/home/mszutenberg/venv24/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 6862, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor [Op:Mul]
```
This fix solves both issues. I see that the problem may also occur in other models. I can check and prepare fixes if this PR is approved.
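For reference, a minimal sketch of the cast pattern the fix relies on (the function name is illustrative; the actual change lives inside the TF T5 modeling code):
```python
import tensorflow as tf

def finalize_logits(lm_logits: tf.Tensor) -> tf.Tensor:
    # Keras mixed-precision guideline: the model output (and anything fed into
    # softmax or the loss) should be float32, even when intermediate layers run
    # in bfloat16/float16. The cast is a no-op if the dtype is already float32.
    return tf.cast(lm_logits, tf.float32)
```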
## Who can review?
@LysandreJik @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12332/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12332",
"html_url": "https://github.com/huggingface/transformers/pull/12332",
"diff_url": "https://github.com/huggingface/transformers/pull/12332.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12332.patch",
"merged_at": 1628017379000
} |
https://api.github.com/repos/huggingface/transformers/issues/12331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12331/comments | https://api.github.com/repos/huggingface/transformers/issues/12331/events | https://github.com/huggingface/transformers/issues/12331 | 928,674,521 | MDU6SXNzdWU5Mjg2NzQ1MjE= | 12,331 | Default Parameters for training DistillBERT and DistillGPT2 | {
"login": "umgupta",
"id": 4678394,
"node_id": "MDQ6VXNlcjQ2NzgzOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4678394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/umgupta",
"html_url": "https://github.com/umgupta",
"followers_url": "https://api.github.com/users/umgupta/followers",
"following_url": "https://api.github.com/users/umgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/umgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/umgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/umgupta/subscriptions",
"organizations_url": "https://api.github.com/users/umgupta/orgs",
"repos_url": "https://api.github.com/users/umgupta/repos",
"events_url": "https://api.github.com/users/umgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/umgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | https://github.com/huggingface/transformers/blob/cf3c9198aad5e2ea02e778aa9b04d27c216d1a35/examples/research_projects/distillation/train.py#L188
Hi @VictorSanh ,
I was going through your distillation code. Can you share the most suitable hyperparameters for training the distilled models, mainly DistilGPT2? Are the default parameters the best to use? I am confused because the default batch size of 5 with 50 gradient accumulation steps (i.e., 5 x 8 x 50 = 2000 examples) does not align with the number reported in the [paper](https://arxiv.org/abs/1910.01108) (4K examples).
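For reference, the arithmetic behind the question (the 8-GPU count is the setup implied above):
```python
per_gpu_batch_size = 5
n_gpus = 8
gradient_accumulation_steps = 50

effective_batch = per_gpu_batch_size * n_gpus * gradient_accumulation_steps
print(effective_batch)  # 2000 examples per optimizer step, vs. the 4K reported in the paper
```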
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12331/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12330/comments | https://api.github.com/repos/huggingface/transformers/issues/12330/events | https://github.com/huggingface/transformers/pull/12330 | 928,458,537 | MDExOlB1bGxSZXF1ZXN0Njc2NDU5NzEy | 12,330 | Fixing the pipeline optimization by reindexing targets (V2) | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik pulled your tests, thanks !",
"Yeah it looks good, thanks!",
"Thanks @guyrosin!"
] | 1,624 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
Linked to #12329.
An alternative version that keeps the original scores (meaning you could get mostly 0.0 for improbable tokens); a sketch of the idea follows.
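A rough sketch of what "keeping the scores original" means here (illustrative values, not the pipeline's actual code): the softmax runs over the full vocabulary, and the target entries are selected afterwards, so they need not sum to 1.
```python
import numpy as np

logits = np.random.randn(30522)             # logits over the full vocabulary
target_ids = np.array([2054, 2003, 1996])   # ids of the user-provided targets

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax over *all* tokens
target_scores = probs[target_ids]           # "original" scores; mostly close to 0.0
```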
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12330/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12330",
"html_url": "https://github.com/huggingface/transformers/pull/12330",
"diff_url": "https://github.com/huggingface/transformers/pull/12330.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12330.patch",
"merged_at": 1625756296000
} |
https://api.github.com/repos/huggingface/transformers/issues/12329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12329/comments | https://api.github.com/repos/huggingface/transformers/issues/12329/events | https://github.com/huggingface/transformers/pull/12329 | 928,448,930 | MDExOlB1bGxSZXF1ZXN0Njc2NDUxOTMw | 12,329 | Fixing the pipeline optimization by rescaling the logits first. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Chosen https://github.com/huggingface/transformers/pull/12330 instead"
] | 1,624 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12329/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12329",
"html_url": "https://github.com/huggingface/transformers/pull/12329",
"diff_url": "https://github.com/huggingface/transformers/pull/12329.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12329.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12328/comments | https://api.github.com/repos/huggingface/transformers/issues/12328/events | https://github.com/huggingface/transformers/pull/12328 | 928,438,778 | MDExOlB1bGxSZXF1ZXN0Njc2NDQzNjA0 | 12,328 | UpdateDescription of TrainingArgs param save_strategy | {
"login": "sam-writer",
"id": 47401552,
"node_id": "MDQ6VXNlcjQ3NDAxNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/47401552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-writer",
"html_url": "https://github.com/sam-writer",
"followers_url": "https://api.github.com/users/sam-writer/followers",
"following_url": "https://api.github.com/users/sam-writer/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-writer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-writer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-writer/subscriptions",
"organizations_url": "https://api.github.com/users/sam-writer/orgs",
"repos_url": "https://api.github.com/users/sam-writer/repos",
"events_url": "https://api.github.com/users/sam-writer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-writer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #12315
TrainingArguments parameter docs: mention in `save_strategy` param description that `load_best_model_at_end` can override.
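A minimal sketch of the interaction being documented (values are illustrative; the override behavior is as described in the linked issue, for the library version discussed here):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="steps",        # the cadence requested here...
    load_best_model_at_end=True,  # ...can be overridden so that saves follow evaluation
)
```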
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? N/A
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12328/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12328",
"html_url": "https://github.com/huggingface/transformers/pull/12328",
"diff_url": "https://github.com/huggingface/transformers/pull/12328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12328.patch",
"merged_at": 1624469983000
} |
https://api.github.com/repos/huggingface/transformers/issues/12327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12327/comments | https://api.github.com/repos/huggingface/transformers/issues/12327/events | https://github.com/huggingface/transformers/pull/12327 | 928,417,246 | MDExOlB1bGxSZXF1ZXN0Njc2NDI1Njkz | 12,327 | [Flax T5] Fix weight initialization and fix docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The failing hub test is unrelated IMO"
] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Apply fixes to Flax T5 according to comments on https://github.com/huggingface/transformers/pull/12150 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12327/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12327",
"html_url": "https://github.com/huggingface/transformers/pull/12327",
"diff_url": "https://github.com/huggingface/transformers/pull/12327.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12327.patch",
"merged_at": 1624466362000
} |
https://api.github.com/repos/huggingface/transformers/issues/12326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12326/comments | https://api.github.com/repos/huggingface/transformers/issues/12326/events | https://github.com/huggingface/transformers/pull/12326 | 928,410,671 | MDExOlB1bGxSZXF1ZXN0Njc2NDIwMDg0 | 12,326 | Changed modeling_fx_utils.py to utils/fx.py for clarity | {
"login": "michaelbenayoun",
"id": 25418079,
"node_id": "MDQ6VXNlcjI1NDE4MDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/25418079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelbenayoun",
"html_url": "https://github.com/michaelbenayoun",
"followers_url": "https://api.github.com/users/michaelbenayoun/followers",
"following_url": "https://api.github.com/users/michaelbenayoun/following{/other_user}",
"gists_url": "https://api.github.com/users/michaelbenayoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaelbenayoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaelbenayoun/subscriptions",
"organizations_url": "https://api.github.com/users/michaelbenayoun/orgs",
"repos_url": "https://api.github.com/users/michaelbenayoun/repos",
"events_url": "https://api.github.com/users/michaelbenayoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaelbenayoun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | Moved **modeling_fx_utils.py** to **utils/fx.py** to make it clear that it is not "modeling_flax_utils.py".
Since there is a modeling_utils.py for PyTorch and a modeling_tf_utils.py for TensorFlow, modeling_fx_utils.py could be mistaken for the Flax counterpart, but it is actually related to the torch.fx feature, hence this pull request. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12326/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12326/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12326",
"html_url": "https://github.com/huggingface/transformers/pull/12326",
"diff_url": "https://github.com/huggingface/transformers/pull/12326.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12326.patch",
"merged_at": 1624464985000
} |
https://api.github.com/repos/huggingface/transformers/issues/12325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12325/comments | https://api.github.com/repos/huggingface/transformers/issues/12325/events | https://github.com/huggingface/transformers/issues/12325 | 928,331,614 | MDU6SXNzdWU5MjgzMzE2MTQ= | 12,325 | How to assign gpu when using run_language_modeling.py | {
"login": "lierik",
"id": 50125651,
"node_id": "MDQ6VXNlcjUwMTI1NjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/50125651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lierik",
"html_url": "https://github.com/lierik",
"followers_url": "https://api.github.com/users/lierik/followers",
"following_url": "https://api.github.com/users/lierik/following{/other_user}",
"gists_url": "https://api.github.com/users/lierik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lierik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lierik/subscriptions",
"organizations_url": "https://api.github.com/users/lierik/orgs",
"repos_url": "https://api.github.com/users/lierik/repos",
"events_url": "https://api.github.com/users/lierik/events{/privacy}",
"received_events_url": "https://api.github.com/users/lierik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should be able to control this with the `CUDA_VISIBLE_DEVICES` environment variable. See this [stackoverflow issue](https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on) for example",
 You should be able">
"> You should be able to control this with the `CUDA_VISIBLE_DEVICES` environment variable. See this [stackoverflow issue](https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on) for example\r\n\r\nThank you for your fast reply.\r\nI will try the solution mentioned above.\r\n\r\nBy the way, when initializing the Trainer in run_language_modeling.py, the error `Trainer` object has no attribute \"prediction_loss_only\" still appears, and so does `Trainer` object has no attribute 'is_world_master' after training, right before evaluation. I just deleted those two lines; is this a bug, or am I missing some necessary parameters?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0
- Platform: Linux n188-182-130 4.14.81.bm.23-amd64
- Python version: 3.7.3
- PyTorch version (GPU?):1.7.1
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
## Information
Model I am using (gpt2.):
The tasks I am working on is:
* my own task or dataset: a txt file, each line is regarded as a sample because I set --line_by_line
Dear all,
there are 8 GPUs in my workspace, but GPUs 0 to 3 are already occupied, so I can only use the 4 GPUs from rank 4 to rank 7.
How could I set the parameters when using run_language_modeling.py?
Also, could someone please explain more clearly what the parameters --local_rank and --tpu_num_cores do if I want to select only some of my GPUs for this training?
Besides, I also found some bugs in Trainer, for example:
1. During initialization, `Trainer` object has no attribute "prediction_loss_only" (line 318).
2. After training, when evaluation starts, `Trainer` object has no attribute 'is_world_master'. I set --do_eval and gave an eval set as input.
Here is the script I use in shell:
python3 run_language_modeling.py \
--output_dir $output \
--model_type 'gpt2' \
--model_name_or_path 'gpt2' \
--tokenizer_name 'bert-base-uncased' \
--cache_dir $pretrained_config \
--do_train true \
--train_data_file $train_file \
--do_eval true \
--eval_data_file $test_file \
--line_by_line true \
--mlm true \
--learning_rate 1e-4 \
--num_train_epochs 15 \
--per_device_train_batch_size 128 \
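As suggested in the comments above, one way to restrict the visible GPUs is the `CUDA_VISIBLE_DEVICES` environment variable. A minimal sketch (the device indices match the situation described above; the variable must be set before CUDA is initialized):
```python
import os

# Hide GPUs 0-3 so this process only sees GPUs 4-7 (re-indexed as cuda:0..cuda:3).
os.environ["CUDA_VISIBLE_DEVICES"] = "4,5,6,7"

import torch  # import *after* setting the variable

print(torch.cuda.device_count())  # 4
```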
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12325/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12324/comments | https://api.github.com/repos/huggingface/transformers/issues/12324/events | https://github.com/huggingface/transformers/pull/12324 | 928,317,533 | MDExOlB1bGxSZXF1ZXN0Njc2MzM5NDg0 | 12,324 | fill-mask pipeline: fix handling topk() indices | {
"login": "guyrosin",
"id": 1250162,
"node_id": "MDQ6VXNlcjEyNTAxNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guyrosin",
"html_url": "https://github.com/guyrosin",
"followers_url": "https://api.github.com/users/guyrosin/followers",
"following_url": "https://api.github.com/users/guyrosin/following{/other_user}",
"gists_url": "https://api.github.com/users/guyrosin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guyrosin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guyrosin/subscriptions",
"organizations_url": "https://api.github.com/users/guyrosin/orgs",
"repos_url": "https://api.github.com/users/guyrosin/repos",
"events_url": "https://api.github.com/users/guyrosin/events{/privacy}",
"received_events_url": "https://api.github.com/users/guyrosin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for letting us know! We just reverted the PR in the meantime.",
"Fixed by #12330 "
] | 1,624 | 1,625 | 1,625 | CONTRIBUTOR | null | # What does this PR do?
Fixes #12113, where indices in the targets array were used instead of their corresponding token ids.
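A minimal sketch of the bug class being fixed (variable names and values are illustrative, not the pipeline's actual code):
```python
import numpy as np

vocab_logits = np.random.randn(30522)       # logits over the full vocabulary
target_ids = np.array([2054, 2003, 1996])   # token ids of the user-provided targets

scores = vocab_logits[target_ids]
top = scores.argsort()[::-1]                # positions *within* target_ids

buggy_prediction = top                      # wrong: these are array indices, not token ids
fixed_prediction = target_ids[top]          # the fix: map positions back to token ids
```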
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12324/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12324",
"html_url": "https://github.com/huggingface/transformers/pull/12324",
"diff_url": "https://github.com/huggingface/transformers/pull/12324.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12324.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12323/comments | https://api.github.com/repos/huggingface/transformers/issues/12323/events | https://github.com/huggingface/transformers/pull/12323 | 928,266,843 | MDExOlB1bGxSZXF1ZXN0Njc2Mjk1NzUw | 12,323 | Conda build | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Will need to rebase this PR on `master` once https://github.com/huggingface/transformers/pull/12187 is merged"
] | 1,624 | 1,624 | 1,624 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12323/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12323",
"html_url": "https://github.com/huggingface/transformers/pull/12323",
"diff_url": "https://github.com/huggingface/transformers/pull/12323.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12323.patch",
"merged_at": 1624460828000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/12322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12322/comments | https://api.github.com/repos/huggingface/transformers/issues/12322/events | https://github.com/huggingface/transformers/issues/12322 | 928,232,472 | MDU6SXNzdWU5MjgyMzI0NzI= | 12,322 | Generate text with `model.generate` on TPU does not work | {
"login": "stekiri",
"id": 13682691,
"node_id": "MDQ6VXNlcjEzNjgyNjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/13682691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stekiri",
"html_url": "https://github.com/stekiri",
"followers_url": "https://api.github.com/users/stekiri/followers",
"following_url": "https://api.github.com/users/stekiri/following{/other_user}",
"gists_url": "https://api.github.com/users/stekiri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stekiri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stekiri/subscriptions",
"organizations_url": "https://api.github.com/users/stekiri/orgs",
"repos_url": "https://api.github.com/users/stekiri/repos",
"events_url": "https://api.github.com/users/stekiri/events{/privacy}",
"received_events_url": "https://api.github.com/users/stekiri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"The issue still seems to be unresolved. Maybe the generation is not supposed to be supported on TPUs? In that case, a short note in the documentation could be helpful 😄 ",
"@stekiri - I think we indeed don't support PyTorch-XLA generation yet and yes a comment in the docs would be good! Would you be interested in making such a PR? I've put \"enabling TPU generation for PyTorch-XLA\" on my TODO list now so hope to tackle this sometime in 3-4 weeks",
"I have the same situation on GCP Cloud TPU v2-8 (summarization pipeline with T5ForConditionalGeneration).\r\nI'm eagerly waiting for the support.",
"This is still broken",
"@patil-suraj it worked for you on TPU no?",
"It's possible to use PT `generate` on TPU with `accelerate`, here's a colab which uses GPT2 as an example\r\nhttps://colab.research.google.com/drive/1OqCLWuEbWLp4fLLWcT-vteEJZHHZ3SZ5?usp=sharing",
"> It's possible to use PT `generate` on TPU with `accelerate`, here's a colab which uses GPT2 as an example https://colab.research.google.com/drive/1OqCLWuEbWLp4fLLWcT-vteEJZHHZ3SZ5?usp=sharing\r\n\r\nThis doesn't seem faster than GPU. I tried to run it on 400 examples and it was slower than GPU. Seems like you are just unloading the model back to CPU, which is actually slower, and we can't really leverage the power of TPU.",
"Is there any update on this?",
"> Is there any update on this?\r\n\r\nI had an exchange with @gante about it and it seems like the code will need major refactoring for this. https://huggingface.co/spaces/joaogante/tf_xla_generate_benchmarks/discussions/1#62eb9350985a691200cf2921",
"@mikcnt @divyanshuaggarwal The previous TF generate function was almost a (reduced) copy of the current PT generate function. We had to do a major rework of the TF generate function to make it compatible with XLA, so yeah... PT needs the same treatment if we want to use it with XLA :D \r\n\r\nI've shared a twitter thread today about the subject: https://twitter.com/joao_gante/status/1555527603716444160",
"@gante thanks a lot for the super exhaustive explanation. Do you think we can expect a refactoring for PT some time soon?\n\nOtherwise, do you know of a temporary workaround to use the generate method on TPU?",
"@mikcnt we don't have refactoring PT's generate in our short-term plans -- it is a very labor-intensive refactor whose main benefit is to enable TPU usage (i.e. a niche usage :) ). For context, the TF refactor took me >2 months of dedicated effort (contrarily to PT, the old TF implementation was slow on GPUs, and it was much smaller than PT's generate).\r\n\r\nThere are no immediate alternatives -- not being fully compatible with XLA implies that it can't get on a TPU effectively. Maybe the model you want to use exists on FLAX/TF, whose generate is compatible with TPUs.\r\n\r\nI don't want to clip your wings, so here's my suggestion: our efforts go towards what the majority of the community wants. If you open an issue in `transformers` and attract some attention to PT generation on TPU, the odds of it happening soon increase significantly!"
] | 1,624 | 1,659 | 1,632 | NONE | null | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 (Ubuntu 20.04.2 LTS)
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- PyTorch XLA version: 1.8.1
- Using GPU in script?: No, using TPU
- Using distributed or parallel set-up in script?: No, using a single TPU core
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): `facebook/m2m100_1.2B`, but other text generating models have the same problem.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
On a machine with a TPU run:
```python
import torch_xla.core.xla_model as xm
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
model_name = 'facebook/m2m100_1.2B'
source_lang = 'en'
target_lang = 'de'
docs = [
"This is some document to translate.",
"And another document to translate."
]
device = xm.xla_device()
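# xm.xla_device() returns a TPU core as a torch device; the model and inputs below must live on it.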
model = M2M100ForConditionalGeneration.from_pretrained(model_name).to(device)
tokenizer = M2M100Tokenizer.from_pretrained(model_name, src_lang=source_lang)
encoded_docs = tokenizer(docs, return_tensors='pt', padding=True).to(device)
generated_tokens = model.generate(**encoded_docs, forced_bos_token_id=tokenizer.get_lang_id(target_lang))
```
The call to `model.generate()` runs without ever terminating. It seems to be stuck somewhere in the beam search.
The same code runs perfectly fine on CPUs and GPUs.
## Expected behavior
I'd expect text generation on TPU to work the same way as it does on CPUs and GPUs.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12322/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12322/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12321 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12321/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12321/comments | https://api.github.com/repos/huggingface/transformers/issues/12321/events | https://github.com/huggingface/transformers/pull/12321 | 928,193,015 | MDExOlB1bGxSZXF1ZXN0Njc2MjMxNjM5 | 12,321 | [Proposal] Image segmentation pipeline | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"@NielsRogge can you give this pipeline a look when you get a chance?",
"Thanks for this proposal! So the output of the API would be `[{mask, score, label}, ...]`, right? And the shape of `mask` is (height, width)?\r\n\r\nI wonder whether this can support any image segmentation model. I'm currently working on implementing [SegFormer](https://arxiv.org/abs/2105.15203) (a new model by NVIDIA), which is a semantic segmentation model. It takes an image as input and produces a segmentation map as output (i.e. it assigns a class to each pixel). This model will only have `outputs.logits`, and they are of shape (batch_size, num_labels, height/4, width/4) - the logits are produced at 1/4th of the original image size. The SegFormer model does not have `outputs.pred_masks`, for example. \r\n\r\nThis pipeline should support all forms of image segmentation, right? Panoptic, semantic, instance, etc? I think that if we want that, then we should first define what models for each of these forms of segmentation should produce in their `outputs`.",
"@NielsRogge \r\n\r\nI think pipelines should support as much different models as possible, regardless of the model output. (and clean error when model is not supported)\r\nProposed implem uses `pred_masks` because that's what currently available for DetR but if some other arch have different outputs, it should be the pipeline's role to be able to use `.logits` instead and still produce the same outputs.\r\n\r\nThose are different kind of models, right ? (Like XXXXForSegmentation vs XXXXForPanopticSegmentation)\r\nIf yes, then I think it's kind of routine to have switches on those for the pipelines. Less desirable would be to switch on actual model outputs (.logits vs .pred_masks) , and something that we should really strive to avoid is actually looking at the model arch to decide.\r\n\r\nRegardless, I think we can recover masks from raw logits, right ? If yes, then I think that proves that current output is good as it would enable pipeline to support SegFormer too.\r\n\r\nWould you agree? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,630 | 1,630 | CONTRIBUTOR | null | # What does this PR do?
- Currently returns very low-level results: simply a classification mask + score + label for each detected class.
- Could support panoptic segmentation, instance segmentation, bounding boxes, and even parts of instance segmentation (might require adding the "parent" info in addition, but that's about it).
- Happy to hear some thoughts about the design.
- A future version could maybe add an "aggregation_strategy"-like option, which could
output a single image with color filters on top of the classes and so on
(or leave this reduction outside the pipeline). Getting final images like
https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=8IRGo8d0qkgR
is still a bit involved for users who simply want to "see" outputs.
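For concreteness, here is a minimal sketch of the per-class output format discussed above and in the comments; every name and value is illustrative only, not the merged API:
```
import numpy as np

# Hypothetical pipeline output: one entry per detected class, with a
# (height, width) binary mask plus a confidence score and a class label.
height, width = 480, 640
outputs = [
    {
        "mask": np.zeros((height, width), dtype=bool),
        "score": 0.97,
        "label": "cat",
    }
]
```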
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @NielsRogge
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12321/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12321",
"html_url": "https://github.com/huggingface/transformers/pull/12321",
"diff_url": "https://github.com/huggingface/transformers/pull/12321.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12321.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12320 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12320/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12320/comments | https://api.github.com/repos/huggingface/transformers/issues/12320/events | https://github.com/huggingface/transformers/pull/12320 | 928,018,762 | MDExOlB1bGxSZXF1ZXN0Njc2MDg0ODY5 | 12,320 | Add mention of the huggingface_hub methods for offline mode | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You mean the environment variable mentioned 10 lines above?",
"Lol, it was not showing in the diff, in my defense ;-) \r\nThanks for adding this!"
] | 1,624 | 1,624 | 1,624 | MEMBER | null | Adding mention of the `huggingface_hub` for offline mode | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12320/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12320",
"html_url": "https://github.com/huggingface/transformers/pull/12320",
"diff_url": "https://github.com/huggingface/transformers/pull/12320.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12320.patch",
"merged_at": 1624455930000
} |
https://api.github.com/repos/huggingface/transformers/issues/12319 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12319/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12319/comments | https://api.github.com/repos/huggingface/transformers/issues/12319/events | https://github.com/huggingface/transformers/issues/12319 | 927,958,901 | MDU6SXNzdWU5Mjc5NTg5MDE= | 12,319 | `fill-mask` pipeline cannot load tokenizer's `config.json` (fixed in 4.8.0) | {
"login": "rspreafico-absci",
"id": 83304116,
"node_id": "MDQ6VXNlcjgzMzA0MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/83304116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rspreafico-absci",
"html_url": "https://github.com/rspreafico-absci",
"followers_url": "https://api.github.com/users/rspreafico-absci/followers",
"following_url": "https://api.github.com/users/rspreafico-absci/following{/other_user}",
"gists_url": "https://api.github.com/users/rspreafico-absci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rspreafico-absci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rspreafico-absci/subscriptions",
"organizations_url": "https://api.github.com/users/rspreafico-absci/orgs",
"repos_url": "https://api.github.com/users/rspreafico-absci/repos",
"events_url": "https://api.github.com/users/rspreafico-absci/events{/privacy}",
"received_events_url": "https://api.github.com/users/rspreafico-absci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The config it asks for is the model config, not the tokenizer config. The fact the tokenizer can be loaded independently of the model has been fixed recently, so you should try on a source install.",
"I will try with a source install, however the error message says that the `config.json` file is missing from the file path specified with the `tokenizer` parameter, not from the file path specified with the `model` argument. My bad that I didn't report the full error message before, here it is:\r\n\r\n```\r\nOSError: Can't load config for '/nfs/home/rspreafico/workspace/models/v1/tokenizer/roberta'. Make sure that:\r\n\r\n- '/nfs/home/rspreafico/workspace/models/v1/tokenizer/roberta' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or '/nfs/home/rspreafico/workspace/models/v1/tokenizer/roberta' is the correct path to a directory containing a config.json file\r\n```",
"Yes, that was the bug: the tokenizer required to have the model saved in the same directory to be reloaded in a pipeline.",
"Gotcha, thank you!",
"I cloned the `transformers` repo as of 5 min ago and installed from source, but I am getting the same error message. `transformers-cli env` confirms that I am using the `dev` version of `transformers`:\r\n\r\n```\r\n- `transformers` version: 4.8.0.dev0\r\n- Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.5\r\n- PyTorch version (GPU?): 1.9.0+cu111 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"I'm trying to reproduce but it all works fine on my end. Since I don't have your model and tokenizer, here is the code I execute:\r\n```\r\nfrom transformers import RobertaTokenizerFast, RobertaForMaskedLM, pipeline\r\n\r\ntokenizer = RobertaTokenizerFast.from_pretrained(\"roberta-base\")\r\ntokenizer.save_pretrained(\"test-tokenizer\") # Only the tokenizer files are saved here\r\n\r\nmodel = RobertaForMaskedLM.from_pretrained(\"roberta-base\")\r\nmodel.save_pretrained(\"test-model\") # Only the model files are saved there\r\n\r\nfill_mask = pipeline(\r\n \"fill-mask\",\r\n model=\"test-model\",\r\n tokenizer=\"test-tokenizer\",\r\n)\r\n\r\nfill_mask(\"My <mask> is Sylvain.\")\r\n```",
"Ok, found it. \r\n\r\nI was merely re-running `fill_mask = pipeline(...)` upon installing the dev version of transformers. This is insufficient to get rid of the error.\r\n\r\nConversely, I needed to re-run the whole notebook, most crucially `tokenizer.save_pretrained(...)`. In `4.8.0.dev0` this adds an additional field to `tokenizer_config.json` which is missing in `4.7.0`, namely `\"tokenizer_class\": \"RobertaTokenizer\"`. Without this field (either because the tokenizer was saved with `4.7.0` or because one manually removes it from a file generated with `4.8.0.dev0`), the error message pops up.\r\n\r\nThanks for looking into this!",
"Ah yes, good analysis!",
"@rspreafico-absci FYI there was an issue with the fill-mask pipeline with the `targets` argument on `master` recently, so if you're running on a source installation I suggest to update it to a more recent version",
"Thanks @LysandreJik ! I saw that the official 4.8.0 was released yesterday, so I switched to using the PyPI version now. Can you confirm that 4.8.0 on PyPI is ok to use? Thank you.",
"Version v4.8.0 on PyPi is indeed ok to use and should work perfectly well for the fill-mask pipeline. :) ",
"In my program the `fill-mask` is requiring the **tokenizer_config.json** file. However when I run `tokenizer.save_model` I only get 2 files: vocab.json and merges.txt for my own `ByteLevelBPETokenizer`. How can I generate automatically the tokenizer_config.json file?",
"For anyone stumbling here because their tokenizer only saved vocab.config and merges.txt, you need to load your tokenizer and pass it instead of the config.\r\n\r\n```python\r\npipeline(args..., tokenizer=TokenizerClass.from_pretrained('path_to_saved_files'))\r\n```"
] | 1,624 | 1,651 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
@LysandreJik
## Information
Model I am using: RoBERTa
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: see details below
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: see details below
## To reproduce
Following the official notebook to train RoBERTa from scratch (tokenizer and model alike). The only addition is saving the RoBERTa tokenizer:
```
tokenizer = RobertaTokenizerFast.from_pretrained("/path/to/BPE/tokenizer", return_special_tokens_mask=True, model_max_length=32) # BPE tokenizer previously trained with the tokenizers library, as per the docs, then vocab and merges loaded via transformers' RobertaTokenizerFast
tokenizer.save_pretrained("/path/to/roberta_tk") # re-saving the tokenizer, full model now
```
Saving outputs the following:
```
('/path/to/roberta_tk/tokenizer_config.json',
'/path/to/roberta_tk/special_tokens_map.json',
'/path/to/roberta_tk/vocab.json',
'/path/to/roberta_tk/merges.txt',
'/path/to/roberta_tk/added_tokens.json',
'/path/to/roberta_tk/tokenizer.json')
```
Note that there is no `config.json` file, only `tokenizer_config.json`
Then try to load the tokenizer:
```
fill_mask = pipeline(
"fill-mask",
model="/path/to/model",
tokenizer="/path/to/roberta_tk"
)
```
Errors out, complaining that `config.json` is missing. Symlinking `tokenizer_config.json` to `config.json` solves the issue.
## Expected behavior
The file names written by the tokenizer's save should match the file names the pipeline expects as input.
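As a stopgap, here is a sketch of the workaround suggested in the comments (paths are the placeholders from above); passing the loaded tokenizer object avoids the `config.json` lookup entirely:
```
from transformers import RobertaTokenizerFast, pipeline

# Load the tokenizer ourselves, then hand the object (not a path) to the pipeline
tokenizer = RobertaTokenizerFast.from_pretrained("/path/to/roberta_tk")
fill_mask = pipeline("fill-mask", model="/path/to/model", tokenizer=tokenizer)
```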
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12319/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12318 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12318/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12318/comments | https://api.github.com/repos/huggingface/transformers/issues/12318/events | https://github.com/huggingface/transformers/issues/12318 | 927,920,402 | MDU6SXNzdWU5Mjc5MjA0MDI= | 12,318 | Downloading the models is getting slower than before | {
"login": "stepbystep88",
"id": 47015435,
"node_id": "MDQ6VXNlcjQ3MDE1NDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/47015435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stepbystep88",
"html_url": "https://github.com/stepbystep88",
"followers_url": "https://api.github.com/users/stepbystep88/followers",
"following_url": "https://api.github.com/users/stepbystep88/following{/other_user}",
"gists_url": "https://api.github.com/users/stepbystep88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stepbystep88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stepbystep88/subscriptions",
"organizations_url": "https://api.github.com/users/stepbystep88/orgs",
"repos_url": "https://api.github.com/users/stepbystep88/repos",
"events_url": "https://api.github.com/users/stepbystep88/events{/privacy}",
"received_events_url": "https://api.github.com/users/stepbystep88/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nothing changed on our side.\r\n\r\nCan you paste the output of e.g. `wget -O /dev/null https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin`?",
"\r\n\r\nOf course, please check the above picture.",
"Yep, that's downloading directly from [AWS Cloudfront](https://aws.amazon.com/cloudfront/), there's not much we can do on our side unfortunately\r\n\r\ncc @osanseviero ",
"Okay, thanks for your attention to this issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null |
Downloading the models (using the **model.from_pretrained('')** function) has become slower than before: the download speed is now stable at 1 MB/s, but it could reach 10 MB/s a month ago.
I wonder whether the download speed is being officially throttled; please let me know if so.
Thanks for your support. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12318/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12317 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12317/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12317/comments | https://api.github.com/repos/huggingface/transformers/issues/12317/events | https://github.com/huggingface/transformers/issues/12317 | 927,862,132 | MDU6SXNzdWU5Mjc4NjIxMzI= | 12,317 | Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: not relevant
- Using distributed or parallel set-up in script?: not relevant
### Who can help
Models:
- albert, bert, xlm: @LysandreJik
Library:
- tokenizers: @LysandreJik
-->
## Information
Model I am using "xlm-roberta"
The problem arises when using:
I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466)
`get_dataset` is just a simple wrapper around `load_dataset`,
and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")`.
The tasks I am working on is:
Xtreme udpos dataset (or potentially any other multilingual token classification task)
## To reproduce
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this Hugging Face example](https://huggingface.co/transformers/custom_datasets.html#tok-ner).
The pipeline works fine with most instances in different languages, but unfortunately [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) breaks the alignment of `return_offsets_mapping`:

Without the try/except block, it raises `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`; an example is shown here [(another colab notebook)](https://colab.research.google.com/drive/1ZGj-4LzhnjrDv3PC5nlmi-BVj9vgvFTp?usp=sharing)
```
/content/MLM-disentangle/experiment_datasets/xtreme_ds.py in __getitem__(self, id_absolute)
605 labels[
606 (arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0) & (ids[:] != 6)
--> 607 ] = self.dataset[lan][id]["pos_tags"]
608 return {
609 "tokens": torch.from_numpy(ids).long(),
ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true
```
It is clear that the normalizer is the step that breaks the alignment, as `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` returns 'コト'. Both tokens, 'コ' and 'ト', evaluate to True under the `(arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0) & (ids[:] != 6)` logic, which breaks the alignment of `return_offsets_mapping`.
## Expected behavior
One workaround is to apply `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) under the name `udposTestDatasetWorkaround`; a rough sketch is shown below.
I guess similar logic should be built into the tokenizer and the offsets_mapping generation process so that users don't need to include it in their own code, but I don't understand the tokenizer code well enough to do this myself.
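A rough sketch of that workaround, assuming `tokenizer` is the `XLMRobertaTokenizerFast` instance from above (the colab integrates it differently):
```
def pre_normalize(texts):
    # Run the tokenizer's own normalizer first, so that e.g. 'ヿ' is already
    # expanded to 'コト' before offsets are computed, keeping the offset
    # mapping aligned with the text the tokenizer actually sees.
    return [tokenizer._tokenizer.normalizer.normalize_str(t) for t in texts]

normalized = pre_normalize(["ヿ"])  # -> ["コト"]
```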
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12317/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12316 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12316/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12316/comments | https://api.github.com/repos/huggingface/transformers/issues/12316/events | https://github.com/huggingface/transformers/pull/12316 | 927,714,786 | MDExOlB1bGxSZXF1ZXN0Njc1ODI3NjM1 | 12,316 | [models] respect dtype of the model when instantiating it | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger, where should we document the functionality added by this PR? (besides the API).",
"ok, this should be good to go.",
"Yes, it looks great!",
"Great work @stas00, this is really useful!"
] | 1,624 | 1,629 | 1,624 | CONTRIBUTOR | null | Update for future readers: the initially proposed API changed through the review process, so the description below is slightly outdated, e.g. there is no `torch_dtype_auto_detect`, but `torch_dtype=auto`.
----------------
This PR resolves the issue discussed in https://github.com/huggingface/transformers/issues/12062.
The main feature is:
1. model will now be instantiated with the `dtype` passed via `from_pretrained` and `from_config` `torch_dtype` arg
2. alternatively `from_pretrained` now has `torch_dtype_auto_detect` which can do the same automatically
Examples:
```
model = AutoModel.from_config(config, torch_dtype=torch.float16)
model = T5ForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.float16)
model = T5ForConditionalGeneration.from_pretrained(model_path, torch_dtype_auto_detect=True)
```
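Per the update note at the top, the auto-detection flag was later folded into `torch_dtype` itself, so the last example above would now read:
```
model = T5ForConditionalGeneration.from_pretrained(model_path, torch_dtype="auto")
```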
**Important: only float dtypes are supported by `torch.set_default_dtype(dtype)`, so if it's not float, say int8, an exception will be generated**
Changes:
- `PreTrainedModel` now has a new `_from_config` method where all the context managers for the model instantiation are done
- `auto_factory`'s `from_config` is back to being a thin wrapper - all the deepspeed stuff has now moved to PT's version of `_from_config`
- The PT's version of `_from_config` now also sports the context manager-like functionality for dtype
- TF and Flax now have a thin `_from_config` method
- `from_pretrained`: had to move `torch.load` before model instantiation to enable auto-discovery
- `from_pretrained`: like `from_config` has a similar context manager for dtype
- extensive tests added
Possible changes:
- I wasn't sure whether to call `config.torch_dtype` or `config.dtype` - I went with the former as it'd be easy to rename after reviews. I don't know whether we want it generic or torch specific - I don't know if tf/flax use similar names, but I guess we could automatically remap those if needed.
- When saving the dtype I saved only the "float32" part of "torch.float32" - I could save "torch.float32" instead - either way I have to reconstruct the dtype object from a string. Same uncertainty as in the item above.
- the dtype context managers are poor man's versions since at the moment they are unique in each place, due to 3 frameworks using the same `from_pretrained` method - if we ever split it then we can use an actual context manager in `from_pretrained`.
Questions:
- should `T5ForConditionalGeneration.from_config(config)` be supported? Currently it's not, and `T5ForConditionalGeneration(config)` ignores the model instantiation context managers - it just saves the config object
- probably should document this feature somewhere in the docs? Any suggestion where? It will work out-of-the-box but the documenting part would be handy for someone who wants to create a model from scratch in a non-default dtype.
Also note the new `log.info` entry:
```
Instantiating T5ForConditionalGeneration model under default dtype torch.float16
```
So one can tell what's happening.
The original version of this PR, which tried to do the right thing automatically, was dropped due to possible issues:
- fp16 saved models now will be loaded as such and not fp32 as before - so some usages under CPU may fail, e.g. with:
```
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
```
e.g. many of our tiny models are fp16.
Another one on CUDA:
```
RuntimeError: Found param model.shared.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
```
Fixes: https://github.com/huggingface/transformers/issues/12062
@sgugger, @LysandreJik, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12316/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12316",
"html_url": "https://github.com/huggingface/transformers/pull/12316",
"diff_url": "https://github.com/huggingface/transformers/pull/12316.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12316.patch",
"merged_at": 1624936281000
} |
https://api.github.com/repos/huggingface/transformers/issues/12315 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12315/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12315/comments | https://api.github.com/repos/huggingface/transformers/issues/12315/events | https://github.com/huggingface/transformers/issues/12315 | 927,659,941 | MDU6SXNzdWU5Mjc2NTk5NDE= | 12,315 | Model is saved every eval_steps steps if eval_steps < save_steps. Is this expected behavior? | {
"login": "sam-writer",
"id": 47401552,
"node_id": "MDQ6VXNlcjQ3NDAxNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/47401552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-writer",
"html_url": "https://github.com/sam-writer",
"followers_url": "https://api.github.com/users/sam-writer/followers",
"following_url": "https://api.github.com/users/sam-writer/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-writer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-writer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-writer/subscriptions",
"organizations_url": "https://api.github.com/users/sam-writer/orgs",
"repos_url": "https://api.github.com/users/sam-writer/repos",
"events_url": "https://api.github.com/users/sam-writer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-writer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can't have different evaluation and save intervals when using `load_best_model_at_end=True` (save need to be synchronized with evaluation otherwise we can't keep track of the best model). Remove that option and you will have the evaluation and save disconnected as requested.",
"Thank you, that makes sense.\r\n\r\nAlso, now that I know it's related I immediately noticed\r\n\r\n\r\nMight be worth mentioning under `save_strategy` as well? But maybe it was just me.",
"Sure! Do you want to make a PR with that change?",
"sure!",
"haha it's been a while! \r\n\r\n",
"Oh indeed! :sweat_smile: "
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): Bert, but I don't think that is relevant
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Make a `TrainingArgs` object with `eval_steps < save_steps` and `eval_strategy` and `save_strategy` both set to `"steps"`
2. Pass those to a `Trainer`
3. Model checkpoints every `eval_steps` steps, not every `save_steps` steps
Here is my `TrainingArguments` code:
```python
args = TrainingArguments(
output_dir=outpath,
save_total_limit=10,
load_best_model_at_end=True,
save_strategy="steps" if cli_args.save_steps is not None else "epoch",
save_steps=cli_args.save_steps,
evaluation_strategy="steps" if cli_args.eval_steps is not None else "epoch",
eval_steps=cli_args.eval_steps,
metric_for_best_model="loss",
learning_rate=cli_args.learning_rate,
per_device_train_batch_size=cli_args.batch_size,
per_device_eval_batch_size=cli_args.batch_size,
num_train_epochs=cli_args.num_train_epochs,
weight_decay=cli_args.weight_decay,
fp16=cli_args.fp16,
deepspeed=deepspeed,
local_rank=cli_args.local_rank,
)
```
with the values I am using filled in, this is:
```python
args = TrainingArguments(
output_dir="ten_m/model",
save_total_limit=10,
load_best_model_at_end=True,
save_strategy="steps",
save_steps=6, # for testing
evaluation_strategy="steps",
eval_steps=2, # for testing
metric_for_best_model="loss",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
fp16=False,
deepspeed=None,
local_rank=-1,
)
```
## Expected behavior
Well, maybe this is expected? But if so, I feel like it should be documented more obviously.
I wrote a callback to upload the saved checkpoint to GCS, but the eval step is very quick, so I was going to run evaluations much more frequently. However, if evaluating means I have to upload to GCS, then I will evaluate less often. In any case, I verified that even without the GCS save callback, with the above settings, a checkpoint is saved every 2 steps, not every 6.
If this is expected behavior, then is the correct way to change it to write a Callback whose `on_evaluate` sets the `should_save` property of the `transformers.TrainerControl` argument to `False`?
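For reference, a minimal sketch of what such a callback could look like (the class name is hypothetical, and per the answer in the comments the intended fix is simply to drop `load_best_model_at_end`):
```
from transformers import TrainerCallback

class SkipSaveOnEvaluate(TrainerCallback):
    def on_evaluate(self, args, state, control, **kwargs):
        # Cancel the checkpoint save that would otherwise follow this evaluation
        control.should_save = False
        return control
```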
Thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12315/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12314 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12314/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12314/comments | https://api.github.com/repos/huggingface/transformers/issues/12314/events | https://github.com/huggingface/transformers/pull/12314 | 927,647,502 | MDExOlB1bGxSZXF1ZXN0Njc1NzcxMDky | 12,314 | Add all XxxPreTrainedModel to the main init | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | COLLABORATOR | null | # What does this PR do?
This PR adds all XxxPreTrainedModel classes to the main init, making them public, and, more generally, adds a CI quality check to make sure every object in the modeling files that is a subclass of `PreTrainedModel`, `TFPreTrainedModel`, or `FlaxPreTrainedModel` is public (except the Encoder, Decoder, and Wrapper classes and the ones explicitly listed as private).
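For example, after this change an import like the following should work from the top-level package (BERT chosen purely for illustration):
```
from transformers import BertPreTrainedModel
```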
Fixes #12193 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12314/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12314",
"html_url": "https://github.com/huggingface/transformers/pull/12314",
"diff_url": "https://github.com/huggingface/transformers/pull/12314.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12314.patch",
"merged_at": 1624459254000
} |
https://api.github.com/repos/huggingface/transformers/issues/12313 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12313/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12313/comments | https://api.github.com/repos/huggingface/transformers/issues/12313/events | https://github.com/huggingface/transformers/pull/12313 | 927,598,846 | MDExOlB1bGxSZXF1ZXN0Njc1NzMwNTc0 | 12,313 | FlaxBartPretrainedModel -> FlaxBartPreTrainedModel | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | COLLABORATOR | null | # What does this PR do?
All is said in the description. I think it's fine to fix for now without backward compatibility as this is a private class and we did not officially release the Flax models yet. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12313/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12313",
"html_url": "https://github.com/huggingface/transformers/pull/12313",
"diff_url": "https://github.com/huggingface/transformers/pull/12313.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12313.patch",
"merged_at": 1624394225000
} |
https://api.github.com/repos/huggingface/transformers/issues/12312 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12312/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12312/comments | https://api.github.com/repos/huggingface/transformers/issues/12312/events | https://github.com/huggingface/transformers/pull/12312 | 927,545,409 | MDExOlB1bGxSZXF1ZXN0Njc1Njg2MjU5 | 12,312 | Add possibility to maintain full copies of files | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | COLLABORATOR | null | # What does this PR do?
#12252 introduced a file that is a full copy of another file. This PR adds to the existing `check_copies` util script the ability to make sure those copies stay in sync, with a check in `make quality` and an update with `make fix-copies`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12312/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12312",
"html_url": "https://github.com/huggingface/transformers/pull/12312",
"diff_url": "https://github.com/huggingface/transformers/pull/12312.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12312.patch",
"merged_at": 1624888974000
} |
https://api.github.com/repos/huggingface/transformers/issues/12311 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12311/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12311/comments | https://api.github.com/repos/huggingface/transformers/issues/12311/events | https://github.com/huggingface/transformers/pull/12311 | 927,493,421 | MDExOlB1bGxSZXF1ZXN0Njc1NjQzNjQ0 | 12,311 | [Flax/JAX] Add how to propose projects markdown | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
This PR adds a HOW-TO to propose projects in JAX/Flax. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12311/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12311",
"html_url": "https://github.com/huggingface/transformers/pull/12311",
"diff_url": "https://github.com/huggingface/transformers/pull/12311.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12311.patch",
"merged_at": 1624456236000
} |
https://api.github.com/repos/huggingface/transformers/issues/12310 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12310/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12310/comments | https://api.github.com/repos/huggingface/transformers/issues/12310/events | https://github.com/huggingface/transformers/issues/12310 | 927,483,311 | MDU6SXNzdWU5Mjc0ODMzMTE= | 12,310 | New `--log_level` feature introduces failures using 'passive' mode | {
"login": "allenwang28",
"id": 9057208,
"node_id": "MDQ6VXNlcjkwNTcyMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9057208?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allenwang28",
"html_url": "https://github.com/allenwang28",
"followers_url": "https://api.github.com/users/allenwang28/followers",
"following_url": "https://api.github.com/users/allenwang28/following{/other_user}",
"gists_url": "https://api.github.com/users/allenwang28/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allenwang28/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allenwang28/subscriptions",
"organizations_url": "https://api.github.com/users/allenwang28/orgs",
"repos_url": "https://api.github.com/users/allenwang28/repos",
"events_url": "https://api.github.com/users/allenwang28/events{/privacy}",
"received_events_url": "https://api.github.com/users/allenwang28/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for the report, it will be fixed shortly via https://github.com/huggingface/transformers/pull/12309\r\n\r\nI'm just working on a test - need another 10min or so",
"Thank you for fixing this so quickly!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: `nightly`
- Platform: PyTorch
- Python version: 3.6
- PyTorch version (GPU?): TPU
- Tensorflow version (GPU?): n/a
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
### Who can help
@stas00 @sgugger
## Information
Model I am using (Bert, XLNet ...): XLNet
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
This was captured by Cloud TPU tests (XLNet/MNLI/GLUE), but I think this behavior is model/dataset agnostic. Essentially, it seems that:
1. The `training_args`'s `__post_init__` method _should_ [convert the `log_level` to `-1`](https://github.com/huggingface/transformers/blob/dad414d5f9c20627ee6c16f62e8a2056916bf35b/src/transformers/training_args.py#L606) if it's set to 'passive' (which it is by default).
2. However, in the end-to-end `run_glue.py` example, using [`parse_args_into_dataclasses()`](https://github.com/huggingface/transformers/blob/dad414d5f9c20627ee6c16f62e8a2056916bf35b/examples/pytorch/text-classification/run_glue.py#L199) seems not to call `__post_init__`, as our tests are failing with:
```
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/transformers/examples/pytorch/text-classification/run_glue.py", line 554, in _mp_fn
main()
File "/transformers/examples/pytorch/text-classification/run_glue.py", line 468, in main
data_collator=data_collator,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 295, in __init__
logging.set_verbosity(log_level)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/utils/logging.py", line 161, in set_verbosity
_get_library_root_logger().setLevel(verbosity)
File "/root/anaconda3/envs/pytorch/lib/python3.6/logging/__init__.py", line 1284, in setLevel
self.level = _checkLevel(level)
File "/root/anaconda3/envs/pytorch/lib/python3.6/logging/__init__.py", line 195, in _checkLevel
raise ValueError("Unknown level: %r" % level)
ValueError: Unknown level: 'passive'
```
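For reference, a toy sketch of the conversion step 1 describes; the names follow the issue text, not the actual source:
```
import logging

log_level = "passive"  # the TrainingArguments default
if log_level == "passive":
    log_level = -1  # sentinel: leave the library verbosity untouched
if log_level != -1:
    logging.getLogger("transformers").setLevel(log_level)
```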
## To reproduce
Steps to reproduce the behavior:
1. The command we're using is:
```
git clone https://github.com/huggingface/transformers.git
cd transformers && pip install .
git log -1
pip install datasets
python examples/pytorch/xla_spawn.py \
--num_cores 8 \
examples/pytorch/text-classification/run_glue.py \
--logging_dir=./tensorboard-metrics \
--task_name MNLI \
--cache_dir ./cache_dir \
--do_train \
--do_eval \
--num_train_epochs 3 \
--max_seq_length 128 \
--learning_rate 3e-5 \
--output_dir MNLI \
--overwrite_output_dir \
--logging_steps 30 \
--save_steps 3000 \
--overwrite_cache \
--tpu_metrics_debug \
--model_name_or_path xlnet-large-cased \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 16
```
## Expected behavior
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12310/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12309 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12309/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12309/comments | https://api.github.com/repos/huggingface/transformers/issues/12309/events | https://github.com/huggingface/transformers/pull/12309 | 927,425,658 | MDExOlB1bGxSZXF1ZXN0Njc1NTg2MjE0 | 12,309 | [trainer] 2 bug fixes and a rename | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | Fixes a bug in https://github.com/huggingface/transformers/pull/12257 and https://github.com/huggingface/transformers/pull/12276 and renames the function in the latter and adds a docstring.
Also added an extended DDP test to tests `log_level_replica`.
Fixes: https://github.com/huggingface/transformers/issues/12310
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12309/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12309",
"html_url": "https://github.com/huggingface/transformers/pull/12309",
"diff_url": "https://github.com/huggingface/transformers/pull/12309.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12309.patch",
"merged_at": 1624385603000
} |
https://api.github.com/repos/huggingface/transformers/issues/12308 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12308/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12308/comments | https://api.github.com/repos/huggingface/transformers/issues/12308/events | https://github.com/huggingface/transformers/issues/12308 | 927,406,143 | MDU6SXNzdWU5Mjc0MDYxNDM= | 12,308 | odd whitespace handling with imported sentencepiece models | {
"login": "tlby",
"id": 3189927,
"node_id": "MDQ6VXNlcjMxODk5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3189927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tlby",
"html_url": "https://github.com/tlby",
"followers_url": "https://api.github.com/users/tlby/followers",
"following_url": "https://api.github.com/users/tlby/following{/other_user}",
"gists_url": "https://api.github.com/users/tlby/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tlby/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tlby/subscriptions",
"organizations_url": "https://api.github.com/users/tlby/orgs",
"repos_url": "https://api.github.com/users/tlby/repos",
"events_url": "https://api.github.com/users/tlby/events{/privacy}",
"received_events_url": "https://api.github.com/users/tlby/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"`spm.SentencePieceTrainer` and `ReformerTokenizerFast` are not the same tokenizers, so it's not unusual that each of them outputs different results.\r\nHowever, I'm not sure how the two tokenizers are different. It's because of the lack of my knowledge.\r\n\r\nRegarding the difference between `ReformerTokenizerFast` and `AutoTokenizer`, I discovered something.\r\nOne of the easiest ways to make the two tokenizers output the same results is to remove `mask_token='<mask>'` and `test` directory where previous config files exist (if there is `test` folder).\r\n\r\nAnother way is to remove `special_tokens_map.json` and `tokenizer_config.json` (after `save_pretrained`) that are unnecessary files when using fast tokenizer.\r\n\r\nI don't know what is the cause of this problem, but I guess there are conflicts between configurations of fast tokenizer and tokenizer.\r\n\r\n\r\n",
"This may have two underlying causes, one perhaps a serialization issue between `.save_pretrained()` and `AutoTokenizer.from_pretrained()`, and a separate issue related to different behavior between `PreTrainedTokenizer` and `PreTrainedTokenizerFast`.\r\n\r\nHere is perhaps a clearer example of the variations:\r\n```python\r\n#!/usr/bin/env python3\r\n\r\nimport sentencepiece as spm\r\nimport transformers as tr\r\n\r\nsrc = (\r\n 'Lorem Ipsum dolor sit amet, consectetur adipiscing elit, sed do',\r\n 'eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut',\r\n 'enim ad minim veniam, quis nostrud exercitation ullamco laboris',\r\n 'nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in',\r\n 'reprehenderit in voluptate velit esse cillum dolore eu fugiat',\r\n 'nulla pariatur. Excepteur sint occaecat cupidatat non proident,',\r\n 'sunt in culpa qui officia deserunt mollit anim id est laborum.',\r\n)\r\n\r\nspm.SentencePieceTrainer.train(\r\n sentence_iterator=iter(src),\r\n model_prefix='test',\r\n vocab_size=96,\r\n treat_whitespace_as_suffix=True,\r\n user_defined_symbols=['<pad>', '<mask>'],\r\n minloglevel=1,\r\n)\r\n\r\ndef show(label, toks):\r\n print('%8s %2d: %s' % (label, len(toks), toks))\r\n\r\ntext = 'Lo<mask>m Ipsum'\r\n\r\ncfg = tr.T5Config()\r\n\r\ntok = tr.T5Tokenizer('test.model', mask_token='<mask>', pad_token='<pad>')\r\nshow('tr.slow', tok.tokenize(text))\r\ncfg.save_pretrained('test_slow')\r\ntok.save_pretrained('test_slow')\r\n\r\ntok = tr.AutoTokenizer.from_pretrained('test_slow', use_fast=False)\r\nshow('at.slow', tok.tokenize(text))\r\n\r\ntok = tr.T5TokenizerFast('test.model', mask_token='<mask>', pad_token='<pad>')\r\nshow('tr.fast', tok.tokenize(text))\r\ncfg.save_pretrained('test_fast')\r\ntok.save_pretrained('test_fast')\r\n\r\ntok = tr.AutoTokenizer.from_pretrained('test_fast')\r\nshow('at.fast', tok.tokenize(text))\r\n```\r\ngiving\r\n```\r\n tr.slow 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']\r\n at.slow 10: ['L', 'o', '▁', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']\r\n tr.fast 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']\r\n at.fast 11: ['▁', 'L', 'o', '<mask>', '▁', 'm', '▁', 'I', 'p', 's', 'um']\r\n```\r\nThe first one is consistent with `sentencepiece` directly, which is not surprising because these tokenizers use `spm.SentencePieceProcessor()` to encode.\r\n\r\n@europeanplaice It looks like you're making headway on the serialization part, which is great. I can file a separate ticket for the differences between the `PreTrainedTokenizer` and `PreTrainedTokenizerFast` subclasses if that part turns out unrelated.",
"@tlby\r\nThank you for giving another example.\nIt helped me a lot.\r\nFor now, I'm not sure whether the two underlying causes are unrelated.\r\n\r\nThe following is about the serialization issue.\r\n\r\nI've found that `tr.T5Tokenizer` or `ReformerTokenizer` don't expect `mask_token`. So this attribute is taken as an element of `kwargs`. (fast tokenizer may also be the same.)\r\n\r\nRefer to \r\nhttps://huggingface.co/transformers/model_doc/t5.html#t5tokenizer\r\nhttps://huggingface.co/transformers/model_doc/reformer.html#reformertokenizer\r\n\r\nI guess that when first initializing the tokenizer, it ignores `mask_token`, but when you recreate it by `from_pretrained`, it treats `mask_token` as a special token, then it tokenizes text differently.\r\n\r\nIn the first initialization, the tokenizer uses the default setting.\r\nSo it doesn't consider `mask_token` even if a user passes it as an argument. It gives the text to sentencepiece without any preprocessing, and `sentencepiece` recognizes `<mask>` as a mask token.\r\nHowever, the tokenizer also writes `<mask>` as a special token in the config JSON.\r\n\r\nWhen recreating the tokenizer by `from_pretrained`, it processes the tokens defined in the config JSON.\r\nBefore passing the text to sentencepiece, it splits the text to ['Lo', '`<mask>`', 'm Ipsum'] and sentencepiece tokenizes the elements except for `<mask>` then combines.\r\n\r\nWhen I removed `mask_token='<mask>'` as below\r\n```python\r\ntok = tr.T5Tokenizer('test.model', pad_token='<pad>')\r\ntok = tr.T5TokenizerFast('test.model', pad_token='<pad>')\r\n```\r\nthen results were\r\n```\r\n tr.slow 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']\r\n at.slow 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']\r\n tr.fast 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']\r\n at.fast 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']\r\n```\r\n\r\nIt would explain the difference between `tr.slow` and `at.slow`.\r\nIt also would explain the difference between `tr.fast` and `at.fast`.\r\n\r\n",
"When I do not specify a mask_token to the `PreTrainedTokenizer`, trying to use it in `examples/pytorch/language-modeling/run_mlm.py` gives errors.\r\n```\r\n[ERROR|tokenization_utils_base.py:1017] 2021-06-26 22:40:50,070 >> Using mask_token, but it is not set yet.\r\nTraceback (most recent call last):\r\n File \"run_mlm.py\", line 504, in <module>\r\n main()\r\n File \"run_mlm.py\", line 438, in main\r\n pad_to_multiple_of=8 if pad_to_multiple_of_8 else None,\r\n File \"<string>\", line 6, in __init__\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py\", line 335, in __post_init__\r\n \"This tokenizer does not have a mask token which is necessary for masked language modeling. \"\r\nValueError: This tokenizer does not have a mask token which is necessary for masked language modeling. You should pass `mlm=False` to train on causal language modeling instead.\r\n```\r\nI don't have a standalone example of this, but if you need one I can work one out.",
"Pinging @n1t0 for advice",
"@LysandreJik @n1t0\r\nThank you for checking this conversation.",
"My situation is somewhat more sensitive to whitespace than most western languages because I am hoping to do language modeling with HTML source code. In a sample such as `<a href=\"page.html\">the page</a>` the insertion of whitespace after an attribute name `<a href =\"page.html\">the page</a>` takes us outside the realm of samples in the training set, which is why this problem is significant.\r\n\r\nRather than trying to recycle one of the existing `sentencepiece` based tokenizers, I worked out [my own](/tlby/rnd-html/blob/e65f246cece32b55f1a49e76f5bcad8dfc077839/mytok.py) PreTrainedTokenizer subclass I hope will be compatible with various transformer architectures. So far it is working nicely for BERT in `run_mlm.py`. The `AutoTokenizer.from_pretrained()` instance is behaving consistently and since I don't implement a PreTrainedTokenizerFast I don't have problems with it getting upgraded and changing tokenization behavior.\r\n\r\nIf you can confirm this approach isn't an obvious anti-pattern, then that might be enough to consider my issue resolved.",
"Tokenizers are generally engineered for specific use-cases and don't always adapt to different domains. The tokenizers here didn't fit your use-cases so you went and implemented one yourself so that it best fits your particular problem - I believe this is the best way to handle it and ensure it behaves consistently across your tasks.\r\n\r\nI took a look at your `PreTrainedTokenizer` and I think you have everything covered! If ever you have some time available, it would be very helpful for us to know if you ran into issues implementing that subclass, or general ideas of how we could make it easier for you to implement custom tokenizers such as this one. For example, adding an option to register new tokenizers (such as the proposition in https://github.com/huggingface/transformers/issues/10256) to the `AutoTokenizer` mapping would probably have come in handy.\r\n\r\nIf you think of anything else, please feel free to open an issue with proposals, even if very high-level, so that we may improve the API. Thank you!",
"> If ever you have some time available, it would be very helpful for us to know if you ran into issues implementing that subclass\r\n\r\nAs with most mildly complicated inheritance APIs I had a couple of false starts trying to write something clean and minimalist first, then something heavy that got way out of hand. Once I tracked down a working example that seemed closest to what I was trying to do, retooling and cleaning progressed rapidly.\r\n\r\n\r\nProbably the biggest technical hurdle was assuming I wanted a \"Fast\" tokenizer and trying too hard to trick the `tokenizers` library into doing what I needed, which was almost, but not quite possible.",
"That's interesting feedback, thank you. We'll keep this in mind when working on improving the tokenizer API.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,629 | 1,629 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik Although my example uses ReformerTokenizer, I think this problem is present in several of the model architectures using sentencepiece tokenizers.
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
#!/usr/bin/env python3
import sentencepiece as spm
import transformers as tr
src = (
'Lorem Ipsum dolor sit amet, consectetur adipiscing elit, sed do',
'eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut',
'enim ad minim veniam, quis nostrud exercitation ullamco laboris',
'nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in',
'reprehenderit in voluptate velit esse cillum dolore eu fugiat',
'nulla pariatur. Excepteur sint occaecat cupidatat non proident,',
'sunt in culpa qui officia deserunt mollit anim id est laborum.',
)
spm.SentencePieceTrainer.train(
sentence_iterator=iter(src),
model_prefix='test',
vocab_size=96,
treat_whitespace_as_suffix=True,
user_defined_symbols=['<pad>', '<mask>'],
minloglevel=1,
)
def show(label, toks):
print('%14s %2d: %s' % (label, len(toks), toks))
text = 'Lo<mask>m Ipsum'
tok = spm.SentencePieceProcessor(model_file='test.model')
show('sentencepiece', tok.encode(text, out_type=str))
tok = tr.models.reformer.ReformerTokenizerFast('test.model',
mask_token='<mask>',
pad_token='<pad>')
show('transformers', tok.tokenize(text))
tok.save_pretrained('test')
tr.models.reformer.ReformerConfig().save_pretrained('test')
tok = tr.AutoTokenizer.from_pretrained('test')
show('AutoTokenizer', tok.tokenize(text))
```
is giving
```
sentencepiece 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']
transformers 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']
AutoTokenizer 11: ['▁', 'L', 'o', '<mask>', '▁', 'm', '▁', 'I', 'p', 's', 'um']
```
## Expected behavior
I believe the tokenization of input text should be consistent across these code paths. I suspect these variations crop up between pretraining a language model and later fine-tuning the saved model, which would explain the model-accuracy problems I'm seeing.
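For reference, a hedged workaround sketch based on the finding in the comments: constructing the tokenizer without `mask_token`/`pad_token` appears to keep the saved and reloaded tokenizers consistent (an observed workaround, not a confirmed root cause; `test.model` is the file trained in the script above):
```python
import transformers as tr

# sentencepiece already knows '<pad>' and '<mask>' as user-defined symbols
# from training, so they are not passed to the constructor here.
tok = tr.models.reformer.ReformerTokenizerFast('test.model')
tok.save_pretrained('test_nomask')
tr.models.reformer.ReformerConfig().save_pretrained('test_nomask')

tok2 = tr.AutoTokenizer.from_pretrained('test_nomask')
# Both are expected to produce the same tokenization now:
print(tok.tokenize('Lo<mask>m Ipsum'))
print(tok2.tokenize('Lo<mask>m Ipsum'))
```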
Using `treat_whitespace_as_suffix=True` in `sentencepiece` makes this problem worse, but even without this flag the tokenizer created by `AutoTokenizer.from_pretrained()` still inserts whitespace that was not present in the source text. I haven't been able to track down where this comes from or how to avoid it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12308/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12307 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12307/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12307/comments | https://api.github.com/repos/huggingface/transformers/issues/12307/events | https://github.com/huggingface/transformers/issues/12307 | 927,383,319 | MDU6SXNzdWU5MjczODMzMTk= | 12,307 | Tokenizing in the dataset and padding manually using tokenizer.pad in the collator | {
"login": "jandono",
"id": 6882753,
"node_id": "MDQ6VXNlcjY4ODI3NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6882753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jandono",
"html_url": "https://github.com/jandono",
"followers_url": "https://api.github.com/users/jandono/followers",
"following_url": "https://api.github.com/users/jandono/following{/other_user}",
"gists_url": "https://api.github.com/users/jandono/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jandono/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jandono/subscriptions",
"organizations_url": "https://api.github.com/users/jandono/orgs",
"repos_url": "https://api.github.com/users/jandono/repos",
"events_url": "https://api.github.com/users/jandono/events{/privacy}",
"received_events_url": "https://api.github.com/users/jandono/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Some additional info that might help. Encodings looks like: `encodings = [batch_encoding_1, ... , batch_encoding_2]`. Each batch encoding looks like: \r\n\r\n```python\r\n{'input_ids': tensor([[ 101, 1006, 1039, 1007, 2065, 1996, 13666, 11896, 2000, 14037,\r\n 2007, 2019, 14987, 2104, 2023, 11075, 3429, 1010, 1999, 2804,\r\n 2000, 2151, 2060, 2128, 7583, 3111, 1997, 1996, 4054, 1010,\r\n 1996, 4054, 2089, 4685, 2008, 14987, 2006, 1996, 13666, 1005,\r\n 1055, 6852, 1998, 2151, 3465, 22667, 2011, 1996, 4054, 2097,\r\n 2022, 1037, 7016, 2349, 2013, 1996, 13666, 2000, 1996, 4054,\r\n 1012, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}\r\n```\r\n\r\nAnd this is the line that eventually raises an exception:\r\n\r\nhttps://github.com/huggingface/transformers/blob/1498eb9888d55d76385b45e074f26703cc5049f3/src/transformers/tokenization_utils_base.py#L699\r\n\r\n",
"I managed to make a small reproducible example:\r\n\r\n\r\n```python\r\n\r\nfrom transformers import BertTokenizer\r\nfrom torch import tensor\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\nencodings = [{'input_ids': tensor([[ 101, 1006, 1039, 1007, 2065, 1996, 13666, 11896, 2000, 14037,\r\n 2007, 2019, 14987, 2104, 2023, 11075, 3429, 1010, 1999, 2804,\r\n 2000, 2151, 2060, 2128, 7583, 3111, 1997, 1996, 4054, 1010,\r\n 1996, 4054, 2089, 4685, 2008, 14987, 2006, 1996, 13666, 1005,\r\n 1055, 6852, 1998, 2151, 3465, 22667, 2011, 1996, 4054, 2097,\r\n 2022, 1037, 7016, 2349, 2013, 1996, 13666, 2000, 1996, 4054,\r\n 1012, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}, {'input_ids': tensor([[ 101, 1006, 1037, 1007, 2202, 2046, 4070, 2035, 1997, 1996, 7882, 6214,\r\n 1997, 1996, 3563, 3105, 1010, 2164, 1996, 3872, 2030, 3635, 1997, 1996,\r\n 7170, 2000, 2022, 2333, 1010, 3292, 2000, 2022, 7837, 2005, 4651, 1010,\r\n 2334, 4026, 3785, 1010, 2051, 1997, 2154, 1010, 3517, 3403, 2335, 1010,\r\n 2569, 2609, 3785, 1998, 2060, 2569, 6214, 1025, 1998, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}]\r\n\r\nbatched_encodings = tokenizer.pad(encodings, padding='longest', return_tensors='pt')\r\n```",
"@LysandreJik any update on this?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@LysandreJik @patrickvonplaten @sgugger \r\n\r\nI apologize for tagging Patric and Sylvain, but as Lysandre seems to be busy, do you perhaps know someone who can help with this?",
"The `tokenizer.pad` method only applies padding for list of examples, so each of the elements in your `encoding` should be one-dimensional. If you remove the extra pair of [] in all your tensors in your minimal example, it will work.\r\n\r\nAlso please use the [forums](https://discuss.huggingface.co/) for questions around the library as we keep the issues for bugs and feature requests only.",
"Thanks a lot @sgugger , I posted it her as it looked like a bug to me based on the documentation. Additionally, those extra set of parenthesis come from the tokenizer not me. When running:\r\n\r\n```python\r\nencoding = self.tokenizer(text,\r\n add_special_tokens=True,\r\n max_length=512,\r\n padding=False,\r\n truncation=True,\r\n return_attention_mask=True,\r\n return_tensors='pt')\r\n```\r\n\r\nYou get those extra parenthesis. I am assuming they come because in the background of the `__call__` method, `batch_encode` is called instead of `encode`. Am I doing something wrong in the way I am using the tokenizer? My main goal is to simply tokenize the entire dataset beforehand, and only pad during training.",
"You should not use `return_tensors='pt'` for just one text, that option is designed to create batches you directly pass to your model. So if you use it with one text, you get a batch of one encoding. Either add [0] to select the only element of that batch in your dataset, or create the tensors in the collate function.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,631 | 1,631 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-5.4.0-74-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I am trying to avoid tokenizing in the collator in order to speed up data loading, which is why I want to tokenize everything in advance and then simply pad in the collator. I can't provide the entire code; however, here are my `Dataset` and my `Collator`, which will hopefully be enough.
```python
class DatasetTokenized(Dataset):
def __init__(self, data: pd.DataFrame, text_column: str,
label_columns: List[str], tokenizer_name: str):
super(DatasetTokenized, self).__init__()
self.data = data
self.text_column = text_column
self.label_columns = label_columns
self.tokenizer = BertTokenizer.from_pretrained(tokenizer_name)
self.tokenized_data = self.tokenize_data(data)
def __len__(self) -> int:
return len(self.tokenized_data)
def __getitem__(self, index: int) -> Dict:
return self.tokenized_data[index]
def tokenize_data(self, data: pd.DataFrame):
tokenized_data = []
print('Tokenizing data:')
for _, row in tqdm(data.iterrows(), total=len(data)):
text = row[self.text_column]
labels = row[self.label_columns]
encoding = self.tokenizer(text,
add_special_tokens=True,
max_length=512,
padding=False,
truncation=True,
return_attention_mask=True,
return_tensors='pt')
tokenized_data.append({
'text': text,
'encoding': encoding,
'labels': torch.FloatTensor(labels)
})
return tokenized_data
class BertCollatorTokenized:
def __init__(self, tokenizer_name: str):
super(BertCollatorTokenized, self).__init__()
self.tokenizer = BertTokenizer.from_pretrained(tokenizer_name)
def __call__(self, batch: List[Any]):
text, encodings, labels = zip(
*[[sample['text'], sample['encoding'], sample['labels']]
for sample in batch])
encodings = list(encodings)
encodings = self.tokenizer.pad(encodings,
max_length=512,
padding='longest',
return_tensors='pt')
return {
'text': text,
'input_ids': encodings['input_ids'],
'attention_mask': encodings['attention_mask'],
'labels': torch.FloatTensor(labels)
}
```
Error:
> ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
Full error message:
```
File "train_text_classificator.py", line 78, in main
trainer.fit(lightning_system, data_module)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
self._run(model)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 756, in _run
self.dispatch()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 797, in dispatch
self.accelerator.start_training(self)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 807, in run_stage
return self.run_train()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 842, in run_train
self.run_sanity_check(self.lightning_module)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1107, in run_sanity_check
self.run_evaluation()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 949, in run_evaluation
for batch_idx, batch in enumerate(dataloader):
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/transformers-4.2.2-py3.8.egg/transformers/tokenization_utils_base.py", line 771, in convert_to_tensors
tensor = as_tensor(value)
ValueError: expected sequence of length 4 at dim 2 (got 13)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/jav/experimental-framework/data_utils/collators/transformers_collatоrs.py", line 97, in __call__
return_tensors='pt')
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/transformers-4.2.2-py3.8.egg/transformers/tokenization_utils_base.py", line 2706, in pad
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/transformers-4.2.2-py3.8.egg/transformers/tokenization_utils_base.py", line 276, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/transformers-4.2.2-py3.8.egg/transformers/tokenization_utils_base.py", line 788, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would expect `self.tokenizer.pad(encodings, ... )` in the collator to work without issues when given a list of `BatchEncoding` elements.
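For reference, a standalone sketch of the pattern that does work, per the resolution in the comments: elements passed to `tokenizer.pad` must be one-dimensional, so drop `return_tensors='pt'` (or index `[0]`) when tokenizing single texts up front. The texts here are made up for illustration:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
texts = ['first example', 'a somewhat longer second example']

# Tokenize up front WITHOUT return_tensors='pt', so each encoding is 1-D:
encodings = [tokenizer(t, truncation=True, max_length=512) for t in texts]

# Pad to the longest element at collate time and build the batch tensors:
batch = tokenizer.pad(encodings, padding='longest', return_tensors='pt')
print(batch['input_ids'].shape)  # (2, longest_seq_len)
```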
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12307/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12306 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12306/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12306/comments | https://api.github.com/repos/huggingface/transformers/issues/12306/events | https://github.com/huggingface/transformers/issues/12306 | 927,285,220 | MDU6SXNzdWU5MjcyODUyMjA= | 12,306 | Dimensional weight error | {
"login": "Noskid1999",
"id": 34827312,
"node_id": "MDQ6VXNlcjM0ODI3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/34827312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Noskid1999",
"html_url": "https://github.com/Noskid1999",
"followers_url": "https://api.github.com/users/Noskid1999/followers",
"following_url": "https://api.github.com/users/Noskid1999/following{/other_user}",
"gists_url": "https://api.github.com/users/Noskid1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Noskid1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Noskid1999/subscriptions",
"organizations_url": "https://api.github.com/users/Noskid1999/orgs",
"repos_url": "https://api.github.com/users/Noskid1999/repos",
"events_url": "https://api.github.com/users/Noskid1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/Noskid1999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
" I am currently getting the same error - how did you solve it? Thanks in advance!"
] | 1,624 | 1,643 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0
- Platform: Windows
- Python version: 3.7.5
- PyTorch version (GPU?): 1.8.1+cu111
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
wav2vec2: @patrickvonplaten
## To reproduce
Steps to reproduce the behavior:
1. Same steps as in the fine-tuning wav2vec2 : https://huggingface.co/blog/fine-tune-wav2vec2-english
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-51-9db903a616c9> in <module>
----> 1 trainer.train()
2 trainer.save_model('content/wav2vec2-nepali-openslr-54_10000')
3 trainer.tokenizer.save_pretrained('content/wav2vec2-nepali-openslr-54_10000')
4 processor.save_pretrained('content/wav2vec2-nepali-openslr-54_10000')
d:\work\asr\transformer_test\env\lib\site-packages\transformers\trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1261 tr_loss += self.training_step(model, inputs)
1262 else:
-> 1263 tr_loss += self.training_step(model, inputs)
1264 self.current_flos += float(self.floating_point_ops(inputs))
1265
d:\work\asr\transformer_test\env\lib\site-packages\transformers\trainer.py in training_step(self, model, inputs)
1744 if self.use_amp:
1745 with autocast():
-> 1746 loss = self.compute_loss(model, inputs)
1747 else:
1748 loss = self.compute_loss(model, inputs)
d:\work\asr\transformer_test\env\lib\site-packages\transformers\trainer.py in compute_loss(self, model, inputs, return_outputs)
1778 else:
1779 labels = None
-> 1780 outputs = model(**inputs)
1781 # Save past state if it exists
1782 # TODO: this needs to be fixed and made cleaner later.
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\transformers\models\wav2vec2\modeling_wav2vec2.py in forward(self, input_values, attention_mask, output_attentions, output_hidden_states, return_dict, labels)
1470 output_attentions=output_attentions,
1471 output_hidden_states=output_hidden_states,
-> 1472 return_dict=return_dict,
1473 )
1474
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\transformers\models\wav2vec2\modeling_wav2vec2.py in forward(self, input_values, attention_mask, mask_time_indices, output_attentions, output_hidden_states, return_dict)
1042 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1043
-> 1044 extract_features = self.feature_extractor(input_values)
1045 extract_features = extract_features.transpose(1, 2)
1046
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\transformers\models\wav2vec2\modeling_wav2vec2.py in forward(self, input_values)
329 hidden_states = input_values[:, None]
330 for conv_layer in self.conv_layers:
--> 331 hidden_states = conv_layer(hidden_states)
332
333 return hidden_states
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\transformers\models\wav2vec2\modeling_wav2vec2.py in forward(self, hidden_states)
222
223 def forward(self, hidden_states):
--> 224 hidden_states = self.conv(hidden_states)
225
226 hidden_states = hidden_states.transpose(-2, -1)
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
261
262 def forward(self, input: Tensor) -> Tensor:
--> 263 return self._conv_forward(input, self.weight, self.bias)
264
265
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\conv.py in _conv_forward(self, input, weight, bias)
258 _single(0), self.dilation, self.groups)
259 return F.conv1d(input, weight, bias, self.stride,
--> 260 self.padding, self.dilation, self.groups)
261
262 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [512, 1, 10], but got 4-dimensional input of size [1, 1, 1, 43200] instead
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12306/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12305 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12305/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12305/comments | https://api.github.com/repos/huggingface/transformers/issues/12305/events | https://github.com/huggingface/transformers/pull/12305 | 927,266,566 | MDExOlB1bGxSZXF1ZXN0Njc1NDQ5Njk5 | 12,305 | [Flax] Main doc for event orga | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
This PR adds the main document for the Flax/JAX community week organization.
@patil-suraj @suzana-ilic @osanseviero @thomwolf @avital @marcvanzee | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12305/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12305/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12305",
"html_url": "https://github.com/huggingface/transformers/pull/12305",
"diff_url": "https://github.com/huggingface/transformers/pull/12305.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12305.patch",
"merged_at": 1624381372000
} |
https://api.github.com/repos/huggingface/transformers/issues/12304 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12304/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12304/comments | https://api.github.com/repos/huggingface/transformers/issues/12304/events | https://github.com/huggingface/transformers/pull/12304 | 927,189,026 | MDExOlB1bGxSZXF1ZXN0Njc1MzgzMzQy | 12,304 | Add CodeCarbon Integration | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @sashavor"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
This PR adds `codecarbon` for carbon footprint tracking. This is also useful for BigScience.
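As a rough illustration of the quantity being tracked, here is a hedged sketch using `codecarbon`'s own `EmissionsTracker` API directly (the exact Trainer callback wiring added by this PR may differ):
```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="bert-finetune")
tracker.start()
# ... run training here ...
emissions = tracker.stop()  # estimated kg CO2-eq; also written to emissions.csv
print(f"Estimated emissions: {emissions} kg CO2eq")
```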
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12304/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12304",
"html_url": "https://github.com/huggingface/transformers/pull/12304",
"diff_url": "https://github.com/huggingface/transformers/pull/12304.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12304.patch",
"merged_at": 1624431189000
} |
https://api.github.com/repos/huggingface/transformers/issues/12303 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12303/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12303/comments | https://api.github.com/repos/huggingface/transformers/issues/12303/events | https://github.com/huggingface/transformers/pull/12303 | 927,178,018 | MDExOlB1bGxSZXF1ZXN0Njc1MzczOTQ1 | 12,303 | Fix and improve documentation for LEDForConditionalGeneration | {
"login": "ionicsolutions",
"id": 32523967,
"node_id": "MDQ6VXNlcjMyNTIzOTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32523967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ionicsolutions",
"html_url": "https://github.com/ionicsolutions",
"followers_url": "https://api.github.com/users/ionicsolutions/followers",
"following_url": "https://api.github.com/users/ionicsolutions/following{/other_user}",
"gists_url": "https://api.github.com/users/ionicsolutions/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ionicsolutions/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ionicsolutions/subscriptions",
"organizations_url": "https://api.github.com/users/ionicsolutions/orgs",
"repos_url": "https://api.github.com/users/ionicsolutions/repos",
"events_url": "https://api.github.com/users/ionicsolutions/events{/privacy}",
"received_events_url": "https://api.github.com/users/ionicsolutions/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks again!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
As reported in #12268, the example for text generation with LED does not work because it relies on an implementation detail of the BART model for which it was originally conceived. Further, the summarization example uses a checkpoint that was not finetuned on a summarization task, leading to the model just repeating the entire input.
This PR replaces both examples with versions that are fully functional and illustrate the respective task.
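For context, a sketch of the kind of working summarization usage the new example moves toward, assuming a checkpoint actually fine-tuned for summarization such as `allenai/led-large-16384-arxiv` (the exact docstring wording in the PR may differ):
```python
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-large-16384-arxiv")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-large-16384-arxiv")

article = "Replace this with a long document to summarize."
inputs = tokenizer(article, return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, num_beams=2, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```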
<!-- Remove if not applicable -->
Fixes #12268
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12303/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12303",
"html_url": "https://github.com/huggingface/transformers/pull/12303",
"diff_url": "https://github.com/huggingface/transformers/pull/12303.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12303.patch",
"merged_at": 1624370293000
} |
https://api.github.com/repos/huggingface/transformers/issues/12302 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12302/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12302/comments | https://api.github.com/repos/huggingface/transformers/issues/12302/events | https://github.com/huggingface/transformers/pull/12302 | 927,129,453 | MDExOlB1bGxSZXF1ZXN0Njc1MzMxOTc3 | 12,302 | electra SequenceClassification layer change | {
"login": "kamalkraj",
"id": 17096858,
"node_id": "MDQ6VXNlcjE3MDk2ODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamalkraj",
"html_url": "https://github.com/kamalkraj",
"followers_url": "https://api.github.com/users/kamalkraj/followers",
"following_url": "https://api.github.com/users/kamalkraj/following{/other_user}",
"gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions",
"organizations_url": "https://api.github.com/users/kamalkraj/orgs",
"repos_url": "https://api.github.com/users/kamalkraj/repos",
"events_url": "https://api.github.com/users/kamalkraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamalkraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Removed the extra projection layer from `ElectraClassificationHead` to make the fine-tuning model architecture consistent with the original implementation.
Since ELECTRA pre-training has no sentence-level contrastive task, there is no pooling layer in ELECTRA.
Relevant code in the original codebase (a sketch of the matching head follows these links):
1. No additional projection, just returning the CLS token representation.
https://github.com/google-research/electra/blob/8a46635f32083ada044d7e9ad09604742600ee7b/model/modeling.py#L266
2. pooling_output -> dropout -> classification_dense
https://github.com/google-research/electra/blob/8a46635f32083ada044d7e9ad09604742600ee7b/finetune/classification/classification_tasks.py#L218
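A hedged sketch of the simplified head this change implies — mirroring the original repo's dropout → dense over the CLS representation (illustrative, not the exact diff in this PR):
```python
import torch.nn as nn

class SimplifiedElectraClassificationHead(nn.Module):
    """CLS-token head matching the original ELECTRA fine-tuning code."""

    def __init__(self, config):
        super().__init__()
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, features):
        x = features[:, 0, :]  # take the CLS token, no extra projection
        x = self.dropout(x)
        return self.out_proj(x)
```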
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12302/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12302",
"html_url": "https://github.com/huggingface/transformers/pull/12302",
"diff_url": "https://github.com/huggingface/transformers/pull/12302.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12302.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12301 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12301/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12301/comments | https://api.github.com/repos/huggingface/transformers/issues/12301/events | https://github.com/huggingface/transformers/issues/12301 | 927,047,406 | MDU6SXNzdWU5MjcwNDc0MDY= | 12,301 | Are there any examples showing how to use the `metric_for_best_model` of class `TrainingArguments`? | {
"login": "yc1999",
"id": 37548571,
"node_id": "MDQ6VXNlcjM3NTQ4NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/37548571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yc1999",
"html_url": "https://github.com/yc1999",
"followers_url": "https://api.github.com/users/yc1999/followers",
"following_url": "https://api.github.com/users/yc1999/following{/other_user}",
"gists_url": "https://api.github.com/users/yc1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yc1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yc1999/subscriptions",
"organizations_url": "https://api.github.com/users/yc1999/orgs",
"repos_url": "https://api.github.com/users/yc1999/repos",
"events_url": "https://api.github.com/users/yc1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/yc1999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nPinging @sgugger \r\n\r\nThanks!",
"The metrics need to be computed and have a name that is in what your `compute_metrics` function returns.",
"> The metrics need to be computed and have a name that is in what your `compute_metrics` function returns.\r\n\r\nthis works for me,thank you~"
] | 1,624 | 1,624 | 1,624 | NONE | null | <img width="567" alt="截屏2021-06-22 下午5 49 14" src="https://user-images.githubusercontent.com/37548571/122903624-2a128c80-d382-11eb-9400-4bb884d6c0c6.png">
In the [documentation](https://huggingface.co/transformers/master/main_classes/trainer.html#trainingarguments), it says one can pass a `str` to `metric_for_best_model`. I am a bit confused about how this works. For example, can I just set `metric_for_best_model="accuracy"`, and it will compute accuracy itself? And if I have my own metric, how can I customize it? Thank you 😄
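For reference, a minimal sketch of how the pieces fit together, per the answer in the comments (the `accuracy` key, the `out` directory, and the `model`/dataset names are illustrative assumptions; the `Trainer` matches the name with or without the `eval_` prefix it adds during evaluation):
```python
import numpy as np
from transformers import Trainer, TrainingArguments

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # The key of this dict is the name that `metric_for_best_model` refers to.
    return {"accuracy": float((predictions == labels).mean())}

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",  # must match a key returned above
)

# `model`, `train_ds`, and `eval_ds` are assumed to be defined elsewhere.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
)
```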
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12301/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12300 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12300/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12300/comments | https://api.github.com/repos/huggingface/transformers/issues/12300/events | https://github.com/huggingface/transformers/pull/12300 | 927,023,605 | MDExOlB1bGxSZXF1ZXN0Njc1MjQxMDA5 | 12,300 | [Flax] ViT training example | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you run `make style` ?"
] | 1,624 | 1,625 | 1,625 | MEMBER | null | # What does this PR do?
This PR adds an image classification script for fine-tuning Flax ViT.
For faster processing and loading, a torch `DataLoader` is used, since data loading can become a bottleneck on TPU.
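As a rough illustration of that data-loading setup (a hedged sketch; the collate function and the "pixel_values"/"label" field names are assumptions, not taken from the script):
```python
import numpy as np
from torch.utils.data import DataLoader

def numpy_collate(batch):
    # Stack examples into NumPy arrays so batches can be fed straight
    # to jit-compiled Flax/JAX train steps without torch tensors.
    pixel_values = np.stack([np.asarray(example["pixel_values"]) for example in batch])
    labels = np.array([example["label"] for example in batch])
    return {"pixel_values": pixel_values, "labels": labels}

# `train_dataset` is assumed to yield dicts with "pixel_values" and "label".
train_loader = DataLoader(
    train_dataset, batch_size=64, shuffle=True, num_workers=4, collate_fn=numpy_collate
)
```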
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12300/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12300",
"html_url": "https://github.com/huggingface/transformers/pull/12300",
"diff_url": "https://github.com/huggingface/transformers/pull/12300.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12300.patch",
"merged_at": 1625489583000
} |
https://api.github.com/repos/huggingface/transformers/issues/12299 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12299/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12299/comments | https://api.github.com/repos/huggingface/transformers/issues/12299/events | https://github.com/huggingface/transformers/issues/12299 | 926,949,677 | MDU6SXNzdWU5MjY5NDk2Nzc= | 12,299 | T ** 2 in distillation process | {
"login": "DA-L3",
"id": 33768245,
"node_id": "MDQ6VXNlcjMzNzY4MjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/33768245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DA-L3",
"html_url": "https://github.com/DA-L3",
"followers_url": "https://api.github.com/users/DA-L3/followers",
"following_url": "https://api.github.com/users/DA-L3/following{/other_user}",
"gists_url": "https://api.github.com/users/DA-L3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DA-L3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DA-L3/subscriptions",
"organizations_url": "https://api.github.com/users/DA-L3/orgs",
"repos_url": "https://api.github.com/users/DA-L3/repos",
"events_url": "https://api.github.com/users/DA-L3/events{/privacy}",
"received_events_url": "https://api.github.com/users/DA-L3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | Hello,
I am confused about one part of the https://github.com/huggingface/transformers/blob/master/examples/research_projects/distillation/distiller.py script by VictorSanh et al.

Why does the script weight the KL divergence between the student and the teacher distributions by an additional `T ** 2`, namely the `* (self.temperature) ** 2` factor on line 417?
The Hinton paper mentions weighting by the squared temperature value, but in this blog post https://medium.com/huggingface/distilbert-8cf3380435b5 by VictorSanh (the same author), the KL term (found about halfway down the page) does not appear to be weighted.
What am I missing here? Thank you.
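For reference, the likely answer is in section 2 of the Hinton et al. paper: the gradients produced by the soft targets scale as `1/T**2`, so the loss is multiplied by `T**2` to keep its contribution roughly constant relative to the hard-label loss when the temperature is tuned. A minimal sketch of the pattern used in `distiller.py` (function name and default below are illustrative):
```python
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, temperature=2.0):
    # KL between temperature-softened distributions; the soft-target
    # gradients scale as 1/T**2, so the T**2 factor keeps this term's
    # magnitude comparable to a hard-label loss when T is changed.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```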
@VictorSanh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12299/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12299/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12298 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12298/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12298/comments | https://api.github.com/repos/huggingface/transformers/issues/12298/events | https://github.com/huggingface/transformers/pull/12298 | 926,883,089 | MDExOlB1bGxSZXF1ZXN0Njc1MTIxMDI4 | 12,298 | add FlaxAutoModelForImageClassification in main init | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
Adds `FlaxAutoModelForImageClassification` and `FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING` to the main init. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12298/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12298/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12298",
"html_url": "https://github.com/huggingface/transformers/pull/12298",
"diff_url": "https://github.com/huggingface/transformers/pull/12298.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12298.patch",
"merged_at": 1624366565000
} |
https://api.github.com/repos/huggingface/transformers/issues/12297 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12297/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12297/comments | https://api.github.com/repos/huggingface/transformers/issues/12297/events | https://github.com/huggingface/transformers/issues/12297 | 926,860,650 | MDU6SXNzdWU5MjY4NjA2NTA= | 12,297 | Error: while executing run_qa.py from examples/pytorch/question-answering/ directory | {
"login": "pandyahariom",
"id": 58762872,
"node_id": "MDQ6VXNlcjU4NzYyODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/58762872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pandyahariom",
"html_url": "https://github.com/pandyahariom",
"followers_url": "https://api.github.com/users/pandyahariom/followers",
"following_url": "https://api.github.com/users/pandyahariom/following{/other_user}",
"gists_url": "https://api.github.com/users/pandyahariom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pandyahariom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pandyahariom/subscriptions",
"organizations_url": "https://api.github.com/users/pandyahariom/orgs",
"repos_url": "https://api.github.com/users/pandyahariom/repos",
"events_url": "https://api.github.com/users/pandyahariom/events{/privacy}",
"received_events_url": "https://api.github.com/users/pandyahariom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You are using a model with `max_length = 128` but then pass along `max_seq_length 384`, which is overridden to become 128 since that is the maximum the tokenizer thinks the model can handle because of:\r\n```\r\ntokenizer = BertTokenizer(do_lower_case=False, model_max_length=128)\r\n```\r\nand then trying to use a stride of 128. Either pick a greater max length or lower the stride to be < 128.",
"@sgugger Thanks a lot. I have reduced stride and it is working now. "
] | 1,624 | 1,624 | 1,624 | NONE | null | I am using the Google Colab environment to perform a QA task. To evaluate the fine-tuned model I am using the run_qa.py file, but in **STEP 2** below I get the error shown.
**STEP 1:**
In the first step, the bert_base_cased model trained on the MLM task is fine-tuned on a QA task using SQuAD.
```python
model = AutoModelForQuestionAnswering.from_pretrained(path_of_my_bert_base_cased_MLM_checkpoint)

args = TrainingArguments(
    f"test-squad",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    weight_decay=0.01,
)

# Freezing embedding learning for experimental purposes
for param in model.parameters():
    param.requires_grad = True
for param in model.get_input_embeddings().parameters():
    param.requires_grad = False

data_collator = default_data_collator

tokenizer = BertTokenizer(
    <location-of-my-custom-vocab>,
    do_lower_case=False,
    model_max_length=128,
)

trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)
trainer.train()
```
**STEP 2:**
Now, while using the above QA fine-tuned model with run_qa.py and the following parameters, I get a `panicked at 'assertion failed'` error:
```bash
python run_qa.py \
  --model_name_or_path <path-to-my-above-model> \
  --dataset_name xquad \
  --dataset_config_name xquad.hi \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content
```
Error Message:
```
06/22/2021 04:49:39 - WARNING - __main__ - The max_seq_length passed (384) is larger than the maximum length for the model (128). Using max_seq_length=128.
Running tokenizer on validation dataset: 0% 0/2 [00:00<?, ?ba/s]thread '<unnamed>' panicked at 'assertion failed: stride < max_len', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/encoding.rs:322:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
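Per the resolution in the comments above, the tokenizer's `model_max_length=128` caps the effective `max_seq_length` at 128, and the stride must be strictly smaller than that. A hedged fix is to lower the stride (the value 64 below is illustrative; raising the tokenizer's `model_max_length` instead would also work):
```bash
python run_qa.py \
  --model_name_or_path <path-to-my-above-model> \
  --dataset_name xquad \
  --dataset_config_name xquad.hi \
  --do_eval \
  --max_seq_length 128 \
  --doc_stride 64 \
  --output_dir /content
```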
## Environment info
- `transformers` version: 4.8.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: parallel set-up
### Who can help
@LysandreJik, @sgugger, @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12297/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12296 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12296/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12296/comments | https://api.github.com/repos/huggingface/transformers/issues/12296/events | https://github.com/huggingface/transformers/issues/12296 | 926,850,039 | MDU6SXNzdWU5MjY4NTAwMzk= | 12,296 | BART infilling example? | {
"login": "suzyahyah",
"id": 2980993,
"node_id": "MDQ6VXNlcjI5ODA5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2980993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suzyahyah",
"html_url": "https://github.com/suzyahyah",
"followers_url": "https://api.github.com/users/suzyahyah/followers",
"following_url": "https://api.github.com/users/suzyahyah/following{/other_user}",
"gists_url": "https://api.github.com/users/suzyahyah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suzyahyah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suzyahyah/subscriptions",
"organizations_url": "https://api.github.com/users/suzyahyah/orgs",
"repos_url": "https://api.github.com/users/suzyahyah/repos",
"events_url": "https://api.github.com/users/suzyahyah/events{/privacy}",
"received_events_url": "https://api.github.com/users/suzyahyah/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"(Resolved, but documentation needs to be updated)\r\n\r\n```\r\nfrom transformers import BartConfig\r\nconfig = BartConfig(force_bos_token_to_be_generated=True)\r\nmodel = BartForConditionalGeneration.from_pretrained(\"facebook/bart-large\", config=config)\r\n...\r\n```\r\n\r\nwill let the model do infilling. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | Hi,
I'm trying this official example from the documentation (v4.7.0) for BART mask infilling:
https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration
```
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", force_bos_token_to_be_generated=True)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == ['UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria']
```
First, I get this `TypeError: __init__() got an unexpected keyword argument 'force_bos_token_to_be_generated'`.
After removing that argument, the output is `['UNALSO SEE']`, which is unexpected.
I also tried several other examples but get behavior which does not seem like infilling:
```
example_english_phrase = "They are having a <mask> in a park."
batch = tok(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'])
tok.batch_decode(generated_ids, skip_special_tokens=True)
```
Output: `'They are in a park.They are having a party.'`
**Do I have the wrong model or documentation?** Grateful for any pointers to help resolve this.
@patrickvonplaten, @patil-suraj
- `transformers` version: 4.7.0
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1+cu110
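Per the resolution comment above, the keyword moved to the config in this version; a sketch of the working invocation (mirroring the comment, untested here):
```python
from transformers import BartConfig, BartForConditionalGeneration, BartTokenizer

config = BartConfig(force_bos_token_to_be_generated=True)
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", config=config)
tok = BartTokenizer.from_pretrained("facebook/bart-large")

batch = tok("UN Chief Says There Is No <mask> in Syria", return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
print(tok.batch_decode(generated_ids, skip_special_tokens=True))
```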
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12296/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12295 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12295/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12295/comments | https://api.github.com/repos/huggingface/transformers/issues/12295/events | https://github.com/huggingface/transformers/issues/12295 | 926,771,569 | MDU6SXNzdWU5MjY3NzE1Njk= | 12,295 | [examples] replicate the new `--log_level` feature to all trainer-based pytorch examples | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Can I take this?",
"Yes, of course! Thank you, @bhadreshpsavani ",
"@bhadreshpsavani, once https://github.com/huggingface/transformers/pull/12309 gets merged, please rebase your branch and rename\r\n```\r\n- get_node_log_level()\r\n+ get_process_log_level()\r\n```\r\n\r\nThank you!",
"ok, it's merged now.",
"Thanks!",
"How can I contribute?",
"Please refer to https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/pull/12276 introduced a new `--log_level` feature, which now allows users to set their desired log level via CLI or TrainingArguments.
`run_translation.py` was used as a "model" for other examples.
Now we need to replicate this to all other Trainer-based examples under `examples/pytorch/`; the three changes are:
1. importing `datasets`
2. using `training_args.get_node_log_level()` and setting log_level in 3 modules
3. replacing `datasets` object name with `raw_datasets`, since otherwise we have a conflict with `datasets` the module.
and the relevant diff is [here](https://github.com/huggingface/transformers/pull/12276/files?file-filters%5B%5D=.py#diff-09777f56cee1060a535a72ce99a6c96cdb7f330c8cc3f9dcca442b3f7768237a)
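A minimal sketch of the three changes as they land in an example script (note the helper was later renamed to `get_process_log_level()` per the comments; the logging calls mirror the linked diff but treat the exact names as illustrative):
```python
import logging

import datasets
import transformers

logger = logging.getLogger(__name__)

# Change (2), inside main() after `training_args` is parsed:
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()

# Change (3): rename the dataset object so it no longer shadows the module,
# e.g. `raw_datasets = load_dataset(...)` instead of `datasets = load_dataset(...)`.
```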
and of course, since we don't quite have extensive tests for examples, you can just test with a standard command from the corresponding README.md plus `--log_level=error` and check that almost all logs are gone.
This is open to all.
And thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12295/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12294 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12294/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12294/comments | https://api.github.com/repos/huggingface/transformers/issues/12294/events | https://github.com/huggingface/transformers/pull/12294 | 926,707,701 | MDExOlB1bGxSZXF1ZXN0Njc0OTc3MTYy | 12,294 | [tests] multiple improvements | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR:
1. splits the two groups of `TrainerIntegrationTest` tests into separate subclasses, since at the moment many tests run training twice in `setUp` and then don't use the results; this should make things a bit faster, and mainly it removes the weird unexpected logs when debugging tests.
2. introduces, uses and documents `require_torch_up_to_2_gpus`, as we have a bunch of tests that can only run on up to 2 GPUs; currently they weren't `skipped` but hacked to return early while reporting `passed`!
3. fixes `test_resume_training_with_randomness` to use `assertAlmostEqual` so that we get debug data when it fails, fixes the comment to match the code (this test only works with 0 or 1 GPUs), and uses the new marker.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12294/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12294",
"html_url": "https://github.com/huggingface/transformers/pull/12294",
"diff_url": "https://github.com/huggingface/transformers/pull/12294.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12294.patch",
"merged_at": 1624330296000
} |
https://api.github.com/repos/huggingface/transformers/issues/12293 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12293/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12293/comments | https://api.github.com/repos/huggingface/transformers/issues/12293/events | https://github.com/huggingface/transformers/pull/12293 | 926,684,634 | MDExOlB1bGxSZXF1ZXN0Njc0OTU3NjYy | 12,293 | [tests] reset report_to to none, avoid deprecation warning | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR fixes the warnings:
```
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none).
In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code
and make this info disappear :-).
```
by setting `report_to=[]`, which also makes the tests a tad faster by not doing any reporting, unless the test explicitly asks for it.
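For reference, the setting looks like this (a minimal sketch):
```python
from transformers import TrainingArguments

# An explicit empty list disables all reporting integrations and
# avoids the v5 deprecation warning about the implicit default.
args = TrainingArguments(output_dir="out", report_to=[])
```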
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12293/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12293",
"html_url": "https://github.com/huggingface/transformers/pull/12293",
"diff_url": "https://github.com/huggingface/transformers/pull/12293.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12293.patch",
"merged_at": 1624319412000
} |
https://api.github.com/repos/huggingface/transformers/issues/12292 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12292/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12292/comments | https://api.github.com/repos/huggingface/transformers/issues/12292/events | https://github.com/huggingface/transformers/pull/12292 | 926,601,919 | MDExOlB1bGxSZXF1ZXN0Njc0ODg0OTI4 | 12,292 | Fix for the issue of device-id getting hardcoded for position-ids during Tracing for Flaubert | {
"login": "HamidShojanazeri",
"id": 9162336,
"node_id": "MDQ6VXNlcjkxNjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9162336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamidShojanazeri",
"html_url": "https://github.com/HamidShojanazeri",
"followers_url": "https://api.github.com/users/HamidShojanazeri/followers",
"following_url": "https://api.github.com/users/HamidShojanazeri/following{/other_user}",
"gists_url": "https://api.github.com/users/HamidShojanazeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamidShojanazeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamidShojanazeri/subscriptions",
"organizations_url": "https://api.github.com/users/HamidShojanazeri/orgs",
"repos_url": "https://api.github.com/users/HamidShojanazeri/repos",
"events_url": "https://api.github.com/users/HamidShojanazeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamidShojanazeri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,624 | 1,630 | 1,630 | CONTRIBUTOR | null | # What does this PR do?
This PR is part of a series of PRs that follow PR #11252 and applies similar changes to Flaubert.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: issues #5664 and #976
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Does not apply.
## Who can review?
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12292/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12292",
"html_url": "https://github.com/huggingface/transformers/pull/12292",
"diff_url": "https://github.com/huggingface/transformers/pull/12292.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12292.patch",
"merged_at": 1630486018000
} |
https://api.github.com/repos/huggingface/transformers/issues/12291 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12291/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12291/comments | https://api.github.com/repos/huggingface/transformers/issues/12291/events | https://github.com/huggingface/transformers/pull/12291 | 926,586,252 | MDExOlB1bGxSZXF1ZXN0Njc0ODcwNzQz | 12,291 | Trainer: adjust wandb installation example | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | COLLABORATOR | null | Hi,
this is a very pedantic fix for the `wandb` installation example.
In the original version, `wandb login` would be executed even when the previous command, `pip install wandb`, failed.
This can be solved by chaining the commands with the shell `&&` ("and") operator.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12291/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12291",
"html_url": "https://github.com/huggingface/transformers/pull/12291",
"diff_url": "https://github.com/huggingface/transformers/pull/12291.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12291.patch",
"merged_at": 1624366051000
} |
https://api.github.com/repos/huggingface/transformers/issues/12290 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12290/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12290/comments | https://api.github.com/repos/huggingface/transformers/issues/12290/events | https://github.com/huggingface/transformers/pull/12290 | 926,520,801 | MDExOlB1bGxSZXF1ZXN0Njc0ODEzOTU2 | 12,290 | Fix for the issue of device-id getting hardcoded for position-ids during Tracing for Distillbert | {
"login": "HamidShojanazeri",
"id": 9162336,
"node_id": "MDQ6VXNlcjkxNjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9162336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamidShojanazeri",
"html_url": "https://github.com/HamidShojanazeri",
"followers_url": "https://api.github.com/users/HamidShojanazeri/followers",
"following_url": "https://api.github.com/users/HamidShojanazeri/following{/other_user}",
"gists_url": "https://api.github.com/users/HamidShojanazeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamidShojanazeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamidShojanazeri/subscriptions",
"organizations_url": "https://api.github.com/users/HamidShojanazeri/orgs",
"repos_url": "https://api.github.com/users/HamidShojanazeri/repos",
"events_url": "https://api.github.com/users/HamidShojanazeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamidShojanazeri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,624 | 1,630 | 1,630 | CONTRIBUTOR | null | # What does this PR do?
This PR is part of a series of PRs that follow PR #11252 and applies similar changes to DistilBERT.
Fixes # (issue)
It registers a buffer for `position_ids` in the constructor and then resizes (slices) it in the forward method based on the input shape.
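A minimal sketch of that pattern (module and config attribute names are illustrative, not the exact diff):
```python
import torch
import torch.nn as nn

class Embeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.dim)
        # A buffer moves with the module across devices, so tracing does not
        # bake a hard-coded device id into the graph.
        self.register_buffer(
            "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))
        )

    def forward(self, input_ids):
        seq_length = input_ids.size(1)
        # Slice the buffer instead of calling torch.arange(...).to(input_ids.device).
        position_ids = self.position_ids[:, :seq_length]
        return self.position_embeddings(position_ids)
```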
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: issues #5664 and #976
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? does not apply.
## Who can review?
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12290/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12290",
"html_url": "https://github.com/huggingface/transformers/pull/12290",
"diff_url": "https://github.com/huggingface/transformers/pull/12290.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12290.patch",
"merged_at": 1630486045000
} |
https://api.github.com/repos/huggingface/transformers/issues/12289 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12289/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12289/comments | https://api.github.com/repos/huggingface/transformers/issues/12289/events | https://github.com/huggingface/transformers/pull/12289 | 926,476,891 | MDExOlB1bGxSZXF1ZXN0Njc0Nzc2MTM2 | 12,289 | Fix TFWav2Vec2 SpecAugment | {
"login": "will-rice",
"id": 25072137,
"node_id": "MDQ6VXNlcjI1MDcyMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/25072137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/will-rice",
"html_url": "https://github.com/will-rice",
"followers_url": "https://api.github.com/users/will-rice/followers",
"following_url": "https://api.github.com/users/will-rice/following{/other_user}",
"gists_url": "https://api.github.com/users/will-rice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/will-rice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/will-rice/subscriptions",
"organizations_url": "https://api.github.com/users/will-rice/orgs",
"repos_url": "https://api.github.com/users/will-rice/repos",
"events_url": "https://api.github.com/users/will-rice/events{/privacy}",
"received_events_url": "https://api.github.com/users/will-rice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Based on the original paper the time mask should be applied on the time axis like \r\n\r\nHowever, wav2vec2 is masking hidden states. In principle, it should work the same way and if correctly applied you should be able to see the zeroed span in the time dimension of the hidden states.",
"Hey @will-rice,\r\n\r\nThanks for fixing the bug. As I understand it, when applying spec_augment along the time axis we are actually not setting the values to zero but to a trained mask embedding vector (this is mostly because that's how it's used for pretraining actually). \r\nIMO, to correctly implement this one has to make use of `tf.where`. A good way would be to expand the masked indices and `self.masked_spec_embed` to be of the same size as `hidden_states` (as explained above) ati use `tf.where`.\r\n\r\nFor spec_augment along the feature axis I would also suggest to use `tf.where` -> expand the feature indices (this time along the time axis (seq length) and then one can simply do:\r\n\r\n```\r\nhidden_states = tf.where(expanded_mask, hidden_states, 0)\r\n```",
"Let me know if this is not understandable ;-) ",
"> Hey @will-rice,\r\n> \r\n> Thanks for fixing the bug. As I understand it, when applying spec_augment along the time axis we are actually not setting the values to zero but to a trained mask embedding vector (this is mostly because that's how it's used for pretraining actually).\r\n> IMO, to correctly implement this one has to make use of `tf.where`. A good way would be to expand the masked indices and `self.masked_spec_embed` to be of the same size as `hidden_states` (as explained above) ati use `tf.where`.\r\n> \r\n> For spec_augment along the feature axis I would also suggest to use `tf.where` -> expand the feature indices (this time along the time axis (seq length) and then one can simply do:\r\n> \r\n> ```\r\n> hidden_states = tf.where(expanded_mask, hidden_states, 0)\r\n> ```\r\n\r\n\"fixing\" :stuck_out_tongue:. I believe I understand the way to do it now but will post questions here if I get stuck. Thank you for walking through this!"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the SpecAugment implementation for TFWav2Vec2. I had a lot of trouble with this during the original PR, so I'm not 100% sure this is correct. I would love feedback, because at this point there must be a knowledge gap.
Fixes # (issue)
https://github.com/huggingface/transformers/issues/12264
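Following the review comments above, a hedged sketch of the `tf.where`-based masking (tensor names and the boolean mask shapes, `(batch, seq_len)` for time and `(batch, hidden)` for features, are assumptions):
```python
import tensorflow as tf

def apply_spec_augment(hidden_states, mask_time_indices, mask_feature_indices, masked_spec_embed):
    # mask_time_indices: boolean (batch, seq_len); masked time steps get the
    # learned mask embedding rather than zeros.
    time_mask = tf.broadcast_to(tf.expand_dims(mask_time_indices, -1), tf.shape(hidden_states))
    embed = tf.broadcast_to(masked_spec_embed, tf.shape(hidden_states))
    hidden_states = tf.where(time_mask, embed, hidden_states)

    # mask_feature_indices: boolean (batch, hidden); masked feature channels
    # are zeroed across every time step.
    feature_mask = tf.broadcast_to(tf.expand_dims(mask_feature_indices, 1), tf.shape(hidden_states))
    return tf.where(feature_mask, tf.zeros_like(hidden_states), hidden_states)
```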
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12289/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12289/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12289",
"html_url": "https://github.com/huggingface/transformers/pull/12289",
"diff_url": "https://github.com/huggingface/transformers/pull/12289.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12289.patch",
"merged_at": 1624954557000
} |
https://api.github.com/repos/huggingface/transformers/issues/12288 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12288/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12288/comments | https://api.github.com/repos/huggingface/transformers/issues/12288/events | https://github.com/huggingface/transformers/pull/12288 | 926,472,174 | MDExOlB1bGxSZXF1ZXN0Njc0NzcyMDc3 | 12,288 | Add out of vocabulary error to ASR models | {
"login": "will-rice",
"id": 25072137,
"node_id": "MDQ6VXNlcjI1MDcyMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/25072137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/will-rice",
"html_url": "https://github.com/will-rice",
"followers_url": "https://api.github.com/users/will-rice/followers",
"following_url": "https://api.github.com/users/will-rice/following{/other_user}",
"gists_url": "https://api.github.com/users/will-rice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/will-rice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/will-rice/subscriptions",
"organizations_url": "https://api.github.com/users/will-rice/orgs",
"repos_url": "https://api.github.com/users/will-rice/repos",
"events_url": "https://api.github.com/users/will-rice/events{/privacy}",
"received_events_url": "https://api.github.com/users/will-rice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
This PR adds a check to Wav2Vec2 and Hubert models to ensure the labels do not contain values greater than the configured vocab size.
Fixes # (issue)
https://github.com/huggingface/transformers/issues/12270
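A minimal sketch of such a check (standalone here for illustration; the exact message and placement in the models' `forward` may differ):
```python
import torch

def check_ctc_labels(labels: torch.Tensor, vocab_size: int) -> None:
    # CTC targets must index into the vocabulary; values >= vocab_size would
    # otherwise fail deep inside the loss with a hard-to-read device assert.
    if labels.max() >= vocab_size:
        raise ValueError(f"Label values must be <= vocab_size: {vocab_size}")
```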
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12288/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12288",
"html_url": "https://github.com/huggingface/transformers/pull/12288",
"diff_url": "https://github.com/huggingface/transformers/pull/12288.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12288.patch",
"merged_at": 1624953466000
} |
https://api.github.com/repos/huggingface/transformers/issues/12287 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12287/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12287/comments | https://api.github.com/repos/huggingface/transformers/issues/12287/events | https://github.com/huggingface/transformers/pull/12287 | 926,468,568 | MDExOlB1bGxSZXF1ZXN0Njc0NzY4ODgw | 12,287 | Fix for the issue of device-id getting hardcoded for token_type_ids during Tracing for ConvBert | {
"login": "HamidShojanazeri",
"id": 9162336,
"node_id": "MDQ6VXNlcjkxNjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9162336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamidShojanazeri",
"html_url": "https://github.com/HamidShojanazeri",
"followers_url": "https://api.github.com/users/HamidShojanazeri/followers",
"following_url": "https://api.github.com/users/HamidShojanazeri/following{/other_user}",
"gists_url": "https://api.github.com/users/HamidShojanazeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamidShojanazeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamidShojanazeri/subscriptions",
"organizations_url": "https://api.github.com/users/HamidShojanazeri/orgs",
"repos_url": "https://api.github.com/users/HamidShojanazeri/repos",
"events_url": "https://api.github.com/users/HamidShojanazeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamidShojanazeri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,624 | 1,630 | 1,630 | CONTRIBUTOR | null | # What does this PR do?
This PR is part of a series of PRs that follow PR #11252 and applies similar changes to ConvBERT.
Fixes # (issue)
It registers a buffer for `token_type_ids` in the constructor and then resizes (slices) it in the forward method based on the input shape.
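A minimal sketch of the same pattern applied to token type ids (names are illustrative, not the exact diff):
```python
import torch
import torch.nn as nn

class Embeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size)
        # Buffered so no torch.zeros(..., device=...) call lands in a traced graph.
        self.register_buffer(
            "token_type_ids", torch.zeros((1, config.max_position_embeddings), dtype=torch.long)
        )

    def forward(self, input_ids, token_type_ids=None):
        batch_size, seq_length = input_ids.shape
        if token_type_ids is None:
            # Slice and expand the buffer to match the current input shape.
            token_type_ids = self.token_type_ids[:, :seq_length].expand(batch_size, seq_length)
        return self.token_type_embeddings(token_type_ids)
```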
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: issues #5664 and #976
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Not required.
## Who can review?
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12287/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12287",
"html_url": "https://github.com/huggingface/transformers/pull/12287",
"diff_url": "https://github.com/huggingface/transformers/pull/12287.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12287.patch",
"merged_at": 1630486078000
} |
https://api.github.com/repos/huggingface/transformers/issues/12286 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12286/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12286/comments | https://api.github.com/repos/huggingface/transformers/issues/12286/events | https://github.com/huggingface/transformers/issues/12286 | 926,426,814 | MDU6SXNzdWU5MjY0MjY4MTQ= | 12,286 | Memory leak when using DistilBert for inference to extract [CLS] hidden state | {
"login": "lucaguarro",
"id": 22605313,
"node_id": "MDQ6VXNlcjIyNjA1MzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22605313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucaguarro",
"html_url": "https://github.com/lucaguarro",
"followers_url": "https://api.github.com/users/lucaguarro/followers",
"following_url": "https://api.github.com/users/lucaguarro/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaguarro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucaguarro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaguarro/subscriptions",
"organizations_url": "https://api.github.com/users/lucaguarro/orgs",
"repos_url": "https://api.github.com/users/lucaguarro/repos",
"events_url": "https://api.github.com/users/lucaguarro/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucaguarro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You're keeping the results of your model forward call in memory when extending your `pooled_output` list so your memory is bound to take a hit as you iterate through your dataset",
"@LysandreJik Sorry I should have clarified that my dataset consists of 14000 rows and the size out of the output I am trying to extract for each one of them is (1,768). This thus corresponds to (14000 * 768 * 4) Bytes --> 43 megabytes. Unless there are undesired artifacts from the forward calls that are being stored?",
"Seems like I have fixed my problem by making `pooled_outputs` a pytorch tensor and not a list.\r\nSo that my function now looks like this:\r\n\r\n```\r\ndef getPooledOutputs(model, encoded_dataset, batch_size = 32):\r\n model.eval()\r\n\r\n # pooled_outputs = []\r\n pooled_outputs = torch.empty([0,768]).cuda()\r\n print(\"total number of iters \", len(encoded_dataset['input_ids'])//batch_size + 1)\r\n \r\n for i in range(len(encoded_dataset['input_ids'])//batch_size + 1):\r\n print(i)\r\n up_to = i*batch_size + batch_size\r\n if len(encoded_dataset['input_ids']) < up_to:\r\n up_to = len(encoded_dataset['input_ids'])\r\n input_ids = th.LongTensor(encoded_dataset['input_ids'][i*batch_size:up_to]).cuda()\r\n attention_mask = th.LongTensor(encoded_dataset['attention_mask'][i*batch_size:up_to]).cuda()\r\n\r\n with torch.no_grad():\r\n embeddings = model.forward(input_ids=input_ids, attention_mask=attention_mask, output_hidden_states=True)['hidden_states'][-1][:,0] # Pooled output\r\n pooled_outputs = th.cat([pooled_outputs, embeddings],0)\r\n th.cuda.empty_cache()\r\n\r\n return pooled_outputs\r\n```\r\n\r\nStill do not know why having a list of tensors is a problem but I suppose that this does not concern Huggingface"
] | 1,624 | 1,624 | 1,624 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @drjosephliu
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): DistilBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
I am attempting to extract all of the pooled outputs for each row in my dataset and return them as an array. My dataset consists of 14000 rows and the size of a single pooled output is (1,768). Therefore, I would expect my RAM usage to be ~(14000 * 768 * 4) bytes --> 43 MBs.
However, I notice that my RAM usage seems to increase exponentially as more iterations are executed. This occurs when using both the CPU and the GPU. When running on CPU, the Google Colab environment shows a huge jump in RAM usage about 75% of the way through my dataset.
Here is a screenshot of the RAM usage that illustrates this problem:

## To reproduce
Steps to reproduce the behavior:
1. Encode a dataset (sufficiently large; mine has 14k samples of 512 tokens)
2. Run it through my function (provided below) to extract the pooled output of each sample
```python
def getPooledOutputs(model, encoded_dataset, batch_size = 32):
    model.eval()

    pooled_outputs = []
    print("total number of iters ", len(encoded_dataset['input_ids'])//batch_size + 1)

    for i in range(len(encoded_dataset['input_ids'])//batch_size + 1):
        print(i)
        up_to = i*batch_size + batch_size
        if len(encoded_dataset['input_ids']) < up_to:
            up_to = len(encoded_dataset['input_ids'])
        input_ids = th.LongTensor(encoded_dataset['input_ids'][i*batch_size:up_to]).cuda()
        attention_mask = th.LongTensor(encoded_dataset['attention_mask'][i*batch_size:up_to]).cuda()

        with torch.no_grad():
            embeddings = model.forward(input_ids=input_ids, attention_mask=attention_mask, output_hidden_states=True)['hidden_states'][-1][:,0]  # Pooled output
            pooled_outputs.extend(embeddings)
            th.cuda.empty_cache()

    return pooled_outputs
```
This is the error message (GPU):
> RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 15.78 GiB total capacity; 13.75 GiB already allocated; 260.75 MiB free; 14.21 GiB reserved in total by PyTorch)
On CPU my runtime just crashes.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
To have my function return an array of the [CLS] pooled output for every row in my dataset and to have my GPU ram usage roughly constant during the entirety of the function call. | {
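A plausible mitigation, not taken from the original report, is to detach each batch and move it to CPU inside the loop, so that `pooled_outputs` holds no references to GPU storage:
```python
# Inside the loop of getPooledOutputs above:
with torch.no_grad():
    embeddings = model(input_ids=input_ids, attention_mask=attention_mask,
                       output_hidden_states=True)['hidden_states'][-1][:, 0]
# .detach().cpu() copies the batch off the GPU, letting its CUDA memory be freed
pooled_outputs.extend(embeddings.detach().cpu())
```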
"url": "https://api.github.com/repos/huggingface/transformers/issues/12286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12286/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12285 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12285/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12285/comments | https://api.github.com/repos/huggingface/transformers/issues/12285/events | https://github.com/huggingface/transformers/pull/12285 | 926,301,828 | MDExOlB1bGxSZXF1ZXN0Njc0NjI1MjEw | 12,285 | [WIP][Flax] CLIP training example | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"closing this PR, messed up the commit history :( \r\nOpened a new PR here #12491"
] | 1,624 | 1,625 | 1,625 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12285/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12285",
"html_url": "https://github.com/huggingface/transformers/pull/12285",
"diff_url": "https://github.com/huggingface/transformers/pull/12285.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12285.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12284 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12284/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12284/comments | https://api.github.com/repos/huggingface/transformers/issues/12284/events | https://github.com/huggingface/transformers/pull/12284 | 926,204,895 | MDExOlB1bGxSZXF1ZXN0Njc0NTQyNTU5 | 12,284 | [FlaxClip] fix test from/save pretrained test | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
Fixes `test_from_pretrained_save_pretrained` test for `FlaxClip` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12284/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12284",
"html_url": "https://github.com/huggingface/transformers/pull/12284",
"diff_url": "https://github.com/huggingface/transformers/pull/12284.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12284.patch",
"merged_at": 1624287274000
} |
https://api.github.com/repos/huggingface/transformers/issues/12283 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12283/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12283/comments | https://api.github.com/repos/huggingface/transformers/issues/12283/events | https://github.com/huggingface/transformers/pull/12283 | 926,053,654 | MDExOlB1bGxSZXF1ZXN0Njc0NDEyNjE5 | 12,283 | [TFWav2Vec2] Fix docs | {
"login": "chenht2021",
"id": 1046370,
"node_id": "MDQ6VXNlcjEwNDYzNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1046370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenht2021",
"html_url": "https://github.com/chenht2021",
"followers_url": "https://api.github.com/users/chenht2021/followers",
"following_url": "https://api.github.com/users/chenht2021/following{/other_user}",
"gists_url": "https://api.github.com/users/chenht2021/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenht2021/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenht2021/subscriptions",
"organizations_url": "https://api.github.com/users/chenht2021/orgs",
"repos_url": "https://api.github.com/users/chenht2021/repos",
"events_url": "https://api.github.com/users/chenht2021/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenht2021/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for fixing the error @chenht2010 - could you run `make style` once to fix the check code quality test? The PyTorch test error seems unrelated.",
"`make style` should be run from the root folder"
] | 1,624 | 1,624 | 1,624 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a TFWav2Vec2 docs error.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12283/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12283",
"html_url": "https://github.com/huggingface/transformers/pull/12283",
"diff_url": "https://github.com/huggingface/transformers/pull/12283.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12283.patch",
"merged_at": 1624456291000
} |
https://api.github.com/repos/huggingface/transformers/issues/12282 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12282/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12282/comments | https://api.github.com/repos/huggingface/transformers/issues/12282/events | https://github.com/huggingface/transformers/issues/12282 | 926,029,723 | MDU6SXNzdWU5MjYwMjk3MjM= | 12,282 | TFWav2Vec2ForCTC: Error when using padded batch and attention mask | {
"login": "yossing-audatic",
"id": 64093391,
"node_id": "MDQ6VXNlcjY0MDkzMzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/64093391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yossing-audatic",
"html_url": "https://github.com/yossing-audatic",
"followers_url": "https://api.github.com/users/yossing-audatic/followers",
"following_url": "https://api.github.com/users/yossing-audatic/following{/other_user}",
"gists_url": "https://api.github.com/users/yossing-audatic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yossing-audatic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yossing-audatic/subscriptions",
"organizations_url": "https://api.github.com/users/yossing-audatic/orgs",
"repos_url": "https://api.github.com/users/yossing-audatic/repos",
"events_url": "https://api.github.com/users/yossing-audatic/events{/privacy}",
"received_events_url": "https://api.github.com/users/yossing-audatic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Think we can close this one"
] | 1,624 | 1,627 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.8.0.dev0
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten
@will-rice
Models:
- Wav2Vec2
## Information
Model I am using: TFWav2Vec2ForCTC
The problem arises when using:
* Official example script of TFWav2Vec2ForCTC modified to use padded batch
## To reproduce
Steps to reproduce the behavior:
1. Install relevant libraries.
2. Run code snippet below
```python
import tensorflow as tf
from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# Pad the speech file with zeros and create corresponding attention mask
speech_len = len(ds["speech"][0])
padded_speech = ds["speech"][0] + [0.0]*1000
attention_mask = tf.sequence_mask((speech_len,), maxlen=len(padded_speech), dtype=tf.float32)
input_values = processor(padded_speech, return_tensors="tf").input_values # Batch size 1
logits = model(input_values).logits
predicted_ids = tf.argmax(logits, axis=-1)
transcription = processor.decode(predicted_ids[0])
# compute loss
target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
# wrap processor as target processor to encode labels
with processor.as_target_processor():
    labels = processor(target_transcription, return_tensors="tf").input_ids
loss = model(input_values, attention_mask=attention_mask, labels=labels).loss
```
## Outputs
```
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-1-263e70f23fd3> in <module>
33
34
---> 35 loss = model(input_values, attention_mask=attention_mask, labels=labels).loss
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1010 with autocast_variable.enable_auto_cast_variables(
1011 self._compute_dtype_object):
-> 1012 outputs = call_fn(inputs, *args, **kwargs)
1013
1014 if self._activity_regularizer:
~/code/transformers/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, labels, output_hidden_states, return_dict, training, **kwargs)
1553 )
1554
-> 1555 outputs = self.wav2vec2(
1556 input_values=inputs["input_values"],
1557 attention_mask=inputs["attention_mask"],
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1010 with autocast_variable.enable_auto_cast_variables(
1011 self._compute_dtype_object):
-> 1012 outputs = call_fn(inputs, *args, **kwargs)
1013
1014 if self._activity_regularizer:
~/code/transformers/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1225 hidden_states = self._mask_hidden_states(hidden_states)
1226
-> 1227 encoder_outputs = self.encoder(
1228 hidden_states,
1229 attention_mask=attention_mask,
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1010 with autocast_variable.enable_auto_cast_variables(
1011 self._compute_dtype_object):
-> 1012 outputs = call_fn(inputs, *args, **kwargs)
1013
1014 if self._activity_regularizer:
~/code/transformers/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in call(self, hidden_states, attention_mask, output_attentions, output_hidden_states, return_dict, training)
993
994 if attention_mask is not None:
--> 995 hidden_states = hidden_states * tf.expand_dims(attention_mask, -1)
996 attention_mask = _expand_mask(attention_mask)
997 else:
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)
1162 with ops.name_scope(None, op_name, [x, y]) as name:
1163 try:
-> 1164 return func(x, y, name=name)
1165 except (TypeError, ValueError) as e:
1166 # Even if dispatching the op failed, the RHS may be a tensor aware
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in _mul_dispatch(x, y, name)
1494 return sparse_tensor.SparseTensor(y.indices, new_vals, y.dense_shape)
1495 else:
-> 1496 return multiply(x, y, name=name)
1497
1498
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in multiply(x, y, name)
516 """
517
--> 518 return gen_math_ops.mul(x, y, name)
519
520
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py in mul(x, y, name)
6066 return _result
6067 except _core._NotOkStatusException as e:
-> 6068 _ops.raise_from_not_ok_status(e, name)
6069 except _core._FallbackException:
6070 pass
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
6860 message = e.message + (" name: " + name if name is not None else "")
6861 # pylint: disable=protected-access
-> 6862 six.raise_from(core._status_to_exception(e.code, message), None)
6863 # pylint: enable=protected-access
6864
/opt/audatic/venv/lib/python3.8/site-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Incompatible shapes: [1,547,768] vs. [1,544,1] [Op:Mul]
```
## Expected behavior
Code should run without errors and produce a loss equivalent to not using padded batch. Without the padded batch, the loss is:
```python
print(loss)
>>> <tf.Tensor: shape=(), dtype=float32, numpy=39.32432>
```
When using the padded batch and not specifying an attention_mask, the loss is:
```python
print(loss)
>>> <tf.Tensor: shape=(), dtype=float32, numpy=39.96655>
```
## Fix:
The bugfix should be quite easy. On line 1217 of `transformers/models/wav2vec2/modeling_tf_wav2vec2.py`, instead of:
```python
attention_mask = tf.sequence_mask(output_lengths, dtype=hidden_states.dtype)
```
It should be:
```python
max_output_length = self._get_feat_extract_output_lengths(inputs["input_values"].shape[-1])
attention_mask = tf.sequence_mask(output_lengths, maxlen=max_output_length, dtype=hidden_states.dtype)
```
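To illustrate why `maxlen` matters, here is a small standalone sketch (the lengths 544 and 547 are taken from the shape mismatch in the traceback above):
```python
import tensorflow as tf

lengths = tf.constant([544])

# Without maxlen, the mask is only as long as the longest entry in `lengths`,
# which no longer matches the padded time axis of the hidden states.
mask_short = tf.sequence_mask(lengths, dtype=tf.float32)              # shape (1, 544)
mask_full = tf.sequence_mask(lengths, maxlen=547, dtype=tf.float32)   # shape (1, 547)

hidden_states = tf.zeros((1, 547, 768))
masked = hidden_states * tf.expand_dims(mask_full, -1)  # broadcasts cleanly
print(mask_short.shape, mask_full.shape, masked.shape)
```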
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12282/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12281 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12281/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12281/comments | https://api.github.com/repos/huggingface/transformers/issues/12281/events | https://github.com/huggingface/transformers/pull/12281 | 925,913,793 | MDExOlB1bGxSZXF1ZXN0Njc0Mjk0MjQ0 | 12,281 | [WIP] Cogview | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey @patil-suraj, do you think it still makes sense to work on this and get it merged?"
] | 1,624 | 1,636 | 1,627 | MEMBER | null | # What does this PR do?
Adds the [CogView](https://github.com/THUDM/CogView) model for text-to-image generation.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12281/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12281/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12281",
"html_url": "https://github.com/huggingface/transformers/pull/12281",
"diff_url": "https://github.com/huggingface/transformers/pull/12281.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12281.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12280 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12280/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12280/comments | https://api.github.com/repos/huggingface/transformers/issues/12280/events | https://github.com/huggingface/transformers/pull/12280 | 925,909,980 | MDExOlB1bGxSZXF1ZXN0Njc0MjkxMDEx | 12,280 | Rename detr targets to labels | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik if you want, I can also update the failing integration test for `DetrForSegmentation` in this PR (and perhaps rename the PR). And how can I create an alias for `DetrForSegmentation` (to be renamed to `DetrForImageSegmentation`)?",
"Let's merge this PR as-is and update the integration test in another PR.\r\n\r\nFor the alias you can simply do `DetrForImageSegmentation = DetrForSegmentation`"
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
It fixes #12248. As the models accept "labels" as an argument, it's better to also use the term "labels" in the feature extractor instead of "target".
Note that if this PR gets merged, I'll need to update my demo notebooks (rename "target" to "labels").
I also improved the documentation a little more, and removed some unused variables from `DetrConfig`.
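For illustration, a hedged sketch of how the renamed key would be consumed; `image` and `annotations` are placeholders, and the call signature follows the DETR docs:
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

# `image` is a PIL image and `annotations` a COCO-style dict (both placeholders).
encoding = feature_extractor(images=image, annotations=annotations, return_tensors="pt")

# After this PR the preprocessed targets come back under "labels" instead of "target".
outputs = model(pixel_values=encoding["pixel_values"], labels=encoding["labels"])
loss = outputs.loss
```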
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12280/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12280",
"html_url": "https://github.com/huggingface/transformers/pull/12280",
"diff_url": "https://github.com/huggingface/transformers/pull/12280.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12280.patch",
"merged_at": 1624950467000
} |
https://api.github.com/repos/huggingface/transformers/issues/12279 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12279/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12279/comments | https://api.github.com/repos/huggingface/transformers/issues/12279/events | https://github.com/huggingface/transformers/pull/12279 | 925,887,481 | MDExOlB1bGxSZXF1ZXN0Njc0MjcxODQz | 12,279 | Better CI feedback | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | The "View on GitHub" link now redirects to the correct run. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12279/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12279",
"html_url": "https://github.com/huggingface/transformers/pull/12279",
"diff_url": "https://github.com/huggingface/transformers/pull/12279.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12279.patch",
"merged_at": 1624258332000
} |
https://api.github.com/repos/huggingface/transformers/issues/12278 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12278/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12278/comments | https://api.github.com/repos/huggingface/transformers/issues/12278/events | https://github.com/huggingface/transformers/issues/12278 | 925,788,524 | MDU6SXNzdWU5MjU3ODg1MjQ= | 12,278 | ViTFeatureExtractor.save_pretrained() generate "preprocessor_config.json" but not "config.json" | {
"login": "daquarti",
"id": 48704929,
"node_id": "MDQ6VXNlcjQ4NzA0OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/48704929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daquarti",
"html_url": "https://github.com/daquarti",
"followers_url": "https://api.github.com/users/daquarti/followers",
"following_url": "https://api.github.com/users/daquarti/following{/other_user}",
"gists_url": "https://api.github.com/users/daquarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daquarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daquarti/subscriptions",
"organizations_url": "https://api.github.com/users/daquarti/orgs",
"repos_url": "https://api.github.com/users/daquarti/repos",
"events_url": "https://api.github.com/users/daquarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/daquarti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`ViTFeatureExtractor` is the feature extractor, not the model itself. The model itself requires the `config.json` file that specifies the architecture of the model, while the feature extractor requires its `preprocessor_config.json` file. These two are different files.",
"The problem is ViTForImageClassification.from_pretrained() not take \"preprocessor_config.json\" and you need to rename it as \"config.json\".\r\nThanks",
"`ViTForImageClassification` is not `ViTFeatureExtractor`, the first one is a model while the second one is a feature extractor. See some usage examples [here](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTForImageClassification)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | config.json is needed to use ViTForImageClassification.from_pretrained()
I made a pull-request | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12278/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12277 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12277/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12277/comments | https://api.github.com/repos/huggingface/transformers/issues/12277/events | https://github.com/huggingface/transformers/pull/12277 | 925,780,555 | MDExOlB1bGxSZXF1ZXN0Njc0MTgwMDY5 | 12,277 | Update feature_extraction_utils.py | {
"login": "daquarti",
"id": 48704929,
"node_id": "MDQ6VXNlcjQ4NzA0OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/48704929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daquarti",
"html_url": "https://github.com/daquarti",
"followers_url": "https://api.github.com/users/daquarti/followers",
"following_url": "https://api.github.com/users/daquarti/following{/other_user}",
"gists_url": "https://api.github.com/users/daquarti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daquarti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daquarti/subscriptions",
"organizations_url": "https://api.github.com/users/daquarti/orgs",
"repos_url": "https://api.github.com/users/daquarti/repos",
"events_url": "https://api.github.com/users/daquarti/events{/privacy}",
"received_events_url": "https://api.github.com/users/daquarti/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten could you take a look at this?",
"Hey @daquarti, \r\n\r\nThis looks wrong to me -> the feature extractors don't save their parameters into `config.json`, but into `preprocessor_config.json`. Could you elaborate a bit more on the PR?",
"Thanks for help @patrickvonplaten , maybe may solution was not good because ViTFeatureExtractor.from_pretrained() needs \"preprocessor_config.json\"\r\n\r\nThe problem is the following:\r\nWhen I use ViTFeatureExtractor.from_pretrained() \"preprocessor_config.json\" woks good.\r\nbut them, when I use ViTForImageClassification.from_pretrained() \"config.json\" is needed. If I rename \"preprocessor_config.json\" like \"config.json\" ViTForImageClassification.from_pretrained() works\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Note that in order to load the model `ViTForImageClassification` one needs a `config.json`. In order to load the feature extractor one needs `preprocessor_config.json`.",
"Those are actually two different configs for two different classes",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,629 | 1,629 | NONE | null | config.json is needed to use custom ViTForImageClassification.from_pretrained()
but FEATURE_EXTRACTOR_NAME = "preprocessor_config.json"
so ---> output_feature_extractor_file = os.path.join(save_directory, "config.json")
preprocessor_config.json
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12277/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12277",
"html_url": "https://github.com/huggingface/transformers/pull/12277",
"diff_url": "https://github.com/huggingface/transformers/pull/12277.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12277.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12276 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12276/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12276/comments | https://api.github.com/repos/huggingface/transformers/issues/12276/events | https://github.com/huggingface/transformers/pull/12276 | 925,765,493 | MDExOlB1bGxSZXF1ZXN0Njc0MTY3Mzky | 12,276 | [trainer + examples] set log level from CLI | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> but for the `should_print` (or whatever) argument, I would wait until a user actually requests it. It seems weird to me to run a training script and not even want to have the metrics outputted.\r\n\r\nTotally agree!\r\n\r\nwrt the rest of the logic, it looks like I didn't look close enough and `should_log` is a toggle on sort of whether this is a master or a slave node, but I don't know the sagemaker logic to tell that particular branch. So it has its function.\r\n\r\nAfter experimenting with different approaches, it looks like to give users a full control they really should be able to set separately the log_level on the master and slave nodes. So I propose to further add another arg:\r\n\r\nSo training args:\r\n```\r\n def get_node_log_level(self):\r\n default_log_level_master_node = logging.INFO\r\n default_log_level_slave_node = logging.WARNING\r\n log_level_master_node = default_log_level_master_node if self.log_level_master_node != -1 else self.log_level_master_node\r\n log_level_slave_node = default_log_level_slave_node if self.log_level_slave_node != -1 else self.log_level_slave_node\r\n return log_level_master_node if self.should_log else log_level_slave_node\r\n```\r\n\r\nand then the application side becomes much simpler:\r\n\r\n```\r\n log_level = training_args.get_node_log_level()\r\n logger.setLevel(log_level)\r\n datasets.utils.logging.set_verbosity(log_level)\r\n transformers.utils.logging.set_verbosity(log_level)\r\n```\r\n\r\nno need for any logic there.\r\n\r\nW/o the cli arg for the slave, the user then can't really set a high log level since the slave nodes will still be set to WARNING.\r\n\r\n(After experiments with 64nodes any extra line gets multiplied many times. I wish I could control the exceptions too.)\r\n\r\nI'm just not sure whether to call the 2 new args:\r\n\r\n1.\r\n```\r\n--log_level\r\n--log_level_slave\r\n```\r\nor 2.\r\n```\r\n--log_level_master\r\n--log_level_slave\r\n```\r\nor 3.\r\n```\r\n--log_level_master_node\r\n--log_level_slave_node\r\n```\r\n\r\nI'd say the 2nd one is more explicit and probably is a better choice in the long run. 3rd - may be even more clear.\r\n\r\nPlease let me know your thoughts on this modification to the new CLI args and the naming.\r\n\r\nthank you!",
"Ah yes, `should_log` is a property that determines if a process is the local main or not (and also takes into account the `log_on_each_node` argument). I would keep `--log_level` for the log level of the main process (or main processes if `log_on_each_node` is true) since most users are not using distributed training, so it makes sense keeping it like that.\r\n\r\nFor the other argument, let's not use the slave terminology as many people have voiced it makes them uncomfortable. We can use main/replica and call the second argument `--log_level_replica` for instance. I think the default `\"passive\"` should then be warning if no `log_level` is passed and `log_level+1` if one is passed, does that make sense?\r\n\r\n",
"> Ah yes, `should_log` is a property that determines if a process is the local main or not (and also takes into account the `log_on_each_node` argument). I would keep `--log_level` for the log level of the main process (or main processes if `log_on_each_node` is true) since most users are not using distributed training, so it makes sense keeping it like that.\r\n\r\n+1\r\n\r\n> For the other argument, let's not use the slave terminology as many people have voiced it makes them uncomfortable. \r\n\r\nBut we are using MASTER for torch.distributed, so those who are uncomfortable with master/slave are already uncomfortable.\r\n\r\n> We can use main/replica and call the second argument `--log_level_replica` for instance. \r\n\r\nThis sounds very foreign, but logically it makes sense.\r\n\r\nDo you know if someone has found a new standard pair for master/slave that is getting embraced by the industry?\r\n\r\n> I think the default `\"passive\"` should then be warning if no `log_level` is passed and `log_level+1` if one is passed, does that make sense?\r\n\r\nThe assumption of `log_level+1` doesn't quite work here. What if the user wants to get `log.DEBUG` for the main process and still keep the slave nodes relatively quiet. +1 would force them all to `log.INFO` which would be too much. (and it'd be +10 here ;)\r\n\r\nTherefore I propose to just stick to (please ignore the naming at this moment):\r\n```\r\n default_log_level_master_node = logging.INFO\r\n default_log_level_slave_node = logging.WARNING\r\n```\r\nand only override each with the corresponding:\r\n```\r\n--log_level_master_node\r\n--log_level_slave_node\r\n```\r\n\r\n",
"> But we are using MASTER for torch.distributed, so those who are uncomfortable with master/slave are already uncomfortable.\r\n\r\nI may have missed something, but there should not be any master reference in the Transformers code base, so I'm not sure what you mean. We can't control how PyTorch names its arguments if you're referring to `--master_port` and `--master_addr`.\r\n\r\n> Do you know if someone has found a new standard pair for master/slave that is getting embraced by the industry?\r\n\r\n`main` and `replica` seem to be used, but I don't know if there are the new standard. I don't think there is one, but I may have missed something. I'm not attached to those names if you have better ideas.\r\n\r\n> The assumption of log_level+1 doesn't quite work here. \r\n\r\nThat makes sense, let's go for defaults to INFO and WARNING respectively.",
"> > But we are using MASTER for torch.distributed, so those who are uncomfortable with master/slave are already uncomfortable.\r\n> \r\n> I may have missed something, but there should not be any master reference in the Transformers code base, so I'm not sure what you mean. We can't control how PyTorch names its arguments if you're referring to `--master_port` and `--master_addr`.\r\n\r\nIndeed, that what I was referring to. \r\n\r\n> > Do you know if someone has found a new standard pair for master/slave that is getting embraced by the industry?\r\n> \r\n> `main` and `replica` seem to be used, but I don't know if there are the new standard. I don't think there is one, but I may have missed something. I'm not attached to those names if you have better ideas.\r\n\r\nI asked about this on the torch slack and was told that they dealt with **blacklists**:\r\nhttps://github.com/pytorch/pytorch/search?q=blacklist&type=commits\r\nreplacing those with **blocklists**. But master/slave hasn't been dealt with yet. But I also don't see any `slave` in the pytorch source code - only in 3rd party code, so perhaps there is no need to.\r\n\r\nAnd the best finding was the sharing of this resource:\r\nhttps://en.wikipedia.org/wiki/Master/slave_(technology)#Replacements\r\n\r\nSo in our context, given that we are dealing with master and slave being identical, I think these 3 would be the most matching:\r\n\r\n1. Primary/Replica\r\n2. Master/Replica\r\n3. Source/Replica\r\n\r\nand personally I 2nd resonates the most, same as master weights for example.\r\n\r\nUnless I'm missing something in the whole controversy, master on its own, with the slave connotation is OK, right? Otherwise we would have to kill the concept of mastering something - that would be super 1984. ",
"It's not controversial as long it's use in the context of mastering something, which is not really what the master process is all about, so I would use main as GitHub now does, or primary if you prefer this terminology.\r\n\r\nLike I said though, the argument for controlling the primary process for logging should just be `log_level` IMO as distributed training is not used by all users, and `log_level`, `log_level_replicas` is clear enough.",
"Sounds good, Sylvain. I was just using the opportunity to get clarity around this modern development to do the right thing in the future code.",
"No problem at all!",
"ok, so as discussed - added `--log_level_replica` (let me know if you prefer `--log_level_replicas`)\r\n\r\nThough now as I'm thinking about it - I don't think the name is unambiguous since we have replicas that are on the same node and those replicas get very different treatment.\r\n\r\nI also added a test and extended docs.\r\n\r\nI moved `log_on_each_node` into the log_* group in training args.\r\n\r\nThere is a bit of an overlap / possible conflict in setting `transformers` log-level in the user code and also in trainer's init. The early setting in the app is important to catch those early logs and we hope that the app used the same log-level - otherwise trainer's init will reset user's log-level for `transformers` - but I think this is expected if `--log_level` is passed. Just don't pass it.\r\n\r\nThat's why I added the extended docs so that the whole domain of logging is discussed in one place.\r\n\r\nComments and suggestions for further improvements are welcome. Thank you!",
"If you don't mind having another look with my last changes at your convenience, @sgugger - thank you.",
"Looking good!",
"Noticing one potentially undesirable effect of this change, since the example now syncs `datasets`'s verbosity level, their `info` is a way too loud, and a lot of it is too much unimportant info. They seem to use warning as info and info as debug.\r\n\r\nFiled an issue there: https://github.com/huggingface/datasets/issues/2543 - I'd change many warnings to infos and the previous infos to debug there."
] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | As examples keep adding more and more debug dumps as info (3 new dumps in `run_translation.py`) and repetitive logger warnings keep on growing - w/o being able to control the level of noise, this PR gives the noise control back to the user.
One of the main pros of this change is that now we can actually use `logger.debug` and put the less important info there, and only activate it when reporting issues or debugging something. Much better than having only an `info` or `warning` toggle for logging.
This PR:
1. Introduces `--log_level`, which has the normal 5 levels plus "passive", which doesn't do anything special and lets the driver application do whatever it wants. If it's not `passive`, it sets the log level to that arg's value asap.
2. Changes Trainer's `log_metrics` to be a non-logger print, since this is the whole point of the training/eval and thus IMHO should always be printed as its result. I can see someone objecting: but what if I don't want even this printed? That's fair, in which case I propose to add an explicit new arg `--print_results`.
3. As a single template to work on, changes `run_translation.py` to also use this new CLI arg to configure logging both in the script itself and in all the sub-modules it uses, e.g. `datasets` here (previously `datasets` verbosity was set on its own); see the sketch right below.
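Concretely, the application side then boils down to roughly this minimal sketch (the `get_node_log_level()` accessor name is the one floated in the discussion above, so treat the exact naming as tentative; `logger` and `training_args` are the usual example-script globals):

```python
import datasets
import transformers

# resolve the effective level from --log_level plus the main/replica defaults;
# with --log_level=passive this whole step is skipped and the script decides
log_level = training_args.get_node_log_level()

logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
```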
Questions/Notes to reviewers:
1. If this is accepted I propose to deprecate `training_args.should_log`, since the setting is no longer just `info` or `warn` but a more refined control over log levels. And if it's still passed, to auto-set `--log_level=info`. I left the original logic there for now. The examples can still default to `warn` as they are now.
2. It's possible, however, that there should be both `--train_log_level` and `--log_level`, with the latter overriding the former if set, but most likely these should be in sync with everything.
3. Obviously, if this is accepted, once this example is polished we will replicate the same for the other examples in another PR.
4. I couldn't find how to get rid of logger warnings at import time, but it's alright, as now there are only a few left.
5. Specific to the Deepspeed integration, I need to find a way to do the same there, as it's *very* noisy (in a follow-up PR most likely, if I find a way).
I am very open to other ways of implementing it, naming it, etc. Not really attached to how, but when I develop/debug code I want to see only the messages that I need to focus on, and not hundreds of lines of noise that are always the same and make it difficult to see the important things.
With this PR I get:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length 128 --warmup_steps 50 --max_train_samples 50 --max_eval_samples 50 \
--log_level=critical
2021-06-20 19:17:40.705704: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
{'loss': 3.0621, 'learning_rate': 6.000000000000001e-07, 'epoch': 0.25}
{'train_runtime': 1.3178, 'train_samples_per_second': 37.943, 'train_steps_per_second': 3.035, 'train_loss': 2.9988757967948914, 'epoch': 1.0}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 7.43it/s]
***** train metrics *****
epoch = 1.0
train_loss = 2.9989
train_runtime = 0:00:01.31
train_samples = 50
train_samples_per_second = 37.943
train_steps_per_second = 3.035
```
Thank you.
@sgugger, @LysandreJik, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12276/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12276",
"html_url": "https://github.com/huggingface/transformers/pull/12276",
"diff_url": "https://github.com/huggingface/transformers/pull/12276.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12276.patch",
"merged_at": 1624329050000
} |
https://api.github.com/repos/huggingface/transformers/issues/12275 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12275/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12275/comments | https://api.github.com/repos/huggingface/transformers/issues/12275/events | https://github.com/huggingface/transformers/issues/12275 | 925,706,421 | MDU6SXNzdWU5MjU3MDY0MjE= | 12,275 | Transformers-CLI not saving pytorch model after conversion | {
"login": "anandmg101",
"id": 24319144,
"node_id": "MDQ6VXNlcjI0MzE5MTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/24319144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anandmg101",
"html_url": "https://github.com/anandmg101",
"followers_url": "https://api.github.com/users/anandmg101/followers",
"following_url": "https://api.github.com/users/anandmg101/following{/other_user}",
"gists_url": "https://api.github.com/users/anandmg101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anandmg101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anandmg101/subscriptions",
"organizations_url": "https://api.github.com/users/anandmg101/orgs",
"repos_url": "https://api.github.com/users/anandmg101/repos",
"events_url": "https://api.github.com/users/anandmg101/events{/privacy}",
"received_events_url": "https://api.github.com/users/anandmg101/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I did this\n\nhttps://github.com/minimaxir/aitextgen/issues/141\n\n\nOn Mon, Nov 22, 2021 at 4:48 PM ryanhampton ***@***.***>\nwrote:\n\n> This is also not working for me - any fix?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12275#issuecomment-975984018>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFZRJKCPEX7W4YLIZHPKYETUNLCCDANCNFSM47ARIHXQ>\n> .\n> Triage notifications on the go with GitHub Mobile for iOS\n> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>\n> or Android\n> <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>.\n>\n>\n"
] | 1,624 | 1,637 | 1,627 | NONE | null | @patrickvonplaten, @LysandreJik
Hi, I'm using transformers CLI as per https://docs.aitextgen.io/gpt-2-simple/
But I'm getting this error, where transformers is not saving the pytorch model. Not sure why. I tried on both Ubuntu and Windows, two different systems, but I get the same error, so I'm not sure what the issue is.
```
Save PyTorch model to pytorch/pytorch_model.bin
Traceback (most recent call last):
  File "c:\programdata\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\programdata\anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\XXX\AppData\Roaming\Python\Python38\Scripts\transformers-cli.exe\__main__.py", line 7, in <module>
  File "C:\Users\XXX\AppData\Roaming\Python\Python38\site-packages\transformers\commands\transformers_cli.py", line 51, in main
    service.run()
  File "C:\Users\XXX\AppData\Roaming\Python\Python38\site-packages\transformers\commands\convert.py", line 152, in run
    convert_gpt2_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
  File "C:\Users\XXX\AppData\Roaming\Python\Python38\site-packages\transformers\models\gpt2\convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 45, in convert_gpt2_checkpoint_to_pytorch
    torch.save(model.state_dict(), pytorch_weights_dump_path)
  File "c:\programdata\anaconda3\lib\site-packages\torch\serialization.py", line 376, in save
    with _open_file_like(f, 'wb') as opened_file:
  File "c:\programdata\anaconda3\lib\site-packages\torch\serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "c:\programdata\anaconda3\lib\site-packages\torch\serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'pytorch/pytorch_model.bin'
```
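For reference on the failure mode: the last frame shows `torch.save` handing the path straight to `open(name, 'wb')`, which fails when the parent directory is missing. A minimal sketch of a workaround, assuming the `pytorch` dump directory simply doesn't exist yet, is to create it before running the conversion:

```python
import os

# torch.save() opens the target path directly and does not create missing
# parent directories, so make sure the dump directory exists up front
os.makedirs("pytorch", exist_ok=True)
```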
## Environment info
- `transformers` version: 4.7.0
- Platform: Windows
- Python version: 3.8.3
- PyTorch version (GPU?): 1.9.0+cpu
- Tensorflow version (GPU?): 2.5.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
## Information
Model I am using: GPT2
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the command transformers-cli convert --model_type gpt2 --tf_checkpoint checkpoint/run1 --pytorch_dump_output pytorch --config checkpoint/run1/hparams.json
## Expected behavior
The script should save pytorch_model.bin, but it seems it is not being saved; hence it cannot be loaded.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12275/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12274 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12274/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12274/comments | https://api.github.com/repos/huggingface/transformers/issues/12274/events | https://github.com/huggingface/transformers/issues/12274 | 925,628,111 | MDU6SXNzdWU5MjU2MjgxMTE= | 12,274 | [performance] module init w/ `from_pretrained` skip storage allocation | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"If this issue hasn't already been resolved and a fix is relevant, can I have a try at it @stas00?",
"Thank you for offering to implement this, @tanaymeh \r\n\r\nI think this is no longer relevant, as recent pytorch versions added allocation on meta device, which does the same and should be used instead, so closing this.\r\n"
] | 1,624 | 1,697 | 1,697 | CONTRIBUTOR | null | # 🚀 Feature request
pt-1.9.0 added `torch.nn.utils.skip_init()`, which (1) skips the module init and (2) doesn't allocate any memory
https://pytorch.org/tutorials/prototype/skip_param_init.html
note: `torch.nn.utils.skip_init()` itself will be in 1.9.1, but the rest of the code should be in 1.9.0 (update: as 1.9.1 isn't planned, probably `s/1.9.1/1.10/`)
We already implemented part 1 (skipping the custom init) in https://github.com/huggingface/transformers/pull/11471.
We could further speed up the start-up time and reduce CPU memory usage by not allocating any storage at module init, since `load_state_dict` will already have allocated the `state_dict` from the pretrained weights (sub-modules that don't have pre-trained weights will still have to go through normal init). See https://pytorch.org/tutorials/prototype/skip_param_init.html#implementation-details
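To illustrate, a minimal sketch of the idea (assuming a PyTorch version where `skip_init()` has landed, i.e. 1.10 per the note above; the `nn.Linear` and the random "pretrained" weights are just stand-ins):

```python
import torch
from torch import nn

# skip_init() constructs the module on the "meta" device and then swaps in
# uninitialized storage -- the (potentially slow) init kernels never run
layer = nn.utils.skip_init(nn.Linear, 4096, 4096)

# the uninitialized weights simply get overwritten by the pretrained ones
pretrained = {"weight": torch.randn(4096, 4096), "bias": torch.zeros(4096)}
layer.load_state_dict(pretrained)
```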
another note: currently deepspeed needs to have the module storage pre-allocated for its `zero.Init` gather/scatter, but if the initial model's weights aren't allocated, then we can probably get rid of `zero.Init` altogether https://github.com/huggingface/transformers/issues/12273 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12274/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12274/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12273 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12273/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12273/comments | https://api.github.com/repos/huggingface/transformers/issues/12273/events | https://github.com/huggingface/transformers/issues/12273 | 925,627,493 | MDU6SXNzdWU5MjU2Mjc0OTM= | 12,273 | [Deepspeed] [performance] inefficient load with `from_pretrained` w/ zero3 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,624 | 1,626 | null | CONTRIBUTOR | null | # 🚀 Feature request
Currently under Deepspeed stage3 with `from_pretrained` we:
a. loop over each sub-module in zero.Init
1. init the sub-module
2. shard and scatter the shards
b. then to load pre-trained weights we loop over each sub-module:
1. gather the shards
2. `load_state_dict` for that one layer
3. shard and scatter the shards
c. any sub-module params that weren't in the pretrained state_dict
1. run the postponed `module_init` as it was done in https://github.com/huggingface/transformers/pull/11471
2. shard and scatter the shards. XXX: I actually don't think `deepspeed.zero.GatheredParameters` was handled here, so these params don't get ZeRO'ed; need to fix that: https://github.com/huggingface/transformers/issues/12272
Because we unnecessarily do scatter/gather/scatter, this takes much longer than just:
a. init the modules w/o allocating any storage as it has been implemented in pt-1.9.0/1.9.1 https://pytorch.org/tutorials/prototype/skip_param_init.html#implementation-details
b. for each sub-module with pretrained weights
1. load_state_dict
2. shard and scatter the shards
c. any sub-module params that weren't in the pretrained state_dict
1. materialize and module_init
2. shard and scatter the shards
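For reference, the per-sub-module gather/load/re-scatter in step (b) of the current flow above looks roughly like this sketch, assuming deepspeed's `GatheredParameters` context manager (the loading call reuses torch's per-module hook and is simplified here):

```python
import torch
import deepspeed

def load_pretrained_zero3(model, state_dict):
    for prefix, module in model.named_modules():
        params = list(module.parameters(recurse=False))
        if not params:
            continue
        # gather the full (unsharded) params on rank 0; on context exit they
        # get automatically re-partitioned (re-scattered) across the ranks
        with deepspeed.zero.GatheredParameters(params, modifier_rank=0):
            if torch.distributed.get_rank() == 0:
                # torch's per-module loading hook: copies only this module's
                # own params; children are handled by their own iteration
                module._load_from_state_dict(
                    state_dict, prefix + "." if prefix else "", {}, True, [], [], []
                )
```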
Solving this will most likely require support from Deepspeed (https://github.com/microsoft/DeepSpeed/issues/1142), or perhaps we can just try to remove `zero.Init` if the weights aren't materialized during model creation. So the very first sharding will get postponed to the `load_state_dict` stage (and to `module_init` for the sub-modules that don't have pre-trained weights). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12273/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12272 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12272/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12272/comments | https://api.github.com/repos/huggingface/transformers/issues/12272/events | https://github.com/huggingface/transformers/issues/12272 | 925,626,798 | MDU6SXNzdWU5MjU2MjY3OTg= | 12,272 | [Deepspeed zero3] lazy weights init | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,624 | 1,626 | null | CONTRIBUTOR | null | I'm pretty sure we need to follow up on the lazy weights init feature https://github.com/huggingface/transformers/pull/11471
and add under zero3 `deepspeed.zero.GatheredParameters` here (or inside `_init_weights`):
https://github.com/huggingface/transformers/pull/11471/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaR1275-R1276
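i.e. something like this minimal sketch (illustrative names, not the final implementation):

```python
import deepspeed

def init_weights_under_zero3(module, init_fn):
    # gather this sub-module's (sharded) params, run the postponed init,
    # and let the context manager re-partition them on exit
    params = list(module.parameters(recurse=False))
    with deepspeed.zero.GatheredParameters(params, modifier_rank=0):
        init_fn(module)
```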
plus need a test. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12272/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12271 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12271/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12271/comments | https://api.github.com/repos/huggingface/transformers/issues/12271/events | https://github.com/huggingface/transformers/pull/12271 | 925,575,509 | MDExOlB1bGxSZXF1ZXN0Njc0MDE2NzIw | 12,271 | [Flax] Add wav2vec2 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Anything I can help with ? \r\nLooking forward to use Flax Wav2vec for the community event :)",
"Hey @ThomAub - that's very nice of you! I will need to spend a couple more hours on this and then add pretraining as well. One thing that would be very helpful would be to check how to do the GumbelSoftmax in Flax. *I.e.* how can we translate this: https://github.com/huggingface/transformers/blob/f2c4ce7e339f4a2f8aaacb392496bc1a5743881f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L751 PyTorch module to Flax? This might be a bit difficult and require some googling to see if others have already implement gumbel softmax in jax/Flax or not. If you could take a look at this, it would be very useful! ",
"I guess I wasn't fast enough ! Great work ",
"@patil-suraj @sgugger - actually this is still WIP. Sorry for tagging you too early"
] | 1,624 | 1,625 | 1,625 | MEMBER | null | # What does this PR do?
Adds Wav2Vec2 in Flax
- [x] FlaxCTCWav2Vec2
- [x] FlaxForPreTraining
- [x] FlaxWav2Vec2 random mask code
- [x] Clean-up
- [x] Write pretraining script | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12271/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 6,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12271/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12271",
"html_url": "https://github.com/huggingface/transformers/pull/12271",
"diff_url": "https://github.com/huggingface/transformers/pull/12271.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12271.patch",
"merged_at": 1625075063000
} |
https://api.github.com/repos/huggingface/transformers/issues/12270 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12270/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12270/comments | https://api.github.com/repos/huggingface/transformers/issues/12270/events | https://github.com/huggingface/transformers/issues/12270 | 925,571,314 | MDU6SXNzdWU5MjU1NzEzMTQ= | 12,270 | Add error message to Wav2Vec2 & Hubert if labels > vocab_size | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I will create a PR to fix this.",
"@vasudevgupta7 @patrickvonplaten Is this fixed? If not, I will like to work on it. ",
"Hey, this issue has been fixed in this PR: https://github.com/huggingface/transformers/pull/12288",
"Thanks for informing. I had seen it, but since the issue is still open, I thought something might be left. ",
"Closing as fixed :)"
] | 1,624 | 1,632 | 1,632 | MEMBER | null | # 🚀 Feature request
Add better error message to `HubertForCTC`, `Wav2Vec2ForCTC` if labels are bigger than vocab size.
## Motivation
Following this issue: https://github.com/huggingface/transformers/issues/12264 it is clear that an error message should be thrown if any of the labels are > `self.config.vocab_size`, or else silent errors can sneak into the training script.
So we should modify `Wav2Vec2ForCTC`, `TFWav2Vec2ForCTC`, and `HubertForCTC` to add a nice error message in this case, along the lines of the sketch below.
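A minimal sketch of the check (the exact message and its placement inside `forward` are open):

```python
import torch

def check_ctc_labels(labels: torch.Tensor, vocab_size: int) -> None:
    # fail loudly instead of letting the CTC loss silently misbehave
    if labels.max() >= vocab_size:
        raise ValueError(f"Label values must be < vocab_size: {vocab_size}")
```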
## Your contribution
This is a first good issue and should be rather easy to accomplish. I'm happy to give more guidance if needed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12270/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12270/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12269 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12269/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12269/comments | https://api.github.com/repos/huggingface/transformers/issues/12269/events | https://github.com/huggingface/transformers/issues/12269 | 925,571,302 | MDU6SXNzdWU5MjU1NzEzMDI= | 12,269 | Add TFSpeech2Text | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"Is this still in progress? I didn't see a WIP PR, but I'd like to use the TensorFlow version if possible.",
"Hello @will-rice, I have this in progress. I'll try to finish some missing components tomorrow and then open the PR for a review :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,630 | null | CONTRIBUTOR | null | # 🚀 Feature request
Add TensorFlow implementation of Speech2Text model.
## Your contribution
I'll try to do this.
**Reviewers:** @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12269/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12269/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12268 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12268/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12268/comments | https://api.github.com/repos/huggingface/transformers/issues/12268/events | https://github.com/huggingface/transformers/issues/12268 | 925,560,530 | MDU6SXNzdWU5MjU1NjA1MzA= | 12,268 | [Documentation] Example for LEDForConditionalGeneration does not work | {
"login": "ionicsolutions",
"id": 32523967,
"node_id": "MDQ6VXNlcjMyNTIzOTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32523967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ionicsolutions",
"html_url": "https://github.com/ionicsolutions",
"followers_url": "https://api.github.com/users/ionicsolutions/followers",
"following_url": "https://api.github.com/users/ionicsolutions/following{/other_user}",
"gists_url": "https://api.github.com/users/ionicsolutions/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ionicsolutions/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ionicsolutions/subscriptions",
"organizations_url": "https://api.github.com/users/ionicsolutions/orgs",
"repos_url": "https://api.github.com/users/ionicsolutions/repos",
"events_url": "https://api.github.com/users/ionicsolutions/events{/privacy}",
"received_events_url": "https://api.github.com/users/ionicsolutions/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | The [documentation for LEDForConditionalGeneration](https://huggingface.co/transformers/model_doc/led.html#transformers.LEDForConditionalGeneration) appears to be incorrect. The same example is also used for [BartForConditionalGeneration](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration), where it works as intended. I believe that the example was just copied and not adapted, but perhaps I'm missing something?
```python
from transformers import LEDTokenizer, LEDForConditionalGeneration
tokenizer = LEDTokenizer.from_pretrained('allenai/led-base-16384')
TXT = "My friends are <mask> but they eat too many carbs."
model = LEDForConditionalGeneration.from_pretrained('allenai/led-base-16384')
input_ids = tokenizer([TXT], return_tensors='pt')['input_ids']
logits = model(input_ids).logits
```
Here, the last step fails with `ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds`, which as far as I can tell is not a bug, but expected, as no `decoder_input_ids`/`embeds` (or `labels`) are provided. (BART [silently generates the `decoder_input_ids` from the `input_ids`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1151), which LED does not.)
I believe the example should look like this:
```python
input_ids = tokenizer([TXT], return_tensors='pt')['input_ids']
prediction = model.generate(input_ids)[0]
print(tokenizer.decode(prediction, skip_special_tokens=True))
# My friends are good at eating healthy but they eat too many carbs.
```
This is also a nice demonstration that LED generates more than just one token for the masked parts of the sequence.
Tagging @patrickvonplaten who contributed the model and the example. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12268/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12267 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12267/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12267/comments | https://api.github.com/repos/huggingface/transformers/issues/12267/events | https://github.com/huggingface/transformers/pull/12267 | 925,543,279 | MDExOlB1bGxSZXF1ZXN0NjczOTkxODgz | 12,267 | [WIP] Enable GPT2Model to handle 3d attention_mask | {
"login": "bzantium",
"id": 19511788,
"node_id": "MDQ6VXNlcjE5NTExNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzantium",
"html_url": "https://github.com/bzantium",
"followers_url": "https://api.github.com/users/bzantium/followers",
"following_url": "https://api.github.com/users/bzantium/following{/other_user}",
"gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzantium/subscriptions",
"organizations_url": "https://api.github.com/users/bzantium/orgs",
"repos_url": "https://api.github.com/users/bzantium/repos",
"events_url": "https://api.github.com/users/bzantium/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzantium/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thank you for offering a fix! It seems this proposal would be breaking for the GPT-2 double heads model; the following test fails: `test_gpt2_double_lm_head_model` with the following error:\r\n\r\n```\r\n_________________ GPT2ModelTest.test_gpt2_double_lm_head_model _________________\r\n[gw1] linux -- Python 3.7.10 /usr/local/bin/python\r\n\r\nself = <tests.test_modeling_gpt2.GPT2ModelTest testMethod=test_gpt2_double_lm_head_model>\r\n\r\n def test_gpt2_double_lm_head_model(self):\r\n config_and_inputs = self.model_tester.prepare_config_and_inputs()\r\n> self.model_tester.create_and_check_double_lm_head_model(*config_and_inputs)\r\n\r\ntests/test_modeling_gpt2.py:457: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_modeling_gpt2.py:350: in create_and_check_double_lm_head_model\r\n result = model(**inputs)\r\n../.local/lib/python3.7/site-packages/torch/nn/modules/module.py:1051: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/gpt2/modeling_gpt2.py:1160: in forward\r\n return_dict=return_dict,\r\n../.local/lib/python3.7/site-packages/torch/nn/modules/module.py:1051: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/gpt2/modeling_gpt2.py:802: in forward\r\n output_attentions=output_attentions,\r\n../.local/lib/python3.7/site-packages/torch/nn/modules/module.py:1051: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/gpt2/modeling_gpt2.py:323: in forward\r\n output_attentions=output_attentions,\r\n../.local/lib/python3.7/site-packages/torch/nn/modules/module.py:1051: in _call_impl\r\n return forward_call(*input, **kwargs)\r\nsrc/transformers/models/gpt2/modeling_gpt2.py:258: in forward\r\n attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = GPT2Attention(\r\n (c_attn): Conv1D()\r\n (c_proj): Conv1D()\r\n (attn_dropout): Dropout(p=0.1, inplace=False)\r\n (resid_dropout): Dropout(p=0.1, inplace=False)\r\n)\r\nquery = tensor([[[[ 0.0202, -0.0295, -0.0911, ..., -0.1715, 0.0008, 0.0310],\r\n [-0.0724, -0.0315, -0.0237, ..., 0... -0.0802],\r\n [-0.1085, 0.0763, -0.0241, ..., -0.1154, 0.1063, -0.1542]]]],\r\n grad_fn=<PermuteBackward>)\r\nkey = tensor([[[[ 2.2768e-02, -1.5131e-02, -3.1551e-02, ..., 1.2214e-01,\r\n 3.4581e-03, 1.2902e-01],\r\n [...6e-01, 1.0495e-01, -8.1176e-02, ..., 1.5278e-01,\r\n -1.6426e-01, 4.7595e-02]]]], grad_fn=<PermuteBackward>)\r\nvalue = tensor([[[[ 0.0396, -0.0556, -0.0115, ..., 0.1020, 0.0598, 0.1249],\r\n [ 0.1337, -0.0851, 0.0792, ..., 0... 
0.0443],\r\n [-0.1049, -0.0717, 0.1128, ..., 0.2006, 0.0411, -0.0256]]]],\r\n grad_fn=<PermuteBackward>)\r\nattention_mask = tensor([[[[-10000., -0., -10000., -10000., -0., -0., -0.],\r\n [-10000., -0., -10000., -1000...00., -0., -0., -10000., -0.],\r\n [ -0., -0., -10000., -0., -0., -10000., -0.]]]])\r\nhead_mask = None\r\n\r\n def _attn(self, query, key, value, attention_mask=None, head_mask=None):\r\n attn_weights = torch.matmul(query, key.transpose(-1, -2))\r\n \r\n if self.scale_attn_weights:\r\n attn_weights = attn_weights / (float(value.size(-1)) ** 0.5)\r\n \r\n if not self.is_cross_attention:\r\n # if only \"normal\" attention layer implements causal mask\r\n query_length, key_length = query.size(-2), key.size(-2)\r\n causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length].bool()\r\n attn_weights = torch.where(causal_mask, attn_weights, self.masked_bias.to(attn_weights.dtype))\r\n \r\n if attention_mask is not None:\r\n # Apply the attention mask\r\n> attn_weights = attn_weights + attention_mask\r\nE RuntimeError: The size of tensor a (7) must match the size of tensor b (4) at non-singleton dimension 2\r\n\r\nsrc/transformers/models/gpt2/modeling_gpt2.py:191: RuntimeError\r\n```\r\n\r\nCould you give it a look?",
"@LysandreJik Thank you for the comment! I inspected code carefully and found that it was because `input_ids` from `create_and_check_double_lm_head_model` has extra dimension for `num_choices` and is merged to `batch` dimension before pushed to model. I fixed this error by adding the condition that `batch_size` of `input_ids` is equal to `batch_size` of `attention_mask` or attention_mask's dimension of `num_choices` would be merged to `batch` dimension as `input_ids`. Also, I added the same code for openai gpt. Please check again!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,628 | 1,628 | CONTRIBUTOR | null | This PR solves the problem discussed at #12261.
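In short, the idea is roughly the following sketch (the actual diff additionally handles the double-heads case where a `num_choices` dimension is merged into the batch dimension, as discussed above):

```python
import torch

def extend_attention_mask(attention_mask: torch.Tensor) -> torch.Tensor:
    # 2D (batch, seq) masks get broadcastable head and query dims; 3D
    # (batch, from_seq, to_seq) masks only need the extra head dim
    if attention_mask.dim() == 3:
        extended = attention_mask[:, None, :, :]
    else:
        extended = attention_mask[:, None, None, :]
    # additive mask: 0.0 where attended, a large negative value where masked
    return (1.0 - extended.float()) * -10000.0
```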
@patrickvonplaten, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12267/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12267",
"html_url": "https://github.com/huggingface/transformers/pull/12267",
"diff_url": "https://github.com/huggingface/transformers/pull/12267.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12267.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12266 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12266/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12266/comments | https://api.github.com/repos/huggingface/transformers/issues/12266/events | https://github.com/huggingface/transformers/issues/12266 | 925,533,181 | MDU6SXNzdWU5MjU1MzMxODE= | 12,266 | Causal Mask in BertGeneration | {
"login": "JamesHujy",
"id": 48405323,
"node_id": "MDQ6VXNlcjQ4NDA1MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/48405323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesHujy",
"html_url": "https://github.com/JamesHujy",
"followers_url": "https://api.github.com/users/JamesHujy/followers",
"following_url": "https://api.github.com/users/JamesHujy/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesHujy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesHujy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesHujy/subscriptions",
"organizations_url": "https://api.github.com/users/JamesHujy/orgs",
"repos_url": "https://api.github.com/users/JamesHujy/repos",
"events_url": "https://api.github.com/users/JamesHujy/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesHujy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"It's here:\r\nhttps://github.com/huggingface/transformers/blob/cabcc75171650f9131a4cf31c62e1f102589014e/src/transformers/modeling_utils.py#L243\r\n:-)\r\nNote that `is_decoder` has to be set to True ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,629 | 1,629 | NONE | null | I cannot find the implementation of CausalMask in BertGeneration. Can you help me locate it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12266/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12265 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12265/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12265/comments | https://api.github.com/repos/huggingface/transformers/issues/12265/events | https://github.com/huggingface/transformers/issues/12265 | 925,502,059 | MDU6SXNzdWU5MjU1MDIwNTk= | 12,265 | Mbart continue training with same training task on a specific language | {
"login": "maroxtn",
"id": 16374280,
"node_id": "MDQ6VXNlcjE2Mzc0Mjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/16374280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maroxtn",
"html_url": "https://github.com/maroxtn",
"followers_url": "https://api.github.com/users/maroxtn/followers",
"following_url": "https://api.github.com/users/maroxtn/following{/other_user}",
"gists_url": "https://api.github.com/users/maroxtn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maroxtn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maroxtn/subscriptions",
"organizations_url": "https://api.github.com/users/maroxtn/orgs",
"repos_url": "https://api.github.com/users/maroxtn/repos",
"events_url": "https://api.github.com/users/maroxtn/events{/privacy}",
"received_events_url": "https://api.github.com/users/maroxtn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"@LysandreJik Sorry, I will close the issue now. \r\nHowever this might qualify as a useful feature request, don't you think?"
] | 1,624 | 1,624 | 1,624 | NONE | null | Hello.
I am not sure if this is possible using the transformers library, but if it isn't, it would be nice to have.
MBart was initially trained with span corruption and other training tasks on a corpus containing many languages. Since I am going to use it specifically for the Arabic language, I wish first to fine-tune it solely on Arabic text with the same training tasks it originally had, and only then, in a third step, fine-tune it for a specific text generation task.
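For concreteness, here is a minimal sketch of what that intermediate Arabic-only denoising step could look like with the current transformers API. The corruption function below is a simplified stand-in for MBart's original fairseq noising, not the exact objective, and the label construction ignores the decoder language-code ordering:

```python
import random
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang="ar_AR", tgt_lang="ar_AR")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

def corrupt(text, mask_ratio=0.35):
    # replace one contiguous span of words with a single <mask> token (crude text infilling)
    words = text.split()
    n = max(1, int(len(words) * mask_ratio))
    start = random.randrange(max(1, len(words) - n + 1))
    return " ".join(words[:start] + [tokenizer.mask_token] + words[start + n:])

text = "..."  # an Arabic sentence or paragraph from your corpus
inputs = tokenizer(corrupt(text), return_tensors="pt")
labels = tokenizer(text, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # reconstruct the clean text from the corrupted input
loss.backward()
```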
I believe this is possible using fairseq, but having it here in the transformers library would be better. Do you think that would be possible / useful? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12265/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12264 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12264/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12264/comments | https://api.github.com/repos/huggingface/transformers/issues/12264/events | https://github.com/huggingface/transformers/issues/12264 | 925,458,332 | MDU6SXNzdWU5MjU0NTgzMzI= | 12,264 | TFWav2Vec2ForCTC & Wav2Vec2ForCTC gives different loss values | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"try it with labels that are less than the vocab size. I was able to change it to this and they are pretty close.\r\n\r\n```\r\nimport tensorflow as tf\r\nimport torch\r\nfrom transformers import Wav2Vec2ForCTC, TFWav2Vec2ForCTC\r\n\r\nmodel = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\ntf_model = TFWav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\")\r\n\r\ntf_labels = tf.constant([[12, 10, 1, 2, 3], [4, 5, 6, 7, 8]])\r\nlabels = torch.from_numpy(tf_labels.numpy())\r\n\r\ntf_speech = tf.random.uniform(shape=(2, 40000))\r\nmasks = tf.ones_like(tf_speech)\r\n\r\nspeech = torch.from_numpy(tf_speech.numpy()).float()\r\n\r\nwith torch.no_grad():\r\n out = model(speech, labels=labels)\r\ntf_out = tf_model(tf_speech, labels=tf_labels)\r\n\r\nprint(out[\"loss\"].numpy(), tf_out[\"loss\"].numpy())\r\n```\r\n\r\n88.34665 88.34598",
"Thanks for the quick reply. It will help. ",
"It should actually throw an error if labels are > vocab_size! Will open an issue for this",
"@will-rice @patrickvonplaten \r\n\r\nPyTorch & TensorFlow losses are becoming different if padding indices are set to -100. Checkout this small [Colab notebook](https://colab.research.google.com/drive/190NDNtAKg4y2a-jjMby-XNZ2EYScOm6m?usp=sharing).\r\n\r\nThis is happening because these [2 lines](https://github.com/huggingface/transformers/blob/2e5dbdf2db4599a6694d0974575a70f9bc3c978e/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1584) [1584 & 1585] should not be there in TensorFlow implementation. If we just remove them, PyTorch & TensorFlow loss will become same.\r\n\r\nSo:\r\n\r\n```python\r\n# we should remove these lines\r\nflattened_labels = tf.boolean_mask(labels, labels_mask) \r\nflattened_labels = tf.reshape(flattened_labels, [labels.shape[0], -1])\r\n\r\n# rather replace it with\r\nflattened_labels = labels\r\n```\r\n",
"There is one other bug also in TensorFlow implementation. `training` argument should be passed in this [line](https://github.com/huggingface/transformers/blob/2e5dbdf2db4599a6694d0974575a70f9bc3c978e/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1225) because right now spec augmentation is not getting applied even when `config.apply_spec_augment = True`.\r\n\r\nNow, if we pass training arg in above linked line, spec augmentation does't work & rather throws an error. This needs to be fixed as well, I think.",
"Good catches! I'm working on a fix for these. Were you still wanting to open a PR for the label error or would you like me to just roll that one into these?",
"I am fine if you are going to fix the label error messages.",
"Closing this issue as it is fixed in #12289 "
] | 1,624 | 1,626 | 1,626 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.8.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@will-rice @patrickvonplaten
## Information
Model I am using: `TFWav2Vec2ForCTC` & `Wav2Vec2ForCTC`
## To reproduce
Steps to reproduce the behavior:
```python
import tensorflow as tf
import torch
from transformers import Wav2Vec2ForCTC, TFWav2Vec2ForCTC
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tf_model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tf_labels = tf.constant([[3, 54, 65, 76, 21], [32, 42, 434, 76, 231]])
labels = torch.from_numpy(tf_labels.numpy())
tf_speech = tf.random.uniform(shape=(2, 40000))
speech = torch.from_numpy(tf_speech.numpy()).float()
with torch.no_grad():
out = model(speech, labels=labels)
tf_out = tf_model(tf_speech, labels=tf_labels)
print(out["loss"], tf_out["loss"])
# -> 71.64 -> 16.92
```
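Note (based on the resolution in the comments above): several of the label ids here (e.g. `434`) are not smaller than the checkpoint's vocab size (32 for `facebook/wav2vec2-base-960h`), and the PyTorch and TensorFlow CTC paths handle such out-of-range labels differently. A quick sanity check one could add:

```python
assert int(tf_labels.numpy().max()) < model.config.vocab_size  # fails for the labels above: 434 >= 32
```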
## Expected behavior
Loss values from the TensorFlow and PyTorch models should be similar (note: the logits are exactly the same, as expected). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12264/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12264/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12263 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12263/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12263/comments | https://api.github.com/repos/huggingface/transformers/issues/12263/events | https://github.com/huggingface/transformers/pull/12263 | 925,453,112 | MDExOlB1bGxSZXF1ZXN0NjczOTIzMDcx | 12,263 | Add VisualBERT demo notebook | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"I have updated the demo. Turns out I didn't need to change a lot from the LXMERT demo. I have used the same files, just replaced the tokenizer, the model and the labels that are being used. Only `demo.ipynb` is different.\r\n\r\nRequesting @LysandreJik @patil-suraj to review.",
"Thanks for approving and merging @patil-suraj @LysandreJik ^_^ "
] | 1,624 | 1,628 | 1,628 | CONTRIBUTOR | null | In continuation of #10534, this PR adds a demo for the VisualBERT model.
I am planning to base it on the `LXMERT` examples, hence the copy-paste of files for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12263/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12263",
"html_url": "https://github.com/huggingface/transformers/pull/12263",
"diff_url": "https://github.com/huggingface/transformers/pull/12263.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12263.patch",
"merged_at": 1628691059000
} |
https://api.github.com/repos/huggingface/transformers/issues/12262 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12262/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12262/comments | https://api.github.com/repos/huggingface/transformers/issues/12262/events | https://github.com/huggingface/transformers/pull/12262 | 925,451,501 | MDExOlB1bGxSZXF1ZXN0NjczOTIxODkx | 12,262 | [WIP] SMITH | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"Hi @gchhablani \r\n\r\nWhat is the state if this PR ?\r\n\r\nHave you tried loading [the official SMITH checkpoint](https://github.com/google-research/google-research/tree/master/smith#pre-trained-model-checkpoint) ?\r\n\r\nAmine",
"Hi @amineabdaoui\n\nI had tried it a while back and it had worked. My focus switched to other things. I'll get back onto this PR this week."
] | 1,624 | 1,706 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds the SMITH encoder by Google Research and potentially closes #9526.
Potential reviewers:
@LysandreJik @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12262/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12262/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12262",
"html_url": "https://github.com/huggingface/transformers/pull/12262",
"diff_url": "https://github.com/huggingface/transformers/pull/12262.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12262.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/12261 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12261/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12261/comments | https://api.github.com/repos/huggingface/transformers/issues/12261/events | https://github.com/huggingface/transformers/issues/12261 | 925,341,465 | MDU6SXNzdWU5MjUzNDE0NjU= | 12,261 | GPT2Model cannot handle 3D attention_mask | {
"login": "bzantium",
"id": 19511788,
"node_id": "MDQ6VXNlcjE5NTExNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzantium",
"html_url": "https://github.com/bzantium",
"followers_url": "https://api.github.com/users/bzantium/followers",
"following_url": "https://api.github.com/users/bzantium/following{/other_user}",
"gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzantium/subscriptions",
"organizations_url": "https://api.github.com/users/bzantium/orgs",
"repos_url": "https://api.github.com/users/bzantium/repos",
"events_url": "https://api.github.com/users/bzantium/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzantium/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | CONTRIBUTOR | null | When pretraining a GPT-2 model, it sometimes needs to receive a 3D attention_mask. Let's take an example with the "gpt2" model, trained on an instance consisting of two different documents: "I am a boy. <|endoftext|> you are rich.".
The tokenizer converts the sequence into input_ids: [40, 716, 257, 2933, 13, 220, 50256, 345, 389, 5527, 13] and attention_mask: [1,1,1,1,1,1,1,1,1,1,1].
Since "I am a boy." and "you are rich." come from different documents, when predicting "you are rich." we do not want GPT-2 to attend to "I am a boy.", hence the need for a 3D attention_mask. Other models like BERT can handle this with the `get_extended_attention_mask` function from modeling_utils.
However, the GPT-2 model currently handles only a 2D attention_mask:
https://github.com/huggingface/transformers/blob/2e5dbdf2db4599a6694d0974575a70f9bc3c978e/src/transformers/models/gpt2/modeling_gpt2.py#L697
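For reference, here is a small sketch (an illustration of the requested behaviour, not something `GPT2Model` accepts today) of the 3D block-causal mask such packed sequences need: token *i* may attend to token *j* only if *j ≤ i* and both tokens belong to the same document.

```python
import torch

input_ids = torch.tensor([[40, 716, 257, 2933, 13, 220, 50256, 345, 389, 5527, 13]])
doc_ids = torch.tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]])  # document index of each token

seq_len = input_ids.size(1)
causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))  # (seq, seq) lower-triangular
same_doc = doc_ids.unsqueeze(-1) == doc_ids.unsqueeze(-2)            # (batch, seq, seq)
attention_mask_3d = (causal & same_doc).long()                        # the mask GPT2Model would need to accept
```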
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12261/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12260 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12260/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12260/comments | https://api.github.com/repos/huggingface/transformers/issues/12260/events | https://github.com/huggingface/transformers/issues/12260 | 925,247,478 | MDU6SXNzdWU5MjUyNDc0Nzg= | 12,260 | 353 duplicate tokens in GPT-2? | {
"login": "lukesalamone",
"id": 10817704,
"node_id": "MDQ6VXNlcjEwODE3NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/10817704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukesalamone",
"html_url": "https://github.com/lukesalamone",
"followers_url": "https://api.github.com/users/lukesalamone/followers",
"following_url": "https://api.github.com/users/lukesalamone/following{/other_user}",
"gists_url": "https://api.github.com/users/lukesalamone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukesalamone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukesalamone/subscriptions",
"organizations_url": "https://api.github.com/users/lukesalamone/orgs",
"repos_url": "https://api.github.com/users/lukesalamone/repos",
"events_url": "https://api.github.com/users/lukesalamone/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukesalamone/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Never mind, realized the tokens are stored in https://huggingface.co/gpt2/resolve/main/vocab.json"
] | 1,624 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Darwin-20.4.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): GPT-2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I have noticed that there are quite a few duplicate tokens in the tokenizer. Out of the vocab size of 50257 there are 353 duplicate tokens by my crude calculation. Am I doing anything wrong here?
```python
from transformers import GPT2Tokenizer

VOCAB_SIZE = 50257
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
all_tokens = set()
duplicates = []
for i in range(VOCAB_SIZE):
x = tokenizer.decode(i).encode('utf8')
if x not in all_tokens:
all_tokens.add(x)
else:
print(f'{i}\t\t {x}')
duplicates.append(x)
```
When I print `len(duplicates)` I get 353. The output from this loop is
```
95 b'\xef\xbf\xbd'
96 b'\xef\xbf\xbd'
97 b'\xef\xbf\xbd'
98 b'\xef\xbf\xbd'
99 b'\xef\xbf\xbd'
100 b'\xef\xbf\xbd'
101 b'\xef\xbf\xbd'
102 b'\xef\xbf\xbd'
103 b'\xef\xbf\xbd'
104 b'\xef\xbf\xbd'
105 b'\xef\xbf\xbd'
106 b'\xef\xbf\xbd'
107 b'\xef\xbf\xbd'
108 b'\xef\xbf\xbd'
109 b'\xef\xbf\xbd'
110 b'\xef\xbf\xbd'
111 b'\xef\xbf\xbd'
112 b'\xef\xbf\xbd'
113 b'\xef\xbf\xbd'
114 b'\xef\xbf\xbd'
115 b'\xef\xbf\xbd'
116 b'\xef\xbf\xbd'
117 b'\xef\xbf\xbd'
118 b'\xef\xbf\xbd'
119 b'\xef\xbf\xbd'
120 b'\xef\xbf\xbd'
121 b'\xef\xbf\xbd'
122 b'\xef\xbf\xbd'
123 b'\xef\xbf\xbd'
124 b'\xef\xbf\xbd'
125 b'\xef\xbf\xbd'
126 b'\xef\xbf\xbd'
127 b'\xef\xbf\xbd'
128 b'\xef\xbf\xbd'
129 b'\xef\xbf\xbd'
130 b'\xef\xbf\xbd'
131 b'\xef\xbf\xbd'
132 b'\xef\xbf\xbd'
133 b'\xef\xbf\xbd'
134 b'\xef\xbf\xbd'
135 b'\xef\xbf\xbd'
136 b'\xef\xbf\xbd'
137 b'\xef\xbf\xbd'
138 b'\xef\xbf\xbd'
139 b'\xef\xbf\xbd'
140 b'\xef\xbf\xbd'
141 b'\xef\xbf\xbd'
142 b'\xef\xbf\xbd'
143 b'\xef\xbf\xbd'
144 b'\xef\xbf\xbd'
145 b'\xef\xbf\xbd'
146 b'\xef\xbf\xbd'
147 b'\xef\xbf\xbd'
148 b'\xef\xbf\xbd'
149 b'\xef\xbf\xbd'
150 b'\xef\xbf\xbd'
151 b'\xef\xbf\xbd'
152 b'\xef\xbf\xbd'
153 b'\xef\xbf\xbd'
154 b'\xef\xbf\xbd'
155 b'\xef\xbf\xbd'
156 b'\xef\xbf\xbd'
157 b'\xef\xbf\xbd'
158 b'\xef\xbf\xbd'
159 b'\xef\xbf\xbd'
160 b'\xef\xbf\xbd'
161 b'\xef\xbf\xbd'
162 b'\xef\xbf\xbd'
163 b'\xef\xbf\xbd'
164 b'\xef\xbf\xbd'
165 b'\xef\xbf\xbd'
166 b'\xef\xbf\xbd'
167 b'\xef\xbf\xbd'
168 b'\xef\xbf\xbd'
169 b'\xef\xbf\xbd'
170 b'\xef\xbf\xbd'
171 b'\xef\xbf\xbd'
172 b'\xef\xbf\xbd'
173 b'\xef\xbf\xbd'
174 b'\xef\xbf\xbd'
175 b'\xef\xbf\xbd'
176 b'\xef\xbf\xbd'
177 b'\xef\xbf\xbd'
178 b'\xef\xbf\xbd'
179 b'\xef\xbf\xbd'
180 b'\xef\xbf\xbd'
181 b'\xef\xbf\xbd'
182 b'\xef\xbf\xbd'
183 b'\xef\xbf\xbd'
184 b'\xef\xbf\xbd'
185 b'\xef\xbf\xbd'
186 b'\xef\xbf\xbd'
187 b'\xef\xbf\xbd'
222 b'\xef\xbf\xbd'
223 b'\xef\xbf\xbd'
224 b'\xef\xbf\xbd'
225 b'\xef\xbf\xbd'
226 b'\xef\xbf\xbd'
227 b'\xef\xbf\xbd'
228 b'\xef\xbf\xbd'
229 b'\xef\xbf\xbd'
230 b'\xef\xbf\xbd'
231 b'\xef\xbf\xbd'
232 b'\xef\xbf\xbd'
233 b'\xef\xbf\xbd'
234 b'\xef\xbf\xbd'
235 b'\xef\xbf\xbd'
236 b'\xef\xbf\xbd'
237 b'\xef\xbf\xbd'
238 b'\xef\xbf\xbd'
239 b'\xef\xbf\xbd'
240 b'\xef\xbf\xbd'
241 b'\xef\xbf\xbd'
242 b'\xef\xbf\xbd'
243 b'\xef\xbf\xbd'
244 b'\xef\xbf\xbd'
245 b'\xef\xbf\xbd'
246 b'\xef\xbf\xbd'
247 b'\xef\xbf\xbd'
248 b'\xef\xbf\xbd'
249 b'\xef\xbf\xbd'
250 b'\xef\xbf\xbd'
251 b'\xef\xbf\xbd'
252 b'\xef\xbf\xbd'
253 b'\xef\xbf\xbd'
254 b'\xef\xbf\xbd'
255 b'\xef\xbf\xbd'
447 b'\xef\xbf\xbd'
764 b'.'
837 b','
1209 b'\xef\xbf\xbd'
1587 b' \xef\xbf\xbd'
1792 b'\xef\xbf\xbd'
2343 b' \xef\xbf\xbd'
2515 b'\xef\xbf\xbd'
2644 b'...'
4210 b'\xef\xbf\xbd'
5008 b'\xef\xbf\xbd'
5099 b'\xef\xbf\xbd'
5145 b'!'
5525 b' \xef\xbf\xbd'
5633 b'?'
6184 b' \xef\xbf\xbd'
6353 b'\xef\xbf\xbd\xef\xbf\xbd'
6408 b'\xef\xbf\xbd\xef\xbf\xbd'
6552 b'\xef\xbf\xbd'
7134 b'\xef\xbf\xbd\xef\xbf\xbd'
7377 b' \xef\xbf\xbd'
8008 b'\xef\xbf\xbd\xef\xbf\xbd'
8582 b'\xef\xbf\xbd'
8955 b'\xef\xbf\xbd\xef\xbf\xbd'
10253 b'\xef\xbf\xbd\xef\xbf\xbd'
10263 b' \xef\xbf\xbd'
10310 b'\xef\xbf\xbd'
10545 b' \xef\xbf\xbd'
11019 b' \xef\xbf\xbd'
11485 b'..'
11737 b'\xef\xbf\xbd'
11805 b'\xef\xbf\xbd\xef\xbf\xbd'
11976 b'\xef\xbf\xbd'
12466 b' \xef\xbf\xbd'
12520 b' \xef\xbf\xbd'
12859 b'\xef\xbf\xbd'
13305 b' \xef\xbf\xbd'
13328 b' \xef\xbf\xbd'
13783 b'\xef\xbf\xbd'
13945 b'\xef\xbf\xbd\xef\xbf\xbd'
14360 b' \xef\xbf\xbd'
14519 b' \xef\xbf\xbd'
14524 b' \xef\xbf\xbd'
15139 b' \xef\xbf\xbd'
15926 b'\xef\xbf\xbd'
16268 b' \xef\xbf\xbd'
17312 b'\xef\xbf\xbd'
17358 b'\xef\xbf\xbd'
17433 b' \xef\xbf\xbd'
17550 b' \xef\xbf\xbd'
17683 b'\xe3\x81\xae\xef\xbf\xbd'
17739 b'\xef\xbf\xbd'
17804 b' \xef\xbf\xbd'
17992 b'\xef\xbf\xbd'
18004 b'\xef\xbf\xbd'
18074 b' \xef\xbf\xbd'
18433 b'\xef\xbf\xbd\xef\xbf\xbd'
18796 b'\xef\xbf\xbd'
18872 b' \xef\xbf\xbd'
18923 b' \xef\xbf\xbd'
19021 b'\xef\xbf\xbd'
19153 b'??'
19424 b'....'
19469 b'\xef\xbf\xbd'
19526 b'\xef\xbf\xbd'
19567 b'\xef\xbf\xbd'
20004 b'........'
20015 b'\xef\xbf\xbd'
20046 b'\xef\xbf\xbd'
20543 b' \xef\xbf\xbd'
20724 b' \xef\xbf\xbd'
20998 b'\xef\xbf\xbd'
21253 b'\xef\xbf\xbd\xef\xbf\xbd'
22135 b'."'
22522 b'\xef\xbf\xbd'
22755 b'\xef\xbf\xbd'
22880 b'\xef\xbf\xbd'
22887 b'\xef\xbf\xbd'
23294 b' \xef\xbf\xbd'
23329 b'\xef\xbf\xbd\xef\xbf\xbd'
23596 b'\xef\xbf\xbd\xef\xbf\xbd'
23626 b'\xef\xbf\xbd'
23821 b' \xef\xbf\xbd'
23877 b'\xef\xbf\xbd'
24231 b'\xef\xbf\xbd'
24457 b'./'
24583 b'\xef\xbf\xbd'
24861 b'\xef\xbf\xbd'
24966 b' \xef\xbf\xbd'
25001 b'\xef\xbf\xbd'
25081 b'\xef\xbf\xbd\xef\xbf\xbd'
25370 b' \xef\xbf\xbd'
26193 b'\xef\xbf\xbd'
26292 b'\xef\xbf\xbd'
26344 b'\xef\xbf\xbd'
26486 b'\xef\xbf\xbd'
26534 b'\xef\xbf\xbd\xef\xbf\xbd'
27032 b'\xe3\x81\xae\xef\xbf\xbd'
27332 b' \xef\xbf\xbd'
27670 b'\xef\xbf\xbd'
27764 b'\xef\xbf\xbd'
27950 b'\xef\xbf\xbd'
28053 b' \xef\xbf\xbd'
28156 b'\xef\xbf\xbd'
28225 b' \xef\xbf\xbd'
28839 b'\xef\xbf\xbd'
28938 b'\xef\xbf\xbd'
29705 b'\xef\xbf\xbd'
29773 b'\xef\xbf\xbd'
29785 b'\xef\xbf\xbd'
29826 b'\xef\xbf\xbd'
30266 b'\xef\xbf\xbd'
30298 b'\xef\xbf\xbd'
30325 b' \xef\xbf\xbd'
30585 b'\xef\xbf\xbd'
31204 b'\xef\xbf\xbd\xef\xbf\xbd'
31479 b'\xef\xbf\xbd'
31619 b' \xef\xbf\xbd'
31965 b'\xef\xbf\xbd'
32003 b'\xef\xbf\xbd'
32368 b'\xef\xbf\xbd'
32391 b'\xef\xbf\xbd'
32432 b'\xef\xbf\xbd'
32518 b'\xef\xbf\xbd'
32573 b'\xef\xbf\xbd'
32849 b'\xef\xbf\xbd'
33176 b'\xef\xbf\xbd'
33232 b'\xef\xbf\xbd'
33426 b'\xe3\x81\xae\xef\xbf\xbd'
33566 b'\xef\xbf\xbd'
33699 b'\xef\xbf\xbd'
33768 b'\xef\xbf\xbd'
34402 b'\xef\xbf\xbd'
34460 b'\xef\xbf\xbd'
34504 b' \xe8\xa3\x8f\xef\xbf\xbd'
34650 b'\xef\xbf\xbd'
34719 b' \xef\xbf\xbd'
34754 b' \xef\xbf\xbd'
34913 b'???'
34932 b'\xef\xbf\xbd'
35050 b'\xef\xbf\xbd\xef\xbf\xbd'
35069 b'\xef\xbf\xbd\xef\xbf\xbd'
35266 b'\xef\xbf\xbd'
35705 b'\xef\xbf\xbd'
35707 b'\xef\xbf\xbd'
35713 b'..."'
35975 b'\xef\xbf\xbd'
36181 b'\xef\xbf\xbd'
36365 b'\xef\xbf\xbd'
36469 b' \xef\xbf\xbd'
36596 b'\xef\xbf\xbd\xef\xbf\xbd'
36685 b'\xef\xbf\xbd'
37239 b'\xef\xbf\xbd'
37345 b'\xef\xbf\xbd'
37605 b'\xef\xbf\xbd'
37772 b'\xef\xbf\xbd'
37863 b'\xef\xbf\xbd'
37867 b'!!'
38184 b'\xef\xbf\xbd'
38461 b'\xef\xbf\xbd'
39333 b'\xef\xbf\xbd\xef\xbf\xbd'
39355 b'\xef\xbf\xbd'
39611 b'\xef\xbf\xbd'
39820 b'\xe9\xbe\x8d\xef\xbf\xbd'
40367 b'\xef\xbf\xbd'
41340 b'\xef\xbf\xbd'
41349 b'?)'
41365 b'\xef\xbf\xbd\xef\xbf\xbd'
41585 b'\xef\xbf\xbd'
41678 b'\xef\xbf\xbd\xef\xbf\xbd'
41753 b'\xef\xbf\xbd'
41840 b'\xef\xbf\xbd'
42062 b'\xef\xbf\xbd\xef\xbf\xbd'
42164 b' \xef\xbf\xbd'
42314 b' \xef\xbf\xbd'
42527 b' \xef\xbf\xbd'
42911 b',"'
43074 b' \xef\xbf\xbd'
43102 b'\xef\xbf\xbd'
43297 b'\xef\xbf\xbd'
43380 b'\xef\xbf\xbd'
43518 b'\xef\xbf\xbd\xef\xbf\xbd'
43636 b'\xef\xbf\xbd'
43718 b'\xef\xbf\xbd'
43769 b'\xef\xbf\xbd\xef\xbf\xbd'
43889 b'\xef\xbf\xbd'
43897 b'\xef\xbf\xbd'
44165 b'\xef\xbf\xbd'
44293 b'\xef\xbf\xbd'
44713 b'................'
45250 b'\xef\xbf\xbd'
45379 b'\xef\xbf\xbd'
45433 b'\xef\xbf\xbd\xef\xbf\xbd'
45495 b'\xef\xbf\xbd'
45539 b'\xef\xbf\xbd\xef\xbf\xbd'
45617 b'\xef\xbf\xbd'
45739 b'\xef\xbf\xbd'
45784 b'\xef\xbf\xbd'
45865 b'\xef\xbf\xbd\xef\xbf\xbd'
45911 b'\xef\xbf\xbd'
46237 b'\xef\xbf\xbd'
46256 b'\xef\xbf\xbd'
46328 b'.)'
46349 b'\xef\xbf\xbd'
46479 b'\xef\xbf\xbd'
46695 b'\xef\xbf\xbd'
46763 b'\xef\xbf\xbd'
46788 b'\xef\xbf\xbd\xef\xbf\xbd'
47078 b'\xef\xbf\xbd'
47082 b'......'
47249 b'\xef\xbf\xbd'
47540 b'._'
47728 b'\xef\xbf\xbd'
47797 b'\xef\xbf\xbd'
47947 b'\xef\xbf\xbd'
47991 b'\xef\xbf\xbd'
48071 b'\xef\xbf\xbd'
48585 b'\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd'
48953 b'\xef\xbf\xbd\xef\xbf\xbd'
48958 b'\xef\xbf\xbd'
49035 b'\xef\xbf\xbd'
49149 b'\xe3\x81\xae\xef\xbf\xbd'
49426 b'\xef\xbf\xbd'
49694 b'\xef\xbf\xbd'
50159 b'\xef\xbf\xbd\xef\xbf\xbd'
50169 b' \xef\xbf\xbd'
```
## Expected behavior
I would expect that there are no duplicate tokens. I might be decoding tokens incorrectly, which would explain why there are so many replacement-character (`b'\xef\xbf\xbd'`) tokens, but there are duplicates even among normal UTF-8 characters, such as `?`, which shows up for tokens 30 and 5633:
```python
[i for i in range(VOCAB_SIZE) if tokenizer.decode(i) == '?']
```
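For reference (and consistent with the `vocab.json` resolution noted in the comment above): the raw byte-level BPE entries are all distinct; the apparent duplicates arise because `decode()` maps byte sequences that are not valid UTF-8 on their own to the replacement character. A quick way to check:

```python
print(tokenizer.convert_ids_to_tokens([30, 5633]))  # two distinct vocabulary entries
print(len(tokenizer.get_vocab()))  # 50257 unique token strings
```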
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12260/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12259 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12259/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12259/comments | https://api.github.com/repos/huggingface/transformers/issues/12259/events | https://github.com/huggingface/transformers/issues/12259 | 925,139,850 | MDU6SXNzdWU5MjUxMzk4NTA= | 12,259 | `ValueError: Expected input batch_size to match target batch_size` occurs when training GPT2 with `Seq2SeqTrainer` | {
"login": "ryangawei",
"id": 25638070,
"node_id": "MDQ6VXNlcjI1NjM4MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25638070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryangawei",
"html_url": "https://github.com/ryangawei",
"followers_url": "https://api.github.com/users/ryangawei/followers",
"following_url": "https://api.github.com/users/ryangawei/following{/other_user}",
"gists_url": "https://api.github.com/users/ryangawei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryangawei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryangawei/subscriptions",
"organizations_url": "https://api.github.com/users/ryangawei/orgs",
"repos_url": "https://api.github.com/users/ryangawei/repos",
"events_url": "https://api.github.com/users/ryangawei/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryangawei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The problem is that you are using a decoder model (DistilGPT2) but have inputs and targets of different lengths, which is impossible for those models. You should use an encode-decoder (also called seq2seq) model for this kind of tasks, see the complete list [here](https://huggingface.co/transformers/model_summary.html#seq-to-seq-models).",
"@sgugger Ah that's exactly my mistake. Thank you very much for the answer."
] | 1,624 | 1,624 | 1,624 | NONE | null | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- trainer: @sgugger
## Information
Model I am using: `distilgpt2`
## To reproduce
I was following the tutorial [https://github.com/huggingface/notebooks/blob/master/examples/summarization.ipynb](https://github.com/huggingface/notebooks/blob/master/examples/summarization.ipynb) to fine-tune `distilgpt2` on a Seq2Seq task. Here's how I run my training process.
My map function for preprocessing the datasets,
```
def tokenize(sample_batch, tokenizer):
src_text = []
batch_size = len(sample_batch["src_abstract"])
for i in range(batch_size):
src_text.append(" ".join(
[sample_batch["src_abstract"][i], sample_batch["text_before_explicit_citation"][i], sample_batch["text_after_explicit_citation"][i]]))
tgt_text = sample_batch["tgt_abstract"]
inputs = tokenizer(
src_text,
tgt_text,
add_special_tokens=True,
truncation="longest_first",
# padding="max_length",
max_length=750
)
labels = tokenizer(
sample_batch["explicit_citation"],
truncation="longest_first",
# padding="max_length",
max_length=128,
)
inputs["labels"] = labels["input_ids"]
return inputs
```
My training code,
```
from transformers import (GPT2LMHeadModel, DataCollatorForSeq2Seq,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

# `tokenizer`, `train_dataset`, `dev_dataset` and `compute_metrics` are defined elsewhere
model_name = "distilgpt2"
model = GPT2LMHeadModel.from_pretrained(model_name).to('cuda')
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, padding="max_length")
training_args = Seq2SeqTrainingArguments(
"./checkpoints",
learning_rate=2e-5,
weight_decay=0.01,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
save_strategy='steps',
evaluation_strategy='steps',
logging_strategy='steps',
save_total_limit=1,
logging_steps=500,
fp16=True,
predict_with_generate=True
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=dev_dataset,
compute_metrics=compute_metrics
)
trainer.train()
```
And the error log occurs,
```
ValueError: Caught ValueError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 972, in forward
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1047, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/functional.py", line 2693, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/functional.py", line 2384, in nll_loss
raise ValueError(
ValueError: Expected input batch_size (2046) to match target batch_size (138).
```
It seems like the `input_ids` are padded to the model's `max_length`, but the `labels` are not. (I also have a question about why the `batch_size` looks like `2046` instead of `batch_size * max_length = 2048`; presumably the causal LM loss shifts logits and labels by one position, giving 2 × (1024 − 1) = 2046.) I found similar errors in the forum [https://discuss.huggingface.co/t/how-to-use-seq2seqtrainer-seq2seqdatacollator-in-v4-2-1/3243](https://discuss.huggingface.co/t/how-to-use-seq2seqtrainer-seq2seqdatacollator-in-v4-2-1/3243), which says,
> The PR has been merged, so you should be able to use a similar workflow. Note that the processing that used to be done in Seq2SeqDataCollator is now done on the dataset directly.
But I'm not sure how it solves the problem. I'd really appreciate any kind of help!
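Following the resolution in the comments above (a decoder-only model such as DistilGPT2 cannot consume inputs and targets of different lengths), a minimal fix is to swap in an encoder-decoder checkpoint and keep the rest of the pipeline unchanged. A sketch, with the checkpoint name purely illustrative:

```python
from transformers import AutoTokenizer, BartForConditionalGeneration, DataCollatorForSeq2Seq

model_name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name).to("cuda")
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
# the Seq2SeqTrainingArguments / Seq2SeqTrainer setup above stays the same
```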
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12259/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12259/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12258 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12258/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12258/comments | https://api.github.com/repos/huggingface/transformers/issues/12258/events | https://github.com/huggingface/transformers/pull/12258 | 925,129,535 | MDExOlB1bGxSZXF1ZXN0NjczNjY0NjYy | 12,258 | [docs] performance | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR imports the doc I started working on some months back https://github.com/huggingface/transformers/issues/9824 after syncing the code to master's API.
Surely a lot more work can and will be done; this is just a starting baseline.
Fixes: https://github.com/huggingface/transformers/issues/9824
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12258/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12258",
"html_url": "https://github.com/huggingface/transformers/pull/12258",
"diff_url": "https://github.com/huggingface/transformers/pull/12258.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12258.patch",
"merged_at": 1624401259000
} |
https://api.github.com/repos/huggingface/transformers/issues/12257 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12257/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12257/comments | https://api.github.com/repos/huggingface/transformers/issues/12257/events | https://github.com/huggingface/transformers/pull/12257 | 925,104,030 | MDExOlB1bGxSZXF1ZXN0NjczNjQzMDAz | 12,257 | [DeepSpeed] don't ignore --adafactor | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | CONTRIBUTOR | null | This PR adds a small improvement: if `--adafactor` is passed while the DeepSpeed config has an optimizer section, we now assert instead of silently ignoring the flag, which was misleading to the user.
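A hypothetical sketch of the guard (names assumed for illustration; see the PR diff for the actual code):

```python
if args.adafactor and "optimizer" in ds_config:
    raise ValueError(
        "--adafactor was passed, but the DeepSpeed config already defines an optimizer; "
        "only one of the two may configure the optimizer."
    )
```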
Fixes: https://github.com/huggingface/transformers/issues/11749
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12257/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12257",
"html_url": "https://github.com/huggingface/transformers/pull/12257",
"diff_url": "https://github.com/huggingface/transformers/pull/12257.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12257.patch",
"merged_at": 1624288620000
} |
https://api.github.com/repos/huggingface/transformers/issues/12256 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12256/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12256/comments | https://api.github.com/repos/huggingface/transformers/issues/12256/events | https://github.com/huggingface/transformers/pull/12256 | 925,097,379 | MDExOlB1bGxSZXF1ZXN0NjczNjM3NDUx | 12,256 | [Flax] Fix flax test save pretrained | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj - it looks like Flax lip has a problem with the save/load test. Could you take a look?",
"@patrickvonplaten \r\n#12284 shold fix this."
] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
Fixes Flax save/load test
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12256/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12256",
"html_url": "https://github.com/huggingface/transformers/pull/12256",
"diff_url": "https://github.com/huggingface/transformers/pull/12256.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12256.patch",
"merged_at": 1624289833000
} |
https://api.github.com/repos/huggingface/transformers/issues/12255 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12255/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12255/comments | https://api.github.com/repos/huggingface/transformers/issues/12255/events | https://github.com/huggingface/transformers/pull/12255 | 925,088,428 | MDExOlB1bGxSZXF1ZXN0NjczNjI5OTQy | 12,255 | [Flax] [WIP] allow loading head model with base model weights | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thanks for fixing this! Could we maybe also add some tests that make sure that loading/saving works correctly for all models?"
] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
Allows loading a Flax head model with base-model weights.
Right now it is not possible to load a Flax head model using weights from the base model, as the weights `dict` of the base model does not contain the `base_model_prefix` key. To reproduce:
```python
from transformers import BertConfig, FlaxBertModel, FlaxBertForSequenceClassification
config = BertConfig(hidden_size=64, intermediate_size=128, max_position_embeddings=128, num_attention_heads=8, num_hidden_layers=8)
base_model = FlaxBertModel(config)
base_model.save_pretrained("base")
head_model = FlaxBertForSequenceClassification.from_pretrained("base")
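# Before this PR, the line above fails: the saved base-model params dict has no
# `base_model_prefix` level, so the head model cannot map the base weights onto
# its expected parameter structure.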
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12255/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12255",
"html_url": "https://github.com/huggingface/transformers/pull/12255",
"diff_url": "https://github.com/huggingface/transformers/pull/12255.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12255.patch",
"merged_at": 1624287403000
} |
https://api.github.com/repos/huggingface/transformers/issues/12254 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12254/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12254/comments | https://api.github.com/repos/huggingface/transformers/issues/12254/events | https://github.com/huggingface/transformers/issues/12254 | 925,037,712 | MDU6SXNzdWU5MjUwMzc3MTI= | 12,254 | [Documentation Example] Task Summary - Start/End of Span in QA Example | {
"login": "RobMcH",
"id": 7346905,
"node_id": "MDQ6VXNlcjczNDY5MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7346905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobMcH",
"html_url": "https://github.com/RobMcH",
"followers_url": "https://api.github.com/users/RobMcH/followers",
"following_url": "https://api.github.com/users/RobMcH/following{/other_user}",
"gists_url": "https://api.github.com/users/RobMcH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobMcH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobMcH/subscriptions",
"organizations_url": "https://api.github.com/users/RobMcH/orgs",
"repos_url": "https://api.github.com/users/RobMcH/repos",
"events_url": "https://api.github.com/users/RobMcH/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobMcH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The example is just a quick demo on Question Answering and there are many, many things it doesn't do. I'd rather keep it simple since this page covers a lot of things already, and since we have the two [question answering](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering) scripts that are more complete.",
"I understand that it's a simple example but the point raised is not about things it doesn't do, but rather that it does this specific thing wrong. The code snippet assumes conditional independence which then leads to it potentially producing empty outputs. The example is placed very prominently within the documentation so I'd argue that it should be corrected. The question answering scripts seem to [make a similar assumption](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/utils_qa.py#L128) but then correct for it by ignoring invalid predictions.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | ## Environment info
This bug pertains to the [extractive QA example given in the documentation](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) and will be reproducible across operating systems and library versions. I provide a fix for the PyTorch code below; the same can be applied to the TensorFlow example.
### Who can help
Probably @sgugger and/or @patrickvonplaten.
## Information
The way that the start and end of the extracted span are calculated means that it is possible that `end <= start`, which will lead to an empty answer. Instead, a joint probability distribution over start/end positions should be used with the probability of selecting (start, end) s.t. `end <= start` = 0.
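In other words, instead of two independent argmaxes, pick

$$(\hat{s}, \hat{e}) = \operatorname*{argmax}_{s < e} \; p_{\text{start}}(s)\, p_{\text{end}}(e),$$

which matches the mask in the proposed snippet below (every pair with end index ≤ start index is assigned probability zero).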
## To reproduce
For reference, the original code:
```
for question in questions:
... inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
... input_ids = inputs["input_ids"].tolist()[0]
...
... outputs = model(**inputs)
... answer_start_scores = outputs.start_logits
... answer_end_scores = outputs.end_logits
...
... answer_start = torch.argmax(
... answer_start_scores
... ) # Get the most likely beginning of answer with the argmax of the score
... answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score
...
... answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
...
... print(f"Question: {question}")
... print(f"Answer: {answer}")
```
I suggest modifying the code to something akin to the below. I have also modified the answer decoding to exclude special tokens.
```
for question in questions:
... inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
... input_ids = inputs["input_ids"].tolist()[0]
...
... outputs = model(**inputs)
... # Get probabilities for start and end of the answer span.
... start_probs = torch.softmax(outputs.start_logits.squeeze(), 0)
... end_probs = torch.softmax(outputs.end_logits.squeeze(), 0)
... # Calculate joint probabilities.
... answer_span = torch.outer(start_probs, end_probs)
... # Mask out any pair (i, j) where j <= i.
... mask = torch.ones(answer_span.shape).tril()
... mask[mask == 1] = float("-inf")
... # Select span based on max joint probability.
... answer_start, answer_end = torch.where(answer_span == (answer_span + mask).max())
... answer_start, answer_end = answer_start[0], answer_end[0] + 1
... # Decode IDs within the span, ignoring special tokens.
... answer = tokenizer.decode(input_ids[answer_start:answer_end], skip_special_tokens=True)
...
... print(f"Question: {question}")
... print(f"Answer: {answer}")
```
Running this code gives the same answers for the example input (c.f. documentation), but avoids the issue of extracting spans with `end <= start` for other inputs. One such input would for instance be:
```
text = 'stairs down eventually lead to the invocation level. Performing the invocation ritual at the vibrating square opens the stairs to the Sanctum.\n With the Amulet of Yendor in hand, the adventurer may ascend from level 1 into the Plane of Earth; thence s/he may proceed through magic portals to the planes of Air, Fire, and Water, and thence to the Astral Plane. Offering the Amulet of Yendor on the correct high altar wins the game.\n Along the way, one will encounter these branches and special levels'
questions = ["how do i descend the dungeon"]
```
for which the output of the original, unmodified code will be (because it selects a span end that is before the selected start):
```
Question: how do i descend the dungeon
Answer:
```
and for the fixed code proposed above:
```
Question: how do i descend the dungeon
Answer: stairs down eventually lead to the invocation level
```
I'm happy to port the code to TensorFlow and submit a pull request with the updated code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12254/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12253 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12253/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12253/comments | https://api.github.com/repos/huggingface/transformers/issues/12253/events | https://github.com/huggingface/transformers/issues/12253 | 925,016,981 | MDU6SXNzdWU5MjUwMTY5ODE= | 12,253 | First hidden state of the last layer of Bert (french version : FlauBert) only prints vectors of 0 or -0 after using it !! | {
"login": "keloemma",
"id": 40454218,
"node_id": "MDQ6VXNlcjQwNDU0MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keloemma",
"html_url": "https://github.com/keloemma",
"followers_url": "https://api.github.com/users/keloemma/followers",
"following_url": "https://api.github.com/users/keloemma/following{/other_user}",
"gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keloemma/subscriptions",
"organizations_url": "https://api.github.com/users/keloemma/orgs",
"repos_url": "https://api.github.com/users/keloemma/repos",
"events_url": "https://api.github.com/users/keloemma/events{/privacy}",
"received_events_url": "https://api.github.com/users/keloemma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: linux
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0a0
- Tensorflow version (GPU?):
- Using GPU in script?: no (for on small sample ~25 sentences) but yes ( for full sample ~70K sentences/paragraphs)
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. ! -->
@LysandreJik
@sgugger
# Library
python 3.7.6
numpy == 1.18.4
pandas == 1.0.3
transformers == 4.6.1
torch == 1.5.0a0
scipy == 1.4.1
sklearn== 0.22.1
## Information
Model I am using (FlauBERT ...):
The problem arises when using the FlauBERT model to get the first hidden state. (I do not know if it is an error, but the hidden states produced by the model downloaded from the transformers library take the same value (0 or -0) for all my data after applying the model. Maybe that is what it is supposed to produce, but the behaviour recurs for every sentence of my dataset (~70K).)
* [ ] the official example scripts: (give details below)
```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer

modelname = "flaubert/flaubert_small_cased"  # example checkpoint; `modelname` was undefined in the original snippet
flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False)
sentence = "Le chat mange une pomme."
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)])
last_layer = flaubert(token_ids)[0]
print(last_layer.shape)
cls_embedding = last_layer[:, 0, :]
```
* [ ] my own modified scripts: (give details below)
```python
import numpy as np
import torch
from transformers import FlaubertModel, FlaubertTokenizer

def spliterate(buf, chunk):
    for start in range(0, buf.size, chunk):
        yield buf[start:start + chunk]

path_to_lge = "flaubert/flaubert_small_cased"

def get_flaubert_layer(texte, path_to_lge):
    lge_size = path_to_lge.split("/")[-1]  # model name, e.g. "flaubert_small_cased"
    print("Embeddings bert model used.................... : ", lge_size, "\n")
    flaubert = FlaubertModel.from_pretrained(path_to_lge)
    flaubert_tokenizer = FlaubertTokenizer.from_pretrained(path_to_lge)
    print(texte)
    tokenized = texte.apply(
        (lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))
    print("Exit after applying tokenizer :", "\n", tokenized, "\n")
    max_len = 0
    for i in tokenized.values:
        if len(i) > max_len:
            max_len = len(i)
    padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized.values])
    print("Exit after padding: ", "\n", padded, "\n")
    last_layer_ = []
    for tmp in spliterate(padded, 4000):
        # print(tmp)
        if len(tmp) != 0:
            # print(len(tmp))
            token_ids = torch.tensor(tmp)
            print("Exit after torch transformation:", "\n", token_ids, "\n")
            attention_mask = np.where(tmp != 0, 1, 0)
            attention_mask = torch.tensor(attention_mask)
            print("Exit after torch transformation for attention_mask:", "\n", attention_mask, "\n")
            with torch.no_grad():
                layer = flaubert(token_ids, attention_mask=attention_mask)
            layer = layer[0][:, 0, :].numpy()
            print(" After applying model flaubert to get features :", "\n", layer, "\n")
            last_layer_.append(layer)
    last_layer_np = np.array(last_layer_)
    last_layer_np_array = np.concatenate(last_layer_np)
    # print("Total of sentences :", len(last_layer_np_array))
    return last_layer_np_array, lge_size
```
# For getting the hidden state (last layer). Xtest is a dataframe column and looks like this:
```
0 si j’ai un problème, comment je remonte l’info...
1 des agents de maintenance ? Oui, oui. Enfin… I...
2 Il faudrait des tiroirs qui sortent / rentrent...
3 ROI, 5 à 10 ans. Si l’énergie explose, ça devi...
4 Je ne vois pas cela en conception de cuisine, ...
5 Les proverbes. C'est drôle parce qu'on a déjà ...
6 J'ai l'impression que ça peut-être utilisable ...
7 Ça reste… ça serait un réfrigérateur comme on ...
8 C’est en plastique souple et on arrive vraimen...
9 Parce que déjà, là, on évite de mettre un cong...
10 En rénovation, il n'est pas évident de rajoute...
```
```python
Xtest_emb, s = get_flaubert_layer(Xtest, path_to_lge)
```
# What the data (token_ids) looks like after converting it to a tensor, before passing it to the flaubert model:
```
tensor([[0, 93, 106, ..., 0, 0, 0],
[ 0, 23, 2080, ..., 0, 0, 0],
[ 0, 59, 2961, ..., 0, 0, 0],
...,
[ 0, 55, 369, ..., 0, 0, 0],
[ 0, 6077, 53, ..., 0, 0, 0],
[ 0, 46, 41, ..., 0, 0, 0]])
```
# What layer[0] looks like after applying the flaubert model to the data:
```
tensor([[[-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000],
[-0.0932, 0.7302, 1.3671, ..., -2.5153, -2.0251, 1.2235],
[-0.4520, 2.3935, 1.0048, ..., -3.4742, -0.4194, 0.4428],
...,
[-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000]],
[[-0.0000, -0.0000, 0.0000, ..., 0.0000, -0.0000, -0.0000],
[ 2.0035, 0.6900, -0.5092, ..., 0.0862, -1.6157, 0.6070],
[-0.3516, 2.5931, -1.6113, ..., -0.6265, 0.6559, -0.9409],
...,
[-0.0000, -0.0000, 0.0000, ..., 0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, ..., 0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, ..., 0.0000, -0.0000, -0.0000]],
[[-0.0000, -0.0000, 0.0000, ..., -0.0000, 0.0000, -0.0000],
[-0.0040, -1.7643, -1.5588, ..., -1.8786, -0.4597, 0.3843],
[-3.4181, 0.1528, 0.6369, ..., -2.2618, -1.0742, -0.6097],
...,
[-0.0000, -0.0000, 0.0000, ..., -0.0000, 0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, ..., -0.0000, 0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, ..., -0.0000, 0.0000, -0.0000]],
..,
[[-0.0000, 0.0000, -0.0000, ..., -0.0000, -0.0000, 0.0000],
[-1.3317, -0.3899, 0.2560, ..., -1.7550, -1.9626, 0.3821],
[-1.3053, -0.2642, 0.1691, ..., -1.8541, -2.1521, 0.6066],
...,
[-0.0000, 0.0000, -0.0000, ..., -0.0000, -0.0000, 0.0000],
[-0.0000, 0.0000, -0.0000, ..., -0.0000, -0.0000, 0.0000],
[-0.0000, 0.0000, -0.0000, ..., -0.0000, -0.0000, 0.0000]],
[[-0.0000, -0.0000, -0.0000, ..., 0.0000, -0.0000, 0.0000],
[-0.8575, -2.6781, 1.0530, ..., 0.7656, -2.3176, 0.6474],
[ 0.5465, 0.1727, -0.8362, ..., -0.1918, -1.5318, 1.0457],
...,
[-0.0000, -0.0000, -0.0000, ..., 0.0000, -0.0000, 0.0000],
[-0.0000, -0.0000, -0.0000, ..., 0.0000, -0.0000, 0.0000],
[-0.0000, -0.0000, -0.0000, ..., 0.0000, -0.0000, 0.0000]],
[[ 0.0000, -0.0000, 0.0000, ..., -0.0000, -0.0000, -0.0000],
[-0.1900, -1.6420, -0.7254, ..., -1.5700, -1.1521, -0.0588],
[-0.7427, -2.5433, 0.6748, ..., -3.1792, -1.8242, 0.4684],
...,
[ 0.0000, -0.0000, 0.0000, ..., -0.0000, -0.0000, -0.0000],
[ 0.0000, -0.0000, 0.0000, ..., -0.0000, -0.0000, -0.0000],
[ 0.0000, -0.0000, 0.0000, ..., -0.0000, -0.0000, -0.0000]]])
```
What the result looks like after `layer = layer[0][:, 0, :].numpy()`:
```
[[-0. -0. -0. ... -0. -0. -0.]
[-0. -0. 0. ... 0. -0. -0.]
[-0. -0. 0. ... -0. 0. -0.]
...
[-0. 0. -0. ... -0. -0. 0.]
[-0. -0. -0. ... 0. -0. 0.]
[ 0. -0. 0. ... -0. -0. -0.]]
```
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Extracting features (get the hidden layer and pass it to a multi-layer perceptron from scikit-learn for a classification task)
## To reproduce
Steps to reproduce the behavior:
1. Install the different libraries mentioned above in a virtual environment
2. Put the small example of Xtest given above in a dataframe column, pass it to the
function `get_flaubert_layer`, and print the results
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I did not expect the first hidden state to be all zeros, so maybe I am using FlauBERT the wrong way, or I downloaded the wrong version.
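One thing worth double-checking (my assumption, not a confirmed diagnosis): the printed token ids all start with 0, which suggests FlauBERT's BOS token has id 0, so `np.where(tmp != 0, 1, 0)` also masks out the first real token of every sequence; a zeroed attention mask at position 0 would explain an all-zero CLS row. Building the mask from sequence lengths instead would look roughly like this (reusing `tokenized`, `flaubert` and `flaubert_tokenizer` from the script above):
```python
import numpy as np
import torch

# Length-based attention mask: never masks the BOS token at position 0.
lengths = [len(ids) for ids in tokenized.values]
max_len = max(lengths)
pad_id = flaubert_tokenizer.pad_token_id
padded = np.array([ids + [pad_id] * (max_len - len(ids)) for ids in tokenized.values])
attention_mask = np.array([[1] * n + [0] * (max_len - n) for n in lengths])

with torch.no_grad():
    output = flaubert(torch.tensor(padded), attention_mask=torch.tensor(attention_mask))
cls_embeddings = output[0][:, 0, :].numpy()  # should no longer be all zeros
```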
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12253/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12252 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12252/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12252/comments | https://api.github.com/repos/huggingface/transformers/issues/12252/events | https://github.com/huggingface/transformers/pull/12252 | 924,956,301 | MDExOlB1bGxSZXF1ZXN0NjczNTE4NDgw | 12,252 | Tensorflow QA example | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | The new TF QA example! There were a couple of issues with the metrics that might need more investigation, but I confirmed they happened in the PyTorch version too. Possibly that was caused by evaluating on an untrained model, though.
Also, don't stress about reviewing this until after the weekend! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12252/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12252",
"html_url": "https://github.com/huggingface/transformers/pull/12252",
"diff_url": "https://github.com/huggingface/transformers/pull/12252.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12252.patch",
"merged_at": 1624289848000
} |
https://api.github.com/repos/huggingface/transformers/issues/12251 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12251/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12251/comments | https://api.github.com/repos/huggingface/transformers/issues/12251/events | https://github.com/huggingface/transformers/pull/12251 | 924,950,202 | MDExOlB1bGxSZXF1ZXN0NjczNTEzMzIw | 12,251 | [Flax] Add jax flax to env command | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If most of the tests pass, then can I use the changes made by the PR in the time being to check whether I can atleast start my training?"
] | 1,624 | 1,624 | 1,624 | MEMBER | null | This PR adds jax/flax libs to the env command.
@lewtun - can you maybe try out running `transformers-cli env` after this command to see why you cannot import `FlaxBigBirdForMaskedLM` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12251/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12251/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12251",
"html_url": "https://github.com/huggingface/transformers/pull/12251",
"diff_url": "https://github.com/huggingface/transformers/pull/12251.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12251.patch",
"merged_at": 1624291932000
} |
https://api.github.com/repos/huggingface/transformers/issues/12250 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12250/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12250/comments | https://api.github.com/repos/huggingface/transformers/issues/12250/events | https://github.com/huggingface/transformers/issues/12250 | 924,949,550 | MDU6SXNzdWU5MjQ5NDk1NTA= | 12,250 | RAG with T5 in a multitask setting | {
"login": "sb1992",
"id": 10261100,
"node_id": "MDQ6VXNlcjEwMjYxMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/10261100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sb1992",
"html_url": "https://github.com/sb1992",
"followers_url": "https://api.github.com/users/sb1992/followers",
"following_url": "https://api.github.com/users/sb1992/following{/other_user}",
"gists_url": "https://api.github.com/users/sb1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sb1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sb1992/subscriptions",
"organizations_url": "https://api.github.com/users/sb1992/orgs",
"repos_url": "https://api.github.com/users/sb1992/repos",
"events_url": "https://api.github.com/users/sb1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/sb1992/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!",
"@sb1992 \r\n\r\nNo, it doesn't matter whether you use T5 or BART. You can actually use a special token in front of every line in the target files.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,627 | 1,627 | NONE | null | I was trying to run RAG with T5 for multiple tasks, i.e. fact checking and QA. As I understand it, only with T5 can you do that, by adding a unique per-task prefix before the input. I was wondering what the best way to handle that would be. Do I preprocess train.source and val.source so that they already have these prefixes, or is there an easier way?
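For concreteness, a minimal preprocessing sketch; the file names and prefix strings here are purely illustrative, not taken from the finetuning scripts:
```python
# Prepend a task-specific prefix to every line of a source file.
def add_prefix(src_path, out_path, prefix):
    with open(src_path) as fin, open(out_path, "a") as fout:  # append so several tasks can be merged
        for line in fin:
            fout.write(prefix + line)

# The corresponding target files must be concatenated in the same order.
add_prefix("qa_train.source", "train.source", "question: ")
add_prefix("factcheck_train.source", "train.source", "fact check: ")
```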
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12250/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12249 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12249/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12249/comments | https://api.github.com/repos/huggingface/transformers/issues/12249/events | https://github.com/huggingface/transformers/issues/12249 | 924,873,762 | MDU6SXNzdWU5MjQ4NzM3NjI= | 12,249 | Got unexpected result when using BertTokenizer in Chinese | {
"login": "hxxxxh",
"id": 20272551,
"node_id": "MDQ6VXNlcjIwMjcyNTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/20272551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hxxxxh",
"html_url": "https://github.com/hxxxxh",
"followers_url": "https://api.github.com/users/hxxxxh/followers",
"following_url": "https://api.github.com/users/hxxxxh/following{/other_user}",
"gists_url": "https://api.github.com/users/hxxxxh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hxxxxh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hxxxxh/subscriptions",
"organizations_url": "https://api.github.com/users/hxxxxh/orgs",
"repos_url": "https://api.github.com/users/hxxxxh/repos",
"events_url": "https://api.github.com/users/hxxxxh/events{/privacy}",
"received_events_url": "https://api.github.com/users/hxxxxh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @JetRunner ",
"This behavior seems to be related to `BertTokenizerFast`. I'm not very sure whether I know how to fix it but I can give it a look at `tokenizers`.",
"I wouldn't say the current tokenization is \"wrong\", but I would prefer `BertTokenizerFast` to be consistent with Google's tokenizer. I'll look into this and let's see what's the behavior of Google BERT.",
"\r\nHere's what you get with the original Google BERT tokenizer. Kinda confusing..",
"@hxxxxh Given the chaotic output of different implementations, rather than try to \"fix\" one to be consistent with another, I would recommend you to clean the text before using it as an input.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,624 | 1,648 | 1,627 | NONE | null | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-5.4.0-74-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
- `tokenizers` version: 0.10.3
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...): bert-base-chinese
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
When I use BertTokenizer, I got unexpected results in some "strange" chars.
```python
text = "这是一个���中文句子"
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
print(tokenizer.tokenize(text))
```
Output: ```['这', '是', '一', '个', '中', '文', '句', '子']```
And I got the same result using BertTokenizerFast too.
## Expected behavior
I think it may cause errors or hurt performance on sequence labeling tasks, like NER or word segmentation.
I also noticed that I get a more reasonable result when I downgrade transformers to version 3.3.0 (with tokenizers==0.8.1rc2):
```['这', '是', '一', '个', '[UNK]', '中', '文', '[UNK]', '句', '子']```
Why does this happen? Is there any way to get the correct result in the new version of transformers?
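In the meantime, a pragmatic workaround (my suggestion, in line with the advice given in the comments) is to clean such characters out of the text before tokenizing:
```python
# Strip U+FFFD replacement characters before tokenizing (workaround sketch).
clean_text = text.replace("\ufffd", "")
print(tokenizer.tokenize(clean_text))
```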
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12249/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12248 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12248/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12248/comments | https://api.github.com/repos/huggingface/transformers/issues/12248/events | https://github.com/huggingface/transformers/issues/12248 | 924,678,394 | MDU6SXNzdWU5MjQ2NzgzOTQ= | 12,248 | finding a bug in training the code of /src/transformers/models/detr/modeling_detr.py | {
"login": "zhangbo2008",
"id": 35842504,
"node_id": "MDQ6VXNlcjM1ODQyNTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/35842504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangbo2008",
"html_url": "https://github.com/zhangbo2008",
"followers_url": "https://api.github.com/users/zhangbo2008/followers",
"following_url": "https://api.github.com/users/zhangbo2008/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangbo2008/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangbo2008/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangbo2008/subscriptions",
"organizations_url": "https://api.github.com/users/zhangbo2008/orgs",
"repos_url": "https://api.github.com/users/zhangbo2008/repos",
"events_url": "https://api.github.com/users/zhangbo2008/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangbo2008/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh yeah, that makes sense, thanks for spotting it. I didn't have an issue with it as I never directly provided labels created by `DetrFeatureExtractor` to the model, but in case you do, you indeed will encounter an error.\r\n\r\nWill update it! "
] | 1,624 | 1,624 | 1,624 | NONE | null | I found a bug while training with the code in https://github.com/huggingface/transformers/blob/f74655cd9b2e316af9d862968bc59c15d6849cad/src/transformers/models/detr/modeling_detr.py.
At https://github.com/huggingface/transformers/blob/f74655cd9b2e316af9d862968bc59c15d6849cad/src/transformers/models/detr/feature_extraction_detr.py#L616,
`encoded_inputs["targets"]` should be changed to `encoded_inputs["labels"]`,
because in https://github.com/huggingface/transformers/blob/f74655cd9b2e316af9d862968bc59c15d6849cad/src/transformers/models/detr/modeling_detr.py#L1350 the function uses a variable named `labels`.
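A sketch of how the mismatch surfaces in practice; `image` and `annotations` are placeholders, and the checkpoint name is only an example:
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

encoding = feature_extractor(images=image, annotations=annotations, return_tensors="pt")
print(encoding.keys())  # contains "targets" rather than "labels"
# Unpacking the encoding directly would fail, since forward() has no `targets` argument:
# outputs = model(**encoding)
# Manual workaround until the key is renamed:
outputs = model(pixel_values=encoding["pixel_values"], labels=encoding["targets"])
```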
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12248/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12247 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12247/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12247/comments | https://api.github.com/repos/huggingface/transformers/issues/12247/events | https://github.com/huggingface/transformers/pull/12247 | 924,674,309 | MDExOlB1bGxSZXF1ZXN0NjczMjc2OTg0 | 12,247 | [FlaxBart] few small fixes | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,624 | 1,624 | 1,624 | MEMBER | null | # What does this PR do?
Typos and a few small fixes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12247/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/12247",
"html_url": "https://github.com/huggingface/transformers/pull/12247",
"diff_url": "https://github.com/huggingface/transformers/pull/12247.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/12247.patch",
"merged_at": 1624008582000
} |
https://api.github.com/repos/huggingface/transformers/issues/12246 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12246/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12246/comments | https://api.github.com/repos/huggingface/transformers/issues/12246/events | https://github.com/huggingface/transformers/issues/12246 | 924,650,938 | MDU6SXNzdWU5MjQ2NTA5Mzg= | 12,246 | Different Weights between google-bert (uncased_L-12_H-768_A-12) and Huggingface-bert (bert-base-uncased) | {
"login": "Doragd",
"id": 26213546,
"node_id": "MDQ6VXNlcjI2MjEzNTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/26213546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Doragd",
"html_url": "https://github.com/Doragd",
"followers_url": "https://api.github.com/users/Doragd/followers",
"following_url": "https://api.github.com/users/Doragd/following{/other_user}",
"gists_url": "https://api.github.com/users/Doragd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Doragd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Doragd/subscriptions",
"organizations_url": "https://api.github.com/users/Doragd/orgs",
"repos_url": "https://api.github.com/users/Doragd/repos",
"events_url": "https://api.github.com/users/Doragd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Doragd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"When checking the [model card]() of `google/bert_uncased_L-12_H-768_A-12`, it states the following:\r\n\r\n> Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.\r\n\r\nHence, `google/bert_uncased_L-12_H-768_A-12` is a retrained version of `bert-base-uncased`. So of course, the parameter values will differ.",
"@NielsRogge Is the `google/bert_uncased_L-12_H-768_A-12` equal to `https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-768_A-12.zip`"
] | 1,624 | 1,624 | 1,624 | NONE | null | Environment: torch==1.8.1+cu111 transformers==4.3.3
```python
# google-bert (uncased_L-12_H-768_A-12)
model = BertForPreTraining.from_pretrained('uncased_L-12_H-768_A-12/', from_tf=True)
print(model.bert.embeddings.word_embeddings.weight)
```
```python
# google-bert (uncased_L-12_H-768_A-12) output
Parameter containing:
tensor([[-0.0314, -0.0045, 0.0182, ..., -0.0309, 0.0204, -0.0345],
[-0.0295, -0.0486, 0.0746, ..., -0.0363, 0.0262, -0.0108],
[-0.0328, -0.0582, -0.0149, ..., -0.0932, 0.0444, 0.0221],
...,
[-0.0337, -0.0518, -0.0280, ..., -0.0174, 0.0078, -0.0010],
[-0.0022, -0.0297, -0.0167, ..., -0.0472, -0.0006, 0.0128],
[-0.0631, -0.0144, -0.0232, ..., 0.0072, -0.0704, -0.0479]],
requires_grad=True)
```
```python
# Huggingface-bert (bert-base-uncased)
model = BertModel.from_pretrained('bert-base-uncased')
print(model.embeddings.word_embeddings.weight)
```
```python
# Huggingface-bert (bert-base-uncased) output
Parameter containing:
tensor([[-0.0102, -0.0615, -0.0265, ..., -0.0199, -0.0372, -0.0098],
[-0.0117, -0.0600, -0.0323, ..., -0.0168, -0.0401, -0.0107],
[-0.0198, -0.0627, -0.0326, ..., -0.0165, -0.0420, -0.0032],
...,
[-0.0218, -0.0556, -0.0135, ..., -0.0043, -0.0151, -0.0249],
[-0.0462, -0.0565, -0.0019, ..., 0.0157, -0.0139, -0.0095],
[ 0.0015, -0.0821, -0.0160, ..., -0.0081, -0.0475, 0.0753]],
requires_grad=True)
```
**Why is the output of `embeddings.word_embeddings.weight` different?**
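For anyone who wants to check this programmatically, a quick comparison sketch (variable names are illustrative; each model is loaded as in the snippets above):
```python
import torch

# model_google: BertForPreTraining loaded from the TF checkpoint above
# model_hf:     BertModel.from_pretrained('bert-base-uncased')
w_google = model_google.bert.embeddings.word_embeddings.weight
w_hf = model_hf.embeddings.word_embeddings.weight
print(torch.allclose(w_google, w_hf))  # False, consistent with the checkpoints being separate training runs
```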
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12246/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/12245 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/12245/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/12245/comments | https://api.github.com/repos/huggingface/transformers/issues/12245/events | https://github.com/huggingface/transformers/issues/12245 | 924,599,464 | MDU6SXNzdWU5MjQ1OTk0NjQ= | 12,245 | TFBertForMaskedLM won't reload from saved checkpoint, shape mismatch issue | {
"login": "martingajek",
"id": 26400212,
"node_id": "MDQ6VXNlcjI2NDAwMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/26400212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martingajek",
"html_url": "https://github.com/martingajek",
"followers_url": "https://api.github.com/users/martingajek/followers",
"following_url": "https://api.github.com/users/martingajek/following{/other_user}",
"gists_url": "https://api.github.com/users/martingajek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martingajek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martingajek/subscriptions",
"organizations_url": "https://api.github.com/users/martingajek/orgs",
"repos_url": "https://api.github.com/users/martingajek/repos",
"events_url": "https://api.github.com/users/martingajek/events{/privacy}",
"received_events_url": "https://api.github.com/users/martingajek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is an odd one - can you check if the problem still occurs when you use `model.save_pretrained()`? Just pass a path to that method to save, then load the model using `TFAutoModelForMaskedLM.from_pretrained()` with the same path.",
"Yeah `model.save_pretrained()` followed by `TFAutoModelForMaskedLM.from_pretrained()` works fine ",
"@Rocketknight1 I think that this is due to the fact that the variable for the token_type_embeddings clashes with the position_embeddings for having the same name: if I assign a different name to position_embeddings here: https://github.com/huggingface/transformers/blob/2e5dbdf2db4599a6694d0974575a70f9bc3c978e/src/transformers/models/bert/modeling_tf_bert.py#L164\r\nSay go from \"embeddings\" to \"pos_embeddings\", the issue disappears, it's weird because I would expect the name_scope to take precedence but apparently not. I imagine that if I were to proceed with that name change, that might cause issues during model conversion to the BertModel pytorch implementation.",
"Hey, thank you for that very helpful bit of diagnostic info! That links this with #11202, another issue we have caused by the same underlying problem. This is helpful because I'll probably need to make some breaking changes to fix that issue, and the fact that it's causing multiple downstream problems will increase the urgency there.",
"Cool! Glad that was helpful, thanks for looking into the issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,623 | 1,627 | 1,627 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1-4.7
- Platform: Debian GNU/Linux 10 (buster)
- Python version: 3.9.2
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.5.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Rocketknight1, @LysandreJik, @sgugger
## Information
Model I am using: TFBertForMaskedLM
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details below)
I believe this issue also affects the official TFTrainer implementation
as the checkpoint restore snippet was adapted from it.
## To reproduce
Steps to reproduce the behavior:
1. Generate Masked Batch
2. initialize TF Model and assign CheckpointManager
3. Save model checkpoint
4. initialize new TF Model and assign CheckpointManager
5. restore from checkpoint
```
import numpy as np
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, AutoConfig, TFAutoModelForCausalLM
import tensorflow as tf
random_sentences = ["You'll see the rainbow bridge after it rains cats and dogs.",
"They looked up at the sky and saw a million stars.",
"The bullet pierced the window shattering it before missing Danny's head by mere millimeters.",
"He was willing to find the depths of the rabbit hole in order to be with her."]
tok = AutoTokenizer.from_pretrained('bert-base-uncased')
input_ids = tok.batch_encode_plus(random_sentences,return_tensors='np',padding=True)['input_ids']
#Create masked tokens as labels
labels = np.ones_like(input_ids)*-100
mask = (np.random.uniform(size=input_ids.shape)<=0.2) & (input_ids != 0)
labels[mask]=tok.mask_token_id
batch= {'input_ids':tf.convert_to_tensor(input_ids),
'labels':tf.convert_to_tensor(labels)}
"""## Run model and save checkpoint"""
model = TFAutoModelForMaskedLM.from_pretrained('bert-base-uncased')
checkpoint = tf.train.Checkpoint(model=model)
model.ckpt_manager = tf.train.CheckpointManager(checkpoint, './', max_to_keep=1)
out = model(**batch)
print(out.loss.numpy())
model.ckpt_manager.save()
"""## Re-Initialize from config alone an load existing checkpoint"""
cfg = AutoConfig.from_pretrained('bert-base-uncased')
model2 = TFAutoModelForMaskedLM.from_config(cfg)
checkpoint2 = tf.train.Checkpoint(model=model2)
model2.ckpt_manager = tf.train.CheckpointManager(checkpoint2, './', max_to_keep=1)
latest_ckpt = tf.train.latest_checkpoint('./')
status = checkpoint2.restore(latest_ckpt)
status.assert_existing_objects_matched()
out = model2(**batch)
print(out.loss.numpy())
```
## Expected behavior
Expect to fully restore from checkpoint
## Current Behavior, error output
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-5ec2de12ee44> in <module>()
----> 1 out = model2(**batch)
2 out.loss
19 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in set_shape(self, shape)
1238 raise ValueError(
1239 "Tensor's shape %s is not compatible with supplied shape %s" %
-> 1240 (self.shape, shape))
1241
1242 # Methods not supported / implemented for Eager Tensors.
ValueError: Tensor's shape (512, 768) is not compatible with supplied shape [2, 768]
```
## Link to colab
https://colab.research.google.com/drive/12pwo4WSueOT523hh1INw5J_SLpkK0IgB?usp=sharing
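For reference, the workaround confirmed in the comments, serializing with `save_pretrained` instead of `tf.train.Checkpoint`, can be sketched as follows (reusing `model` and `batch` from the script above):
```python
# Workaround: use the transformers serialization API instead of tf.train.Checkpoint.
model.save_pretrained("./mlm_checkpoint")
model2 = TFAutoModelForMaskedLM.from_pretrained("./mlm_checkpoint")
out = model2(**batch)
print(out.loss.numpy())  # restores correctly, no shape mismatch
```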
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/12245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/12245/timeline | completed | null | null |