url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/8625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8625/comments | https://api.github.com/repos/huggingface/transformers/issues/8625/events | https://github.com/huggingface/transformers/pull/8625 | 745,855,893 | MDExOlB1bGxSZXF1ZXN0NTIzMzQ3MzY5 | 8,625 | Model Card for abhilash1910/financial_roberta | {
"login": "abhilash1910",
"id": 30946547,
"node_id": "MDQ6VXNlcjMwOTQ2NTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/30946547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhilash1910",
"html_url": "https://github.com/abhilash1910",
"followers_url": "https://api.github.com/users/abhilash1910/followers",
"following_url": "https://api.github.com/users/abhilash1910/following{/other_user}",
"gists_url": "https://api.github.com/users/abhilash1910/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhilash1910/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhilash1910/subscriptions",
"organizations_url": "https://api.github.com/users/abhilash1910/orgs",
"repos_url": "https://api.github.com/users/abhilash1910/repos",
"events_url": "https://api.github.com/users/abhilash1910/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhilash1910/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8625/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8625",
"html_url": "https://github.com/huggingface/transformers/pull/8625",
"diff_url": "https://github.com/huggingface/transformers/pull/8625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8625.patch",
"merged_at": 1605723748000
} |
https://api.github.com/repos/huggingface/transformers/issues/8624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8624/comments | https://api.github.com/repos/huggingface/transformers/issues/8624/events | https://github.com/huggingface/transformers/pull/8624 | 745,820,816 | MDExOlB1bGxSZXF1ZXN0NTIzMzE3NjQx | 8,624 | Fixes the training resuming with gradient accumulation | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
As #5605 pointed out, there was a mistake in the way the number of steps to skip was computed when the `Trainer` wants to resume training from a checkpoint with gradient accumulation activated. This PR fixes that and adds more tests. More specifically:
1. It tests a regular gradient accumulation training (wasn't done before) and checks it gives the same results as the same training with the batch size multiplied by the number of gradient accumulation steps.
2. It adds a test of a training resuming with gradient accumulation (which fails on current master)
3. It fixes master so that the test in 2 passes.
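As a rough sketch of the equivalence point 1 checks (the harness name and values below are illustrative assumptions, not the actual test code):
```
from transformers import TrainingArguments

plain = TrainingArguments(output_dir="run_plain", per_device_train_batch_size=8)
accum = TrainingArguments(
    output_dir="run_accum",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # 2 * 4 = 8, the same effective batch size
)
# train_and_get_weights(plain) and train_and_get_weights(accum) -- an assumed
# helper -- should produce matching weights, and resuming the accumulation run
# from a checkpoint should now match as well.
```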
Fixes #5605 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8624/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8624",
"html_url": "https://github.com/huggingface/transformers/pull/8624",
"diff_url": "https://github.com/huggingface/transformers/pull/8624.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8624.patch",
"merged_at": 1605718812000
} |
https://api.github.com/repos/huggingface/transformers/issues/8623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8623/comments | https://api.github.com/repos/huggingface/transformers/issues/8623/events | https://github.com/huggingface/transformers/pull/8623 | 745,801,537 | MDExOlB1bGxSZXF1ZXN0NTIzMzAxNTYx | 8,623 | Fix training from scratch in new scripts | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
This PR fixes a test in the new example scripts when the model can be trained from scratch and the `model_name_or_path` argument can be None. It also updates the template accordingly.
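For context, a hedged sketch of the from-scratch branch this makes possible (the config source below is an illustrative assumption, not the scripts' actual default):
```
from transformers import AutoConfig, AutoModelForMaskedLM

model_name_or_path = None  # training from scratch
if model_name_or_path is not None:
    model = AutoModelForMaskedLM.from_pretrained(model_name_or_path)
else:
    config = AutoConfig.from_pretrained("bert-base-uncased")  # illustrative
    model = AutoModelForMaskedLM.from_config(config)  # fresh random weights
```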
Fixes #8590 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8623/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8623",
"html_url": "https://github.com/huggingface/transformers/pull/8623",
"diff_url": "https://github.com/huggingface/transformers/pull/8623.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8623.patch",
"merged_at": 1605719726000
} |
https://api.github.com/repos/huggingface/transformers/issues/8622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8622/comments | https://api.github.com/repos/huggingface/transformers/issues/8622/events | https://github.com/huggingface/transformers/pull/8622 | 745,747,624 | MDExOlB1bGxSZXF1ZXN0NTIzMjU2Mzc3 | 8,622 | [Tokenizer Doc] Improve tokenizer summary | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR improves the tokenizer docs. This includes more consistent use of terminology, arguably better phrasing, correction of spelling, and more consistent formatting.
Terminology: tried to make the distinction between "symbol", "character", "word", and "subword" clearer.
Consistency: Use `"` for token notation and replace `this paper <...>`__ by `<paper_name (author, year)>`, rename section to "Summary of the tokenizers"
I want to spend some time on the tokenizers library in Rust in the next couple of weeks and was reading through this summary for a start. It's great! I thought I could improve the wording and explanations a bit while reading through it, and pass it through Grammarly.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8622/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8622",
"html_url": "https://github.com/huggingface/transformers/pull/8622",
"diff_url": "https://github.com/huggingface/transformers/pull/8622.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8622.patch",
"merged_at": 1605716055000
} |
https://api.github.com/repos/huggingface/transformers/issues/8621 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8621/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8621/comments | https://api.github.com/repos/huggingface/transformers/issues/8621/events | https://github.com/huggingface/transformers/pull/8621 | 745,725,596 | MDExOlB1bGxSZXF1ZXN0NTIzMjM4NTM2 | 8,621 | Fix DataCollatorForLanguageModeling | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
A clone was removed by mistake, this PR adds it back.
Fixes #8619
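As a minimal sketch of the restored behavior (0 stands in for the pad token id):
```
import torch

input_ids = torch.tensor([[5, 6, 0, 0]])
labels = input_ids.clone()   # the clone this PR adds back
labels[labels == 0] = -100   # mask padding in the loss
print(input_ids)             # unchanged: tensor([[5, 6, 0, 0]])
```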
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8621/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8621",
"html_url": "https://github.com/huggingface/transformers/pull/8621",
"diff_url": "https://github.com/huggingface/transformers/pull/8621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8621.patch",
"merged_at": 1605711771000
} |
https://api.github.com/repos/huggingface/transformers/issues/8620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8620/comments | https://api.github.com/repos/huggingface/transformers/issues/8620/events | https://github.com/huggingface/transformers/pull/8620 | 745,722,504 | MDExOlB1bGxSZXF1ZXN0NTIzMjM2MDc5 | 8,620 | Create model_cards for Chinese Couplet and Poem GPT2 models | {
"login": "hhou435",
"id": 59219579,
"node_id": "MDQ6VXNlcjU5MjE5NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/59219579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hhou435",
"html_url": "https://github.com/hhou435",
"followers_url": "https://api.github.com/users/hhou435/followers",
"following_url": "https://api.github.com/users/hhou435/following{/other_user}",
"gists_url": "https://api.github.com/users/hhou435/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hhou435/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hhou435/subscriptions",
"organizations_url": "https://api.github.com/users/hhou435/orgs",
"repos_url": "https://api.github.com/users/hhou435/repos",
"events_url": "https://api.github.com/users/hhou435/events{/privacy}",
"received_events_url": "https://api.github.com/users/hhou435/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Really cool, and great that you inputed custom widget inputs. Merging.",
"cc'ing @JetRunner for info"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8620/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8620/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8620",
"html_url": "https://github.com/huggingface/transformers/pull/8620",
"diff_url": "https://github.com/huggingface/transformers/pull/8620.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8620.patch",
"merged_at": 1605722790000
} |
https://api.github.com/repos/huggingface/transformers/issues/8619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8619/comments | https://api.github.com/repos/huggingface/transformers/issues/8619/events | https://github.com/huggingface/transformers/issues/8619 | 745,702,454 | MDU6SXNzdWU3NDU3MDI0NTQ= | 8,619 | `DataCollatorForLanguageModeling` modifies `input_ids` via `labels` variable | {
"login": "sveitser",
"id": 1040871,
"node_id": "MDQ6VXNlcjEwNDA4NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1040871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sveitser",
"html_url": "https://github.com/sveitser",
"followers_url": "https://api.github.com/users/sveitser/followers",
"following_url": "https://api.github.com/users/sveitser/following{/other_user}",
"gists_url": "https://api.github.com/users/sveitser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sveitser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sveitser/subscriptions",
"organizations_url": "https://api.github.com/users/sveitser/orgs",
"repos_url": "https://api.github.com/users/sveitser/repos",
"events_url": "https://api.github.com/users/sveitser/events{/privacy}",
"received_events_url": "https://api.github.com/users/sveitser/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ah yes, only the detach was supposed to be removed but I guess I went a bit too far with my mouse, sorry about that. Will fix right now, thanks for flagging!"
] | 1,605 | 1,605 | 1,605 | NONE | null | The cloning step was removed in https://github.com/huggingface/transformers/pull/8308 at https://github.com/huggingface/transformers/pull/8308/files#diff-046566f2b40a246c7d533457cd7f6f07830516da845b904086f36b3cfe0d5965L201 so now the code that sets padded labels to `-100` is operating on the `input_ids` tensor directly.
I suspect the code then fails when trying to look up the embedding for `-100`.
cc @sgugger
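A minimal sketch of the aliasing (tensor values are illustrative):
```
import torch

input_ids = torch.tensor([[5, 6, 0, 0]])  # 0 stands in for pad_token_id
labels = input_ids                        # missing .clone(): labels aliases input_ids
labels[labels == 0] = -100                # also rewrites input_ids in place
print(input_ids)                          # tensor([[5, 6, -100, -100]]) -> invalid embedding index
```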
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: Linux-5.4.72-x86_64-with
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Use `DataCollatorForLanguageModeling` with `Trainer` and a tokenizer with `pad_token`
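A hedged sketch of that setup (model and dataset omitted; the checkpoint name is an illustrative assumption):
```
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # a tokenizer with a pad_token
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
# Passing data_collator=collator to Trainer and calling trainer.train()
# produces the traceback below once -100 reaches the embedding lookup.
```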
```
File "/home/lulu/r/buganart/dialog/.build/pip_packages/bin/finetune", line 33, in <module>
sys.exit(load_entry_point('dialog', 'console_scripts', 'finetune')())
File "/home/lulu/r/buganart/dialog/dialog/finetune.py", line 139, in main
trainer.train()
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/trainer.py", line 775, in train
tr_loss += self.training_step(model, inputs)
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/trainer.py", line 1112, in training_step
loss = self.compute_loss(model, inputs)
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/trainer.py", line 1136, in compute_loss
outputs = model(**inputs)
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 774, in forward
transformer_outputs = self.transformer(
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 612, in forward
inputs_embeds = self.wte(input_ids)
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 124, in forward
return F.embedding(
File "/nix/store/0jdyxgmg88y6sbjm3xkqdn06f493ahf2-python3-3.8.6-env/lib/python3.8/site-packages/torch/nn/functional.py", line 1852, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
My script is here https://github.com/buganart/dialog/blob/master/dialog/finetune.py .
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8619/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8618/comments | https://api.github.com/repos/huggingface/transformers/issues/8618/events | https://github.com/huggingface/transformers/issues/8618 | 745,692,452 | MDU6SXNzdWU3NDU2OTI0NTI= | 8,618 | seq2seq_trainer optimization issue on TPU | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm not sure your first stacktrace shows an actual error? Just deprecation warnings.\r\n\r\nAlso, without your code we have no way of understanding what might have happened here.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"Hello, I recently got a very similar problem when trying to implement (manually) a self-attention module for images on a model which is trained using adafactor. I'm also using PyTorch lightning, but I don't think that makes a difference, since I tried with the \"default\" optimizers coming with torch.optim (RMSProp, Adam), and they work. This means that the problem might possibly be caused by the adafactor implementation. I'm running the latest, stable version of 'transformers' as of now (4.9.1). Here are the detailed logs:\r\n```\r\nFile \"train.py\", line 125, in <module>\r\n run(args)\r\n File \"train.py\", line 90, in run\r\n trainer.fit(model, train_dataloader=train_loader, val_dataloaders=test_loader)\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 473, in fit\r\n results = self.accelerator_backend.train()\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 66, in train\r\n results = self.train_or_test()\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 69, in train_or_test\r\n results = self.trainer.train()\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 524, in train\r\n self.train_loop.run_training_epoch()\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py\", line 572, in run_training_epoch\r\n batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py\", line 730, in run_training_batch\r\n self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py\", line 505, in optimizer_step\r\n model_ref.optimizer_step(\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py\", line 1261, in optimizer_step\r\n optimizer.step(closure=optimizer_closure)\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py\", line 286, in step\r\n self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py\", line 144, in __optimizer_step\r\n optimizer.step(closure=closure, *args, **kwargs)\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/torch/optim/lr_scheduler.py\", line 67, in wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/transformers/optimization.py\", line 576, in step\r\n update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)\r\n File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/transformers/optimization.py\", line 507, in _approx_sq_grad\r\n return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0))\r\nRuntimeError: tensors must be 2-D\r\n```",
"> \r\n> \r\n> Hello, I recently got a very similar problem when trying to implement (manually) a self-attention module for images on a model which is trained using adafactor. I'm also using PyTorch lightning, but I don't think that makes a difference, since I tried with the \"default\" optimizers coming with torch.optim (RMSProp, Adam), and they work. This means that the problem might possibly be caused by the adafactor implementation. I'm running the latest, stable version of 'transformers' as of now (4.9.1). Here are the detailed logs:\r\n> \r\n> ```\r\n> File \"train.py\", line 125, in <module>\r\n> run(args)\r\n> File \"train.py\", line 90, in run\r\n> trainer.fit(model, train_dataloader=train_loader, val_dataloaders=test_loader)\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 473, in fit\r\n> results = self.accelerator_backend.train()\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 66, in train\r\n> results = self.train_or_test()\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 69, in train_or_test\r\n> results = self.trainer.train()\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 524, in train\r\n> self.train_loop.run_training_epoch()\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py\", line 572, in run_training_epoch\r\n> batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py\", line 730, in run_training_batch\r\n> self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py\", line 505, in optimizer_step\r\n> model_ref.optimizer_step(\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py\", line 1261, in optimizer_step\r\n> optimizer.step(closure=optimizer_closure)\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py\", line 286, in step\r\n> self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py\", line 144, in __optimizer_step\r\n> optimizer.step(closure=closure, *args, **kwargs)\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/torch/optim/lr_scheduler.py\", line 67, in wrapper\r\n> return wrapped(*args, **kwargs)\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/transformers/optimization.py\", line 576, in step\r\n> update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)\r\n> File \"/home/gskenderi/anaconda3/envs/geri/lib/python3.8/site-packages/transformers/optimization.py\", line 507, in _approx_sq_grad\r\n> return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0))\r\n> RuntimeError: tensors must be 2-D\r\n> ```\r\n\r\nUpon further inspection in these last few minutes, it seems that the Adafactor optimizer has difficulties optimizing PyTorch's nn.Conv* layers. 
If I try to use some Conv1d or to fine-tune a Resnet model, I get the error indicated above, but if not, my model works fine. In both these cases, the only thing that has changed is the task of updating the weights of some Convolutional layers. I urge you to take a look at this, as it is quite bizarre. Since I did not mention this above, this is the optimizer I'm currently trying to use:\r\n```\r\noptimizer = Adafactor(self.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)\r\nlr_scheduler = AdafactorSchedule(optimizer)\r\n```\r\n"
] | 1,605 | 1,628 | 1,614 | NONE | null | Hi
I am running seq2seq_trainer.py model on TPU v3-8 instance with pytorch xla 1.7, using adafactor, here are the logs, could you please assist? thanks
```
/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
0%| | 1/19380 [00:02<13:07:23, 2.44s/it]/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
{'loss': 10383.4560546875, 'learning_rate': 6e-07, 'epoch': 0.0010319917440660474}
```
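For reference, a sketch of the replacement signature the deprecation warning suggests (tensor shapes are illustrative):
```
import torch

exp_avg_sq_row = torch.zeros(4)
update = torch.rand(4, 4)
beta2t = 0.999
# deprecated: exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
exp_avg_sq_row.mul_(beta2t).add_(update.mean(dim=-1), alpha=1.0 - beta2t)
```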
On GPU, I also cannot use Adafactor; it fails with this error:
```
/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/optimization.py:506: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1))
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 223, in <module>
main()
File "finetune_t5_trainer.py", line 159, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/trainer.py", line 797, in train
self.optimizer.step()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/optimization.py", line 510, in step
update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/optimization.py", line 441, in _approx_sq_grad
return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0))
RuntimeError: tensors must be 2-D
```
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8618/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8617/comments | https://api.github.com/repos/huggingface/transformers/issues/8617/events | https://github.com/huggingface/transformers/pull/8617 | 745,572,071 | MDExOlB1bGxSZXF1ZXN0NTIzMTEyMTY1 | 8,617 | Add cards for all Geotrend models | {
"login": "amineabdaoui",
"id": 17952908,
"node_id": "MDQ6VXNlcjE3OTUyOTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/17952908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amineabdaoui",
"html_url": "https://github.com/amineabdaoui",
"followers_url": "https://api.github.com/users/amineabdaoui/followers",
"following_url": "https://api.github.com/users/amineabdaoui/following{/other_user}",
"gists_url": "https://api.github.com/users/amineabdaoui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amineabdaoui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amineabdaoui/subscriptions",
"organizations_url": "https://api.github.com/users/amineabdaoui/orgs",
"repos_url": "https://api.github.com/users/amineabdaoui/repos",
"events_url": "https://api.github.com/users/amineabdaoui/events{/privacy}",
"received_events_url": "https://api.github.com/users/amineabdaoui/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"Awesome, thank you!"
] | 1,605 | 1,605 | 1,605 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds model card
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8617/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8617",
"html_url": "https://github.com/huggingface/transformers/pull/8617",
"diff_url": "https://github.com/huggingface/transformers/pull/8617.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8617.patch",
"merged_at": 1605779245000
} |
https://api.github.com/repos/huggingface/transformers/issues/8616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8616/comments | https://api.github.com/repos/huggingface/transformers/issues/8616/events | https://github.com/huggingface/transformers/pull/8616 | 745,541,886 | MDExOlB1bGxSZXF1ZXN0NTIzMDg3MzI5 | 8,616 | Add pip install update to resolve import error in transformers notebook | {
"login": "jessicayung",
"id": 11069586,
"node_id": "MDQ6VXNlcjExMDY5NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/11069586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessicayung",
"html_url": "https://github.com/jessicayung",
"followers_url": "https://api.github.com/users/jessicayung/followers",
"following_url": "https://api.github.com/users/jessicayung/following{/other_user}",
"gists_url": "https://api.github.com/users/jessicayung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessicayung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessicayung/subscriptions",
"organizations_url": "https://api.github.com/users/jessicayung/orgs",
"repos_url": "https://api.github.com/users/jessicayung/repos",
"events_url": "https://api.github.com/users/jessicayung/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessicayung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You are right, thanks!"
] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | Add pip install upgrade tensorflow-gpu to remove error below:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-094fadb93f3f> in <module>()
1 import torch
----> 2 from transformers import AutoModel, AutoTokenizer, BertTokenizer
3
4 torch.set_grad_enabled(False)
4 frames
/usr/local/lib/python3.6/dist-packages/transformers/__init__.py in <module>()
133
134 # Pipelines
--> 135 from .pipelines import (
136 Conversation,
137 ConversationalPipeline,
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <module>()
46 import tensorflow as tf
47
---> 48 from .modeling_tf_auto import (
49 TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING,
50 TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py in <module>()
49 from .configuration_utils import PretrainedConfig
50 from .file_utils import add_start_docstrings
---> 51 from .modeling_tf_albert import (
52 TFAlbertForMaskedLM,
53 TFAlbertForMultipleChoice,
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_albert.py in <module>()
22 import tensorflow as tf
23
---> 24 from .activations_tf import get_tf_activation
25 from .configuration_albert import AlbertConfig
26 from .file_utils import (
/usr/local/lib/python3.6/dist-packages/transformers/activations_tf.py in <module>()
52 "gelu": tf.keras.layers.Activation(gelu),
53 "relu": tf.keras.activations.relu,
---> 54 "swish": tf.keras.activations.swish,
55 "silu": tf.keras.activations.swish,
56 "gelu_new": tf.keras.layers.Activation(gelu_new),
AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'
```
I have tried running the Colab after this change and it seems to work fine (all the cells run with no errors).
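For clarity, the fix amounts to one extra cell at the very top of the notebook, run before anything imports `transformers` (a sketch; the assumption, based on the traceback, is that the preinstalled TF predates `tf.keras.activations.swish`):

```python
# First notebook cell -- upgrade TF, then restart the runtime before continuing:
!pip install --upgrade tensorflow-gpu

import tensorflow as tf
# transformers' TF activation table needs this attribute to exist:
assert hasattr(tf.keras.activations, "swish"), tf.__version__
```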
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8616/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8616",
"html_url": "https://github.com/huggingface/transformers/pull/8616",
"diff_url": "https://github.com/huggingface/transformers/pull/8616.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8616.patch",
"merged_at": 1606143532000
} |
https://api.github.com/repos/huggingface/transformers/issues/8615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8615/comments | https://api.github.com/repos/huggingface/transformers/issues/8615/events | https://github.com/huggingface/transformers/issues/8615 | 745,479,020 | MDU6SXNzdWU3NDU0NzkwMjA= | 8,615 | Batch Size error | {
"login": "bhavaygg",
"id": 43617111,
"node_id": "MDQ6VXNlcjQzNjE3MTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/43617111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavaygg",
"html_url": "https://github.com/bhavaygg",
"followers_url": "https://api.github.com/users/bhavaygg/followers",
"following_url": "https://api.github.com/users/bhavaygg/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavaygg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavaygg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavaygg/subscriptions",
"organizations_url": "https://api.github.com/users/bhavaygg/orgs",
"repos_url": "https://api.github.com/users/bhavaygg/repos",
"events_url": "https://api.github.com/users/bhavaygg/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavaygg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It would be great to have all the information relative to your environment, as asked in the template, as well as the full error stack trace so that we may help you.",
"```\r\ntrainer.train()\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py\", line 499, in train\r\n tr_loss += self._training_step(model, inputs, optimizer)\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py\", line 622, in _training_step\r\n outputs = model(**inputs)\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/modeling_roberta.py\", line 357, in forward\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/loss.py\", line 932, in forward\r\n ignore_index=self.ignore_index, reduction=self.reduction)\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/functional.py\", line 2317, in cross_entropy\r\n return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/functional.py\", line 2113, in nll_loss\r\n .format(input.size(0), target.size(0)))\r\nValueError: Expected input batch_size (64) to match target batch_size (8192).\r\n```\r\n\r\nFor my environment I had `pytorch == 1.6` on a Linux based system but while trying to solve this i have mixed up alot of my packages.",
"@LysandreJik the issue is still there in `pytorch== 1.7.0`",
"Could you please show us the result of running `transformers-cli env`?",
"@LysandreJik \r\n```\r\n- `transformers` version: 3.0.2\r\n- Platform: Linux-3.10.0-693.5.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core\r\n- Python version: 3.7.9\r\n- PyTorch version (GPU?): 1.7.0 (False)\r\n- Tensorflow version (GPU?): 2.3.0 (False)\r\n- Using GPU in script?:yes\r\n- Using distributed or parallel set-up in script?: <\r\n```",
"Maybe @sgugger has an idea of what could be going on.",
"You're using `DataCollatorForLanguageModeling`, with a model for sequence classification. It can't work as `DataCollatorForLanguageModeling` prepares the labels for language modeling by duplicating the inputs, whereas you should have one label per sentence in your batch.",
"@sgugger Thanks for the help. I tried to use `DataCollatorForTokenClassification` but that throws\r\n```\r\n]Traceback (most recent call last):\r\n File \"m1.py\", line 122, in <module>\r\n trainer.train()\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py\", line 747, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py\", line 1075, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py\", line 1105, in compute_loss\r\n return outputs[\"loss\"] if isinstance(outputs, dict) else outputs[0]\r\n File \"/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/file_utils.py\", line 1338, in __getitem__\r\n return inner_dict[k]\r\nKeyError: 'loss'\r\n```",
"You're doing sequence classification, not token classification. Also `LineByLineDataset` can't be used as it doesn't deal with labels. For doing text classification, you should look at the [run_glue example](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py).\r\n\r\nAlso, we don't use issues to debug user's code, so please switch to the [forum](https://discuss.huggingface.co/) where there is a bigger community of people that will be able to help."
] | 1,605 | 1,605 | 1,605 | NONE | null | Hello,
I wanted to use RoBERTa for sentence classification on protein sequences, which I have converted into sentences. So first I train a tokenizer for my custom vocabulary:
```python
import torch
from tokenizers.implementations import CharBPETokenizer
from tokenizers.processors import BertProcessing
from transformers import RobertaConfig, RobertaTokenizerFast

# Train a BPE tokenizer on the protein "sentences"
tokenizer = CharBPETokenizer()
tokenizer.train(
    "bert_vocab.txt",
    vocab_size=8000,
    special_tokens=["[CLS]", "[SEP]", "[UNK]", "[MASK]"],
)
tokenizer.save_model("EsperBERTo")

# Add BERT-style [CLS] ... [SEP] post-processing
tokenizer._tokenizer.post_processor = BertProcessing(
    ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ("[CLS]", tokenizer.token_to_id("[CLS]")),
)
print(tokenizer.encode("AAA ATA AKA"))
print(tokenizer.encode("AAA ATA AKA").tokens)
tokenizer.enable_truncation(max_length=512)

print(torch.cuda.is_available())

config = RobertaConfig(
    vocab_size=8000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)

# Reload the trained tokenizer as a fast Roberta tokenizer
tokenizer = RobertaTokenizerFast.from_pretrained("./EsperBERTo", max_len=512)
```
When I try to train the model, I get the error `ValueError: Expected input batch_size (64) to match target batch_size (8192).`
```python
from transformers import (
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
    RobertaForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = RobertaForSequenceClassification(config=config)
print(model.num_parameters())

dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="bert_data.txt",
    block_size=64,
)
# Note: this collator builds masked-LM labels, not classification labels
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer)

training_args = TrainingArguments(
    output_dir="./EsperBERTo",
    overwrite_output_dir=True,
    num_train_epochs=100,
    per_device_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
    prediction_loss_only=True,
)
trainer.train()
trainer.save_model("./EsperBERTo")
```
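As diagnosed in the comments on this issue, `DataCollatorForLanguageModeling` duplicates the inputs as labels for masked-LM training, so the classification head ends up with one target per token instead of one per sequence, which is where the batch-size mismatch comes from. A minimal sketch of supplying one label per sequence instead; `texts` and `labels` here are hypothetical placeholders, not from the original script:

```python
import torch

texts = ["AAA ATA AKA", "AKA ATA AAA"]  # one protein "sentence" per example
labels = [0, 1]                         # one class label per sentence

enc = tokenizer(texts, truncation=True, padding=True, max_length=64)

class ProteinDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

train_dataset = ProteinDataset(enc, labels)  # pass to Trainer without the MLM collator
```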
I understand I do not specify the output labels anywhere in the code, but I have yet to find an example I could follow to figure this out. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8615/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8614/comments | https://api.github.com/repos/huggingface/transformers/issues/8614/events | https://github.com/huggingface/transformers/issues/8614 | 745,413,717 | MDU6SXNzdWU3NDU0MTM3MTc= | 8,614 | ValueError while running run_glue.py with xlnet model. | {
"login": "wuaibo",
"id": 43171029,
"node_id": "MDQ6VXNlcjQzMTcxMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/43171029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wuaibo",
"html_url": "https://github.com/wuaibo",
"followers_url": "https://api.github.com/users/wuaibo/followers",
"following_url": "https://api.github.com/users/wuaibo/following{/other_user}",
"gists_url": "https://api.github.com/users/wuaibo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wuaibo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wuaibo/subscriptions",
"organizations_url": "https://api.github.com/users/wuaibo/orgs",
"repos_url": "https://api.github.com/users/wuaibo/repos",
"events_url": "https://api.github.com/users/wuaibo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wuaibo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This is a duplicate of #7584 I think. Some workarounds are mentioned in that issue and a fix is on its way.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | NONE | null | ## Environment info
+ transformers version: 3.5.1
+ Platform: Linux version 3.10.107-1-tlinux2-0050
+ Python version: 3.7.6
+ PyTorch version (GPU?): 1.6.0+cu101
+ Tensorflow version (GPU?): no
+ Using GPU in script?: yes
+ Using distributed or parallel set-up in script?: yes
## Who can help
@sgugger
## To reproduce
```shell
CUDA_VISIBLE_DEVICES=1,2 python run_glue.py \
--model_name_or_path xlnet-base-cased \
--task_name stsb \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 8 \
--max_steps 1200 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir ./output/tranformer/xlnet \
--cache_dir ./pretained_model/xlnet \
--overwrite_output_dir \
--overwrite_cache \
--eval_accumulation_steps 2 \
--gradient_accumulation_steps 1 \
--disable_tqdm True\
--dataloader_drop_last \
--past_index 2
```
## Error
```shell
[INFO|trainer.py:1387] 2020-11-18 15:21:21,084 >> ***** Running Evaluation *****
[INFO|trainer.py:1388] 2020-11-18 15:21:21,084 >> Num examples = 4000
[INFO|trainer.py:1389] 2020-11-18 15:21:21,085 >> Batch size = 16
./sim/lib/python3.7/site-packages/transformers/modeling_xlnet.py:297: UserWarning: Mixed memory format inputs detected while calling the operator. The operator will output contiguous tensor even if some of the inputs are in channels_last format. (Triggered internally at /pytorch/aten/src/ATen/native/TensorIterator.cpp:918.)
attn_score = (ac + bd + ef) * self.scale
./sim/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
Traceback (most recent call last):
File "run_glue.py", line 414, in <module>
main()
File "run_glue.py", line 366, in main
eval_result = trainer.evaluate(eval_dataset=eval_dataset)
File "./sim/lib/python3.7/site-packages/transformers/trainer.py", line 1313, in evaluate
prediction_loss_only=True if self.compute_metrics is None else None,
File "./sim/lib/python3.7/site-packages/transformers/trainer.py", line 1431, in prediction_loop
preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds"))
File "./sim/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 330, in add_arrays
slice_len = self._nested_set_tensors(self._storage, arrays)
File "./sim/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 337, in _nested_set_tensors
slice_len = self._nested_set_tensors(x, y)
File "./sim/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 337, in _nested_set_tensors
slice_len = self._nested_set_tensors(x, y)
File "./sim/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 349, in _nested_set_tensors
i * slice_len : (i + 1) * slice_len
ValueError: could not broadcast input array from shape (512,8,768) into shape (416,8,768)
```
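For illustration, the failing operation reduces to NumPy refusing to write a longer gathered chunk into the preallocated evaluation buffer; a toy reproduction using the shapes from the traceback:

```python
import numpy as np

storage = np.zeros((416, 8, 768))  # preallocated eval storage (shape from the traceback)
chunk = np.zeros((512, 8, 768))    # a gathered per-step tensor with a longer first dim
storage[0:512] = chunk             # ValueError: could not broadcast input array
                                   # from shape (512,8,768) into shape (416,8,768)
```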
Could you please help me? Thanks a lot! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8614/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8613/comments | https://api.github.com/repos/huggingface/transformers/issues/8613/events | https://github.com/huggingface/transformers/pull/8613 | 745,396,420 | MDExOlB1bGxSZXF1ZXN0NTIyOTY4ODg0 | 8,613 | [s2s] multigpu skip | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi\r\nI am getting this issue during eval on multiple gpus, is there a temporary fix I could run the codes on multiple gpus? thanks ",
"Fixed in https://github.com/huggingface/transformers/pull/8716",
"Hi\ncould you tell me in which version it is fixed?\nthanks\nRabeeh\n\nOn Sun, Nov 22, 2020 at 8:45 PM Stas Bekman <[email protected]>\nwrote:\n\n> Fixed in #8716 <https://github.com/huggingface/transformers/pull/8716>\n>\n> β\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8613#issuecomment-731844378>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCFX4OKJX7X3LFZJQFLSRFZ5DANCNFSM4TZSIEOQ>\n> .\n>\n",
"I just made a PR, so until it's accepted you need to apply the change yourself or use the PR branch - it's a 2-line change:\r\nhttps://github.com/huggingface/transformers/pull/8716/files",
"Hi\nthis does not resolve the issue, could you please have a look at my\nresponse here\n\nhttps://github.com/huggingface/transformers/issues/7146\n\nthanks\nRabeeh\n\nOn Sun, Nov 22, 2020 at 8:48 PM Stas Bekman <[email protected]>\nwrote:\n\n> I just made a PR, so until it's accepted you need to apply the change\n> yourself or use the PR branch - it's a 2-line change:\n> https://github.com/huggingface/transformers/pull/8716/files\n>\n> β\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8613#issuecomment-731844857>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGZT4AE7ABEX5UGPTSRF2I7ANCNFSM4TZSIEOQ>\n> .\n>\n",
"That's a totally different issue. Please kindly file a new issue about it.\r\n\r\nAlso when you link to a comment, please click on the [...] in the right upper corner of the comment and get the link to that comment. Otherwise you're linking to the whole PR/issue and there is no telling what you're talking about. Hope this makes sense.\r\n\r\nAlso please use code formatting for backtraces as \"code\" using the menu. \r\n\r\nFinally, you need to fully follow the Issue template and provide full information on how the issue can be reproduced. Giving just the backtrace most of the time doesn't help the developer to know how to reproduce the problem and thus solve it.",
"Hi Stas\nthank you, sure\nBest\nRabeeh\n\nOn Sun, Nov 22, 2020 at 9:13 PM Stas Bekman <[email protected]>\nwrote:\n\n> That's a totally different issue. Please kindly file a new issue about it.\n>\n> Also when you link to a comment, please click on the [...] in the right\n> upper corner of the comment and get the link to that comment. Otherwise\n> you're linking to the whole PR/issue and there is no telling what you're\n> talking about. Hope this makes sense.\n>\n> Also please use code formatting for backtraces as \"code\" using the menu.\n>\n> Finally, you need to fully follow the Issue template and provide full\n> information on how the issue can be reproduced. Giving just the backtrace\n> most of the time doesn't help the developer to know how to reproduce the\n> problem and thus solve it.\n>\n> β\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8613#issuecomment-731848044>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGAOXRH7KU6RL3TIDDSRF5ILANCNFSM4TZSIEOQ>\n> .\n>\n"
] | 1,605 | 1,606 | 1,605 | CONTRIBUTOR | null | I have a hard time remembering whether this test ever worked on multi-GPU; it currently fails there, but works on a single GPU. So I'm putting a band-aid `require_torch_non_multi_gpu_but_fix_me` skip on it for now, unless someone wants to work on it. For some reason I thought it was fine; I think it used to be fine.
The error:
```
CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pyt test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_bert2bert
...
test_finetune_trainer.py:159:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../src/transformers/trainer.py:774: in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
../../src/transformers/trainer.py:838: in _maybe_log_save_evaluate
metrics = self.evaluate()
../../src/transformers/trainer.py:1241: in evaluate
output = self.prediction_loop(
../../src/transformers/trainer.py:1343: in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only)
seq2seq_trainer.py:188: in prediction_step
generated_tokens = model.generate(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DataParallel(
(module): EncoderDecoderModel(
(encoder): BertModel(
(embeddings): BertEmbeddings(
(...)
)
(decoder): Linear(in_features=128, out_features=30522, bias=True)
)
)
)
)
)
name = 'generate'
def __getattr__(self, name: str) -> Union[Tensor, 'Module']:
if '_parameters' in self.__dict__:
_parameters = self.__dict__['_parameters']
if name in _parameters:
return _parameters[name]
if '_buffers' in self.__dict__:
_buffers = self.__dict__['_buffers']
if name in _buffers:
return _buffers[name]
if '_modules' in self.__dict__:
modules = self.__dict__['_modules']
if name in modules:
return modules[name]
> raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
type(self).__name__, name))
E torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'generate'
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:795: ModuleAttributeError
```
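For reference, the usual unwrapping pattern that sidesteps this, with `model` and `inputs` as in the failing `prediction_step` above (a sketch only; per the comments, the actual two-line fix landed via #8716):

```python
import torch

# DataParallel only forwards __call__/forward; custom methods such as
# .generate() live on the wrapped module.
gen_model = model.module if isinstance(model, torch.nn.DataParallel) else model
generated_tokens = gen_model.generate(
    inputs["input_ids"], attention_mask=inputs["attention_mask"]
)
```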
@patrickvonplaten, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8613/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8613",
"html_url": "https://github.com/huggingface/transformers/pull/8613",
"diff_url": "https://github.com/huggingface/transformers/pull/8613.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8613.patch",
"merged_at": 1605712554000
} |
https://api.github.com/repos/huggingface/transformers/issues/8612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8612/comments | https://api.github.com/repos/huggingface/transformers/issues/8612/events | https://github.com/huggingface/transformers/pull/8612 | 745,378,791 | MDExOlB1bGxSZXF1ZXN0NTIyOTUzNTI5 | 8,612 | [s2s] fix finetune.py to adjust for #8530 changes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger if you want to see why `.logits` isn't working you can try:\r\n```\r\ncd examples/seq2seq\r\nPYTHONPATH=\"../../src\" CUDA_VISIBLE_DEVICES=0 python finetune.py --learning_rate 3e-5 --gpus 1 --do_train --val_check_interval 1 --num_train_epochs 1 --freeze_encoder --freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length 142 --train_batch_size 1 --eval_batch_size 1 --gradient_accumulation_steps 1 --model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large --output_dir distilbart-cnn-12-6 --overwrite_output_dir --num_sanity_val_steps 0 --fp16 --eval_beams 1 --amp_backend=apex --n_train 1 --n_val 1 --warmup_steps 1\r\n```\r\n\r\nIf I `pprint(outputs)` I get a dict.\r\n\r\nIf you need `cnn_dm`:\r\n```\r\nwget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz\r\ntar -xzvf cnn_dm_v2.tgz # empty lines removed\r\nmv cnn_cln cnn_dm\r\n```",
"I'll investigate but at first glance it looks like it's PyTorch Lightning that is messing with the new output types, not PyTorch. We will add a caveat in #8530 if that's the case.",
"So this is not linked to mixed precision directly but purely something in PL. Somewhere when dealing with mixed precision, it looks like there is something happening is the output is an instance of a dict, and they don't respect the class of the dict.",
"More investigation shows the problem is actually coming from apex, when using `opt_level=\"O2\"` (the default level is fine). For reference, the following code shows apex-converted model lose their output type:\r\n```\r\nfrom apex import amp\r\nfrom transformers import BertModel, BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ninputs = tokenizer(\"Hi, my name is Sylvain!\", return_tensors=\"pt\")\r\ninputs = {k: v.cuda() for k, v in inputs.items()}\r\n\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\nmodel = model.cuda()\r\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\r\n\r\nmodel, optimizer = amp.initialize(model, optimizer, opt_level=\"O2\")\r\noutputs = model(**inputs)\r\ntype(outputs)\r\n```\r\nshould return `BaseModelOutputWithPoolingAndCrossAttentions` but returns `dict`.",
"(The fix is the right one, so while the discussion may continue, I'm merging this PR.)",
"Thank you for investigating this, @sgugger.\r\n\r\nAs expected distillation.py fails too with apex / default level 2\r\n```\r\npython distillation.py --teacher facebook/bart-large-xsum --data_dir xsum --tokenizer_name facebook/bart-large-xsum --student_decoder_layers 6 --student_encoder_layers 12 --freeze_encoder --freeze_embeds --learning_rate=3e-4 --do_train --do_predict --fp16 --val_check_interval 0.1 --n_val 1 --eval_beams 1 --length_penalty=0.5 --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 --model_name_or_path IGNORED --alpha_hid=3. --train_batch_size=16 --eval_batch_size=16 --gradient_accumulation_steps=2 --sortish_sampler --num_train_epochs=6 --warmup_steps 1 --output_dir distilbart_xsum_12_6 --amp_backend=apex --n_train 1 --gpus 1\r\n[...]\r\n File \"distillation.py\", line 157, in _step\r\n lm_logits = student_outputs.logits\r\nAttributeError: 'dict' object has no attribute 'logits'\r\n```\r\nthere are multiple situations like this in this program.\r\n\r\nwhat's the best way to proceed, @sgugger? switch to dict keys for now and report this issue to apex? (which is not being watched - even PRs aren't being merged/attended to).",
"@ptrblck, can https://github.com/huggingface/transformers/pull/8612#issuecomment-729721261 be fixed in apex, or is it a lost cause (as I noticed apex is not actively supported anymore). Please let me know if we should ask someone else? ",
"I think we should fix the scripts by accessing elements in the outputs with their keys.",
"I will do that. Thank you for the feedback, @sgugger "
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | Making the script work again after the https://github.com/huggingface/transformers/pull/8530 change.
As mentioned in https://github.com/huggingface/transformers/pull/8612, `.logits` doesn't seem to work with apex/PL. No idea why.
So `distillation.py` is probably a problem too, since it uses `.logits`. I haven't checked it.
I don't think any tests use apex.
```
RUN_SLOW=1 pyt -sv test_bash_script.py::TestMbartCc25Enro::test_train_mbart_cc25_enro_script
```
tests finetune with fp16, but runs it in a different way.
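For the record, the approach agreed on in the review discussion is key-based access, which works whether apex preserves the `ModelOutput` type or, under `opt_level="O2"`, downgrades it to a plain dict; a one-line sketch:

```python
# ModelOutput supports dict-style access, and a plain dict obviously does too;
# attribute access (outputs.logits) breaks once apex O2 returns a bare dict.
lm_logits = outputs["logits"]
```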
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8612/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8612",
"html_url": "https://github.com/huggingface/transformers/pull/8612",
"diff_url": "https://github.com/huggingface/transformers/pull/8612.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8612.patch",
"merged_at": 1605713101000
} |
https://api.github.com/repos/huggingface/transformers/issues/8611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8611/comments | https://api.github.com/repos/huggingface/transformers/issues/8611/events | https://github.com/huggingface/transformers/pull/8611 | 745,308,227 | MDExOlB1bGxSZXF1ZXN0NTIyODk0MjI4 | 8,611 | tf_bart typo - self.self.activation_dropout | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM, cc @LysandreJik. I won't merge on my own anymore."
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
Fix a one-line typo in `modeling_tf_bart`:
`self.self.activation_dropout` -> `self.activation_dropout`
BTW, there's no error in the forward pass.
I only hit the error when I played around with `model.fit()` :)
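A toy illustration (a hypothetical `Layer` class, not the real TF Bart code) of why the doubled `self.` only bites when the line actually executes:

```python
class Layer:
    def __init__(self):
        self.activation_dropout = 0.1

layer = Layer()
print(layer.activation_dropout)       # 0.1 -- the intended lookup
print(layer.self.activation_dropout)  # AttributeError: 'Layer' object has no attribute 'self'
```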
## Who can review?
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8611/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8611",
"html_url": "https://github.com/huggingface/transformers/pull/8611",
"diff_url": "https://github.com/huggingface/transformers/pull/8611.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8611.patch",
"merged_at": 1605713429000
} |
https://api.github.com/repos/huggingface/transformers/issues/8610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8610/comments | https://api.github.com/repos/huggingface/transformers/issues/8610/events | https://github.com/huggingface/transformers/issues/8610 | 745,300,390 | MDU6SXNzdWU3NDUzMDAzOTA= | 8,610 | How to train EncoderDecoderModel using bert for seq-to-seq model | {
"login": "jithincheriyan",
"id": 52187221,
"node_id": "MDQ6VXNlcjUyMTg3MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/52187221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jithincheriyan",
"html_url": "https://github.com/jithincheriyan",
"followers_url": "https://api.github.com/users/jithincheriyan/followers",
"following_url": "https://api.github.com/users/jithincheriyan/following{/other_user}",
"gists_url": "https://api.github.com/users/jithincheriyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jithincheriyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jithincheriyan/subscriptions",
"organizations_url": "https://api.github.com/users/jithincheriyan/orgs",
"repos_url": "https://api.github.com/users/jithincheriyan/repos",
"events_url": "https://api.github.com/users/jithincheriyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jithincheriyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @jithincheriyan, \r\n\r\nPlease note that `.eval()` and `.train()` are not used to evaluate / train the model in PyTorch. In PyTorch `.eval()` and `.train()` are simply used to set the model into \"training\" or \"evaluation\" model. See the functions' API here: https://pytorch.org/docs/master/generated/torch.nn.Flatten.html#torch.nn.Flatten.train\r\n\r\nPlease refer to this blog post to understand how to train an `EncoderDecoderModel`: https://huggingface.co/blog/warm-starting-encoder-decoder#warm-starting-encoder-decoder-models-with-%F0%9F%A4%97transformers-practice"
] | 1,605 | 1,605 | 1,605 | NONE | null | Hi @patrickvonplaten,
I am trying to make a sequence-to-sequence model using `EncoderDecoderModel` and BERT.
Please find the code below:
```python
import pandas as pd
from transformers import EncoderDecoderModel

train_data = [
    ["one", "1"],
    ["two", "2"],
]
train_df = pd.DataFrame(train_data, columns=["input_text", "target_text"])
eval_data = [["three", "3"]]
eval_df = pd.DataFrame(eval_data, columns=["input_text", "target_text"])

# using a BERT encoder and a BERT decoder
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
model.train(train_df)            # intended to train on train_df
results = model.eval([eval_df])  # intended to evaluate on eval_df -- this is where it fails
# results = model.generate([["Five"],])
print(results)
```
But while evaluating, it ends up in an error, as shown in the figure below:

Any suggestions?
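For context, echoing the reply in the comments: in PyTorch, `.train()` and `.eval()` only toggle the module's mode; they do not run training or evaluation. A minimal sketch of inference with proper mode handling; since the cross-attention weights start untrained, the output will be meaningless until the model is actually fine-tuned:

```python
import torch
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

model.eval()  # sets evaluation mode only -- it does not evaluate anything
with torch.no_grad():
    inputs = tokenizer("two", return_tensors="pt")
    generated = model.generate(
        inputs.input_ids, decoder_start_token_id=tokenizer.cls_token_id
    )
print(tokenizer.decode(generated[0]))
```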
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8610/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8609/comments | https://api.github.com/repos/huggingface/transformers/issues/8609/events | https://github.com/huggingface/transformers/issues/8609 | 745,299,793 | MDU6SXNzdWU3NDUyOTk3OTM= | 8,609 | Missing `tokenizers` file? | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\ntokenizers is not a file. It's an entire [library](https://github.com/huggingface/tokenizers) built by the Hugging Face team. The code that you show will import some functions from that library, if it's available."
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | In latest https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py

But there is no `tokenizers` file in the repository. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8609/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8608/comments | https://api.github.com/repos/huggingface/transformers/issues/8608/events | https://github.com/huggingface/transformers/issues/8608 | 745,188,735 | MDU6SXNzdWU3NDUxODg3MzU= | 8,608 | Extracting word representations from BPE-tokenization-based models (GPT-2, RoBERTa, etc.) | {
"login": "kanishkamisra",
"id": 10777197,
"node_id": "MDQ6VXNlcjEwNzc3MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/10777197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kanishkamisra",
"html_url": "https://github.com/kanishkamisra",
"followers_url": "https://api.github.com/users/kanishkamisra/followers",
"following_url": "https://api.github.com/users/kanishkamisra/following{/other_user}",
"gists_url": "https://api.github.com/users/kanishkamisra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kanishkamisra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kanishkamisra/subscriptions",
"organizations_url": "https://api.github.com/users/kanishkamisra/orgs",
"repos_url": "https://api.github.com/users/kanishkamisra/repos",
"events_url": "https://api.github.com/users/kanishkamisra/events{/privacy}",
"received_events_url": "https://api.github.com/users/kanishkamisra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"SOLVED! Prefixing spaces before the word seems to do the trick for now. But I will wait to close the issue for people who have more elegant solutions to this problem.",
"> SOLVED! Prefixing spaces before the word seems to do the trick for now. But I will wait to close the issue for people who have more elegant solutions to this problem.\r\n\r\nBut how do you deal with the case that there are no spaces between two words? For example\r\n```\r\ntokenizer.tokenize(\"I have a dog, which loves eating meat!\")\r\n#> [['I', 'Δ have', 'Δ a', 'Δ dog', ',', 'Δ which', 'Δ loves', 'Δ eat', 'ing', 'Δ meat', '!']]\r\n```\r\n\r\nThere is no space before ',' and '!', but both are words. How do you distinguish them from 'ing'? THX for your reply!"
] | 1,605 | 1,614 | 1,606 | NONE | null | Hi! I am trying to extract representations from models (GPT-2 and RoBERTa) when the words are segmented into pieces. For this issue, let's assume I'm trying to extract the representation of `afoot` in the sentence: `the game is afoot !`
For this I first load my libraries and instantiate the RoBERTa tokenizer:
```py
import torch
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
```
My inputs are in the form of `(sentence, idx)`, where `sentence` is the context in which the desired word occurs and `idx` is the index of the word (`afoot`) in the space-segmented form of the sentence `['the', 'game', 'is', 'afoot', '!']`:
```py
sentence = ("The game is afoot !", 3)
```
The problem occurs when I compare the encoded token_ids for the word when it appears individually:
```py
tokenizer.encode_plus('afoot', add_special_tokens = False)['input_ids']
#> [2001, 9210]
```
to the token_ids for the same word when it appears in a sentence:
```py
tokenizer.encode_plus('the game is afoot !', add_special_tokens = False)['input_ids']
#> [627, 177, 16, 10, 2917, 27785]
```
**Why do I even need the individually split token_ids?**
Because I want to know which indices correspond to the desired word, so that when I pass the sentence to the model I can extract the representations for its wordpieces/BPE segments and then average them to represent the word's vector.
This is not a problem when I use BERT, where a word gets the same segmentation whether it appears on its own or inside a sentence.
Any ideas on how I can solve this issue?
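Resolution, per the comments: prefixing a space makes the standalone encoding line up with the in-sentence one, since RoBERTa's byte-level BPE treats the leading space as part of the token. A sketch; `add_prefix_space=True` may be an alternative depending on the tokenizer class/version, but that is an assumption on my part:

```python
word = "afoot"
# Encode with a leading space so the ids match the word's in-sentence segmentation:
ids = tokenizer.encode_plus(" " + word, add_special_tokens=False)["input_ids"]
#> presumably [10, 2917], matching the corresponding ids in the full-sentence encoding above
```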
Environment:
```
python = 3.8.3
transformers = 3.1
torch = 1.6
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8608/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8608/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8607/comments | https://api.github.com/repos/huggingface/transformers/issues/8607/events | https://github.com/huggingface/transformers/pull/8607 | 745,161,880 | MDExOlB1bGxSZXF1ZXN0NTIyNzY4NTQ5 | 8,607 | Fixed link to the wrong paper. | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
documentation: @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8607/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8607",
"html_url": "https://github.com/huggingface/transformers/pull/8607",
"diff_url": "https://github.com/huggingface/transformers/pull/8607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8607.patch",
"merged_at": 1605657644000
} |
https://api.github.com/repos/huggingface/transformers/issues/8606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8606/comments | https://api.github.com/repos/huggingface/transformers/issues/8606/events | https://github.com/huggingface/transformers/issues/8606 | 745,155,826 | MDU6SXNzdWU3NDUxNTU4MjY= | 8,606 | converting REALM tensorflow checkpoints to pytorch | {
"login": "mchari",
"id": 30506151,
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchari",
"html_url": "https://github.com/mchari",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"repos_url": "https://api.github.com/users/mchari/repos",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I changed --tf_checkpoint_path=\"./cc_news_pretrained/embedder/encoded to \r\n--tf_checkpoint_path=\"./cc_news_pretrained/embedder/encoded/encoded.ckpt.\r\n\r\nand used _convert_bert_original_tf_checkpoint_to_pytorch.py_ .\r\nI see the following messages :\r\nLoading TF weight block_emb with shape [13353718, 128]\r\nSkipping block_emb\r\n\r\nBuilding PyTorch model from configuration: BertConfig {\r\n\"attention_probs_dropout_prob\": 0.1,\r\n\"gradient_checkpointing\": false,\r\n\"hidden_act\": \"gelu\",\r\n\"hidden_dropout_prob\": 0.1,\r\n\"hidden_size\": 1024,\r\n\"initializer_range\": 0.02,\r\n\"intermediate_size\": 4096,\r\n\"layer_norm_eps\": 1e-12,\r\n\"max_position_embeddings\": 512,\r\n\"model_type\": \"bert\",\r\n\"num_attention_heads\": 16,\r\n\"num_hidden_layers\": 24,\r\n\"pad_token_id\": 0,\r\n\"type_vocab_size\": 2,\r\n\"vocab_size\": 30522\r\n}\r\n\r\nTraceback (most recent call last):\r\nFile \"convert_tf_checkpoint_to_pytorch.py\", line 78, in\r\nargs.tf_checkpoint_path, args.bert_config_file, args.pytorch_dump_path\r\nFile \"convert_tf_checkpoint_to_pytorch.py\", line 44, in convert_tf_checkpoint_to_pytorch\r\nload_tf_weights_in_bert(model, config, tf_checkpoint_path)\r\nFile \"/gstore/home/madabhuc/hayEnv/env/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 155, in load_tf_weights_in_bert\r\npointer.shape == array.shape\r\nFile \"/gstore/home/madabhuc/hayEnv/env/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 772, in getattr\r\ntype(self).name, name))\r\ntorch.nn.modules.module.ModuleAttributeError: 'BertForPreTraining' object has no attribute 'shape'\r\n\r\n",
"Following the suggestion #393, I I hacked transformers/src/transformers/modeling_bert.py, and I now see the following :\r\nConverting TensorFlow checkpoint from ./cc_news_pretrained/embedder/encoded/encoded.ckpt\r\nLoading TF weight block_emb with shape [13353718, 128]\r\nSkipping block_emb\r\nInitialize PyTorch weight ['block_emb']\r\n\r\nBuilding PyTorch model from configuration: BertConfig {\r\n\"attention_probs_dropout_prob\": 0.1,\r\n\"gradient_checkpointing\": false,\r\n\"hidden_act\": \"gelu\",\r\n\"hidden_dropout_prob\": 0.1,\r\n\"hidden_size\": 1024,\r\n\"initializer_range\": 0.02,\r\n\"intermediate_size\": 4096,\r\n\"layer_norm_eps\": 1e-12,\r\n\"max_position_embeddings\": 512,\r\n\"model_type\": \"bert\",\r\n\"num_attention_heads\": 16,\r\n\"num_hidden_layers\": 24,\r\n\"pad_token_id\": 0,\r\n\"type_vocab_size\": 2,\r\n\"vocab_size\": 30522\r\n}\r\n\r\nSave PyTorch model to /gstore/home/madabhuc/hayEnv/pytorch/pytorch.bin\r\n\r\nSkipping and intializing 'block_emb' tells me that I have lost the weights info from the checkpoint. Don't believe this is correct.",
"Initializing did copy the weights from the checkpoint.",
"When I try to open another REALM tensorflow checkpoint, I get the following error message : \r\ntransformers/modeling_bert.py\", line 135, in load_tf_weights_in_bert\r\n pointer = getattr(pointer, \"bias\")\r\n File \"/gstore/home/madabhuc/hayEnv/env/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 772, in __getattr__\r\n type(self).__name__, name))\r\ntorch.nn.modules.module.ModuleAttributeError: 'BertForPreTraining' object has no attribute 'bias'\r\n\r\n@sgugger , @jplu , @LysandreJik : Any ideas ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"@mchari , do you manage to convert the REALM pre-trained TF models into pytorch models?",
"No, I didn't....\n\nOn Tue, Jun 29, 2021 at 10:54 PM Wei-Cheng Chang ***@***.***>\nwrote:\n\n> @mchari <https://github.com/mchari> , do you manage to convert the REALM\n> pre-trained TF models into pytorch models?\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8606#issuecomment-871116705>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHIXZJ4KXQQFNMXSPSAKWRDTVKWPZANCNFSM4TZGZWCQ>\n> .\n>\n"
] | 1,605 | 1,625 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-3.10.0-1062.12.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.6.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@jplu, @sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
## To reproduce
Steps to reproduce the behavior:
1. Download the checkpoint from https://console.cloud.google.com/storage/browser/realm-data/cc_news_pretrained/embedded/.
2. python convert_bert_original_tf2_checkpoint_to_pytorch.py \
--tf_checkpoint_path="./cc_news_pretrained/embedder/encoded" \
--bert_config_file="./bert_config.json" \
--pytorch_dump_path="./pytorch"
The checkpoint file has the following entries, which are probably internal developer paths(?):
model_checkpoint_path: "/cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded/encoded.ckpt"
all_model_checkpoint_paths: "/cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded/encoded.ckpt"
1) When I set `tf_checkpoint_path` to the directory containing the checkpoint, I get the error:
tensorflow.python.framework.errors_impl.NotFoundError: /cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded; No such file or directory
2) When I set `tf_checkpoint_path` to the checkpoint file `encode.ckpt.data-00000-of-00001`, I get the error:
env/lib/python3.6/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 95, in NewCheckpointReader
return CheckpointReader(compat.as_bytes(filepattern))
RuntimeError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./cc_news_pretrained/embedder/encoded/encode.ckpt.data-00000-of-00001
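As a sanity check, here is a minimal sketch (paths assumed from the steps above) that lists the variables stored in the checkpoint; note that TensorFlow expects the checkpoint *prefix* (`encoded.ckpt`), not the `.data-00000-of-00001` shard file:
```python
# Hedged sketch: inspect the checkpoint with plain TensorFlow.
# The path below is an assumption based on the reproduction steps above.
import tensorflow as tf

ckpt_prefix = "./cc_news_pretrained/embedder/encoded/encoded.ckpt"
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)
```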
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8606/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8605/comments | https://api.github.com/repos/huggingface/transformers/issues/8605/events | https://github.com/huggingface/transformers/pull/8605 | 745,050,868 | MDExOlB1bGxSZXF1ZXN0NTIyNjc1OTc0 | 8,605 | Add Harry Potter Model Card | {
"login": "ceostroff",
"id": 22174832,
"node_id": "MDQ6VXNlcjIyMTc0ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/22174832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ceostroff",
"html_url": "https://github.com/ceostroff",
"followers_url": "https://api.github.com/users/ceostroff/followers",
"following_url": "https://api.github.com/users/ceostroff/following{/other_user}",
"gists_url": "https://api.github.com/users/ceostroff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ceostroff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceostroff/subscriptions",
"organizations_url": "https://api.github.com/users/ceostroff/orgs",
"repos_url": "https://api.github.com/users/ceostroff/repos",
"events_url": "https://api.github.com/users/ceostroff/events{/privacy}",
"received_events_url": "https://api.github.com/users/ceostroff/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"That's really cool, thanks for sharing!",
"[model page](https://huggingface.co/ceostroff/harry-potter-gpt2-fanfiction)"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
We made this model to generate new Harry Potter fanfiction based on popular stories. We hope it will be a fun and useful tool.
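For reference, a hedged usage sketch (the model id is taken from the model page linked in the comments above; the prompt text is an assumption):
```python
from transformers import pipeline

# Hypothetical prompt; the model id comes from the linked model page.
generator = pipeline("text-generation", model="ceostroff/harry-potter-gpt2-fanfiction")
print(generator("Harry looked at Hermione and", max_length=50)[0]["generated_text"])
```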
"url": "https://api.github.com/repos/huggingface/transformers/issues/8605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8605/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8605",
"html_url": "https://github.com/huggingface/transformers/pull/8605",
"diff_url": "https://github.com/huggingface/transformers/pull/8605.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8605.patch",
"merged_at": 1605649859000
} |
https://api.github.com/repos/huggingface/transformers/issues/8604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8604/comments | https://api.github.com/repos/huggingface/transformers/issues/8604/events | https://github.com/huggingface/transformers/pull/8604 | 745,050,368 | MDExOlB1bGxSZXF1ZXN0NTIyNjc1NTU0 | 8,604 | Remove deprecated | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I noticed that https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py#L294 and https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py#L302 are still referencing is_world_master() instead of using is_world_process_zero(). I have made a change to run_ner_old.py locally and running the process again to see if it fixes the issue for me (AttributeError: 'Trainer' object has no attribute 'is_world_master')",
"We're not actively maintaining the old examples scripts anymore and they're still there for people using older versions of transformers. The good fix would be to add a minimum version to the README of that example.",
"I'm at loss here - which ones are old examples and which ones are new?\r\n\r\nhttps://github.com/huggingface/transformers/issues/8792",
"If there should be an old version of something to be used with an old version of something - why not send users to the branch of that release that they want to use - they will end up with the version of examples that work for that release. And master examples should be working with master version of the code, IMHO. Does it make sense?\r\n\r\nIf there are fixes to the old branch's examples, not pertaining to master, the fix can go into that branch.",
"Yes it all makes sense. Please be patient while I clean up the examples folder, as it's a long work. I promise this will all be clean when I'm done :-)"
] | 1,605 | 1,606 | 1,605 | COLLABORATOR | null | # What does this PR do?
This PR removes old deprecated arguments and adjusts tests/examples accordingly.
Co-authored with @LysandreJik. | {
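As a hedged illustration of the kind of cleanup involved (the `is_world_master` to `is_world_process_zero` rename comes up in the comments above; `trainer` and `output_dir` are assumptions, not part of this PR):
```python
# Sketch: older scripts should move off the removed helper.
if trainer.is_world_process_zero():  # formerly: trainer.is_world_master()
    trainer.save_model(output_dir)
```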
"url": "https://api.github.com/repos/huggingface/transformers/issues/8604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8604/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8604",
"html_url": "https://github.com/huggingface/transformers/pull/8604",
"diff_url": "https://github.com/huggingface/transformers/pull/8604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8604.patch",
"merged_at": 1605643890000
} |
https://api.github.com/repos/huggingface/transformers/issues/8603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8603/comments | https://api.github.com/repos/huggingface/transformers/issues/8603/events | https://github.com/huggingface/transformers/issues/8603 | 745,039,635 | MDU6SXNzdWU3NDUwMzk2MzU= | 8,603 | TFTrainer & Eager mode | {
"login": "JulesGM",
"id": 3231217,
"node_id": "MDQ6VXNlcjMyMzEyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3231217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JulesGM",
"html_url": "https://github.com/JulesGM",
"followers_url": "https://api.github.com/users/JulesGM/followers",
"following_url": "https://api.github.com/users/JulesGM/following{/other_user}",
"gists_url": "https://api.github.com/users/JulesGM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JulesGM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JulesGM/subscriptions",
"organizations_url": "https://api.github.com/users/JulesGM/orgs",
"repos_url": "https://api.github.com/users/JulesGM/repos",
"events_url": "https://api.github.com/users/JulesGM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JulesGM/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"cc @jplu for TFTrainer.",
"Hello @JulesGM!\r\n\r\nYes, the TF Trainer don't support eager execution, partially because of the usage of `tf.gradients` that we use because sometime when loading a model for a specific task, not all the layers are used and bring some None values when computing the gradients. `tf.gradients` allows to ignore these None values.\r\n\r\nI'm really sorry that it is not enough detailed in the documentation, we are currently working on an improved version of the TF Trainer that you can find [here](https://github.com/huggingface/transformers/pull/8264). And it should be much easier and convenient to use. Sorry once again for the inconvenience you have encountered with the TFTrainer until now.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | NONE | null | Is eager mode really not supported with TFTrainer? It's telling me that it is using `tf.gradients`, which is not supported with eager mode.
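A quick way to see the clash, as a hedged sketch (TF 2.x assumed): `tf.gradients` only works in graph mode, while `tf.GradientTape` is the eager-mode alternative.
```python
import tensorflow as tf

print(tf.executing_eagerly())  # True by default in TF2

# In eager mode, use tf.GradientTape instead of tf.gradients:
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)
```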
If that's true, then maybe it could be displayed a lot more prominently in your documentation... I wasted so much time implementing custom functions, 🤦♂️
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8603/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8602/comments | https://api.github.com/repos/huggingface/transformers/issues/8602/events | https://github.com/huggingface/transformers/pull/8602 | 745,029,095 | MDExOlB1bGxSZXF1ZXN0NTIyNjU3Nzkx | 8,602 | New TF model inputs | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger Thanks for these useful comments, I should have addressed them as you proposed. All the modification have been directly integrated in the new `input_processing` function. can you please also check the `modeling_tf_bart.py`, I have updated the documentation, let me know if I did something wrong.\r\n\r\n@patrickvonplaten can you please check the same BART file as I have done some updates in order to make it able to properly handle the `return_dict=True` and `return_dict=False` my updates don't affect the usual behaviour of the model even if the tests are ok.",
"@LysandreJik @sgugger @patrickvonplaten Do you think it would be interesting to have the same thing for the outputs? knowing that for graph mode compliance we will need to have such or such output behaviour depending the state we are (eager or not). ",
"> Remember to do separate PRs for separate issues please, a lot of changes in TFBart here are unrelated to the main focus of the PR.\r\n\r\nDo you prefer that I move the Bart changes into another PR and keep only the inputs changes? I don't mind to do this :)",
"I have also updated the TF template.",
"> Do you prefer that I move the Bart changes into another PR and keep only the inputs changes? I don't mind to do this :)\r\n\r\nSince @patrickvonplaten approved, I think they're okay here for this time (unless he says otherwise ;-) ) ",
"I would prefer the same, it looks a bit too big to be added at this point of the release. I should have addressed all the comments. Let me know if I missed some."
] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | # What does this PR do?
This PR improves the way the inputs are handled in the TensorFlow models:
- It should now be easier to write a TensorFlow model
- The order of the inputs is now better handled, especially when using Keras tensors
- Replaces all occurrences of `inputs` with `input_ids`, to make it easier to understand what this parameter is for and to align with the PyTorch input names and the tokenizers' outputs (see the usage sketch below).
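A minimal usage sketch of the aligned naming (the checkpoint choice is an assumption, not part of this PR):
```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")
enc = tokenizer("Hello world", return_tensors="tf")
# `input_ids` now matches the PyTorch name and the tokenizer's output key:
outputs = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
```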
@LysandreJik @sgugger @patrickvonplaten let me know what you think about this new input processing; it is not finished yet, but any comments will be helpful for me :)
"url": "https://api.github.com/repos/huggingface/transformers/issues/8602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8602/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8602",
"html_url": "https://github.com/huggingface/transformers/pull/8602",
"diff_url": "https://github.com/huggingface/transformers/pull/8602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8602.patch",
"merged_at": 1606244101000
} |
https://api.github.com/repos/huggingface/transformers/issues/8601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8601/comments | https://api.github.com/repos/huggingface/transformers/issues/8601/events | https://github.com/huggingface/transformers/issues/8601 | 744,939,658 | MDU6SXNzdWU3NDQ5Mzk2NTg= | 8,601 | Accessing gradients of Bart hidden states | {
"login": "thoppe",
"id": 2707106,
"node_id": "MDQ6VXNlcjI3MDcxMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2707106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thoppe",
"html_url": "https://github.com/thoppe",
"followers_url": "https://api.github.com/users/thoppe/followers",
"following_url": "https://api.github.com/users/thoppe/following{/other_user}",
"gists_url": "https://api.github.com/users/thoppe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thoppe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thoppe/subscriptions",
"organizations_url": "https://api.github.com/users/thoppe/orgs",
"repos_url": "https://api.github.com/users/thoppe/repos",
"events_url": "https://api.github.com/users/thoppe/events{/privacy}",
"received_events_url": "https://api.github.com/users/thoppe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
}
] | [
"@joeddav - feel free to ping me again if you're too busy. Leaving it up to you for now :-) ",
"Hey thanks for opening the detailed issue. As I mentioned this is a Bart issue, nothing specific to zero shot, so I've renamed it to get the right eyes on it.\r\n\r\nThe problem here is that the hidden states are transposed _after_ they're passed forward in the computation graph (with the exception of the last encoder layer), which means that the hidden states returned are no longer upstream from the logits in the graph and therefore don't have any gradient information. I'm not sure I see a trivial fix though βΒ any ideas @patrickvonplaten? We could just do the transposes inside `EncoderLayer.forward` instead but would the superfluous transpose ops slow things down?",
"At the very least, having an option to return the value _before_ the transpose would allow access to the gradients. "
] | 1,605 | 1,606 | 1,606 | NONE | null | The forums suggested that this be filed as a bug report:
https://discuss.huggingface.co/t/finding-gradients-in-zero-shot-learning/2033/5
The solution to the problem was solved on SO:
https://stackoverflow.com/questions/64823332/gradients-returning-none-in-huggingface-module/64866990#64866990
The question and answer are reproduced below. Filing as an issue, since we should be able to compute gradients on the output without a monkey-patch. It looks like the `transpose` is causing it.
## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.27
- Python version: 3.8.1
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: CPU & GPU
- Using distributed or parallel set-up in script?: No
### Who can help
Bart: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import pipeline
import torch
model_name = 'facebook/bart-large-mnli'
nlp = pipeline("zero-shot-classification", model=model_name)
responses = ["I'm having a great day!!"]
hypothesis_template = 'This person feels {}'
candidate_labels = ['happy', 'sad']
nlp(responses, candidate_labels, hypothesis_template=hypothesis_template)
```
This works well! The output is:
```
{'sequence': "I'm having a great day!!",
'labels': ['happy', 'sad'],
'scores': [0.9989933371543884, 0.0010066736722365022]}
```
What I'd like to do however, is look at the gradients of the input tokens to see which tokens are important. This is in contrast to looking at the attention heads (which is also another viable tactic). Trying to rip apart the internals of the module, I can get the logics and embedding layers:
```
inputs = nlp._parse_and_tokenize(responses, candidate_labels, hypothesis_template)
predictions = nlp.model(**inputs, return_dict=True, output_hidden_states=True)
predictions['logits']
tensor([[-3.1864, -0.0714, 3.2625],
[ 4.5919, -1.9473, -3.6376]], grad_fn=<AddmmBackward>)
```
This is expected, as the label for "happy" is index 0 and the entailment index for this model is 2, so the value of 3.2625 is an extremely strong signal. The label for "sad" is 1 and the contradiction index is 0, so the value of 4.5919 is also the correct answer.
Great! Now I should be able to look at the first embedding layer and check out the gradient with respect to the happy entailment scalar:
```
layer = predictions['encoder_hidden_states'][0]
layer.retain_grad()
predictions['logits'][0][2].backward(retain_graph=True)
```
Unfortunately, `layer.grad` is `None`.
## [Solution from StackOverflow](https://stackoverflow.com/a/64866990/249341)
I was also very surprised by this issue. Although I have never used the library, I dug in and did some debugging and found out that the issue comes from the transformers library. The problem comes from this [line][1]:
encoder_states = tuple(hidden_state.transpose(0, 1) for hidden_state in encoder_states)
If you comment it out, you will get the gradient just with some dimensions transposed.
This issue is related to the fact that PyTorch autograd does not handle in-place operations very well, as mentioned [here][2].
So, to recap, the solution is to comment out line 382 in *`modeling_bart.py`*.
You will get the gradient with this shape T x B x C instead of B x T x C, but you can reshape it as you want later.
[1]: https://github.com/huggingface/transformers/blob/1073a2bde5d608f9891d6da6df7b63921dca1b71/src/transformers/modeling_bart.py#L382
[2]: https://discuss.pytorch.org/t/encounter-the-runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation/836/5
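A minimal sketch of retrieving token-level gradients once the workaround above is applied (it reuses the variables from the snippets earlier in this issue):
```python
# Assumes line 382 of modeling_bart.py is commented out as described above.
predictions = nlp.model(**inputs, return_dict=True, output_hidden_states=True)
layer = predictions["encoder_hidden_states"][0]  # now shaped (T, B, C)
layer.retain_grad()
predictions["logits"][0][2].backward(retain_graph=True)
token_grads = layer.grad.transpose(0, 1)  # back to (batch, seq_len, hidden)
```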
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8601/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8600/comments | https://api.github.com/repos/huggingface/transformers/issues/8600/events | https://github.com/huggingface/transformers/pull/8600 | 744,931,569 | MDExOlB1bGxSZXF1ZXN0NTIyNTc1MDEy | 8,600 | Fix check repo utils | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
This PR fixes the `check_repo` script, which was broken by the recent repo reorganization.
"url": "https://api.github.com/repos/huggingface/transformers/issues/8600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8600/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8600",
"html_url": "https://github.com/huggingface/transformers/pull/8600",
"diff_url": "https://github.com/huggingface/transformers/pull/8600.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8600.patch",
"merged_at": 1605639706000
} |
https://api.github.com/repos/huggingface/transformers/issues/8599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8599/comments | https://api.github.com/repos/huggingface/transformers/issues/8599/events | https://github.com/huggingface/transformers/pull/8599 | 744,910,633 | MDExOlB1bGxSZXF1ZXN0NTIyNTU3MTgz | 8,599 | Tokenizers should be framework agnostic | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Can't ask you to review here @stas00 but would love your review as FSMT is impacted by this.",
"LGTM"
] | 1,605 | 1,605 | 1,605 | MEMBER | null | The `prepare_seq2seq_batch` method should not return PyTorch tensors by default. It does not in the base class, and all our tokenizer methods should be agnostic to the framework.
Updated Marian, Pegasus, mBART, FSMT that had `return_tensors="pt"` and RAG that had `return_tensors="np"`.
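For illustration, a minimal sketch of the intended usage after this change; callers now opt into a framework explicitly (the Marian checkpoint name is an assumption):
```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
# Framework-agnostic by default; pass return_tensors to opt in explicitly:
batch_pt = tokenizer.prepare_seq2seq_batch(["Hello world"], return_tensors="pt")
```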
The documentation for these methods was inconsistent; the docstrings were added via the decorator where needed.
"url": "https://api.github.com/repos/huggingface/transformers/issues/8599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8599/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8599",
"html_url": "https://github.com/huggingface/transformers/pull/8599",
"diff_url": "https://github.com/huggingface/transformers/pull/8599.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8599.patch",
"merged_at": 1605639784000
} |
https://api.github.com/repos/huggingface/transformers/issues/8598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8598/comments | https://api.github.com/repos/huggingface/transformers/issues/8598/events | https://github.com/huggingface/transformers/pull/8598 | 744,898,815 | MDExOlB1bGxSZXF1ZXN0NTIyNTQ3Mjkw | 8,598 | Vectorize RepetitionPenaltyLogitsProcessor to improve performance | {
"login": "bdalal",
"id": 3478378,
"node_id": "MDQ6VXNlcjM0NzgzNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3478378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bdalal",
"html_url": "https://github.com/bdalal",
"followers_url": "https://api.github.com/users/bdalal/followers",
"following_url": "https://api.github.com/users/bdalal/following{/other_user}",
"gists_url": "https://api.github.com/users/bdalal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bdalal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bdalal/subscriptions",
"organizations_url": "https://api.github.com/users/bdalal/orgs",
"repos_url": "https://api.github.com/users/bdalal/repos",
"events_url": "https://api.github.com/users/bdalal/events{/privacy}",
"received_events_url": "https://api.github.com/users/bdalal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"```\r\nValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\r\n```\r\nSeems like a flake. Tests passes locally."
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
This PR replaces the nested loops in the [`RepetitionPenaltyLogitsProcessor`](https://github.com/huggingface/transformers/blob/a1bbcf3f6c20e15fe799a8659d6b7bd36fdf11ed/src/transformers/generation_logits_process.py#L147-L155) with a vectorized implementation to provide speedups on long sequences of roughly 3 orders of magnitude on GPUs and 2 orders of magnitude on CPUs.
Fixes [#8596](https://github.com/huggingface/transformers/issues/8596)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8598/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8598/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8598",
"html_url": "https://github.com/huggingface/transformers/pull/8598",
"diff_url": "https://github.com/huggingface/transformers/pull/8598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8598.patch",
"merged_at": 1605898747000
} |
https://api.github.com/repos/huggingface/transformers/issues/8597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8597/comments | https://api.github.com/repos/huggingface/transformers/issues/8597/events | https://github.com/huggingface/transformers/pull/8597 | 744,893,665 | MDExOlB1bGxSZXF1ZXN0NTIyNTQyOTI1 | 8,597 | BART & FSMT: fix decoder not returning hidden states from the last layer | {
"login": "MaksymDel",
"id": 8141935,
"node_id": "MDQ6VXNlcjgxNDE5MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8141935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaksymDel",
"html_url": "https://github.com/MaksymDel",
"followers_url": "https://api.github.com/users/MaksymDel/followers",
"following_url": "https://api.github.com/users/MaksymDel/following{/other_user}",
"gists_url": "https://api.github.com/users/MaksymDel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaksymDel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaksymDel/subscriptions",
"organizations_url": "https://api.github.com/users/MaksymDel/orgs",
"repos_url": "https://api.github.com/users/MaksymDel/repos",
"events_url": "https://api.github.com/users/MaksymDel/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaksymDel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"yay, a first fsmt user that found an issue! Thank you!\r\n\r\nOK, here I literally copied the bart implementation where it didn't have that line you added:\r\nhttps://github.com/huggingface/transformers/blob/36a19915ea4fc3dc337a310e4a1af43eb3c81c9a/src/transformers/models/bart/modeling_bart.py#L627-L629\r\n\r\nSo most likely if this is indeed a bug then it affects many `transformers` models. \r\n\r\nNow let us diagnose what's going on. I see that the `x` is stored in the loop above at the beginning of a layers iteration:\r\n\r\nhttps://github.com/huggingface/transformers/blob/36a19915ea4fc3dc337a310e4a1af43eb3c81c9a/src/transformers/models/bart/modeling_bart.py#L597-L600\r\n\r\nLooking closely, the current code doesn't add the `x` from the last iteration of the `for idx, decoder_layer in enumerate(self.layers)` loop, which is clearly a bug. We have a one-off problem here. \r\n\r\nThe only thing I'm not sure about is whether we need the `x` before the loop, if not then `all_hidden_states += (x,)` needs to be moved to the end of the loop. If we do need it, then your change is due. \r\n\r\nEither way it is I'd code it differently. I'd add add `x` before the loop starts if it is needed, and then add it for each layer once we have a new x defined in the loop. \r\n\r\nAdding it after the loop is likely to cause other bugs in the future where the wrong x will be added.\r\n\r\nCould you please share the use case so that we could write a test for it? Or if you could write the test that's even better - either way works.\r\n\r\nI didn't have a use case for this myself so relied on `transformers` common tests to catch this.\r\n\r\nThank you!",
"So this is what I propose, which does the same as your PR, but respects the locality rule better, if that makes sense.\r\n\r\n```\r\n # XXX: do we need to save this hidden state?\r\n if output_hidden_states:\r\n all_hidden_states += (x,)\r\n \r\n for idx, decoder_layer in enumerate(self.layers):\r\n dropout_probability = random.uniform(0, 1)\r\n if self.training and (dropout_probability < self.layerdrop):\r\n continue\r\n\r\n layer_state = past_key_values[idx] if past_key_values is not None else None\r\n\r\n x, layer_self_attn, layer_past, layer_cross_attn = decoder_layer(\r\n x,\r\n encoder_hidden_states,\r\n encoder_attn_mask=encoder_padding_mask,\r\n decoder_padding_mask=decoder_padding_mask,\r\n layer_state=layer_state,\r\n causal_mask=decoder_causal_mask,\r\n output_attentions=output_attentions,\r\n )\r\n\r\n # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)\r\n if output_hidden_states:\r\n all_hidden_states += (x,)\r\n \r\n```\r\n\r\n@patrickvonplaten, how should we proceed - solve this for fsmt and then replicate to other copy-cats - or solve it at once in a new PR - and need to create a new common test I suppose. I, unfortunately, have no perms to make suggestions directly in the code. so passing the flag to you if the former.",
"Thanks, Stas @stas00!\r\n\r\nI implemented a fix the way I did just to be consistent with how the analogous code is written in other places (e.g. FSMTEncoder, BERT model, etc.):\r\nhttps://github.com/huggingface/transformers/blob/dd52804f5fce0a568ffbb3dc7fd088d2de0a0e56/src/transformers/models/bert/modeling_bert.py#L491-L492\r\n\r\nHowever, I would also personally prefer adding contextualized embedding before the loop first and then collecting hidden states at the end of the loop, just like you described. It just has to be changed for all the models in the repo if we want to keep the codebase consistent.\r\n\r\nThe test might check that the size of the list with output hidden states aligns in shape with what we expect it to be based on the model configuration. It would catch the error and be general enough for many usecases. It is just that it is a job for a bigger PR if we want to cover all the models in the repo.\r\n\r\nRegarding whether to return ```decoder``` input uncontextualized embeddings, GPT2 already does it (GPT2 can be viewed as a transformer decoder):\r\nhttps://github.com/huggingface/transformers/blob/5cf9c79665266e49cf498839da90d7aeeff21c3a/src/transformers/models/gpt2/modeling_gpt2.py#L618-L620\r\nAlso, decoder input embeddings from layer 0 get fed into further complex layers analogously to how it is done for encoders. And for all the encoders in the lib (like BERT) we do return the outputs from this layer. So I would vote for not dropping it for the decoder.\r\n ",
"Well, based on the research that you shared, it's easy then - keep them all. \r\n\r\nSo we just need to decide whether to:\r\n\r\n1. a. keep the current implementation in most (all?) modules where the incoming states are stored first and then the last state is stored as sort of an afterthought and potentially is forgotten which is the case with every bart-copy, b. and fix `modeling_bart` and every other module that copied it to add the missing state.\r\n2. or recode it in a more clean way as I suggested [here](https://github.com/huggingface/transformers/pull/8597#issuecomment-729119379) and you concurred with me, which will fix the bug on the way and prevent it from re-appearing in the future.\r\n\r\nSince I wasn't there when the code was written and since it impacts the whole project let's see what @LysandreJik, @patrickvonplaten, @sgugger think.\r\n\r\nThank you for the detailed answer, the research, and the suggestion on how to write the missing test, @maksym-del! ",
"I would avoid changing the existing code since it produces the desired output, I think we can all employ our time to do more meaningful contributions to the library :-) I don't think one implementation is better than the other in the sense you have to remember to either add the first hidden state or the last.\r\n\r\nOn the models that do not produce the desired outputs, you can fix it the way you prefer. The modeling files don't need to do everything the exact same way and since you're the contributor fixing things, you get to choose which one you like better. What interests me more however is how this got the tests passing, since the number of hidden states is tested and we're discovering there is one missing, a things the common tests should have caught.",
"While I disagree about your suggestion to two ways being equal, since the current implementation is a bug waiting to occur, should some code be added after the loop and before the last layer's hidden state is added, especially with all the code copying. I am in agreement with the rest. \r\n\r\nTo clarify, you're saying:\r\n\r\n- Do not change anything in models that don't have this bug.\r\n- You can change things in models that do have this bug besides fixing the bug (i.e. all bart copy-cats)\r\n\r\n> What interests me more however is how this got the tests passing, since the number of hidden states is tested and we're discovering there is one missing, a things the common tests should have caught.\r\n\r\nMy intuition is that since it counts, it counted the \"incoming\" hidden state as one of the layer hidden states. If this is a common test, then the correct models should have failed this test instead. But will need to look at the actual test to tell for sure.\r\n\r\n",
"@maksym-del thanks so much for finding this bug -> you are correct this should be corrected. \r\n\r\nI think we should do two things here (@maksym-del let me or @stas00 know if you need help here):\r\n\r\n1. Apply the same change to `modeling_bart.py`\r\n2. Improve the test (this might be a bit more difficult, but I'll help you if needed :-)):\r\n - If you compare the common test of the hidden states output: https://github.com/huggingface/transformers/blob/0ad45e108d156e24b0cbd0fe0f5a27a4e7a3c1c3/tests/test_modeling_common.py#L653 with the common test of the attention output: https://github.com/huggingface/transformers/blob/0ad45e108d156e24b0cbd0fe0f5a27a4e7a3c1c3/tests/test_modeling_common.py#L295 you can see that the test of the attention output does an extra check for `is_encoder_decoder=True` models while the hidden states test does not. This is why this bug was unnoticed -> so we should add a `if config.is_encoder_decoder:` clause to the hidden states test that checks that the decoder also has the correct number of layers and that those hidden states have the correct size. \r\n\r\nIf you have trouble adding the test ping me or @stas00 again and we'll finish the PR for you!\r\n\r\nThanks for spotting the bug :-) \r\n\r\n",
"Thanks a lot for rebasing this! I think the only thing left to do now is to add a test as explained above :-) ",
"Thanks, @patrickvonplaten , @stas00 and @sgugger !\r\n\r\nI added the test and think this PR is ready to be merged. ",
"Unrelated to this PR, as it's replicating the existing approach, but won't it be simpler to replace:\r\n```\r\n x = x.transpose(0, 1)\r\n all_hidden_states += (x,)\r\n x = x.transpose(0, 1)\r\n```\r\nwith just:\r\n``` \r\n all_hidden_states += (x.transpose(0, 1),)\r\n```\r\n\r\n@patrickvonplaten, replying inside my comment:\r\n\r\nthis doesn't work. `x` needs to be kept in the graph `x.transpose(0, 1)` would return a new view on the tensor which is not in the graph anymore",
"@patrickvonplaten - I edited your editing of my comment to make it readable. otherwise it made no sense as you made it look like I was saying something and then saying that it is not so.\r\n\r\nThank you for the clarification!\r\n\r\np.s. github doesn't send notification on edits, so this is probably not the most ideal way to reply ;)",
"> @patrickvonplaten - I edited your editing of my comment to make it readable. otherwise it made no sense as you made it look like I was saying something and then saying that it is not so.\r\n> \r\n> Thank you for the clarification!\r\n> \r\n> p.s. github doesn't send notification on edits, so this is probably not the most ideal way to reply ;)\r\n\r\nOh, I'm sorry. I meant to reply to your comment :D "
] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | # What does this PR do?
The activations from the last decoder layer were accidentally not included in the output.
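A minimal sketch of the fix pattern (matching the decoder loop discussed in the comments above; `x` is the running decoder hidden state):
```python
# After the decoder layer loop, also collect the *last* layer's hidden
# state, which the previous code never appended:
if output_hidden_states:
    all_hidden_states += (x,)
```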
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@stas00
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8597/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8597",
"html_url": "https://github.com/huggingface/transformers/pull/8597",
"diff_url": "https://github.com/huggingface/transformers/pull/8597.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8597.patch",
"merged_at": 1606498534000
} |
https://api.github.com/repos/huggingface/transformers/issues/8596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8596/comments | https://api.github.com/repos/huggingface/transformers/issues/8596/events | https://github.com/huggingface/transformers/issues/8596 | 744,865,353 | MDU6SXNzdWU3NDQ4NjUzNTM= | 8,596 | Speed up repetition penalty logits processor | {
"login": "bdalal",
"id": 3478378,
"node_id": "MDQ6VXNlcjM0NzgzNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3478378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bdalal",
"html_url": "https://github.com/bdalal",
"followers_url": "https://api.github.com/users/bdalal/followers",
"following_url": "https://api.github.com/users/bdalal/following{/other_user}",
"gists_url": "https://api.github.com/users/bdalal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bdalal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bdalal/subscriptions",
"organizations_url": "https://api.github.com/users/bdalal/orgs",
"repos_url": "https://api.github.com/users/bdalal/repos",
"events_url": "https://api.github.com/users/bdalal/events{/privacy}",
"received_events_url": "https://api.github.com/users/bdalal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"PR is up [8598](https://github.com/huggingface/transformers/pull/8598)",
"Can you please check speed of this implementation also:\r\n```\r\n# if score < 0 then repetition penalty has to be multiplied to reduce the previous token probability\r\nscores = torch.where(scores < 0, scores * penalty, scores / penalty)\r\n```",
"@LSinev Thanks for the suggestion! This is slightly faster than my implementation and much more elegant. Also makes sense because we don't really need to look at previous tokens for modifying the score. \r\nI'll replace my implementation with this. Credit to you!",
"Oh, no. I didn't mean that this should be the only code used. Of course, only input ids should be penalized, not everything. And of course, one should ensure (and probably add tests) that this solution works per row for input batches (batches as input and batches after the expansion due to num_return_sequences > 1). This was just a hypothesis that such code may work faster.",
"Hmm, that makes sense. I hadn't considered that earlier. All - greedy and beam search and sampling may produce incorrect tokens because only the `torch.where` approach will alter scores for all other tokens.\r\nHaving accounted for only the input_ids, the `torch.where` approach is still marginally faster than performing the score modification in 2 steps.\r\nI'll update the code accordingly. Thanks for your help!"
] | 1,605 | 1,607 | 1,607 | CONTRIBUTOR | null | # π Feature request
Hey team, thanks for the great work on the project to date! This is more of an enhancement, so I'm filing it here rather than as a bug.
The [`RepetitionPenaltyLogitsProcessor`](https://github.com/huggingface/transformers/blob/a1bbcf3f6c20e15fe799a8659d6b7bd36fdf11ed/src/transformers/generation_logits_process.py#L147-L155), which is used to enforce the repetition penalty when generating tokens from a seq2seq head, is extremely slow for long sequences due to its nested loops.
A vectorized implementation is much, much faster.
## Motivation
Here's a minimal example to reproduce the slow behavior:
```
import torch
from transformers import RepetitionPenaltyLogitsProcessor
import timeit
def vectorized(input_ids, scores, penalty):
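    # Advanced (fancy) indexing: for each batch row, select the scores of
    # that row's previously generated token ids in one vectorized step.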
    score_range = torch.arange(scores.shape[0])
    score = scores[score_range[:, None], input_ids]
    score[score >= 0] = score[score >= 0] / penalty
    score[score < 0] = score[score < 0] * penalty
    scores[score_range[:, None], input_ids] = score
input_ids = torch.randint(0, 10000, (256, 256))
scores = torch.randn(256, 10000)
rep_proc = RepetitionPenaltyLogitsProcessor(2.0)
print(f"Existing impl time for 10 iterations on CPU = {timeit.timeit(lambda: rep_proc(input_ids, scores), number=10)}")
print(f"Vectorized impl time for 10 iterations on CPU = {timeit.timeit(lambda: vectorized(input_ids, scores, 2.0), number=10)}")
if torch.cuda.is_available():
    input_ids = input_ids.cuda()
    scores = scores.cuda()
    print(f"Existing impl time for 10 iterations on GPU = {timeit.timeit(lambda: rep_proc(input_ids, scores), number=10)}")
    print(f"Vectorized impl time for 10 iterations on GPU = {timeit.timeit(lambda: vectorized(input_ids, scores, 2.0), number=10)}")
```
Here are the speedups on CPU and GPU with the vectorized version:
```
Existing impl time for 10 iterations on CPU = 23.23520456800179
Vectorized impl time for 10 iterations on CPU = 0.035849231004249305
```
```
Existing impl time for 10 iterations on GPU = 42.0977192690043
Vectorized impl time for 10 iterations on GPU = 0.008036320999963209
```
These numbers are from a machine with [email protected] and an Nvidia T4 GPU.
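For completeness, a hedged sketch of the per-row variant that the comments above converged on (applying the `torch.where` trick only to the gathered `input_ids` scores, so other tokens are left untouched):
```python
import torch

def repetition_penalty_vectorized(input_ids: torch.LongTensor,
                                  scores: torch.FloatTensor,
                                  penalty: float) -> torch.FloatTensor:
    # Gather, per batch row, only the scores of previously generated tokens.
    score = torch.gather(scores, 1, input_ids)
    # Negative scores are multiplied, positive ones divided, by the penalty.
    score = torch.where(score < 0, score * penalty, score / penalty)
    scores.scatter_(1, input_ids, score)
    return scores
```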
## Your contribution
I'll have a PR up for this shortly.
@patrickvonplaten pinging you on this for your thoughts because I saw your last few commits on this code.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8596/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8596/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8595/comments | https://api.github.com/repos/huggingface/transformers/issues/8595/events | https://github.com/huggingface/transformers/pull/8595 | 744,842,260 | MDExOlB1bGxSZXF1ZXN0NTIyNDk5NzQ1 | 8,595 | Fix model templates | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
This PR fixes the model templates that were broken by the recent reorganization (in truth they never worked, it fixes that too :-p). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8595/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8595",
"html_url": "https://github.com/huggingface/transformers/pull/8595",
"diff_url": "https://github.com/huggingface/transformers/pull/8595.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8595.patch",
"merged_at": 1605627338000
} |
https://api.github.com/repos/huggingface/transformers/issues/8594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8594/comments | https://api.github.com/repos/huggingface/transformers/issues/8594/events | https://github.com/huggingface/transformers/issues/8594 | 744,786,025 | MDU6SXNzdWU3NDQ3ODYwMjU= | 8,594 | PEGASUS do not have mask token | {
"login": "ShichaoSun",
"id": 13548568,
"node_id": "MDQ6VXNlcjEzNTQ4NTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/13548568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShichaoSun",
"html_url": "https://github.com/ShichaoSun",
"followers_url": "https://api.github.com/users/ShichaoSun/followers",
"following_url": "https://api.github.com/users/ShichaoSun/following{/other_user}",
"gists_url": "https://api.github.com/users/ShichaoSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShichaoSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShichaoSun/subscriptions",
"organizations_url": "https://api.github.com/users/ShichaoSun/orgs",
"repos_url": "https://api.github.com/users/ShichaoSun/repos",
"events_url": "https://api.github.com/users/ShichaoSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShichaoSun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @ShichaoSun - thanks for the issue! \r\n I agree with you that Pegasus should have some mask tokens defined and I'd set `tokenizer.mask_token` to Pegasus' MLM mask token => `[MASK_2]` and add an additional `mask_token_sent` ... \r\n\r\nI'm still waiting for some more insight on Pegasus of the Pegasus expert @sshleifer -> https://github.com/huggingface/transformers/issues/8689 .\r\n\r\nI'll hope to get an answer there to be sure that adding the `[MASK_1]` and `[MASK_2]` tokens is the correct thing to do here!",
"Hi @patrickvonplaten ,\r\nReally thanks for your reply and great job !",
"Is it possible to MASK several tokens using Pegasus?"
] | 1,605 | 1,606 | 1,606 | CONTRIBUTOR | null | @mfuntowicz @patrickvonplaten
Hi,
I am using PEGASUS (`google/pegasus-large`).
I would like to fill in the masked sentence of a document, i.e. try the pretraining task, but I can't find the mask token.
Steps to reproduce the behavior (a runnable version is sketched below):
1. `from transformers import PegasusTokenizer`
2. `tok = PegasusTokenizer.from_pretrained("google/pegasus-large")`
3. `tok.mask_token`
The output is "Using mask_token, but it is not set yet."
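A runnable version of the steps above (a minimal sketch; it assumes network access to download `google/pegasus-large`):
```python
from transformers import PegasusTokenizer

tok = PegasusTokenizer.from_pretrained("google/pegasus-large")
# Accessing the attribute logs "Using mask_token, but it is not set yet."
# and the attribute itself is None.
print(tok.mask_token)
```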
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8594/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8593/comments | https://api.github.com/repos/huggingface/transformers/issues/8593/events | https://github.com/huggingface/transformers/pull/8593 | 744,785,532 | MDExOlB1bGxSZXF1ZXN0NTIyNDUxODQy | 8,593 | Fix missing space in unavailable PyTorch/TensorFlow warning | {
"login": "mipo57",
"id": 22924102,
"node_id": "MDQ6VXNlcjIyOTI0MTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/22924102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mipo57",
"html_url": "https://github.com/mipo57",
"followers_url": "https://api.github.com/users/mipo57/followers",
"following_url": "https://api.github.com/users/mipo57/following{/other_user}",
"gists_url": "https://api.github.com/users/mipo57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mipo57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mipo57/subscriptions",
"organizations_url": "https://api.github.com/users/mipo57/orgs",
"repos_url": "https://api.github.com/users/mipo57/repos",
"events_url": "https://api.github.com/users/mipo57/events{/privacy}",
"received_events_url": "https://api.github.com/users/mipo57/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
Fixes missing space in unavailable PyTorch/TensorFlow warning
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8593/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8593",
"html_url": "https://github.com/huggingface/transformers/pull/8593",
"diff_url": "https://github.com/huggingface/transformers/pull/8593.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8593.patch",
"merged_at": 1605712166000
} |
https://api.github.com/repos/huggingface/transformers/issues/8592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8592/comments | https://api.github.com/repos/huggingface/transformers/issues/8592/events | https://github.com/huggingface/transformers/issues/8592 | 744,742,425 | MDU6SXNzdWU3NDQ3NDI0MjU= | 8,592 | Improving performance results for BERT | {
"login": "Stimmot",
"id": 29411999,
"node_id": "MDQ6VXNlcjI5NDExOTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/29411999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stimmot",
"html_url": "https://github.com/Stimmot",
"followers_url": "https://api.github.com/users/Stimmot/followers",
"following_url": "https://api.github.com/users/Stimmot/following{/other_user}",
"gists_url": "https://api.github.com/users/Stimmot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stimmot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stimmot/subscriptions",
"organizations_url": "https://api.github.com/users/Stimmot/orgs",
"repos_url": "https://api.github.com/users/Stimmot/repos",
"events_url": "https://api.github.com/users/Stimmot/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stimmot/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,605 | 1,605 | 1,605 | NONE | null | I'm using the bert-base-german-cased model to perform token classification with custom NER labels on a dataset of German court documents. I have 11 labels in total (including the O label), which, however, are not tagged in BIO form. I'm letting the model train and evaluate on an NVIDIA GeForce GTX Titan X.
But despite the good resources and the model, which was actually pretrained on German judicial documents, the results are rather lacking.
```
precision recall f1-score support
Date 0.87 0.99 0.93 407
Schadensbetrag 0.77 0.58 0.66 112
Delikt 0.59 0.50 0.54 44
Gestaendnis_ja 0.60 0.71 0.65 21
Vorstrafe_nein 0.00 0.00 0.00 6
Strafe_Gesamtfreiheitsstrafe_Dauer 0.76 0.91 0.83 35
Strafe_Gesamtsatz_Betrag 0.42 0.52 0.46 25
Strafe_Gesamtsatz_Dauer 0.52 0.82 0.64 28
Strafe_Tatbestand 0.30 0.29 0.30 283
micro avg 0.65 0.68 0.66 961
macro avg 0.54 0.59 0.56 961
weighted avg 0.64 0.68 0.66 961
```
What could be some steps to improve these results?
Perhaps it's the low data count for some of the labels, or that the labels often are not single tokens but text spans of multiple tokens?
I would be glad for any hints from more experienced users. I can also share data or other files if they are relevant.
This is my config file:
```
{
"data_dir": "./Data",
"labels": "./Data/labels.txt",
"model_name_or_path": "bert-base-german-cased",
"output_dir": "./Data/Models",
"task_type": "NER",
"max_seq_length": 180,
"num_train_epochs": 6,
"per_device_train_batch_size": 48,
"seed": 7,
"fp16": true,
"do_train": true,
"do_predict": true,
"do_eval": true
}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8592/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8591/comments | https://api.github.com/repos/huggingface/transformers/issues/8591/events | https://github.com/huggingface/transformers/pull/8591 | 744,717,465 | MDExOlB1bGxSZXF1ZXN0NTIyMzk0MjM4 | 8,591 | Fix init for MT5 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
Fix the init; the config should be imported outside of tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8591/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8591",
"html_url": "https://github.com/huggingface/transformers/pull/8591",
"diff_url": "https://github.com/huggingface/transformers/pull/8591.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8591.patch",
"merged_at": 1605621133000
} |
https://api.github.com/repos/huggingface/transformers/issues/8590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8590/comments | https://api.github.com/repos/huggingface/transformers/issues/8590/events | https://github.com/huggingface/transformers/issues/8590 | 744,710,669 | MDU6SXNzdWU3NDQ3MTA2Njk= | 8,590 | Cannot train model from scratch using `run_mlm.py`. | {
"login": "GuillemGSubies",
"id": 37592763,
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillemGSubies",
"html_url": "https://github.com/GuillemGSubies",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Mmm, that is weird as `None` is the default for that argument. Will investigate this when I'm finished with v4 stuff, thanks for flagging!"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | Looks like the trainer does not like when it gets a `None`, so when we train from scratch, there is a `None` in this `if` and crashes:
https://github.com/huggingface/transformers/blob/a6cf9ca00b74a8b2244421a6101b83d8cf43cd6b/examples/language-modeling/run_mlm.py#L357
I solved it by deleting that line, but I guess that could affect other use cases.
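For reference, a gentler fix than deleting the line might be a `None` guard. This is only a sketch, assuming the `if` on the linked line boils down to an `os.path.isdir` check on `model_args.model_name_or_path` (which is `None` when training from scratch); the names come from the script's own scope:
```python
import os

# Hypothetical guard: only treat the argument as a checkpoint path when it is
# actually set and points at a directory; otherwise start from scratch.
model_path = (
    model_args.model_name_or_path
    if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path)
    else None
)
trainer.train(model_path=model_path)
```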
To reproduce, call `run_mlm.py` this way (there may be a simpler repro, but this should be enough):
```
python run_mlm.py \
--model_type bert \
--train_file ./data/oscar_1000.txt \
--validation_file ./data/oscar_1000_valid.txt \
--output_dir testing_model \
--tokenizer_name bert-base-spanish-wwm-cased \
--overwrite_output_dir \
--do_train \
--do_eval \
--evaluation_strategy steps \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--max_steps 500 \
--save_steps 2000 \
--save_total_limit 15 \
--overwrite_cache \
--max_seq_length 512 \
--eval_accumulation_steps 10 \
--logging_steps 1000 \
```
The dataset I'm using probably isn't relevant, so any corpus should do.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8590/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8589/comments | https://api.github.com/repos/huggingface/transformers/issues/8589/events | https://github.com/huggingface/transformers/pull/8589 | 744,658,841 | MDExOlB1bGxSZXF1ZXN0NTIyMzQ0NTg4 | 8,589 | [MT5] More docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @sgugger for notification."
] | 1,605 | 1,605 | 1,605 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
^^I knew that I forgot something with the docs...
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8589/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8589",
"html_url": "https://github.com/huggingface/transformers/pull/8589",
"diff_url": "https://github.com/huggingface/transformers/pull/8589.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8589.patch",
"merged_at": 1605613677000
} |
https://api.github.com/repos/huggingface/transformers/issues/8588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8588/comments | https://api.github.com/repos/huggingface/transformers/issues/8588/events | https://github.com/huggingface/transformers/issues/8588 | 744,653,620 | MDU6SXNzdWU3NDQ2NTM2MjA= | 8,588 | Hosting and online deployment of a transformer chatbot (built with huggingface library) | {
"login": "AlexTS1980",
"id": 8479872,
"node_id": "MDQ6VXNlcjg0Nzk4NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8479872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexTS1980",
"html_url": "https://github.com/AlexTS1980",
"followers_url": "https://api.github.com/users/AlexTS1980/followers",
"following_url": "https://api.github.com/users/AlexTS1980/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexTS1980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexTS1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexTS1980/subscriptions",
"organizations_url": "https://api.github.com/users/AlexTS1980/orgs",
"repos_url": "https://api.github.com/users/AlexTS1980/repos",
"events_url": "https://api.github.com/users/AlexTS1980/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexTS1980/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,605 | 1,605 | 1,605 | NONE | null | I'm building a chatbot using BERT for a startup company. At some point it will be deployed online. It turns out, most chatbot hosting services actually want to sell you a chatbot rather than host the one you developed, which is obviously not an option for us, especially since the solution is open source (PyTorch + Hugging Face).
I would like to know of a hosting service that 1) accepts a custom-built chatbot solution, 2) can accommodate up to 500 concurrent online users, and 3) does not charge exorbitant prices. We can't buy a solution, or even augment an existing one, because it is a specific requirement of the project funding (research rather than development).
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8588/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8587/comments | https://api.github.com/repos/huggingface/transformers/issues/8587/events | https://github.com/huggingface/transformers/issues/8587 | 744,648,407 | MDU6SXNzdWU3NDQ2NDg0MDc= | 8,587 | The Albert tokenizer file cannot download automatically and the official Albert tokenizer file is wrong, I cannot use it. | {
"login": "probe2",
"id": 27693280,
"node_id": "MDQ6VXNlcjI3NjkzMjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/27693280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/probe2",
"html_url": "https://github.com/probe2",
"followers_url": "https://api.github.com/users/probe2/followers",
"following_url": "https://api.github.com/users/probe2/following{/other_user}",
"gists_url": "https://api.github.com/users/probe2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/probe2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/probe2/subscriptions",
"organizations_url": "https://api.github.com/users/probe2/orgs",
"repos_url": "https://api.github.com/users/probe2/repos",
"events_url": "https://api.github.com/users/probe2/events{/privacy}",
"received_events_url": "https://api.github.com/users/probe2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you mind following and filing in the issue template?\r\nThanks",
"I cannot load the offiline vocab as well:\r\nRuntimeError: Internal: C:\\projects\\sentencepiece\\src\\sentencepiece_processor.cc(824) [model_proto->ParseFromArray(serialized.data(), serialized.size())]\r\nbut it can run for me to download the vocab automatically.\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | NONE | null | Has anybody met the same problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8587/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8586/comments | https://api.github.com/repos/huggingface/transformers/issues/8586/events | https://github.com/huggingface/transformers/pull/8586 | 744,630,901 | MDExOlB1bGxSZXF1ZXN0NTIyMzIxMTAz | 8,586 | Tokenizers: ability to load from model subfolder | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | MEMBER | null | Should fix #8447 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8586/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8586",
"html_url": "https://github.com/huggingface/transformers/pull/8586",
"diff_url": "https://github.com/huggingface/transformers/pull/8586.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8586.patch",
"merged_at": 1605621525000
} |
https://api.github.com/repos/huggingface/transformers/issues/8585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8585/comments | https://api.github.com/repos/huggingface/transformers/issues/8585/events | https://github.com/huggingface/transformers/pull/8585 | 744,619,061 | MDExOlB1bGxSZXF1ZXN0NTIyMzExMzky | 8,585 | Fix rag finetuning + add finetuning test | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq \r\n\r\nHi, I tried to execute finetune.py on two GPUs. It mainly fails with the following error. But when I run with the single GPU it works. I have also attached a screen shot.\r\n\r\n\r\n\r\n**RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:575] Connection closed by peer [130.216.209.142]:55728**\r\n\r\n\r\n\r\n\r\n",
"What command did you run exactly ?",
"> What command did you run exactly ?\r\n\r\n`python examples/rag/finetune.py --data_dir ./examples/rag/test_data/dummy_seq2seq --output_dir ./examples/rag/outputs --model_name_or_path facebook/rag-token-base --model_type rag_sequence --do_train --do_predict --n_val -1 --val_check_interval 0.25 --train_batch_size 1 --eval_batch_size 1 --max_source_length 128 --max_target_length 25 --val_max_target_length 25 --test_max_target_length 25 --label_smoothing 0.1 --dropout 0.1 --attention_dropout 0.1 --weight_decay 0.001 --adam_epsilon 1e-08 --max_grad_norm 0.1 --lr_scheduler polynomial --learning_rate 3e-05 --num_train_epochs 100 --warmup_steps 500 --gradient_accumulation_steps 1 --index_name custom --passages_path ./examples/rag/data/my_knowledge_dataset --index_path ./examples/rag/data/my_knowledge_dataset_hnsw_index.faiss --gpus 2\r\n`",
"Does changing the port with `--distributed_port 8888` help in your case ?",
"It says, \r\n`finetune.py: error: unrecognized arguments: --distributed-port 8888`\r\n",
"I tried with `--distributed-port 8888` still gives the same error. \r\nbtw my torch version is **Version: 1.7.0+cu110**\r\n\r\n\r\n",
"What's your pytorch lightning version ?\r\n(also sorry I misspelled distributed-port)",
"> pytorch lightning\r\n\r\n**Version: 1.0.4**\r\n",
"@lhoestq \n\nHi just wanted to know .. did you managed to run the finetune.sh script\nwithout any errors.\n\n",
"Yes I have no issue on my side. The finetuning test works fine too.\r\nCould you try to update pytorch lightning and see if it fixes your issue ?\r\nLet me know if you manage to fix it",
"Did you try with custom index ?\n\nOn Fri, Nov 20, 2020, 22:43 Quentin Lhoest <[email protected]> wrote:\n\n> Yes I have no issue on my side. The finetuning test works fine too.\n> Could you try to update pytorch lightning and see if it fixes your issue ?\n> Let me know if you manage to fix it\n>\n> β\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8585#issuecomment-731061351>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGRRHXMLZB2KC5WTSJ3SQY25LANCNFSM4TYMOBIQ>\n> .\n>\n",
"Let me test right now",
"Can you also send me your pytorch and tranformers versions.\n\n\n\n\nOn Fri, Nov 20, 2020, 22:49 Shamane Siriwardhana <[email protected]> wrote:\n\n> Did you try with custom index ?\n>\n> On Fri, Nov 20, 2020, 22:43 Quentin Lhoest <[email protected]>\n> wrote:\n>\n>> Yes I have no issue on my side. The finetuning test works fine too.\n>> Could you try to update pytorch lightning and see if it fixes your issue ?\n>> Let me know if you manage to fix it\n>>\n>> β\n>> You are receiving this because you commented.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/transformers/pull/8585#issuecomment-731061351>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/AEA4FGRRHXMLZB2KC5WTSJ3SQY25LANCNFSM4TYMOBIQ>\n>> .\n>>\n>\n",
"Awesome I managed to reproduce your issue using the custom index :)\r\nI will investigate\r\nAnd I'm using pytorch 1.7.0 cu11 and transformers with the latest changes from master and this PR",
"Perfect. I feel it is some tensor issue that happens in the validation\nsanity check.\n\nOn Fri, Nov 20, 2020, 23:02 Quentin Lhoest <[email protected]> wrote:\n\n> Awesome I managed to reproduce your issue using the custom index :)\n> I will investigate\n> And I'm using pytorch 1.7.0 cu11 and transformers with the latest changes\n> from master and this PR\n>\n> β\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8585#issuecomment-731071657>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGXTIZMVU76CGOR3FZTSQY5EHANCNFSM4TYMOBIQ>\n> .\n>\n",
"Indeed it was an issue with the precision of the tensor. I'm fixing it",
"Amazing. If you can .. once you fixed this can you please add finetuning\ncommands for a custom dataset in the read me. The current one is not with\nall commands. I really think this RAG framework will be a game changer if\nwe can apply cor other tasks:)\n\nOn Fri, Nov 20, 2020, 23:18 Quentin Lhoest <[email protected]> wrote:\n\n> Indeed it was an issue with the precision of the tensor. I'm fixing it\n>\n> β\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8585#issuecomment-731080487>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGWGLNSO3ITQGNLLUYTSQY7BFANCNFSM4TYMOBIQ>\n> .\n>\n",
"Ok I fixed the tensor issue and updated the readme\r\n\r\nI also had to rename some the examples files of RAG to avoid collisions with the files of the seq2seq examples. The name collision broke the CI tests with failed imports. \r\n\r\nI did:\r\n```\r\nexamples/rag/utils.py -> exmaples/rag/utils_rag.py\r\nexamples/rag/callbacks.py -> exmaples/rag/callbacks_rag.py\r\nexamples/rag/finetune.py -> exmaples/rag/finetune_rag.py\r\nexamples/rag/finetune.sh -> exmaples/rag/finetune_rag.sh\r\n```\r\n\r\nAll tests are green now :)",
"Thanks a lot for your quick response.\n\nOn Sat, Nov 21, 2020, 00:15 Quentin Lhoest <[email protected]> wrote:\n\n> Ok I fixed the tensor issue and updated the readme\n>\n> I also had to rename some the examples files of RAG to avoid collisions\n> with the files of the seq2seq examples. The name collision broke the CI\n> tests with failed imports.\n>\n> I did:\n>\n> examples/rag/utils.py -> exmaples/rag/utils_rag.py\n> examples/rag/callbacks.py -> exmaples/rag/callbacks_rag.py\n> examples/rag/finetune.py -> exmaples/rag/finetune_rag.py\n> examples/rag/finetune.sh -> exmaples/rag/finetune_rag.sh\n>\n> All tests are green now :)\n>\n> β\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8585#issuecomment-731108211>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGTA626DKBJ5S7JXLFDSQZFVNANCNFSM4TYMOBIQ>\n> .\n>\n",
"I took your comment into account @patrickvonplaten \r\nThe only thing I didn't change is the return_dict=True - I kept them to avoid playing with tuples indices.",
"@lhoestq hello, thank you for this amazing feature.\r\n\r\nwhen I try to create my custom dataset I receveing this error:\r\n\r\n`2020-12-16 00:48:44.645715: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nINFO:__main__:Step 1 - Create the dataset\r\nUsing custom data configuration default\r\nReusing dataset csv (/root/.cache/huggingface/datasets/csv/default-d44cf86c96b535d8/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)\r\nLoading cached processed dataset at /root/.cache/huggingface/datasets/csv/default-d44cf86c96b535d8/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2/cache-ad363af188e673b0.arrow\r\n100% 1/1 [00:00<00:00, 10.92ba/s]\r\nINFO:__main__:Step 2 - Index the dataset\r\nTraceback (most recent call last):\r\n File \"examples/rag/use_own_knowledge_dataset.py\", line 200, in <module>\r\n main(rag_example_args, processing_args, index_hnsw_args)\r\n File \"examples/rag/use_own_knowledge_dataset.py\", line 102, in main\r\n index = faiss.IndexHNSWFlat(index_hnsw_args.d, index_hnsw_args.m, faiss.METRIC_INNER_PRODUCT)\r\n File \"/usr/local/lib/python3.6/dist-packages/faiss/swigfaiss.py\", line 3746, in __init__\r\n this = _swigfaiss.new_IndexHNSWFlat(*args)\r\nNotImplementedError: Wrong number or type of arguments for overloaded function 'new_IndexHNSWFlat'.\r\n Possible C/C++ prototypes are:\r\n faiss::IndexHNSWFlat::IndexHNSWFlat()\r\n faiss::IndexHNSWFlat::IndexHNSWFlat(int,int)`\r\n\r\n\r\nI'm using Google Colab to test this - https://colab.research.google.com/drive/1Cjj18rYmeS0Bueis_KPB5Wbybl-JNDLL?usp=sharing\r\n\r\n",
"Well, i didn't install the specific dependencies you defined. excuse me.\r\n\r\nSolved running - !pip install -r /transformers/examples/rag/requirements.txt\r\n\r\nAt least it is registered if someone has the same problem. haha"
] | 1,605 | 1,608 | 1,605 | MEMBER | null | Following #7715 we need more test coverage of the RAG example scripts.
In this PR I'm adding a test for the finetuning script.
The test includes a single gpu test and a multi gpu test. Both are passing.
As mentioned in #7816 and #8345, there were some errors in the script that I had to fix.
Moreover, since @amogkam has been working on the finetuning script as well to integrate Ray, I made sure to reduce possible conflicts with his PR #8583. More precisely, I'm reusing the CustomAccel class, which will allow initializing either the PyTorch distributed retrieval or the Ray distributed retrieval.
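For context, a rough sketch of what such a shared accelerator hook could look like (the class layout, hook signature, and helper functions are hypothetical and assume pytorch-lightning ~1.0's `DDPAccelerator`; see the PR diff for the real code):
```python
from pytorch_lightning.accelerators.ddp_accelerator import DDPAccelerator

def init_ray_retrieval(module):
    """Hypothetical helper: Ray actors already hold the index, so just connect."""

def init_pytorch_retrieval(module):
    """Hypothetical helper: rank 0 loads the index and serves the other workers."""

class CustomAccel(DDPAccelerator):
    # Initialize retrieval right after the DDP connection is up, so that both
    # backends can plug into the same place.
    def init_ddp_connection(self, global_rank, world_size, *args, **kwargs):
        super().init_ddp_connection(global_rank, world_size, *args, **kwargs)
        module = self.trainer.model
        if getattr(module.hparams, "distributed_retriever", "pytorch") == "ray":
            init_ray_retrieval(module)
        else:
            init_pytorch_retrieval(module)
```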
Also fixed a bug in RAG forward pass (see #8665 )
Fix #7816
Fix #8345 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8585/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8585/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8585",
"html_url": "https://github.com/huggingface/transformers/pull/8585",
"diff_url": "https://github.com/huggingface/transformers/pull/8585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8585.patch",
"merged_at": 1605895504000
} |
https://api.github.com/repos/huggingface/transformers/issues/8584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8584/comments | https://api.github.com/repos/huggingface/transformers/issues/8584/events | https://github.com/huggingface/transformers/pull/8584 | 744,613,687 | MDExOlB1bGxSZXF1ZXN0NTIyMzA2ODM2 | 8,584 | Add output control for TFGPT2LMHeadModel | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymusise/followers",
"following_url": "https://api.github.com/users/mymusise/following{/other_user}",
"gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mymusise/subscriptions",
"organizations_url": "https://api.github.com/users/mymusise/orgs",
"repos_url": "https://api.github.com/users/mymusise/repos",
"events_url": "https://api.github.com/users/mymusise/events{/privacy}",
"received_events_url": "https://api.github.com/users/mymusise/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nApparently you are rewriting the `output_attentions` and `output_hidden_states` parameters.",
"Thanks for your advice. Yes, so it's a bad idea to only return `last_hidden_state` like this?",
"I mean, why you are not using the `output_attentions` or `output_hidden_states` parameters that basically do what you are proposing?",
"I'm sorry to make you confused, maybe my mastery of gpt2 is not enough. Could you please tell me where `output_attentions` and \r\n`output_hidden_states`(except last) will be used during training and text generation, I can't find the answer from the source code directly. \r\n\r\nI mean, if `output_attentions` or `output_hidden_states` is not using basically, it's better to hide it by default? And call something like `return_multilayer=True` to return those if we need it.\r\n\r\nThanks in advance. :)",
"No worries! Everything is nicely explained in the [documentation](https://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel):\r\n\r\n```\r\noutput_attentions (bool, optional) β Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.\r\noutput_hidden_states (bool, optional) β Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.\r\n```",
"Thank @jplu , this RP exactly rewriting the `output_attentions` and `output_hidden_states` parameters of `GPT2Config`, close it.\r\n\r\n",
"You're welcome, happy to help :)"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
Hi, guys. This pull request adds a parameter to `GPT2Config` and `TFGPT2LMHeadModel` to control whether multi-layer logits are returned during training and fine-tuning.
Related issue #8503
Before this change, we needed to assign a 'special' loss and metric:
```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

class MyMetric(tf.keras.metrics.SparseCategoricalAccuracy):
    def update_state(self, y_true, y_pred, sample_weight=None):
        # The model returns the output of all layers by default,
        # so skip everything that is not the final logits tensor.
        if len(y_pred.shape) > 3:
            return 0
        return super().update_state(y_true, y_pred, sample_weight)

model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = MyMetric("accuracy")
model.compile(
    optimizer=optimizer,  # an optimizer defined elsewhere
    loss=[loss, *[None] * model.config.n_layer],  # hard to guess without an example
    metrics=[metric],
)
```
After this change, we can train or fine-tune easily:
```python
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2", return_dict=False)
model.compile(
    optimizer=optimizer,  # an optimizer defined elsewhere
    loss=model.compute_loss,
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy("accuracy")],
)
```
GPT2: @LysandreJik, @patrickvonplaten
tensorflow: @jplu
Hope this helps. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8584/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8584",
"html_url": "https://github.com/huggingface/transformers/pull/8584",
"diff_url": "https://github.com/huggingface/transformers/pull/8584.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8584.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8583/comments | https://api.github.com/repos/huggingface/transformers/issues/8583/events | https://github.com/huggingface/transformers/pull/8583 | 744,505,582 | MDExOlB1bGxSZXF1ZXN0NTIyMjE4OTM2 | 8,583 | [RAG] Add Ray implementation for distributed retrieval | {
"login": "amogkam",
"id": 8068268,
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amogkam",
"html_url": "https://github.com/amogkam",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"repos_url": "https://api.github.com/users/amogkam/repos",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi ! This looks awesome :)\r\nI was about to create a PR that fixes the init_ddp_connection in finetune.py and that adds a test script to make sure the finetuning script works as expected. With minimal changes on my side I can easily reduce conflicts between our two changes to finetune.py (I guess I'll just reuse the CustomAccelerator). Does that sound good to you ?",
"@lhoestq yes that sounds great!",
"@amogkam \r\n\r\n\r\nHi seems like finetune.sh is not working in multi gpu training.",
"@shamanez Hmm that's odd, I was able to get this working on a single node with 4 GPUs. Do you have a stack trace?",
"> @shamanez Hmm that's odd, I was able to get this working on a single node with 4 GPUs. Do you have a stack trace?\r\n\r\nI tried to run without RAY , but with pytorch DDP. Here is the error I got.\r\n\r\n\r\n\r\n\r\n\r\n\r\nThis is the command-line argument I used, \r\n`python examples/rag/finetune.py --data_dir ./examples/rag/test_data/dummy_seq2seq --output_dir ./examples/rag/outputs --model_name_or_path facebook/rag-token-base --model_type rag_sequence --do_train --do_predict --n_val -1 --val_check_interval 0.25 --train_batch_size 1 --eval_batch_size 1 --max_source_length 128 --max_target_length 25 --val_max_target_length 25 --test_max_target_length 25 --label_smoothing 0.1 --dropout 0.1 --attention_dropout 0.1 --weight_decay 0.001 --adam_epsilon 1e-08 --max_grad_norm 0.1 --lr_scheduler polynomial --learning_rate 3e-05 --num_train_epochs 100 --warmup_steps 500 --gradient_accumulation_steps 1 --index_name custom --passages_path ./examples/rag/data/my_knowledge_dataset --index_path ./examples/rag/data/my_knowledge_dataset_hnsw_index.faiss --gpus 2\r\n`\r\n",
"I think this has to do with using a custom index which I didn't try out. Can you try with just the wiki_dpr index to confirm? It seems like the training workers are expecting a tensor of type float, but a tensor of type double is being sent instead. I think the fix might just be to set an explicit target_type in line 137 of distributed_pytorch_retriever.py- @lhoestq does this seem right?",
"Ok I will also give it a try\n\nOn Fri, Nov 20, 2020, 15:30 Amog Kamsetty <[email protected]> wrote:\n\n> I think this has to do with using a custom index which I didn't try out.\n> Can you try with just the wiki_dpr index to confirm? It seems like the\n> training workers are expecting a tensor of type float, but a tensor of type\n> double is being sent instead. I think the fix might just be to set an\n> explicit target_type in line 137 of distributed_pytorch_retriever.py-\n> @lhoestq <https://github.com/lhoestq> does this seem right?\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/8583#issuecomment-730805824>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGVO47GE3WRYB2VWLLTSQXIFZANCNFSM4TYGZUWA>\n> .\n>\n",
"@lhoestq now that https://github.com/huggingface/transformers/pull/8585 is merged, should I mark this PR as ready for review?",
"Yes indeed ! Feel free to set this PR to ready for review\r\n\r\nAlso it looks like the CI fails because of a failed import of `ray`.\r\nTo fix that you need to move the import of ray into the test functions decorated with `require_distributed_retrieval `.\r\n\r\nYou should also add `ray` to the test dependencies, or the test will simply be ignored",
"@lhoestq CI is passing now!",
"@lhoestq any ETA on when this PR can get reviewed? Thanks",
"Hi ! I've already started to look at the changes and it looks pretty good so far :) I'll finish my review soon, probably tomorrow",
"Awesome thanks!",
"@sgugger it would be cool if you could review as this changes some things in the trainer/integrations.",
"Hi @lhoestq @sgugger I addressed the feedback you guys gave. Do you think you can take another look? Thanks",
"Hi there, sorry for the delay. Could you close and reopen your PR? Because of a bad force-push on our side, the diff has become unreadable. Also, the examples folder has slightly changed structure, so you might need to move the folder.\r\n\r\nPing me, @patrickvonplaten and @LysandreJik on the PR you reopen and we'll look at it quickly.",
"Opened a new one here: https://github.com/huggingface/transformers/pull/9197!"
] | 1,605 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
This PR adds a new distributed retriever implementation for RAG built on Ray, as an alternative to the current retriever implementation that uses torch.distributed. With Ray it's possible to load the index on multiple processes instead of just the rank 0 training worker, allowing fine-tuning to scale out better to multiple GPUs, and also allowing the index to potentially fit in GPU memory. This also removes a core dependency on Pytorch, allowing a Tensorflow implementation of `finetune.py`.
This PR also makes changes to support finetune.py with Pytorch Lightning >v1.0.
A benchmark of Pytorch distributed retrieval vs. Ray distributed retrieval:

## Implementation Details
In the current Pytorch retrieval implementation, the index is loaded once, on just the rank 0 training worker. Training worker 0 gathers the inputs from all other workers, performs the index lookup, and scatters the results back to the other workers.

With the Ray implementation, the index is loaded on *separate* processes, which are referred to as Ray actors. Each training worker randomly selects a retrieval actor to query for documents and Ray handles all the communication between the processes. Because the index can be loaded in *multiple* processes, training can scale up since no synchronization needs to happen for the index lookup.

Note that Pytorch Lightning is still handling distributed *training*, but Ray manages distributed *retrieval*. Because PTL calls the entire training script under the hood multiple times, we have to use Ray's named actors feature (https://docs.ray.io/en/master/actors.html?highlight=named%20actors#named-actors), allowing the retrieval actors to be referenced by all training processes. The use of named actors is necessitated by how PTL handles distributed training, and a simpler approach could probably be used for a Tensorflow implementation.
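Here is a minimal, self-contained sketch of the named detached-actor pattern (class and method names are illustrative only, not the PR's actual API):
```python
import ray
import numpy as np

ray.init(ignore_reinit_error=True)

@ray.remote
class RetrievalActor:
    def __init__(self, num_docs=100):
        # Stand-in for loading the passages index in a separate process.
        self.doc_ids = np.arange(num_docs)

    def retrieve(self, question_embeddings, n_docs):
        # Stand-in for a real nearest-neighbor index lookup.
        return self.doc_ids[:n_docs]

# Create named, detached actors once; PTL re-runs the training script in every
# process, and each process can then look up the same actors by name.
for i in range(2):
    RetrievalActor.options(name=f"retrieval_worker_{i}", lifetime="detached").remote()

worker = ray.get_actor("retrieval_worker_0")  # named-actor lookup from any process
doc_ids = ray.get(worker.retrieve.remote(np.random.randn(4, 768), n_docs=5))
```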
## Testing Strategy
Unit tests were added to `test_distributed_retriever.py`. Note that the local Ray cluster for the tests had to be started with `local_mode=True` because the test file modifies `sys.path` and these changes are not propagated to remote processes. See https://stackoverflow.com/questions/54338013/parallel-import-a-python-file-from-sibling-folder for more info.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8583/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8583/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8583",
"html_url": "https://github.com/huggingface/transformers/pull/8583",
"diff_url": "https://github.com/huggingface/transformers/pull/8583.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8583.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8582/comments | https://api.github.com/repos/huggingface/transformers/issues/8582/events | https://github.com/huggingface/transformers/pull/8582 | 744,406,841 | MDExOlB1bGxSZXF1ZXN0NTIyMTM2MTU4 | 8,582 | [examples tests] tests that are fine on multi-gpu | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"CI failure is unrelated.",
"Hi @stas00! We're currently strongly focusing on the v4.0.0 release. Your proposal here is definitely interesting, and we can take a look at doing this when we do a large test dive, since we have a few things to fix:\r\n- The multi-gpu tests you mention\r\n- The tests for torch v1.3.0+\r\n- The current slow tests are not all passing\r\n- The model templates tasting framework desperately needs an improvement.\r\n\r\nI'll come back to you next week regarding this once everything has settled down. Thanks for your patience! ",
"It doesn't sound like my comment drove the point across - the problem is that now most examples tests are skipped if the developer has 2+ cards, resulting in commits that break master https://github.com/huggingface/transformers/pull/8073 - this is a problem for the pending release obviously.\r\n\r\nI originally suggested to explicitly enable tests that have been ported to be run on multi-gpu CI, but since it was decided to run them all and instead to disable them on masse and then re-enable them in groups as they get ported, but nothing has been done about it, we now have the situation where a huge part of the test suite is practically disabled.\r\n\r\nPlease let me know whether you still think this is secondary...",
"If I understand correctly, you're saying that now examples tests are skipped if the environment has 2+ cards. So the CI still runs the example tests on single-gpu, correct?\r\n\r\nAnd I believe we never had a multi-gpu ci that worked for examples, so we're essentially at the same point we've always been: examples are not tested in a multi-gpu setting. Is that correct? If it is, how is that a problem for the pending release, as examples are independent of releases? ",
"You are correct with regards to CIs, yes. But who is now impacted is the developers. If @thomwolf run the tests before committing https://github.com/huggingface/transformers/pull/8073 he would have noticed the failure. My fantasy is that he has 2+ cards, did run the tests, which they got skipped and hence he happily committed the change unware that it had issues. Now seq2seq is broken. \r\n\r\nI was surprised that this happened, scrambled to run the tests and promptly discovered that they were skipped since I have 2 cards. Once I forced 1-gpu with CUDA_VISIBLE_DEVICES the tests failed. That's why I urgently made this PR and encouraged we bring that incomplete effort to completion.\r\n\r\nIt's possible that my fantasy was incorrect, and this is just the situation were we rely on CIs to catch the errors. But since CIs don't run gpu, merges look good.\r\n\r\n(and nothing personal against @thomwolf, on the contrary I feel that I'm to blame that I disabled the tests on multi-gpu and didn't see it through for them to be re-enabled)"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | This PR removes `require_torch_non_multi_gpu_but_fix_me` for those tests I know should work. Well, they used to work before some recent PRs https://github.com/huggingface/transformers/pull/8073#issuecomment-728677627 - my suspicion is that the problem wasn't detected before it was merged because these tests were skipped, as the dev was probably on a multi-gpu machine. So we need to sort this issue out sooner rather than later.
Currently a bunch of `examples` tests get skipped for devs with multi-gpus - it's probably a good idea for each dev with more than 1 gpu to take a sub-folder under `examples` and test which tests can be run on multi-gpu and which can't.
1. ensure you're on a multi-gpu machine - don't take this assignment if you don't have 2+ gpus
2. pick a test file and remove `@require_torch_non_multi_gpu_but_fix_me` from its tests if any (see the before/after sketch after this list)
3. don't forget RUN_SLOW=1 - probably just always add it so tests aren't missed.
4. run tests
5. if any tests fail either fix those or restore `@require_torch_non_multi_gpu_but_fix_me` for the failing tests. if the failure is not multi-gpu related please file an issue.
6. go to 2 until no more test files are left.
7. send a PR with changes for this sub-folder you chose.
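For context, a hypothetical before/after for step 2. The test name `test_finetune_trainer` is invented for illustration, and the import path for the decorator is an assumption; only the decorator name itself comes from this repo:

```python
from transformers.testing_utils import require_torch_non_multi_gpu_but_fix_me  # path assumed

# before: the test is skipped on any machine with 2+ GPUs
@require_torch_non_multi_gpu_but_fix_me
def test_finetune_trainer(self):
    ...

# after: decorator dropped once the test is verified to pass on 2+ GPUs
def test_finetune_trainer(self):
    ...
```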
As a reminder, the initial skip-all-examples-tests-on-multi-gpu change was added so that we could start running multi-gpu tests on the github runner CI.
If meanwhile you have a problem with important examples tests skipping, please force a single gpu mode with:
```
CUDA_VISIBLE_DEVICES=0 pytest
```
I trust @LysandreJik or someone else can coordinate this effort?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8582/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8582",
"html_url": "https://github.com/huggingface/transformers/pull/8582",
"diff_url": "https://github.com/huggingface/transformers/pull/8582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8582.patch",
"merged_at": 1605639642000
} |
https://api.github.com/repos/huggingface/transformers/issues/8581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8581/comments | https://api.github.com/repos/huggingface/transformers/issues/8581/events | https://github.com/huggingface/transformers/pull/8581 | 744,405,569 | MDExOlB1bGxSZXF1ZXN0NTIyMTM1MTE5 | 8,581 | Add early stopping callback to pytorch trainer | {
"login": "cbrochtrup",
"id": 24980609,
"node_id": "MDQ6VXNlcjI0OTgwNjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/24980609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cbrochtrup",
"html_url": "https://github.com/cbrochtrup",
"followers_url": "https://api.github.com/users/cbrochtrup/followers",
"following_url": "https://api.github.com/users/cbrochtrup/following{/other_user}",
"gists_url": "https://api.github.com/users/cbrochtrup/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cbrochtrup/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cbrochtrup/subscriptions",
"organizations_url": "https://api.github.com/users/cbrochtrup/orgs",
"repos_url": "https://api.github.com/users/cbrochtrup/repos",
"events_url": "https://api.github.com/users/cbrochtrup/events{/privacy}",
"received_events_url": "https://api.github.com/users/cbrochtrup/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there. Thanks your PR! When I was designing the callbacks, it was to be them small independent pieces of code. I would prefer if early stopping had its own callback that the user would then choose to add or not. Do you think you could amend your PR in that direction?",
"Hello, thank you for your feedback! I will amend the PR in that direction.\r\n\r\nCould you clarify which pieces of early stopping should be in `TrainerState` and which should be in the callback? I'm grappling with the similarities between `best_model_checkpoint` and early stopping attributes.\r\n\r\n```python\r\nclass EarlyStoppingCallback(TrainerCallback):\r\n best_metric: Optional[float] = None # maybe not this\r\n best_model_checkpoint: Optional[str] = None # maybe not this either\r\n early_stopping_patience: int = None\r\n early_stopping_patience_counter: int = None\r\n\r\n def on_evaluate(self, args, state, control, **kwargs):\r\n # Keep track of patience\r\n # End training via early stopping\r\n if (\r\n self.early_stopping_patience is not None\r\n and self.early_sotpping_patience_counter >= self.early_stopping_patience\r\n ):\r\n control.should_training_stop = True\r\n```",
"Or do you mean I just move the if statement I added to its own callback and keep `TrainerState` as is?",
"The `TrainerState` shouldn't change, so the callback you are writing above sounds fine, without the arguments marked with `# maybe not this`, which should already be in the `TrainerState`, I think.\r\nDoes that sound right to you?",
"That makes sense. I think [this](https://github.com/huggingface/transformers/blob/e812753736f475b62849ef0e72149306408c1395/src/transformers/trainer.py#L910) block of code (to line 933) could be a callback because it's all about the best metric. Then users could customize the best model calculations. Is that desirable?\r\n\r\nIf you think that's out of scope I'll keep the early stopping callback simple and separate from the best metric calculation.",
"I had put it in `Trainer` because I thought multiple callbacks could need it and it's used by `load_best_model_at_end` which is kind of a core feature.",
"Sounds good, you know best! I keep `load_best_model_at_end` in the `Trainer` and push up an early stopping callback sometime this week.",
"Thanks for your thorough and affable review!"
] | 1,605 | 1,606 | 1,606 | NONE | null | # Summary
Address the PyTorch half of https://github.com/huggingface/transformers/issues/4894 by adding early stopping patience and a minimum threshold by which metrics must improve to prevent early stopping. I piggybacked heavily off of https://github.com/huggingface/transformers/pull/7431/ since the two functions are very similar.
Since https://github.com/huggingface/transformers/pull/4186 seems to be abandoned and behind master, I figured I'd take a crack at this.
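For reference, a rough usage sketch under the interface discussed above. The callback's argument names follow the sketch in the comments; `train_ds`/`eval_ds` are assumed pre-tokenized datasets, and the exact `TrainingArguments` fields may differ by version:

```python
from transformers import (
    AutoModelForSequenceClassification,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",      # patience is checked on each evaluation
    eval_steps=500,
    load_best_model_at_end=True,      # Trainer then tracks best_metric for us
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # assumed placeholder
    eval_dataset=eval_ds,    # assumed placeholder
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```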
## Who can review?
Anyone! But @julien-c and @sgugger seem the most appropriate. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8581/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8581",
"html_url": "https://github.com/huggingface/transformers/pull/8581",
"diff_url": "https://github.com/huggingface/transformers/pull/8581.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8581.patch",
"merged_at": 1606170336000
} |
https://api.github.com/repos/huggingface/transformers/issues/8580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8580/comments | https://api.github.com/repos/huggingface/transformers/issues/8580/events | https://github.com/huggingface/transformers/pull/8580 | 744,356,394 | MDExOlB1bGxSZXF1ZXN0NTIyMDk2NjQ1 | 8,580 | Reorganize repo | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Got approval offline from @LysandreJik so merging. If anything needs fixing, we'll do so tomorrow morning!",
"The GPU tests throw this error: \r\n\r\n```\r\n2020-11-17 11:24:06.538757: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File \"/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/bin/transformers-cli\", line 5, in <module>\r\n from transformers.commands.transformers_cli import main\r\n File \"/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/lib/python3.7/site-packages/transformers/__init__.py\", line 34, in <module>\r\n from .data import (\r\n File \"/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/lib/python3.7/site-packages/transformers/data/__init__.py\", line 6, in <module>\r\n from .processors import (\r\n File \"/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/lib/python3.7/site-packages/transformers/data/processors/__init__.py\", line 6, in <module>\r\n from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features\r\n File \"/home/hf/actions-runner_transformers/_work/transformers/transformers/.env/lib/python3.7/site-packages/transformers/data/processors/squad.py\", line 10, in <module>\r\n from ...models.bert.tokenization_bert import whitespace_tokenize\r\nModuleNotFoundError: No module named 'transformers.models'\r\n```\r\n\r\nhttps://github.com/huggingface/transformers/runs/1411825179\r\n\r\nNot 100% sure what's going on there",
"Normal import throw this error aswell:\r\n\r\n```\r\nSuccessfully installed sacremoses-0.0.43 tokenizers-0.9.4 transformers-4.0.0.dev0\r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\n<ipython-input-2-11bca228d6cd> in <module>()\r\n 1 get_ipython().system(' pip install ./transformers/')\r\n----> 2 import transformers\r\n\r\n3 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/__init__.py in <module>()\r\n 32 \r\n 33 # Data\r\n---> 34 from .data import (\r\n 35 DataProcessor,\r\n 36 InputExample,\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/data/__init__.py in <module>()\r\n 4 \r\n 5 from .metrics import glue_compute_metrics, xnli_compute_metrics\r\n----> 6 from .processors import (\r\n 7 DataProcessor,\r\n 8 InputExample,\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/data/processors/__init__.py in <module>()\r\n 4 \r\n 5 from .glue import glue_convert_examples_to_features, glue_output_modes, glue_processors, glue_tasks_num_labels\r\n----> 6 from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features\r\n 7 from .utils import DataProcessor, InputExample, InputFeatures, SingleSentenceClassificationProcessor\r\n 8 from .xnli import xnli_output_modes, xnli_processors, xnli_tasks_num_labels\r\n\r\n/usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py in <module>()\r\n 8 \r\n 9 from ...file_utils import is_tf_available, is_torch_available\r\n---> 10 from ...models.bert.tokenization_bert import whitespace_tokenize\r\n 11 from ...tokenization_utils_base import BatchEncoding, PreTrainedTokenizerBase, TruncationStrategy\r\n 12 from ...utils import logging\r\n\r\nModuleNotFoundError: No module named 'transformers.models'\r\n```\r\n",
"I think maybe the `models` folder needs a `__init__.py`",
"I'm always confused by why Python lets it works in some cases and sometimes not. Will push an init directly on master."
] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
This PR reorganizes the structure of the repository by putting all model-related files (modeling, configuration, tokenization, conversion) in subfolders.
**Breaking change**
This breaks any import for model/config/tokenizer objects that is not done at the top level:
```python
from transformers import BertModel
```
works but not
```python
from transformers.modeling_bert import BertModel
```
It needs to be updated to
```python
from transformers.models.bert.modeling_bert import BertModel
```
Internally, after this PR is merged the following will need fixing:
- the check_repo script does not properly find the models now (so it does not check whether they are properly tested/documented/in auto classes)
- the new model template needs to be updated
The other internal scripts work as usual. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8580/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8580/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8580",
"html_url": "https://github.com/huggingface/transformers/pull/8580",
"diff_url": "https://github.com/huggingface/transformers/pull/8580.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8580.patch",
"merged_at": 1605581023000
} |
https://api.github.com/repos/huggingface/transformers/issues/8579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8579/comments | https://api.github.com/repos/huggingface/transformers/issues/8579/events | https://github.com/huggingface/transformers/pull/8579 | 744,314,101 | MDExOlB1bGxSZXF1ZXN0NTIyMDYzNDMx | 8,579 | Create README.md | {
"login": "fajri91",
"id": 12390478,
"node_id": "MDQ6VXNlcjEyMzkwNDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/12390478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fajri91",
"html_url": "https://github.com/fajri91",
"followers_url": "https://api.github.com/users/fajri91/followers",
"following_url": "https://api.github.com/users/fajri91/following{/other_user}",
"gists_url": "https://api.github.com/users/fajri91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fajri91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fajri91/subscriptions",
"organizations_url": "https://api.github.com/users/fajri91/orgs",
"repos_url": "https://api.github.com/users/fajri91/repos",
"events_url": "https://api.github.com/users/fajri91/events{/privacy}",
"received_events_url": "https://api.github.com/users/fajri91/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks for sharing, looks really cool!"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
Adding model card for `indolem/indobert-base-uncased`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8579/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8579",
"html_url": "https://github.com/huggingface/transformers/pull/8579",
"diff_url": "https://github.com/huggingface/transformers/pull/8579.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8579.patch",
"merged_at": 1605602210000
} |
https://api.github.com/repos/huggingface/transformers/issues/8578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8578/comments | https://api.github.com/repos/huggingface/transformers/issues/8578/events | https://github.com/huggingface/transformers/issues/8578 | 744,308,305 | MDU6SXNzdWU3NDQzMDgzMDU= | 8,578 | Error: Asking to return token_type_ids while setting add_special_tokens to False | {
"login": "chrk623",
"id": 31636428,
"node_id": "MDQ6VXNlcjMxNjM2NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/31636428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrk623",
"html_url": "https://github.com/chrk623",
"followers_url": "https://api.github.com/users/chrk623/followers",
"following_url": "https://api.github.com/users/chrk623/following{/other_user}",
"gists_url": "https://api.github.com/users/chrk623/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrk623/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrk623/subscriptions",
"organizations_url": "https://api.github.com/users/chrk623/orgs",
"repos_url": "https://api.github.com/users/chrk623/repos",
"events_url": "https://api.github.com/users/chrk623/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrk623/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, indeed this seems to be an error. Fixing this in #8854 if it is so."
] | 1,605 | 1,607 | 1,607 | NONE | null | In the code below, while using `batch_encode_plus`, I get an error saying that I asked ***"to return token_type_ids while setting add_special_tokens to False"***, even though `return_token_type_ids` is `False`. I'm not sure if I am comprehending the error message correctly. For this specific case, I also found that the behaviour between `BertTokenizer` and `BertTokenizerFast` is different.
```python
from transformers import AutoTokenizer
t = AutoTokenizer.from_pretrained("bert-base-uncased", add_special_tokens=False)
txt = ["huggingface", "transformers"]
t.batch_encode_plus(
txt,
add_special_tokens=False,
return_attention_mask=False,
return_token_type_ids=False,
)
```
Error message
```
Traceback (most recent call last):
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-30-226d8ccb1c88>", line 5, in <module>
return_token_type_ids=False,
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2399, in batch_encode_plus
**kwargs,
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 567, in _batch_encode_plus
verbose=verbose,
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 617, in _batch_prepare_for_model
verbose=verbose,
File "/Users/charcohui/Desktop/env/mlenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2681, in prepare_for_model
"Asking to return token_type_ids while setting add_special_tokens to False "
ValueError: Asking to return token_type_ids while setting add_special_tokens to False results in an undefined behavior. Please set add_special_tokens to True or set return_token_type_ids to None.
```
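Following the error message's own suggestion, an untested workaround for the slow tokenizer might be to pass `return_token_type_ids=None` (the value the message asks for) and drop the unwanted key afterwards (a sketch, not verified against this version):

```python
enc = t.batch_encode_plus(
    txt,
    add_special_tokens=False,
    return_attention_mask=False,
    return_token_type_ids=None,  # None (not False) sidesteps the check, per the error text
)
enc.pop("token_type_ids", None)  # discard the ids we did not ask for
```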
However, it works when `BertTokenizerFast` is used:
```python
tfast = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True, add_special_tokens=False)
tfast.batch_encode_plus(
txt,
add_special_tokens=False,
return_attention_mask=False,
return_token_type_ids=False,
)
# {'input_ids': [[17662, 12172], [19081]]}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8578/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8578/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8577/comments | https://api.github.com/repos/huggingface/transformers/issues/8577/events | https://github.com/huggingface/transformers/pull/8577 | 744,155,637 | MDExOlB1bGxSZXF1ZXN0NTIxOTI4NjA2 | 8,577 | [examples/seq2seq] fix PL deprecation warning | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | This PR fixes a PL deprecation warning; since we require PL-1.0.4, this is a safe switch.
Reference: https://pytorch-lightning.readthedocs.io/en/latest/generated/pytorch_lightning.callbacks.ModelCheckpoint.html#pytorch_lightning.callbacks.ModelCheckpoint.params.filepath
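The deprecated pattern and its replacement, roughly, per the linked PL docs (the paths and the `{step_count}` template are illustrative only):

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# deprecated since PL 1.0:
# checkpoint_callback = ModelCheckpoint(filepath="checkpoints/{step_count}")

# replacement:
checkpoint_callback = ModelCheckpoint(dirpath="checkpoints", filename="{step_count}")
```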
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8577/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8577",
"html_url": "https://github.com/huggingface/transformers/pull/8577",
"diff_url": "https://github.com/huggingface/transformers/pull/8577.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8577.patch",
"merged_at": 1605818765000
} |
https://api.github.com/repos/huggingface/transformers/issues/8576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8576/comments | https://api.github.com/repos/huggingface/transformers/issues/8576/events | https://github.com/huggingface/transformers/issues/8576 | 744,086,930 | MDU6SXNzdWU3NDQwODY5MzA= | 8,576 | run_pl_glue.py token_type_id error on fresh install | {
"login": "ethanjperez",
"id": 6402205,
"node_id": "MDQ6VXNlcjY0MDIyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanjperez",
"html_url": "https://github.com/ethanjperez",
"followers_url": "https://api.github.com/users/ethanjperez/followers",
"following_url": "https://api.github.com/users/ethanjperez/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions",
"organizations_url": "https://api.github.com/users/ethanjperez/orgs",
"repos_url": "https://api.github.com/users/ethanjperez/repos",
"events_url": "https://api.github.com/users/ethanjperez/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanjperez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Ah, indeed. Out of curiosity, have you tried using `run_glue.py` instead of `run_pl_glue.py`? Does the error still happen?",
"Closed by mistake.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | CONTRIBUTOR | null | If you try to run the run_glue.py example with e.g. roberta from a fresh install of the library, it errors out with the following error:
```
Traceback (most recent call last):
File "examples/text-classification/run_pl_glue.py", line 228, in <module>
main()
File "examples/text-classification/run_pl_glue.py", line 218, in main
trainer = generic_train(model, args)
File "/home/ejp416/complexity/examples/lightning_base.py", line 400, in generic_train
trainer.fit(model)
File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1072, in fit
model = self.accelerator_backend.setup(model)
File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py", line 34, in setup
self.trainer.call_setup_hook(model)
File "/home/ejp416/miniconda3/envs/complexity2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1444, in call_setup_hook
model.setup(stage_name)
File "/home/ejp416/complexity/examples/lightning_base.py", line 175, in setup
self.train_loader = self.get_dataloader("train", self.hparams.train_batch_size, shuffle=True)
File "examples/text-classification/run_pl_glue.py", line 98, in get_dataloader
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
```
To reproduce, run e.g.
```
python examples/text-classification/run_pl_glue.py --model_name_or_path roberta-base --output_dir ./blah --task mnli --do_train --data_dir ./glue_data/MNLI --max_seq_length 512 --max_grad_norm inf --adam_epsilon 1e-6 --weight_decay 0.1 --num_train_epochs 2 --train_batch_size 2 --eval_batch_size 4 --learning_rate 1e-5 --seed 12 --gradient_accumulation_steps 8 --gpus 1
```
The reason is that roberta does not have segment ids, so token_type_ids is set to None in the data loader, causing torch.tensor to freak out. There's probably a more elegant long-term solution for this, but it's easy to fix by just setting it to 0 instead of None for those models, as in the sketch below.
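A minimal, untested sketch of that fix for the loader in `run_pl_glue.py` (the `[0] * args.max_seq_length` fallback is an assumption, mirroring what the linked fixes did):

```python
all_token_type_ids = torch.tensor(
    [
        f.token_type_ids if f.token_type_ids is not None else [0] * args.max_seq_length
        for f in features
    ],
    dtype=torch.long,
)
```

This issue has come up before in other scripts: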
- https://github.com/huggingface/transformers/pull/3801
- https://github.com/huggingface/transformers/issues/3810 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8576/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8575/comments | https://api.github.com/repos/huggingface/transformers/issues/8575/events | https://github.com/huggingface/transformers/issues/8575 | 744,081,188 | MDU6SXNzdWU3NDQwODExODg= | 8,575 | REALM checkpoints to pytorch checkpoints | {
"login": "mchari",
"id": 30506151,
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchari",
"html_url": "https://github.com/mchari",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"repos_url": "https://api.github.com/users/mchari/repos",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | NONE | null | Will it be possible to convert checkpoints in https://console.cloud.google.com/storage/browser/realm-data/cc_news_pretrained to pytorch implementations?
I am not very familiar with Tensorflow....
I tried using convert_bert_original_tf_checkpoint_to_pytorch.py but I am not sure I am invoking it correctly.
Here is how I am invoking the python script:
```
python convert_bert_original_tf_checkpoint_to_pytorch.py \
  --tf_checkpoint_path="./cc_news_pretrained/embedder/encoded/" \
  --bert_config_file="./bert_config.json" \
  --pytorch_dump_path="./pytorch"
```
I am using tensorflow 2.3.0.
The checkpoint file has the following entries, which are probably internal developer files(?):
```
model_checkpoint_path: "/cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded/encoded.ckpt"
all_model_checkpoint_paths: "/cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded/encoded.ckpt"
```
1) When I set tf_checkpoint_path to the directory containing the checkpoint, I get the error:
```
tensorflow.python.framework.errors_impl.NotFoundError: /cns/li-d/home/lumiere/public/models/gatoatigrado/ner-with-dates/10923195/1-active_losses=mlm_loss/export/temp/1580364602/retriever/encoded; No such file or directory
```
2) When I set tf_checkpoint_path to the checkpoint file encode.ckpt.data-00000-of-00001, I get the error:
```
env/lib/python3.6/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 95, in NewCheckpointReader
    return CheckpointReader(compat.as_bytes(filepattern))
RuntimeError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./cc_news_pretrained/embedder/encoded/encode.ckpt.data-00000-of-00001
```
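For what it's worth, TF checkpoint readers expect the checkpoint *prefix*, not the directory and not an individual `.data-*` shard. A hedged sanity check (assuming the local shards are named `encoded.ckpt.index` / `encoded.ckpt.data-*`, matching the paths listed in the checkpoint file above):

```python
import tensorflow as tf

# Pass the prefix; load_checkpoint fails on a directory or a .data shard.
reader = tf.train.load_checkpoint("./cc_news_pretrained/embedder/encoded/encoded.ckpt")
print(sorted(reader.get_variable_to_shape_map())[:5])  # first few variable names
```

If that reads cleanly, the same prefix is presumably what `--tf_checkpoint_path` should point at.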
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8575/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8575/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8574/comments | https://api.github.com/repos/huggingface/transformers/issues/8574/events | https://github.com/huggingface/transformers/issues/8574 | 744,081,079 | MDU6SXNzdWU3NDQwODEwNzk= | 8,574 | [Improvements] Enable `git push` without requiring login when uploading model | {
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes. Two ways:\r\n\r\n- you can add your credentials when cloning, e.g. `git clone https://username:[email protected]/user/model_id` (you can also use your token instead of your password if you know it).\r\n- but the cleaner way is to have a credential store set-up in git (https://git-scm.com/book/en/v2/Git-Tools-Credential-Storage) which means git will only ask you once. GitHub's doc is good as well (and not specific to GitHub): https://docs.github.com/en/free-pro-team@latest/github/using-git/caching-your-github-credentials-in-git\r\n\r\nLet me know if this helps",
"Thanks! I'll try this out and reopen this issue if any problem arises!",
"Is there any documentation for \"How to set the credential helper for Hugging Face URLs without changing the helper used for all other repos?\"",
"you can set the helper for a specific repo clone, but I'm not sure if you can set the helper system-wide for specific hosts"
] | 1,605 | 1,661 | 1,605 | CONTRIBUTOR | null | # 🚀 Feature request
Is there a way to allow pushing a new model version with `git push` without requiring a login each time?
`transformers-cli` will auto-generate a key for the user. Is there a way to leverage this key?
## Motivation
GitHub allows you to push in VSCode without explicitly logging in each time.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8574/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8573/comments | https://api.github.com/repos/huggingface/transformers/issues/8573/events | https://github.com/huggingface/transformers/issues/8573 | 744,062,987 | MDU6SXNzdWU3NDQwNjI5ODc= | 8,573 | Bert that receives text triplet as an input | {
"login": "MiriamFarber",
"id": 35157503,
"node_id": "MDQ6VXNlcjM1MTU3NTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/35157503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MiriamFarber",
"html_url": "https://github.com/MiriamFarber",
"followers_url": "https://api.github.com/users/MiriamFarber/followers",
"following_url": "https://api.github.com/users/MiriamFarber/following{/other_user}",
"gists_url": "https://api.github.com/users/MiriamFarber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MiriamFarber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MiriamFarber/subscriptions",
"organizations_url": "https://api.github.com/users/MiriamFarber/orgs",
"repos_url": "https://api.github.com/users/MiriamFarber/repos",
"events_url": "https://api.github.com/users/MiriamFarber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MiriamFarber/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Unfortunately we do not have such a method, as it would imply having an opinionated approach on how to do it. The `encode_plus` method follows what was done during each model's training, since our aim is to replicate as closely as possible the original approach.\r\n\r\nI would recommend encoding your sequences by placing your own special tokens, and by specifying `add_special_tokens=False` so that the encoding method does not add these tokens automatically. Let me know if you want a code sample showing this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@LysandreJik If you can share a code sample that would be great!\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I wonder why `text_pair` gets a special treatment. I guess datasets which have two strings as \"input\" (like glue's mnli with premise and hypothesis) are more common, but aren't there datasets with three or more input strings?"
] | 1,605 | 1,652 | 1,614 | NONE | null | I would like to train bert on triplets of texts as inputs (for example, something like (context, question, answer)). encode_plus (https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode_plus) receives either a single text, or a text_pair. Is there a way to use it with triplets? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8572/comments | https://api.github.com/repos/huggingface/transformers/issues/8572/events | https://github.com/huggingface/transformers/pull/8572 | 744,056,234 | MDExOlB1bGxSZXF1ZXN0NTIxODQzMTA0 | 8,572 | Fix mixed precision issue for GPT2 | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
This PR makes GPT2 trainable and runnable in any mixed precision.
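As a rough usage sketch of what this enables (assumptions: the TF 2.3-era `experimental` mixed-precision policy API and the standard `gpt2` checkpoint, neither of which is specific to this PR):

```python
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

# Enable Keras mixed precision before building the model.
tf.keras.mixed_precision.experimental.set_policy("mixed_float16")

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello world", return_tensors="tf")
outputs = model(inputs)  # should now run without float16/float32 mismatches
```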
Fixes #8559
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8572/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8572",
"html_url": "https://github.com/huggingface/transformers/pull/8572",
"diff_url": "https://github.com/huggingface/transformers/pull/8572.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8572.patch",
"merged_at": 1605555859000
} |
https://api.github.com/repos/huggingface/transformers/issues/8571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8571/comments | https://api.github.com/repos/huggingface/transformers/issues/8571/events | https://github.com/huggingface/transformers/pull/8571 | 744,000,638 | MDExOlB1bGxSZXF1ZXN0NTIxNzk3MTY3 | 8,571 | [WIP] Move BERT and ALBERT | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
This is a PoC for the reorg of the models on just two files. I didn't update every reference everywhere, just enough to give a sense of what it will render.
"url": "https://api.github.com/repos/huggingface/transformers/issues/8571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8571/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8571",
"html_url": "https://github.com/huggingface/transformers/pull/8571",
"diff_url": "https://github.com/huggingface/transformers/pull/8571.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8571.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8570/comments | https://api.github.com/repos/huggingface/transformers/issues/8570/events | https://github.com/huggingface/transformers/issues/8570 | 743,981,148 | MDU6SXNzdWU3NDM5ODExNDg= | 8,570 | [T5] Add open / closed book answering models | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Done: https://huggingface.co/models?search=ssm"
] | 1,605 | 1,607 | 1,607 | MEMBER | null | # π New model addition
## Model description
Check here: https://github.com/google-research/google-research/tree/master/t5_closed_book_qa
<!-- Important information -->
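For reference, a minimal usage sketch once checkpoints are up (the checkpoint name below is an assumption based on the released `ssm` models; see the hub search linked in the comments):
```python
# Hedged sketch: assumes one of the released closed-book QA checkpoints,
# e.g. "google/t5-small-ssm-nq" (name is an assumption, check the hub).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/t5-small-ssm-nq")
model = T5ForConditionalGeneration.from_pretrained("google/t5-small-ssm-nq")

# Closed-book QA: the model answers from its parameters alone, without a context passage.
input_ids = tokenizer("who invented the telephone?", return_tensors="pt").input_ids
answer_ids = model.generate(input_ids)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```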
## Open source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8570/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8570/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8569/comments | https://api.github.com/repos/huggingface/transformers/issues/8569/events | https://github.com/huggingface/transformers/issues/8569 | 743,947,097 | MDU6SXNzdWU3NDM5NDcwOTc= | 8,569 | After 2nd iteration: always same result when training in a loop | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Here the same code as a .py script with transformers 3.4.0 (instead of 3.5.1), CUDA 11.0 and torch 1.7.0\r\n\r\n\r\n",
"Same problem also happening with torch 1.6.0+cu101 - see here: https://colab.research.google.com/drive/1-9jcAsFGf79kpiCSQa4QaBdXvZQstE_n?usp=sharing\r\n\r\n\r\n",
"Same Bug also with torch==1.5.1+cu101 and transformers==3.3.1\r\n\r\nsee here: https://colab.research.google.com/drive/1HqMOQ_UzGI4z_OWOd0qFsfdVpKfLbHaM?usp=sharing",
"Ok I think I found the reason why this happens.\r\n\r\nWhen `TrainingArguments` is called with default params a seed is not just used to init the network but also set everywhere else.\r\nThis is done by calling this: https://github.com/huggingface/transformers/blob/c89bdfbe720bc8f41c7dc6db5473a2cb0955f224/src/transformers/trainer_utils.py#L29\r\n\r\nAfter that point everything that should be randomized in the next iteration like shuffling the training data and so on is not random anymore but dependent on the seed. Since this seed is set again and again to the same value everything seems to be deterministic and not random anymore. The reson why the 1st iteration has a different value is because the seed is set relativly late after data is loaded.\r\n\r\nClosing this...\r\n",
"Well - I was thinking about this:\r\n\r\nThere is a default param \"hidden\" in a class like `TrainingArguments`. This default value sets all seeds of all libs like python default, numpy, torch to a fixed value. This can be the source of some nasty problems because it removes randomness where nobody would have ever suspected it.\r\n\r\nBest positive example are the `sklearn.model_selection` classes (functions). Most of them accept a seed but they just use the seed internaly and do not set it in every module you could think of.\r\n\r\nI am not sure if I would call this a bug but at least it is an issue. That's why I reopen.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,605 | 1,611 | 1,611 | CONTRIBUTOR | null | I train a BERT model on a binary classification task. I do the training 4 times in a row, with the same train and validation data and with the exact same hyperparameters. I use the default seed in TrainingArguments and set no other seeds.
The results of the 2nd, 3rd and 4th iterations are 100% the same. The result of the 1st run is unique. This behavior is 100% reproducible.
It is not clear why this is the case, since I set a seed and work with the same data.
PIP Libs I use (no conda libs):
- sentencepiece-0.1.91
- tokenizers-0.9.3
- transformers-3.5.1
- torch-1.7.0+cu101
Screenshot:

Colab to reproduce:
https://colab.research.google.com/drive/1HjQ7p5AlDY9kcWo7uSXzteh7x1ustEdN?usp=sharing
I already did some fancy stuff after each iteration:
```python
del trainer
del training_args
del model
del config
del train_result
del tokenizer
del labeled_dataset_test
del labeled_dataset_train
gc.collect()
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
gc.collect()
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
```
But that does not help. Can someone please help me here and clarify what's up?
PS: I do not think that a seed can be the reason. If a seed were the reason, the 1st and 2nd runs would also be the same, and AFAIK GPU training is not 100% deterministic, so a small difference would still remain.
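For anyone hitting this, here is a minimal sketch of a workaround following from the seed discussion in the comments (giving each iteration its own seed is my own suggestion, not an official recommendation):
```python
# Minimal sketch (assumption: transformers 3.x API). TrainingArguments applies
# its `seed` globally via set_seed(), so re-running with the default seed makes
# every later iteration deterministic. Passing a different seed per run restores
# variation between runs.
from transformers import TrainingArguments

for i in range(4):
    training_args = TrainingArguments(
        output_dir=f"./results_run_{i}",
        seed=42 + i,  # distinct seed per iteration
    )
    # ... construct the model/Trainer with training_args and train as before ...
```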
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8569/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8568/comments | https://api.github.com/repos/huggingface/transformers/issues/8568/events | https://github.com/huggingface/transformers/pull/8568 | 743,918,603 | MDExOlB1bGxSZXF1ZXN0NTIxNzI5MzQ4 | 8,568 | Update version to v4.0.0-dev | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
This PR puts the right version in the setup and `__init__` and adds one last step to the release guide in the setup.
Fixes #8566 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8568/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8568/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8568",
"html_url": "https://github.com/huggingface/transformers/pull/8568",
"diff_url": "https://github.com/huggingface/transformers/pull/8568.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8568.patch",
"merged_at": 1605540080000
} |
https://api.github.com/repos/huggingface/transformers/issues/8567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8567/comments | https://api.github.com/repos/huggingface/transformers/issues/8567/events | https://github.com/huggingface/transformers/pull/8567 | 743,915,033 | MDExOlB1bGxSZXF1ZXN0NTIxNzI2MzU1 | 8,567 | [XLNet] Fix mems behavior | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> The new arguments are perfectly acceptable, but I think it would be nice to keep backwards compatibility. It should be easy enough by checking if there's a `use_cache` value in the configuration and setting it to the value of `use_mems`.\r\n\r\nYes, true! The problem is that `use_cache` is always in `configuration_utils.py` so we can't really add something that checks whether `use_cache` is in the config (It's always there) and sets `use_mems` to the same value (It'd always do it)...\r\n\r\nThinking more about it `use_cache` should maybe not even be in `configuration_utils.py` but just in all \"LMHead\" configs?! I could move `use_cache` to all individual config files. I think we could do this without breaking changes -> Wdyt? @LysandreJik @sgugger ?",
"I have mixed feelings about this, on one hand it makes sense to remove that argument which is very specific to a few models from the main `configuration_utils`, but on the other I fear the associated breaking changes.",
"I think your last proposal would be great @patrickvonplaten, moving `use_cache` to the appropriate configuration files. This way we could have this PR with no breaking changes, and a more robust `use_cache`!\r\n\r\nIf it is not possible to do the move without breaking change, then let's forget this and fix the mems behavior with the small breaking change for version 4. Would still like to prevent this, if possible.",
"## Update \r\n\r\n@LysandreJik @sgugger - `use_cache` is only used in the following modeling files (TF & PT):\r\n```\r\n- modeling_t5.py\r\n- modeling_bart.py (+ inherited)\r\n- modeling_fstm.py\r\n- modeling_prophetnet.py\r\n- modeling_gpt2.py\r\n- modeling_openai.py\r\n- modeling_xlnet.py\r\n- modeling_ctrl.py\r\n```\r\n\r\nTherefore, we can move `use_cache` to the respective configuration files and delete it from the general `configuration_utils.py` file.\r\nI cannot really think of a use case where this would lead to breaking changes. If *e.g.* a saved BERT config includes a `use_cache`, this parameter will still be set to the config: https://github.com/huggingface/transformers/blob/18c8cf000bed04ec03470270ec2fbd9d49cce5c4/src/transformers/configuration_utils.py#L235 and still remain unused as before => I don't think this is a problem. A lot of old configs have unused config parameters that are then just ignored... For all models that make use of `use_cache`, the `use_cache` param is now \"caught\" and saved directly in the model's config class. \r\n\r\n=> this allows us to have 0 breaking changes for this PR and is also cleaner IMO. ",
"## Update\r\n\r\nTo have a bit more clarity for this PR.\r\nThis PR mainly solved the issue that XLNet cannot be trained at the moment. It does so by depreciating `use_cache` and replacing it by `use_mems_eval` and `use_mems_train`.\r\nThe PR keeps 100% backward compatibility for the PyTorch model.\r\n\r\nThe TF XLNet model was previously not kept up-to-date with the PT model. This PR notably forgot to update the TF model: https://github.com/huggingface/transformers/pull/5770 when the behavior of the PT model was changed. \r\nThis PR brings the TF model up to date with the PT model, but does not keep 100% backward compatibility by completely removing the `use_cache` parameter from the models forward functions. I decided to not add depreciate the `use_cache` parameter because a) Before this PR `use_cache` was used incorrectly (as was corrected in #5770 but only for PT), thus b) bringing the TF model up to date with PT is breaking anyways and c) there are breaking changes for boolean inputs in the moment for TF. So in conclusion:\r\n\r\n- for PT: No breaking changes -> `use_cache` is depreciated both in the config and in the model forward's\r\n- for TF: One breaking change for TF that `use_cache` cannot be forwarded anymore to the models and has to be replaced by `use_mems`\r\n\r\nAt the same time this PR removes `use_cache` from `configuration_utils.py` which has no breaking changes except for models that don't use `use_cache` don't add an unused `use_cache` that defaults to `True` to their config anymore. But since none of those models is using `use_cache` this is hardly breaking.\r\n\r\n@LysandreJik - is that ok for you?",
"Thank you for clarifying, I think that's fine. We should add that to the v4.0.0 as well. \r\n\r\nThanks for taking care of it @patrickvonplaten "
] | 1,605 | 1,606 | 1,606 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7584
XLNet is arguably not the cleanest implementation, with a lot of config parameters flying around that all interact with each other in an overly complicated way: `use_cache`, `mem_len`, and `reuse_len`. Due to backward compatibility we cannot really get rid of those. I'd love to just completely get rid of `use_cache` and `reuse_len`, but this would require updating all configs, which is not possible...
First, this PR removes `use_cache` and replaces it with `use_mems_eval` => `use_mems_eval` decides whether the mems should be used in evaluation mode, and defaults to `True` so that the arguably most important model, `XLNetLMHeadModel`, keeps full backward compatibility at inference. `use_cache` is a confusing name IMO because it does not correspond to the `use_cache` we know from GPT2 (we had a longer discussion on this internally on Slack).
Issue #7584 shows that if there is one training batch that is smaller than the other batches in the train data, the training breaks. Also, as can be read in the linked issue, the authors don't use the memory mechanism for fine-tuning => therefore we add another param `use_mems_train`, which defaults to `False` so that training works by default.
If for some special reason the user wants to use the memory mechanism during fine-tuning, he/she has to make sure that the `batch_size` of all training batches is the same.
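A minimal sketch of the resulting usage (the flag names are as introduced in this PR; the checkpoint is just an example):
```python
# Sketch of the new flags that replace the deprecated `use_cache` for XLNet;
# values shown are the defaults described in this PR.
from transformers import XLNetConfig, XLNetLMHeadModel

config = XLNetConfig.from_pretrained("xlnet-base-cased")
config.use_mems_eval = True    # keep the memory mechanism at inference (backward compatible)
config.use_mems_train = False  # no mems during fine-tuning, so uneven batch sizes don't break
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased", config=config)
```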
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8567/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8567",
"html_url": "https://github.com/huggingface/transformers/pull/8567",
"diff_url": "https://github.com/huggingface/transformers/pull/8567.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8567.patch",
"merged_at": 1606341300000
} |
https://api.github.com/repos/huggingface/transformers/issues/8566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8566/comments | https://api.github.com/repos/huggingface/transformers/issues/8566/events | https://github.com/huggingface/transformers/issues/8566 | 743,904,449 | MDU6SXNzdWU3NDM5MDQ0NDk= | 8,566 | "setup.py" does not seem to have been updated for v3.5.1 | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found https://github.com/huggingface/transformers/commit/d5b3e56de5376aa85ef46e7f0325139d9e299a41 and it seems the related files are updated there.\r\nIf there is a reason (or rules for releases) not to merge the change into the master branch, I'm sorry for opening this issue that I haven't fully considered.",
"This was a hotfix on a branch which is why we didn't update master (mostly because we forgot).\r\nTo do things right, we'll actually put v4.0.0-dev since that's what master is right now :-)",
"@sgugger \r\nThank you for your quick and detailed response!\r\nNow I understood what is the reason.\r\nI appreciate your creating a new PR for this issue."
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | ## Environment info
I try
```
!git clone https://github.com/huggingface/transformers.git
%cd transformers/
!pip install -e .
```
on Colaboratory.
- `transformers` version: 3.5.0 <- this seems strange.
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
documentation: @sgugger
## Information
"setup.py" does not seem to have been updated for v3.5.1.
When I install transformers by `pip install -e .`, the version of transformers is shown as v3.5.0.
## To reproduce
Steps to reproduce the behavior:
I try
```
!git clone https://github.com/huggingface/transformers.git
%cd transformers/
!pip install -e .
```
on Colaboratory after v3.5.1 release.
Then,
```
import transformers
transformers.__version__
```
returns
```
'3.5.0'
```
## Expected behavior
The return of `transformers.__version__` is expected to be '3.5.1' now, if my understanding is not wrong.
Maybe, in https://github.com/huggingface/transformers/blob/afb50c663a5d5623906ead1e87481926467d59fa/setup.py#L120
'3.5.0' should be changed to '3.5.1'.
Is my understanding correct? Sorry if I misunderstand your intention.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8566/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8565/comments | https://api.github.com/repos/huggingface/transformers/issues/8565/events | https://github.com/huggingface/transformers/pull/8565 | 743,895,036 | MDExOlB1bGxSZXF1ZXN0NTIxNzEwMDMw | 8,565 | replace performance table with markdown | {
"login": "smanjil",
"id": 11598535,
"node_id": "MDQ6VXNlcjExNTk4NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/11598535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smanjil",
"html_url": "https://github.com/smanjil",
"followers_url": "https://api.github.com/users/smanjil/followers",
"following_url": "https://api.github.com/users/smanjil/following{/other_user}",
"gists_url": "https://api.github.com/users/smanjil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smanjil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smanjil/subscriptions",
"organizations_url": "https://api.github.com/users/smanjil/orgs",
"repos_url": "https://api.github.com/users/smanjil/repos",
"events_url": "https://api.github.com/users/smanjil/events{/privacy}",
"received_events_url": "https://api.github.com/users/smanjil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks! you can also add a sample input for the widget if you'd like (https://huggingface.co/docs#how-can-i-control-my-models-widgets-example-inputs)"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8565/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8565",
"html_url": "https://github.com/huggingface/transformers/pull/8565",
"diff_url": "https://github.com/huggingface/transformers/pull/8565.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8565.patch",
"merged_at": 1605723466000
} |
https://api.github.com/repos/huggingface/transformers/issues/8564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8564/comments | https://api.github.com/repos/huggingface/transformers/issues/8564/events | https://github.com/huggingface/transformers/pull/8564 | 743,895,012 | MDExOlB1bGxSZXF1ZXN0NTIxNzEwMDEx | 8,564 | Make BART more ONNX friendly | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I believe @mfuntowicz ran the slow tests on this one and there were no failures, but this PR is so cryptic I don't understand what's happening.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,651 | 1,614 | MEMBER | null | **Major concerns:**
1. ONNX complains about Python raw integer usage.
2. ONNX doesn't support boolean indexing with anything other than a vector. The code was using 2D tensor indices (batch, token).
**PR workarounds:**
1. Remove the call to `len(..)` and prefer the use of `.size(-1)`
2. Attempt to index the output tensors to retrieve only the **last** EOS token over the sequence axis through `.index_select(..)` | {
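For illustration, a self-contained toy sketch of the two rewrites (tensor names are hypothetical, and `gather` stands in for the `index_select`-style selection, not the actual BART internals):
```python
import torch

# toy stand-ins for the decoder output and the EOS mask (names are hypothetical)
hidden_states = torch.randn(2, 7, 16)            # (batch, seq, hidden)
eos_mask = torch.zeros(2, 7, dtype=torch.bool)
eos_mask[0, 6] = eos_mask[1, 4] = True

# 1) prefer .size(1) / .size(-1) over len(...) so the value stays part of the traced graph
seq_len = hidden_states.size(1)

# 2) avoid 2D boolean indexing: compute the last EOS position per sequence and gather it
positions = torch.arange(seq_len)
last_eos = torch.where(eos_mask, positions, torch.zeros_like(positions)).max(dim=1).values
index = last_eos.view(-1, 1, 1).expand(-1, 1, hidden_states.size(-1))
sentence_repr = hidden_states.gather(1, index).squeeze(1)  # (batch, hidden)
```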
"url": "https://api.github.com/repos/huggingface/transformers/issues/8564/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8564/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8564",
"html_url": "https://github.com/huggingface/transformers/pull/8564",
"diff_url": "https://github.com/huggingface/transformers/pull/8564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8564.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8563/comments | https://api.github.com/repos/huggingface/transformers/issues/8563/events | https://github.com/huggingface/transformers/issues/8563 | 743,805,806 | MDU6SXNzdWU3NDM4MDU4MDY= | 8,563 | Wrong model_max_length for BERTOverflow tokenizer | {
"login": "greav",
"id": 32651336,
"node_id": "MDQ6VXNlcjMyNjUxMzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/32651336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greav",
"html_url": "https://github.com/greav",
"followers_url": "https://api.github.com/users/greav/followers",
"following_url": "https://api.github.com/users/greav/following{/other_user}",
"gists_url": "https://api.github.com/users/greav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/greav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greav/subscriptions",
"organizations_url": "https://api.github.com/users/greav/orgs",
"repos_url": "https://api.github.com/users/greav/repos",
"events_url": "https://api.github.com/users/greav/events{/privacy}",
"received_events_url": "https://api.github.com/users/greav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Ah, this is an error, indeed. The uploader should have uploaded a `tokenizer_config.json` containing the model maximum length. I see it only contains the `do_lower_case` argument right now.\r\n\r\n@julien-c what should we do here? Should we update that configuration ourselves?\r\n\r\nAlso @thomwolf @julien-c, I feel like we should have sensible defaults for each model. This is a `BertModel` which currently only has access to absolute positional embeddings. It doesn't make sense to have an unlimited `model_max_length`. I think `BertTokenizer` and all its relatives should have a default of `512`.",
"@LysandreJik Yes, our policy here is that we fix the config ourselves and notify the model author (via email/GitHub mention/anything). Feel free to do it and link the resulting commit from here.\r\n\r\n (use your hf.co email/author name for your commit to be linked to your hf.co user profile)",
"Updated with [212cd3e](https://huggingface.co/jeniya/BERTOverflow/commit/212cd3ef9615e7292da65a49162e0beb0cd3d604) and sent an email.",
"see [`huggingface.co:212cd3e`](https://huggingface.co/jeniya/BERTOverflow/commit/212cd3ef9615e7292da65a49162e0beb0cd3d604)",
"I happen to get the same error for all of:\r\n\r\n```\r\ndmis-lab/biobert-base-cased-v1.1\r\ndmis-lab/biobert-large-cased-v1.1\r\nmicrosoft/BiomedNLP-PubMedBERT-base-uncased-abstract\r\nemilyalsentzer/Bio_ClinicalBERT\r\n```\r\n\r\nI understand that in user code I should just do a fallback to sensible default if I get that large value, but that seem like a problem not related to a single model.",
"This issue has been stale for 1 month."
] | 1,605 | 1,618 | 1,618 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: N
- Using distributed or parallel set-up in script?: N
## Information
Hi. I used [BERTOverflow](https://huggingface.co/jeniya/BERTOverflow) and found strange behavior for the tokenizer property `model_max_length`. It is equal to 1000000000000000019884624838656, although it should be 512.
## To reproduce
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('jeniya/BERTOverflow')
print(tokenizer.model_max_length)
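# Possible workaround until the hub config is fixed (512 is BERT's usual
# absolute-position limit; treating that value as an assumption here):
tokenizer.model_max_length = 512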
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8563/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8562/comments | https://api.github.com/repos/huggingface/transformers/issues/8562/events | https://github.com/huggingface/transformers/pull/8562 | 743,750,052 | MDExOlB1bGxSZXF1ZXN0NTIxNTkzOTYw | 8,562 | Clearer Model Versioning Example in Model Card | {
"login": "brandenchan",
"id": 33759007,
"node_id": "MDQ6VXNlcjMzNzU5MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brandenchan",
"html_url": "https://github.com/brandenchan",
"followers_url": "https://api.github.com/users/brandenchan/followers",
"following_url": "https://api.github.com/users/brandenchan/following{/other_user}",
"gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions",
"organizations_url": "https://api.github.com/users/brandenchan/orgs",
"repos_url": "https://api.github.com/users/brandenchan/repos",
"events_url": "https://api.github.com/users/brandenchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/brandenchan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | Clearer model card example | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8562/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8562",
"html_url": "https://github.com/huggingface/transformers/pull/8562",
"diff_url": "https://github.com/huggingface/transformers/pull/8562.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8562.patch",
"merged_at": 1605527950000
} |
https://api.github.com/repos/huggingface/transformers/issues/8561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8561/comments | https://api.github.com/repos/huggingface/transformers/issues/8561/events | https://github.com/huggingface/transformers/pull/8561 | 743,749,650 | MDExOlB1bGxSZXF1ZXN0NTIxNTkzNjEx | 8,561 | Reset loss to zero on logging in Trainer to avoid bfloat16 issues | {
"login": "bminixhofer",
"id": 13353204,
"node_id": "MDQ6VXNlcjEzMzUzMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/13353204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bminixhofer",
"html_url": "https://github.com/bminixhofer",
"followers_url": "https://api.github.com/users/bminixhofer/followers",
"following_url": "https://api.github.com/users/bminixhofer/following{/other_user}",
"gists_url": "https://api.github.com/users/bminixhofer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bminixhofer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bminixhofer/subscriptions",
"organizations_url": "https://api.github.com/users/bminixhofer/orgs",
"repos_url": "https://api.github.com/users/bminixhofer/repos",
"events_url": "https://api.github.com/users/bminixhofer/events{/privacy}",
"received_events_url": "https://api.github.com/users/bminixhofer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, we don't want to do `loss.item()` at each step since it slows down a lot the training on TPUs. We can cast this scalar to Float if it overflows. Not super familiar with `bfloat16` on TPUs so we may have overlooked something there (on mixed precision for GPUs, the loss is always a float).",
"Oh ok, I hadn't considered that. I'll take a closer look then to check what exactly causes the overflow.\r\n\r\nAnd just curious, do you have any metrics regarding slowdown from `loss.item()` on TPUs? I'm currently using the code from this PR and see good TPU utilization and training time.",
"Alright, this might be a problem with XLA? I'm not at all familiar with how TPUs and XLA work internally but here is a minimal example of the problem:\r\n\r\n```bfloat_demo.py```\r\n```py\r\nimport torch\r\n\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\n\r\n\r\ndevice = xm.xla_device()\r\n\r\nloss_tensor = torch.tensor(0.0).to(device)\r\nloss_float = 0.0\r\n\r\nprint(f\"loss_tensor is on device {loss_tensor.device} with dtype {loss_tensor.dtype}\")\r\n\r\nto_add = torch.tensor(10.0).to(device)\r\n\r\nfor i in range(10):\r\n for _ in range(100):\r\n loss_tensor += to_add\r\n loss_float += to_add.item()\r\n\r\n print(loss_tensor, loss_float)\r\n```\r\n\r\nRunning this regularly:\r\n\r\n```\r\n(torch-xla-1.7) bminixhofer@gerpt2:~$ python bfloat_demo.py\r\n0.0 is on device xla:1 with dtype torch.float32\r\ntensor(1000., device='xla:1') 1000.0\r\ntensor(2000., device='xla:1') 2000.0\r\ntensor(3000., device='xla:1') 3000.0\r\ntensor(4000., device='xla:1') 4000.0\r\ntensor(5000., device='xla:1') 5000.0\r\ntensor(6000., device='xla:1') 6000.0\r\ntensor(7000., device='xla:1') 7000.0\r\ntensor(8000., device='xla:1') 8000.0\r\ntensor(9000., device='xla:1') 9000.0\r\ntensor(10000., device='xla:1') 10000.0\r\n```\r\n\r\nand with `bfloat16`:\r\n\r\n```\r\n(torch-xla-1.7) bminixhofer@gerpt2:~$ XLA_USE_BF16=1 python bfloat_demo.py \r\n2020-11-16 14:52:12.488382: I 1663 torch_xla/csrc/tensor_util.cpp:28] Using BF16 data type for floating point values\r\n0.0 is on device xla:1 with dtype torch.float32\r\ntensor(904., device='xla:1') 1000.0\r\ntensor(1704., device='xla:1') 2000.0\r\ntensor(2960., device='xla:1') 3000.0\r\ntensor(4096., device='xla:1') 4000.0\r\ntensor(4096., device='xla:1') 5000.0\r\ntensor(4096., device='xla:1') 6000.0\r\ntensor(4096., device='xla:1') 7000.0\r\ntensor(4096., device='xla:1') 8000.0\r\ntensor(4096., device='xla:1') 9000.0\r\ntensor(4096., device='xla:1') 10000.0\r\n```\r\n\r\nbut notably the issue doesn't seem to be the magnitude at all but rather how often a value is added:\r\n\r\n(setting `to_add = torch.tensor(0.1).to(device)`)\r\n```\r\n(torch-xla-1.7) bminixhofer@gerpt2:~$ XLA_USE_BF16=1 python bfloat_demo.py \r\n2020-11-16 14:57:57.844438: I 1860 torch_xla/csrc/tensor_util.cpp:28] Using BF16 data type for floating point values\r\nloss_tensor is on device xla:1 with dtype torch.float32\r\ntensor(10.0625, device='xla:1') 10.009765625\r\ntensor(22.5000, device='xla:1') 20.01953125\r\ntensor(32., device='xla:1') 30.029296875\r\ntensor(32., device='xla:1') 40.0390625\r\ntensor(32., device='xla:1') 50.048828125\r\ntensor(32., device='xla:1') 60.05859375\r\ntensor(32., device='xla:1') 70.068359375\r\ntensor(32., device='xla:1') 80.078125\r\ntensor(32., device='xla:1') 90.087890625\r\ntensor(32., device='xla:1') 100.09765625\r\n```\r\n\r\nSo the dtype does not seem to be the problem, the problem seems to be something along the lines of not enough operations being tracked by XLA, but as I said I really don't know the internals at all so I don't want to go off on speculation here :)\r\nAny ideas how to proceed?",
"And if you do:\r\n```python\r\nimport torch\r\n\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\n\r\n\r\ndevice = xm.xla_device()\r\n\r\nloss_tensor = torch.tensor(0.0).to(device)\r\nloss_float = 0.0\r\n\r\nprint(f\"loss_tensor is on device {loss_tensor.device} with dtype {loss_tensor.dtype}\")\r\n\r\nto_add = torch.tensor(10.0).to(device)\r\n\r\nfor i in range(10):\r\n for _ in range(100):\r\n loss_tensor += to_add.float()\r\n loss_float += to_add.item()\r\n\r\n print(loss_tensor, loss_float)\r\n```\r\ndoes this solve the issue?\r\n\r\nTrying to avoid the `.item` as it triggers a synchronization of TPUs :-)",
"No, still the same :(\r\n\r\n`bfloat_demo.py`\r\n```python\r\nimport torch\r\n\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\n\r\n\r\ndevice = xm.xla_device()\r\n\r\nloss_tensor = torch.tensor(0.0).to(device)\r\nloss_float = 0.0\r\n\r\nprint(f\"loss_tensor is on device {loss_tensor.device} with dtype {loss_tensor.dtype}\")\r\n\r\nto_add = torch.tensor(10.0).to(device)\r\n\r\nfor i in range(10):\r\n for _ in range(100):\r\n loss_tensor += to_add.float()\r\n loss_float += to_add.item()\r\n\r\n print(loss_tensor, loss_float)\r\n\r\n```\r\n\r\n```\r\n(torch-xla-1.7) bminixhofer@gerpt2:~$ XLA_USE_BF16=1 python bfloat_demo.py \r\n2020-11-16 16:00:35.131065: I 1197 torch_xla/csrc/tensor_util.cpp:28] Using BF16 data type for floating point values\r\nloss_tensor is on device xla:1 with dtype torch.float32\r\ntensor(904., device='xla:1') 1000.0\r\ntensor(1704., device='xla:1') 2000.0\r\ntensor(2960., device='xla:1') 3000.0\r\ntensor(4096., device='xla:1') 4000.0\r\ntensor(4096., device='xla:1') 5000.0\r\ntensor(4096., device='xla:1') 6000.0\r\ntensor(4096., device='xla:1') 7000.0\r\ntensor(4096., device='xla:1') 8000.0\r\ntensor(4096., device='xla:1') 9000.0\r\ntensor(4096., device='xla:1') 10000.0\r\n```",
"Ok investigated a bit more by asking some TPU experts and there is no way around this. So the proper fix will be to reset the loss to 0 each time we log it (instead of summing everything from the beginning). I can work on that later this week after the changes needed for v4, or you can work on it if it interests you.",
"Ok thanks for the quick followup!\r\n\r\n> So the proper fix will be to reset the loss to 0 each time we log it (instead of summing everything from the beginning)\r\n\r\nWouldn't that still have the same issue if `logging_steps` is sufficiently large?",
"I think something like this could be a solution:\r\n\r\n```python\r\nimport torch\r\n\r\nimport torch_xla\r\nimport torch_xla.core.xla_model as xm\r\n\r\n\r\ndevice = xm.xla_device()\r\n\r\nloss = 0.0\r\n\r\nloss_agg_steps = 52\r\nloss_agg = torch.tensor(0.0).to(device)\r\nzero = torch.tensor(0.0).to(device)\r\n\r\nto_add = torch.tensor(10.0).to(device)\r\n\r\nfor i in range(10):\r\n for j in range(100):\r\n loss_agg += to_add\r\n\r\n if (j + 1) % loss_agg_steps == 0:\r\n loss += loss_agg.item()\r\n loss_agg.copy_(zero)\r\n\r\n loss += loss_agg.item()\r\n loss_agg.copy_(zero)\r\n\r\n print(loss)\r\n```\r\n\r\nupdating the loss tensor for `n` steps and syncing with a Python float loss afterwards (and always syncing before logging and after the last batch in an epoch). `n = 52` is the highest that worked in this demo but maybe a more informed decision could be taken about that value.",
"The `agg_step` would be the `logging_step` we have as training argument, the user can then tune it to their need.",
"Ok, I thought it would be good to have a second parameter as the logging step is tied to other things as well but in that case I can give implementing it the way you described earlier a shot.",
"The latest commit should do the trick. Just not sure if `tr_loss -= tr_loss` is the best way to reset `tr_loss` to zero.",
"I think you also need to do something for the final reported training loss.",
"Oh right, thanks. I introduced a variable `_total_loss_scalar` which handles that (comparable to `loss` in the demo code)."
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
I have had a weird issue with the logged loss going to zero after some training steps when training with `bfloat16` on v3 TPUs:

while it works correctly using 32-bit precision:

After some investigation I found that the `tr_loss` variable in `Trainer.train` seems to overflow after reaching 1024 (?).
I did not track this down more closely because it is easily fixed by making `tr_loss` a regular Python float instead of a tensor. It doesn't actually need to be a tensor as it is only ever accessed by `.item()`.
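For illustration, a tiny CPU-side sketch of the saturation effect, using torch's `bfloat16` dtype as a stand-in for the TPU behavior:
```python
import torch

# bfloat16 keeps only ~8 significant bits, so once the accumulator is large
# enough the per-step addend falls below one ulp and the sum stops growing.
acc = torch.tensor(0.0, dtype=torch.bfloat16)
for _ in range(2000):
    acc += torch.tensor(1.0, dtype=torch.bfloat16)
print(acc)  # saturates around 256 instead of reaching 2000
```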
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8561/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8561",
"html_url": "https://github.com/huggingface/transformers/pull/8561",
"diff_url": "https://github.com/huggingface/transformers/pull/8561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8561.patch",
"merged_at": 1605711489000
} |
https://api.github.com/repos/huggingface/transformers/issues/8560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8560/comments | https://api.github.com/repos/huggingface/transformers/issues/8560/events | https://github.com/huggingface/transformers/issues/8560 | 743,732,467 | MDU6SXNzdWU3NDM3MzI0Njc= | 8,560 | Prophetnet - predicted n-future tokens | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"repos_url": "https://api.github.com/users/nsankar/repos",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @nsankar - that's a good question for the forum https://discuss.huggingface.co/ I think. Currently we don't really support sampling from the \"n-future\" tokens, but it should not be too difficult - just sample from each of the 1...n logit vectors and then you can use the tokenizer to decode the sampled tokens. So in short, you should do this operation: https://github.com/huggingface/transformers/blob/42111f1d56947797d9dfb0908908f42a22ca9823/src/transformers/generation_utils.py#L843 for all n-future logit vectors and then pass it to the tokenizer."
] | 1,605 | 1,606 | 1,606 | NONE | null | Hi,
How can we get the predicted n-future tokens **as a string** from the model output? I couldn't find it in the API docs or sample code. Could you please provide a code snippet for ProphetNet and XLM-ProphetNet? Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8560/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8560/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8559/comments | https://api.github.com/repos/huggingface/transformers/issues/8559/events | https://github.com/huggingface/transformers/issues/8559 | 743,724,574 | MDU6SXNzdWU3NDM3MjQ1NzQ= | 8,559 | TFGPT2LMHeadModel fp16 support | {
"login": "mymusise",
"id": 6883957,
"node_id": "MDQ6VXNlcjY4ODM5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mymusise",
"html_url": "https://github.com/mymusise",
"followers_url": "https://api.github.com/users/mymusise/followers",
"following_url": "https://api.github.com/users/mymusise/following{/other_user}",
"gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mymusise/subscriptions",
"organizations_url": "https://api.github.com/users/mymusise/orgs",
"repos_url": "https://api.github.com/users/mymusise/repos",
"events_url": "https://api.github.com/users/mymusise/events{/privacy}",
"received_events_url": "https://api.github.com/users/mymusise/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello !\r\n\r\nIt is a known issue that will be fixed in a future release. Sorry.",
"Thanks, I'm looking forward to the future release.",
"Thank @jplu's work! :+1: It works now with the mixed-precision policy when training.\r\nBut I think it still has some problem with `TextGenerationPipeline`, for example:\r\n\r\n```python\r\nfrom transformers import TextGenerationPipeline\r\nfrom tensorflow.keras.mixed_precision import experimental as mixed_precision\r\n\r\npolicy = mixed_precision.Policy('mixed_float16')\r\nmixed_precision.set_policy(policy)\r\n\r\ntext_generator = TextGenerationPipeline(model, tokenizer)\r\ntext_generator(\"Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow\")\r\n```\r\n\r\nThen it raises an exception:\r\n```\r\n File \"./env/lib/python3.8/site-packages/transformers-4.0.0.dev0-py3.8.egg/transformers/generation_tf_utils.py\", line 386, in generate\r\n output = self._generate_no_beam_search(\r\n File \"./env/lib/python3.8/site-packages/transformers-4.0.0.dev0-py3.8.egg/transformers/generation_tf_utils.py\", line 457, in _generate_no_beam_search\r\n next_token_logits = tf.math.multiply(next_token_logits, next_token_logits_penalties)\r\n File \"./env/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py\", line 206, in wrapper\r\n return target(*args, **kwargs)\r\n File \"./env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py\", line 519, in multiply\r\n return gen_math_ops.mul(x, y, name)\r\n File \"./env/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py\", line 6068, in mul\r\n _ops.raise_from_not_ok_status(e, name)\r\n File \"./env/lib/python3.8/site-packages/tensorflow/python/framework/ops.py\", line 6867, in raise_from_not_ok_status\r\n six.raise_from(core._status_to_exception(e.code, message), None)\r\n File \"<string>\", line 3, in raise_from\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a half tensor but is a float tensor [Op:Mul]\r\n```",
"Sorry for this, the generation part is not yet compliant with mixed precision. It is in the pipeline to add this but we don't know when yet.",
"Okay, thank you!",
"Hello~ @jplu. \r\nRecently, I try to train my `TFGPT2LMHeadModel` model with mixed_precision again, it performs badly after many epochs, it seems didn't learn anything.\r\nIf I train without mixed_precision, it performs well after training with the same epochs.\r\nI think maybe it will lose something when counting the loss with `logits` and `labels` in `fp16` here: https://github.com/mymusise/transformers/blob/master/src/transformers/models/gpt2/modeling_tf_gpt2.py#L742.",
"Here I make some change to `TFGPT2LMHeadModel` in https://github.com/huggingface/transformers/pull/10689\r\nI'm not sure it's the right way to do it, correct me if it's wrong.\r\n\r\nAnd I make a small test in colab, hope it can help to recur the problem.\r\nhttps://colab.research.google.com/github/mymusise/gpt2-quickly/blob/main/examples/mixed_precision_test.ipynb",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,605 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version:
- Platform: ubuntu 18.04
- Python version: python3.8
- Tensorflow version (GPU?): tf-nightly==2.5
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Text Generation: @patrickvonplaten @TevenLeScao
tensorflow: @jplu
## Information
Hi there. When I try to use the mixed-precision setting with the Keras APIs while training `TFGPT2LMHeadModel`, [like this](https://www.tensorflow.org/guide/mixed_precision):
```python
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
```
Then I get this error:
```
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/transformers/modeling_tf_gpt2.py", line 154, in call
attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions, training=training)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/transformers/modeling_tf_gpt2.py", line 101, in _attn
w = w / tf.math.sqrt(dk)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1181, in binary_op_wrapper
raise e
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1165, in binary_op_wrapper
return func(x, y, name=name)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1337, in truediv
return _truediv_python3(x, y, name)
File "/home/mymusise/pro/fast-gpt2/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1267, in _truediv_python3
raise TypeError("x and y must have the same dtype, got %r != %r" %
TypeError: x and y must have the same dtype, got tf.float16 != tf.float32
```
Here's an [example](https://gist.github.com/mymusise/7192b7c252ff67ff84496cd8b27a91ff) to reproduce this.
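For what it's worth, the traceback points at `w = w / tf.math.sqrt(dk)`, where `dk` stays `float32` while `w` is `float16` under mixed precision. A minimal workaround sketch (my assumption, not an official fix) would cast the scalar to the scores' dtype before dividing:
```python
import tensorflow as tf

def scale_attention_scores(w, key):
    # Cast the head dimension to the scores' dtype so that float16
    # activations are not divided by a float32 scalar under mixed precision.
    dk = tf.cast(tf.shape(key)[-1], w.dtype)
    return w / tf.math.sqrt(dk)

w = tf.random.normal((2, 4, 8, 8), dtype=tf.float16)     # attention scores
key = tf.random.normal((2, 4, 8, 64), dtype=tf.float16)  # key tensor
print(scale_attention_scores(w, key).dtype)  # float16
```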
Please help me, guys. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8559/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8558/comments | https://api.github.com/repos/huggingface/transformers/issues/8558/events | https://github.com/huggingface/transformers/pull/8558 | 743,657,263 | MDExOlB1bGxSZXF1ZXN0NTIxNTIwMDAz | 8,558 | Readme for Wiki Summary [Persian] bert2bert | {
"login": "m3hrdadfi",
"id": 2601833,
"node_id": "MDQ6VXNlcjI2MDE4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m3hrdadfi",
"html_url": "https://github.com/m3hrdadfi",
"followers_url": "https://api.github.com/users/m3hrdadfi/followers",
"following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}",
"gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions",
"organizations_url": "https://api.github.com/users/m3hrdadfi/orgs",
"repos_url": "https://api.github.com/users/m3hrdadfi/repos",
"events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/m3hrdadfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8558/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8558/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8558",
"html_url": "https://github.com/huggingface/transformers/pull/8558",
"diff_url": "https://github.com/huggingface/transformers/pull/8558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8558.patch",
"merged_at": 1605521087000
} |
https://api.github.com/repos/huggingface/transformers/issues/8557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8557/comments | https://api.github.com/repos/huggingface/transformers/issues/8557/events | https://github.com/huggingface/transformers/pull/8557 | 743,557,489 | MDExOlB1bGxSZXF1ZXN0NTIxNDM0NDMz | 8,557 | Readme for News Headline Generation (bert2bert) | {
"login": "m3hrdadfi",
"id": 2601833,
"node_id": "MDQ6VXNlcjI2MDE4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m3hrdadfi",
"html_url": "https://github.com/m3hrdadfi",
"followers_url": "https://api.github.com/users/m3hrdadfi/followers",
"following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}",
"gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions",
"organizations_url": "https://api.github.com/users/m3hrdadfi/orgs",
"repos_url": "https://api.github.com/users/m3hrdadfi/repos",
"events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/m3hrdadfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Really cool!"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8557/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8557",
"html_url": "https://github.com/huggingface/transformers/pull/8557",
"diff_url": "https://github.com/huggingface/transformers/pull/8557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8557.patch",
"merged_at": 1605521078000
} |
https://api.github.com/repos/huggingface/transformers/issues/8556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8556/comments | https://api.github.com/repos/huggingface/transformers/issues/8556/events | https://github.com/huggingface/transformers/pull/8556 | 743,372,686 | MDExOlB1bGxSZXF1ZXN0NTIxMjc4ODI5 | 8,556 | tokenization_bart.py: return_tensors default should be "pt" | {
"login": "Mehrad0711",
"id": 28717374,
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehrad0711",
"html_url": "https://github.com/Mehrad0711",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tokenizers are supposed to be framework (PyTorch/TensorFlow/FLAX) agnostic so we probably don't want to do in that direction.",
"Gotcha. Is this going to be the case for all tokenizers in the future? Because currently they default to PyTorch except for Bart's.\r\nAlso, I think the docstring for Bart tokenizer's `return_tensors` needs to be updated then since it says: `optional`, defaults to \"pt\"",
"The fact that the BART-like tokenizers have `return_tensors=\"pt\"` is a mistake. The tokenizers should be framework-agnostic.",
"We will have to update this, which will be a breaking change, so we'll try to put it in the v4.0.0. Do you want to open a PR to fix the issue?",
"Sure. I'll close this then and make a new PR for that.\r\n",
"Hi @Mehrad0711! We're rushing to `[email protected]`, and we don't want that in this release. I've taken the liberty of fixing it in https://github.com/huggingface/transformers/pull/8599. Sorry about that, I hope you had not started your development.\r\n\r\nIf you have, you can push your fixes and open a PR and I'll incorporate those changes in my PR and mark you as co-author.",
"No problem @LysandreJik . Thanks for fixing it!"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | return_tensors default should be "pt" in bart's `prepare_seq2seq_batch`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8556/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8556",
"html_url": "https://github.com/huggingface/transformers/pull/8556",
"diff_url": "https://github.com/huggingface/transformers/pull/8556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8556.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8555/comments | https://api.github.com/repos/huggingface/transformers/issues/8555/events | https://github.com/huggingface/transformers/issues/8555 | 743,365,483 | MDU6SXNzdWU3NDMzNjU0ODM= | 8,555 | Allow the user to input positional embeddings | {
"login": "anicolson",
"id": 26111230,
"node_id": "MDQ6VXNlcjI2MTExMjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26111230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anicolson",
"html_url": "https://github.com/anicolson",
"followers_url": "https://api.github.com/users/anicolson/followers",
"following_url": "https://api.github.com/users/anicolson/following{/other_user}",
"gists_url": "https://api.github.com/users/anicolson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anicolson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anicolson/subscriptions",
"organizations_url": "https://api.github.com/users/anicolson/orgs",
"repos_url": "https://api.github.com/users/anicolson/repos",
"events_url": "https://api.github.com/users/anicolson/events{/privacy}",
"received_events_url": "https://api.github.com/users/anicolson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @anicolson,\r\n\r\nI'm really not sure whether such a functionality is general enough to be added to the lib. It would also be very different from our current design of the library in that we would allow `torch.nn.Embedding` types as input, so I'd rather not add it. @LysandreJik what do you think? ",
"I think this is specifically where tweaking the library so that it supports your use-case is the way to go. It should be easy enough to modify the files directly in order to do this, but it would add unnecessary complexity to several model files.\r\n\r\nLet's keep this issue open, and if other users are interested in this, we'll have a deeper look.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | NONE | null | # 🚀 Feature request
Hi,
I think that allowing the user to pass positional embeddings directly, in the same way that `inputs_embeds` can be fed to the Transformer, would be greatly appreciated. (If this is already possible, please let me know :) ).
## Motivation
This is important for multimodal approaches, such as **VisualBERT** (https://arxiv.org/pdf/1908.03557.pdf), where each modality requires a separate positional embedding. The user could then use a `torch.nn.Embedding` per modality and concatenate the positional embeddings and feed this to the Transformer along with the concatenated input embeddings of the modalities.
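A minimal sketch of the intended usage follows; note that `position_embeds` is a hypothetical keyword argument illustrating the request, not part of the current API:
```python
import torch
import torch.nn as nn

hidden_size = 768
text_pos = nn.Embedding(512, hidden_size)   # positional table for text tokens
image_pos = nn.Embedding(196, hidden_size)  # positional table for image regions

text_embeds = torch.randn(1, 10, hidden_size)   # e.g. 10 text tokens
image_embeds = torch.randn(1, 5, hidden_size)   # e.g. 5 image regions

inputs_embeds = torch.cat([text_embeds, image_embeds], dim=1)
position_embeds = torch.cat(
    [text_pos(torch.arange(10))[None], image_pos(torch.arange(5))[None]], dim=1
)
# Proposed call -- `position_embeds` is the requested (hypothetical) argument:
# outputs = model(inputs_embeds=inputs_embeds, position_embeds=position_embeds)
```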
This would also resolve other related issues:
https://github.com/huggingface/transformers/issues/5095
https://github.com/huggingface/transformers/issues/3395
## Your contribution
I can help in any form.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8555/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8554/comments | https://api.github.com/repos/huggingface/transformers/issues/8554/events | https://github.com/huggingface/transformers/pull/8554 | 743,353,627 | MDExOlB1bGxSZXF1ZXN0NTIxMjY1OTEx | 8,554 | `disable_ngram_loss` fix for prophetnet | {
"login": "Zhylkaaa",
"id": 18054828,
"node_id": "MDQ6VXNlcjE4MDU0ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18054828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhylkaaa",
"html_url": "https://github.com/Zhylkaaa",
"followers_url": "https://api.github.com/users/Zhylkaaa/followers",
"following_url": "https://api.github.com/users/Zhylkaaa/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhylkaaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhylkaaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhylkaaa/subscriptions",
"organizations_url": "https://api.github.com/users/Zhylkaaa/orgs",
"repos_url": "https://api.github.com/users/Zhylkaaa/repos",
"events_url": "https://api.github.com/users/Zhylkaaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhylkaaa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @Zhylkaaa,\r\n\r\nThanks a lot for your PR! @qiweizhen - could you maybe take a look and give your opinion on this PR? I don't have much experience with training ProphetNet",
"Hi @patrickvonplaten , thanks for informing me. It seems it's still related to the padding tokens (default -100 or **padding_idx**) which should not be calculated loss. \r\nIn the old version code, **expend_targets** is filled with **self.padding_idx** however in the loss function, **padding_idx** is not fed in. It results in calculating wrong loss. \r\n\r\nHere @Zhylkaaa set them consistent. I suggest that 1) the outside data preprocess padding function, 2) here **expend_targets** and 3) the loss function to be consistent. \r\n\r\nIf Huggingface Transformers default uses -100 for all the NLG models padding, then this code can be merged. If Huggingface Transformers default uses self.padding_idx for all the NLG models padding, then not merge this code, but feed **padding_idx** into the loss function.",
"Thanks @qiweizhen for reviewing, \r\nto my knowledge Huggingface Transformers use -100 to indicate ignored tokens during loss calculations, also I wanted to ask if reduction strategy is important (Transformers use reduction=`'mean'` to my knowledge, but hear it is set to `'sum'`) because for me it's not?\r\nI also noticed that I haven't changed this behaviour in another ProphetNet model, I will examine if it's necessary and commit changes, also I will write some tests to check for this behaviour in nearest future.",
"> Thanks @qiweizhen for reviewing,\r\n> to my knowledge Huggingface Transformers use -100 to indicate ignored tokens during loss calculations, also I wanted to ask if reduction strategy is important (Transformers use reduction=`'mean'` to my knowledge, but hear it is set to `'sum'`) because for me it's not?\r\n> I also noticed that I haven't changed this behaviour in another ProphetNet model, I will examine if it's necessary and commit changes, also I will write some tests to check for this behaviour in nearest future.\r\n\r\nThank you for pointing out this \"mean\" or \"sum\" problem! This line of code is converted from Fairseq version ProphetNet, which use loss sum here, to be consistent with [Fairseq Transformer] (https://github.com/pytorch/fairseq/blob/v0.9.0/fairseq/criterions/label_smoothed_cross_entropy.py#L26-L27). The reason is that in the training pipeline of Fairseq, they will do the [\"mean\" operation in their trainer](https://github.com/pytorch/fairseq/blob/v0.9.0/fairseq/trainer.py#L429). So we return the sum loss and sample_size for Fairseq to calculate sum loss / sample_size (mean).\r\nSo I agree here we should use \"mean\" as you suggested. Thank you @Zhylkaaa !",
"Hi @qiweizhen, I want to verify that I should mean label smoothing loss instead of summing it to be consistent with change of reduction strategy and also should I change `non_pad_mask` to mask where I exclude -100? (You can see this changes in last commit but I just want to be sure π)\r\n\r\nAlso @patrickvonplaten, I have messed up with rebasing so I needed to make reset hard, is it ok or should I close this PR and open one that doesn't change commit history when I finish?)",
"Great PR @Zhylkaaa! I did a small refactor and fixed the test. Thanks for your help @qiweizhen "
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes `disable_ngram_loss` behaviour for ProphetNetForConditionalGeneration and is related to #8553
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes #8553
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
I guess @patrickvonplaten has been using this model (I saw models on the hub); sorry if I am wrong, but there is no one listed to tag for ProphetNet.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8554/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8554",
"html_url": "https://github.com/huggingface/transformers/pull/8554",
"diff_url": "https://github.com/huggingface/transformers/pull/8554.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8554.patch",
"merged_at": 1605809887000
} |
https://api.github.com/repos/huggingface/transformers/issues/8553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8553/comments | https://api.github.com/repos/huggingface/transformers/issues/8553/events | https://github.com/huggingface/transformers/issues/8553 | 743,347,679 | MDU6SXNzdWU3NDMzNDc2Nzk= | 8,553 | `disable_ngram_loss` doesn't work correctly in ProphetNetForConditionalGeneration | {
"login": "Zhylkaaa",
"id": 18054828,
"node_id": "MDQ6VXNlcjE4MDU0ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18054828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhylkaaa",
"html_url": "https://github.com/Zhylkaaa",
"followers_url": "https://api.github.com/users/Zhylkaaa/followers",
"following_url": "https://api.github.com/users/Zhylkaaa/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhylkaaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhylkaaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhylkaaa/subscriptions",
"organizations_url": "https://api.github.com/users/Zhylkaaa/orgs",
"repos_url": "https://api.github.com/users/Zhylkaaa/repos",
"events_url": "https://api.github.com/users/Zhylkaaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhylkaaa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | When I use ProphetNet with `disable_ngram_loss=True`, I get a loss that is greater than with `disable_ngram_loss=False`. It seems to me that the problem is that `_compute_loss` fills the expanded targets with `fill_(self.padding_idx)` instead of -100, so the n-gram part is not actually omitted from the loss calculation.
Also, I think that the reduction in `loss = F.nll_loss(lprobs, expend_targets.view(-1), reduction="sum")` should be set to `mean` so that the model loss is comparable between models working on the same task (like `mbart`). Can somebody tell me whether this is a good point, or should I leave it as it is? I am planning to open a PR with these changes.
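For clarity, here is a minimal sketch of the change I have in mind (an illustration of the proposal with simplified shapes, not the actual `modeling_prophetnet.py` code):
```python
import torch.nn.functional as F

def compute_loss_sketch(logits, labels, ngram=2, disable_ngram_loss=False):
    # logits: (ngram, batch, seq_len, vocab); labels: (batch, seq_len)
    # Fill with -100 so that padding positions and disabled n-gram streams
    # are actually ignored by the loss.
    expend_targets = labels.new_full(
        (ngram, labels.size(0), labels.size(1)), fill_value=-100
    )
    for i in range(ngram):
        if i > 0 and disable_ngram_loss:
            break
        expend_targets[i] = labels
    lprobs = F.log_softmax(logits.float(), dim=-1)
    # Default reduction ('mean') instead of 'sum', for comparability with
    # other models such as mbart.
    return F.nll_loss(
        lprobs.view(-1, lprobs.size(-1)), expend_targets.view(-1), ignore_index=-100
    )
```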
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.0 from source
- Platform: macOS Catalina
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0
### Who can help
I can't figure out whom to tag.
## Information
Model I am using (Bert, XLNet ...): ProphetNetForConditionalGeneration
## To reproduce
```
from transformers import XLMProphetNetTokenizer, XLMProphetNetForConditionalGeneration
tokenizer = XLMProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased')
model = XLMProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased')
inputs = tokenizer('Hi my name is', return_tensors='pt').input_ids
targets = tokenizer('Hi my name is John', return_tensors='pt').input_ids
model_loss = model(input_ids=inputs, labels=targets, return_dict=True).loss
model.disable_ngram_loss = True
model_disable_loss = model(input_ids=inputs, labels=targets, return_dict=True).loss
from torch.nn import CrossEntropyLoss
loss_fct = CrossEntropyLoss(reduction='sum')
logits = model(input_ids=inputs, labels=targets, return_dict=True).logits
loss_cross_entropy = loss_fct(logits.view(-1, model.config.vocab_size), targets.view(-1))
```
The problem is that `model_loss < model_disable_loss`, and that `model_disable_loss != loss_cross_entropy`, even though I think they should be equal.
Note:
`CrossEntropyLoss(reduction='sum')` is used here to match the implementation in `_compute_loss` (`loss = F.nll_loss(lprobs, expend_targets.view(-1), reduction="sum")`), but other models use the default reduction, which makes the outputs incomparable (at least directly).
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
When `model.disable_ngram_loss=True`, the `CrossEntropyLoss` result should be equal to `model(input_ids=inputs, labels=targets, return_dict=True).loss`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8553/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8552/comments | https://api.github.com/repos/huggingface/transformers/issues/8552/events | https://github.com/huggingface/transformers/pull/8552 | 743,344,336 | MDExOlB1bGxSZXF1ZXN0NTIxMjU5MjEw | 8,552 | T5 & mT5 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm in favor of this solution and happy you selected it :)\r\n\r\nOur philosophy of simple to use and modify standalone code is made not to be a hard requirement but a guide for user-experience and in this case I also think the best is to adapt T5 as you do.",
"Big thanks for integrating mT5! I checked out your PR and there seems to be a problem with extra_ids. I guess, sentencepiece vocab already has <extra_id> tokens but T5Tokenizer goes on to add it's own. E.g., real id of <external_id_0> is 250099 but T5Tokenizer translates it into 250199",
"> Big thanks for integrating mT5! I checked out your PR and there seems to be a problem with extra_ids. I guess, sentencepiece vocab already has <extra_id> tokens but T5Tokenizer goes on to add it's own. E.g., real id of <external_id_0> is 250099 but T5Tokenizer translates it into 250199\r\n\r\nYou're 100% correct - thanks a lot for letting me know! I corrected the behavior - should be good now: https://huggingface.co/google/mt5-small/tree/main",
"Ran the slow tests & added multiple new slow tests & checked the docs => good to merge."
] | 1,605 | 1,605 | 1,605 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds T5v1.1 and MT5.
I was really unsure whether I should make a new model file `modeling_t5_v1_1.py` or not. I finally decided (also after discussions in https://github.com/huggingface/transformers/issues/6285) that in this case, it is better to add the few new T5 features to the existing `modeling_t5.py` file.
The context is the following:
The T5 team released weights for a **T5v1.1** model: https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md. The model architecture of T5v1.1 is equivalent to T5 aside from two changes:
1) the input and output word embeddings of the decoder are no longer shared
2) a different feed-forward layer is used
In addition, the **mT5** model checkpoints were also released and are fully based on T5v1_1: https://github.com/google-research/multilingual-t5#released-model-checkpoints .
Now, the philosophy of the library is to create a new model class if the architecture slightly differs from a previously integrated model, but I'd argue that in this case it is better to add a new config param called "feed_forward_proj" which defines whether a "relu" (T5) or a "gated-gelu" (T5v1.1) feed-forward should be used (see the config sketch after the list below). The arguments for not creating a new model class are the following:
1) Both T5 and T5v1.1 are "officially" the same model and both belong to the same code base: https://github.com/google-research/text-to-text-transfer-transformer
2) T5v1.1 has no official paper and it's quite difficult to find a good name as discussed in https://github.com/huggingface/transformers/issues/6285 => `T5v11` is a very cryptic name IMO and `T5v2` is not ideal either.
3) One could argue that it makes sense to create a mixture of T5 and T5v1.1 by sharing the input and output word embeddings of the decoder but using the "gated-gelu" feed-forward layer. This design would allow a user to define such a model, whereas creating a new T5v11 model file would make this impossible
4) `"feed_forward_proj"` is less of a model-specific architecture configuration than `"do_blenderbot_90_layernorm"` IMO
The obvious disadvantage is that I am adding new architecture code to an existing model to make a new model (or new model version) work, which is kinda what we did not want to do.
It's a tough decision here I think, so I'd be very happy for your feedback on this @LysandreJik @sgugger @thomwolf. I'd also understand if you guys think we should make a new model file here.
I already uploaded `google/t5-v1_1-small` and `google/mt5-small`. If you guys are ok with this PR, I'll add the TF code, add the new model to README.md and we're good for merge. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8552/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8552/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8552",
"html_url": "https://github.com/huggingface/transformers/pull/8552",
"diff_url": "https://github.com/huggingface/transformers/pull/8552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8552.patch",
"merged_at": 1605612189000
} |
https://api.github.com/repos/huggingface/transformers/issues/8551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8551/comments | https://api.github.com/repos/huggingface/transformers/issues/8551/events | https://github.com/huggingface/transformers/issues/8551 | 743,342,873 | MDU6SXNzdWU3NDMzNDI4NzM= | 8,551 | "special token {} has to be either str or AddedToken but got: | {
"login": "Shafi2016",
"id": 56795978,
"node_id": "MDQ6VXNlcjU2Nzk1OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/56795978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shafi2016",
"html_url": "https://github.com/Shafi2016",
"followers_url": "https://api.github.com/users/Shafi2016/followers",
"following_url": "https://api.github.com/users/Shafi2016/following{/other_user}",
"gists_url": "https://api.github.com/users/Shafi2016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shafi2016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shafi2016/subscriptions",
"organizations_url": "https://api.github.com/users/Shafi2016/orgs",
"repos_url": "https://api.github.com/users/Shafi2016/repos",
"events_url": "https://api.github.com/users/Shafi2016/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shafi2016/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I had the same issue while using SentenceTransformers. When I checked with my backed up pretrained model, the tokenizer_config.json file had been altered for some reaon. I changed it back to the original and it was working fine. Probably some issue with SentenceTransformers calling Huggingface functions on the model.",
"If one of you (or @nreimers) want share a colab exhibiting the behavior, happy to investigate further and fix if needed!",
"Thanks a lot!!\r\n\r\nI fine-tuned the Roberta language model again with hugging face and using it with a sentence transformer for semantic search. And no longer getting the error as mentioned above. But I don't know the reason for the error.\r\n\r\n\r\n\r\n\r\n",
"I also met this issue before. By uninstalling `sentence-transformers`(I think it may be also ok if you fix the version conflict issue), this bug disapeared.",
"Hi @Shafi2016 \r\nCould you share you model, so that I can have a look?",
"Has there been any movement on this? I'm having the same issue with BartTokenizer using sschelifer/cnn-12-6",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"> Hi @Shafi2016 Could you share you model, so that I can have a look?\r\n\r\nHello @nreimers , I am trying to use paraphrase-multilingual-MiniLM-L12-v2 pretrained model and getting the similar error - \r\nTypeError: special token mask_token has to be either str or AddedToken but got: <class 'dict'>\r\n\r\nWould it be possible for you share what change is needed in tokenizer_config.json? Accordingly I will change my tokenizer_config.json. \r\nThis is the current version of tokenizer_config.json's mask_token dict - \r\n\"mask_token\": {\"content\": \"<mask>\", \"single_word\": false, \"lstrip\": true, \"rstrip\": false, \"normalized\": true, \"__type\": \"AddedToken\"}\r\n\r\nOne of the reason I would like to edit the tokenizer_config.json is to keep my transformer and sentence_transformer versions as is - \r\nsentence-transformers 0.4.1.2\r\ntransformers 3.3.1",
"update transformers from 3.2.0 to 4.1.1 helps me solve the problem when call AutoTokenizer.from_pretrained get 'TypeError: special token bos_token has to be either str or AddedToken but got: <class 'dict'>'."
] | 1,605 | 1,666 | 1,614 | NONE | null | Hello, two weeks ago I fine-tuned the RoBERTa language model on my data using Hugging Face (code given below) and saved the output to Google Drive. Now, when using it for semantic search with sentence-transformers, I am getting the error:
**"TypeError: special token bos_token has to be either str or AddedToken but got: <class 'dict'>"**
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers import models, losses
import scipy.spatial
import pickle as pkl

word_embedding_model = models.RoBERTa("/content/drive/My Drive/Ottawa_citit")

# Apply mean pooling to get one fixed sized sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True,
                               pooling_mode_cls_token=False,
                               pooling_mode_max_tokens=False)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```
Previously fine-tuned Roberta:
```
!python "/content/transformers/examples/contrib/legacy/run_language_modeling.py" \
  --output_dir "/content/drive/My Drive/Ottawa_citit" \
  --model_name_or_path roberta-base \
  --do_train \
  --per_gpu_train_batch_size 8 \
  --seed 42 \
  --train_data_file "/content/input_text.txt" \
  --block_size 256 \
  --line_by_line \
  --learning_rate 6e-4 \
  --num_train_epochs 3 \
  --save_total_limit 2 \
  --save_steps 200 \
  --weight_decay 0.01 \
  --mlm
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8551/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8550/comments | https://api.github.com/repos/huggingface/transformers/issues/8550/events | https://github.com/huggingface/transformers/pull/8550 | 743,312,447 | MDExOlB1bGxSZXF1ZXN0NTIxMjM1ODA5 | 8,550 | Create README.md for Chinese RoBERTa Miniatures | {
"login": "zhezhaoa",
"id": 10495098,
"node_id": "MDQ6VXNlcjEwNDk1MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/10495098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhezhaoa",
"html_url": "https://github.com/zhezhaoa",
"followers_url": "https://api.github.com/users/zhezhaoa/followers",
"following_url": "https://api.github.com/users/zhezhaoa/following{/other_user}",
"gists_url": "https://api.github.com/users/zhezhaoa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhezhaoa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhezhaoa/subscriptions",
"organizations_url": "https://api.github.com/users/zhezhaoa/orgs",
"repos_url": "https://api.github.com/users/zhezhaoa/repos",
"events_url": "https://api.github.com/users/zhezhaoa/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhezhaoa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Create model card for Chinese RoBERTa Miniatures."
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | Create model card for Chinese RoBERTa Miniatures. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8550/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8550",
"html_url": "https://github.com/huggingface/transformers/pull/8550",
"diff_url": "https://github.com/huggingface/transformers/pull/8550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8550.patch",
"merged_at": 1605520889000
} |
https://api.github.com/repos/huggingface/transformers/issues/8545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8545/comments | https://api.github.com/repos/huggingface/transformers/issues/8545/events | https://github.com/huggingface/transformers/issues/8545 | 743,191,065 | MDU6SXNzdWU3NDMxOTEwNjU= | 8,545 | Pretrain BERT with user defined vocabulary | {
"login": "Jess0-0",
"id": 29790071,
"node_id": "MDQ6VXNlcjI5NzkwMDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29790071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jess0-0",
"html_url": "https://github.com/Jess0-0",
"followers_url": "https://api.github.com/users/Jess0-0/followers",
"following_url": "https://api.github.com/users/Jess0-0/following{/other_user}",
"gists_url": "https://api.github.com/users/Jess0-0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jess0-0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jess0-0/subscriptions",
"organizations_url": "https://api.github.com/users/Jess0-0/orgs",
"repos_url": "https://api.github.com/users/Jess0-0/repos",
"events_url": "https://api.github.com/users/Jess0-0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jess0-0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Yes, you can definitely do so (not sure about the word level). \r\n\r\nYou can define a custom model configuration:\r\n```py\r\nfrom transformers import BertConfig\r\n\r\nconfig = BertConfig(vocab_size=xxx, num_attention_heads=xxx, ...)\r\n```\r\n\r\nYou can then use that configuration with the `run_mlm.py` script to train your model.\r\n\r\nRegarding the tokenizer, you could use a word-level, but you would need to manually modify the files for that. There exists a word-level tokenizer in the `tokenization_bert.py` file that you could leverage for this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> tokenization_bert.py\r\n\r\nexcuse me does this class have the functions that help in word-level, not sub-word level , please ? "
] | 1,605 | 1,642 | 1,610 | NONE | null | # 🚀 Feature request
I'm wondering if there is a way to pretrain BERT with a user-defined vocabulary and number of layers/heads, and to use a WordLevel tokenizer instead of the WordPiece tokenizer that the current BERT uses. Thank you!
## Motivation
This could help the BERT model adapt to different tasks.
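To make the ask concrete, here is a rough sketch of what I have in mind (all sizes and the vocab file below are placeholders, and I'm not sure this is the intended API for the word-level part):
```python
from transformers import BertConfig, BertForMaskedLM, BertTokenizer

# Hypothetical custom configuration; every size here is a placeholder.
config = BertConfig(vocab_size=20000, num_hidden_layers=6, num_attention_heads=8)
model = BertForMaskedLM(config=config)

# BertTokenizer normally applies WordPiece after whitespace splitting; for
# word-level behaviour I assume I'd need a vocab where each word is a full
# token, or a custom tokenizer class altogether.
tokenizer = BertTokenizer("my_word_level_vocab.txt")  # placeholder vocab file
```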
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8545/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8544/comments | https://api.github.com/repos/huggingface/transformers/issues/8544/events | https://github.com/huggingface/transformers/pull/8544 | 743,183,791 | MDExOlB1bGxSZXF1ZXN0NTIxMTQ1NzEw | 8,544 | Update README.md | {
"login": "vishxl",
"id": 18505095,
"node_id": "MDQ6VXNlcjE4NTA1MDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/18505095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishxl",
"html_url": "https://github.com/vishxl",
"followers_url": "https://api.github.com/users/vishxl/followers",
"following_url": "https://api.github.com/users/vishxl/following{/other_user}",
"gists_url": "https://api.github.com/users/vishxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishxl/subscriptions",
"organizations_url": "https://api.github.com/users/vishxl/orgs",
"repos_url": "https://api.github.com/users/vishxl/repos",
"events_url": "https://api.github.com/users/vishxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishxl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"What do you think @mrm8488?",
"Great! The community rocks! So I think I have to update many T5 model cards"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | Modified the Model in Action section. The class `AutoModelWithLMHead` is deprecated, so I changed it to `AutoModelForSeq2SeqLM` for encoder-decoder models. Removed the duplicate eos token.
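For reference, the updated usage pattern looks roughly like this (the checkpoint name below is just an example):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Example checkpoint; the same pattern applies to any encoder-decoder model.
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")  # replaces the deprecated AutoModelWithLMHead
```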
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSTM: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8544/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8544",
"html_url": "https://github.com/huggingface/transformers/pull/8544",
"diff_url": "https://github.com/huggingface/transformers/pull/8544.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8544.patch",
"merged_at": 1605723573000
} |
https://api.github.com/repos/huggingface/transformers/issues/8543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8543/comments | https://api.github.com/repos/huggingface/transformers/issues/8543/events | https://github.com/huggingface/transformers/issues/8543 | 743,145,475 | MDU6SXNzdWU3NDMxNDU0NzU= | 8,543 | Upload models using Git fails | {
"login": "agemagician",
"id": 6087313,
"node_id": "MDQ6VXNlcjYwODczMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agemagician",
"html_url": "https://github.com/agemagician",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agemagician/subscriptions",
"organizations_url": "https://api.github.com/users/agemagician/orgs",
"repos_url": "https://api.github.com/users/agemagician/repos",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"received_events_url": "https://api.github.com/users/agemagician/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @agemagician ,\r\n\r\nI think this is heavily related to #8480 and can only be fixed by HF team at the moment.",
"yep , @stefan-it . It seems to be a similar issue.\r\n\r\nIn this case @julien-c or @patrickvonplaten can someone update ProtT5-XL-BFD Model:\r\nhttps://huggingface.co/Rostlab/prot_t5_xl_bfd\r\n\r\nConfiguration:\r\nhttps://www.dropbox.com/s/pchr2vbvckanfu5/config.json?dl=1\r\n\r\nPytorch Model:\r\nhttps://www.dropbox.com/s/2brwq1cvxo116c7/pytorch_model.bin?dl=1\r\n\r\nThanks in advance for your help",
"Yes @agemagician, for clarity, I will re-open your initial issue which is indeed not yet closed (although we now know how to support it)",
"I will be grateful if someone could update the model from the above links until you fix this issue.",
"Should be good, I updated tf and pt - lemme know if something didn't work :-) ",
"Perfect, thanks a lot Patrick. We will test it right away.\r\n\r\nHopefully, the issue will be fixed soon, so I don't have to waste your time again with the 11B model soon.",
"> Perfect, thanks a lot Patrick. We will test it right away.\r\n> \r\n> Hopefully, the issue will be fixed soon, so I don't have to waste your time again with the 11B model soon.\r\n\r\nNo worries at all! Feel free to open a new issue -> doesn't take me long at all to upload it :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This can now be closed. Thanks for reporting!",
"```\r\n/content/russianpoetrymany# git push\r\nCounting objects: 11, done.\r\nDelta compression using up to 2 threads.\r\nCompressing objects: 100% (9/9), done.\r\nWriting objects: 100% (11/11), 6.26 GiB | 91.43 MiB/s, done.\r\nTotal 11 (delta 0), reused 2 (delta 0)\r\nerror: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out\r\nfatal: The remote end hung up unexpectedly\r\nfatal: The remote end hung up unexpectedly\r\nEverything up-to-date\r\n\r\n\r\n```\r\n\r\n@julien-c @patrickvonplaten uploading a mbart model from google colab and got it . any pointers ??",
"```Enumerating objects: 13, done.\r\nCounting objects: 100% (13/13), done.\r\nDelta compression using up to 16 threads\r\nCompressing objects: 100% (10/10), done.\r\nWriting objects: 100% (12/12), 2.55 GiB | 1.03 MiB/s, done.\r\nTotal 12 (delta 1), reused 1 (delta 0), pack-reused 0\r\nerror: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504\r\nsend-pack: unexpected disconnect while reading sideband packet\r\nfatal: the remote end hung up unexpectedly\r\nEverything up-to-date\r\n```\r\n\r\nMe too",
"```\r\nCounting objects: 3, done.\r\nDelta compression using up to 2 threads.\r\nCompressing objects: 100% (2/2), done.\r\nWriting objects: 100% (3/3), 1.80 GiB | 37.19 MiB/s, done.\r\nTotal 3 (delta 0), reused 1 (delta 0)\r\nerror: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out\r\nfatal: The remote end hung up unexpectedly\r\nfatal: The remote end hung up unexpectedly\r\nEverything up-to-date\r\n```\r\n\r\nSame here with `!git push` in `colab`. `!GIT_CURL_VERBOSE=1 git push` does not help.",
"Those are usually transient errors, can you try again with flags `GIT_CURL_VERBOSE=1 GIT_TRACE=1` and paste the output to a gist if it occurs again?",
"This happened to me too:\r\n```\r\nβ― git push origin main\r\nEnumerating objects: 5, done.\r\nCounting objects: 100% (5/5), done.\r\nDelta compression using up to 8 threads\r\nCompressing objects: 100% (3/3), done.\r\nWriting objects: 75% (3/4), 2.00 GiB | 2.75 MiB/s\r\nWriting objects: 100% (4/4), 2.22 GiB | 2.86 MiB/s, done.\r\nTotal 4 (delta 0), reused 1 (delta 0), pack-reused 0\r\nerror: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504\r\nsend-pack: unexpected disconnect while reading sideband packet\r\nfatal: the remote end hung up unexpectedly\r\nEverything up-to-date\r\n```",
"Having the same issue when uploading a ~6GB GPT-J model via command line:\r\n\r\n```\r\ngit push\r\nEnumerating objects: 4, done.\r\nCounting objects: 100% (4/4), done.\r\nDelta compression using up to 10 threads\r\nCompressing objects: 100% (2/2), done.\r\nWriting objects: 100% (3/3), 5.11 GiB | 624.00 KiB/s, done.\r\nTotal 3 (delta 1), reused 1 (delta 0), pack-reused 0\r\nerror: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504\r\nsend-pack: unexpected disconnect while reading sideband packet\r\nfatal: the remote end hung up unexpectedly\r\n```\r\n\r\nIt seems to be always around 5.11GB (total file size is 6.32GB). When uploading the same file via the HF website interface I get an error 400. Tried it from various locations (wifis), always the same behavior. \r\n\r\nAny helpful advises on how to proceed?\r\n\r\n",
"I was able to upload it after encountering the error, after restarting the run-time and using an earlier version of HuggingFace",
"Running into the same issue here trying to upload a 1.4Gb dataset\r\n\r\n```\r\nEnumerating objects: 23985, done.\r\nCounting objects: 100% (23985/23985), done.\r\nDelta compression using up to 4 threads\r\nCompressing objects: 100% (23948/23948), done.\r\nerror: RPC failed; HTTP 408 curl 22 The requested URL returned error: 408\r\nsend-pack: unexpected disconnect while reading sideband packet\r\nWriting objects: 100% (23949/23949), 1.43 GiB | 4.74 MiB/s, done.\r\nTotal 23949 (delta 2), reused 23948 (delta 1), pack-reused 0\r\nfatal: the remote end hung up unexpectedly\r\nEverything up-to-date\r\n```",
"How did you all solve this?",
"@sgugger @thomwolf can you reopen this issue?",
"For anyone how still couldn't figured it out: it is because the big file is **not** being tracked git LFS. Reasons maybe:\r\n1. You don't have the `.gitattributes`\r\n2. You have the `.gitattributes`, but the file extension is not in the list\r\n\r\nYou can simply ask git LFS to track your file, for example: `git lfs track \"*.gguf\"`"
] | 1,605 | 1,707 | 1,611 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-4.15.0-112-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Model Cards: @julien-c
T5: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
T5
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
git clone https://huggingface.co/Rostlab/prot_t5_xl_bfd
Cloning into 'prot_t5_xl_bfd'...
remote: Enumerating objects: 31, done.
remote: Counting objects: 100% (31/31), done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 31 (delta 13), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (31/31), done.
```
`cp config.json pytorch_model.bin prot_t5_xl_bfd`
`git add --all`
```
git status
On branch main
Your branch is up to date with 'origin/main'.
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: config.json
modified: pytorch_model.bin
```
`git commit -m "[T5] Fix load weights function #8528"`
```
git push
Username for 'https://huggingface.co': xxxxxx
Password for 'https://[email protected]':
Counting objects: 4, done.
Delta compression using up to 80 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 4.92 GiB | 23.09 MiB/s, done.
Total 4 (delta 2), reused 1 (delta 0)
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```
OR
```
GIT_CURL_VERBOSE=1 git push
* Couldn't find host huggingface.co in the .netrc file; using defaults
* Trying 192.99.39.165...
* TCP_NODELAY set
* Connected to huggingface.co (192.99.39.165) port 443 (#0)
* found 140 certificates in /etc/ssl/certs/ca-certificates.crt
* found 421 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: huggingface.co (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=huggingface.co
* start date: Tue, 10 Nov 2020 08:05:46 GMT
* expire date: Mon, 08 Feb 2021 08:05:46 GMT
* issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET /Rostlab/prot_t5_xl_bfd/info/refs?service=git-receive-pack HTTP/1.1
Host: huggingface.co
User-Agent: git/2.17.1
Accept: */*
Accept-Encoding: gzip
Accept-Language: C, *;q=0.9
Pragma: no-cache
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.14.2
< Date: Sat, 14 Nov 2020 23:43:52 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 12
< Connection: keep-alive
< X-Powered-By: huggingface-moon
< WWW-Authenticate: Basic realm="Authentication required", charset="UTF-8"
< ETag: W/"c-dAuDFQrdjS3hezqxDTNgW7AOlYk"
<
* Connection #0 to host huggingface.co left intact
Username for 'https://huggingface.co': agemagician
Password for 'https://[email protected]':
* Couldn't find host huggingface.co in the .netrc file; using defaults
* Found bundle for host huggingface.co: 0x55acdab63f80 [can pipeline]
* Re-using existing connection! (#0) with host huggingface.co
* Connected to huggingface.co (192.99.39.165) port 443 (#0)
* Server auth using Basic with user 'agemagician'
> GET /Rostlab/prot_t5_xl_bfd/info/refs?service=git-receive-pack HTTP/1.1
Host: huggingface.co
Authorization: Basic xxxxxx
User-Agent: git/2.17.1
Accept: */*
Accept-Encoding: gzip
Accept-Language: C, *;q=0.9
Pragma: no-cache
< HTTP/1.1 200 OK
< Server: nginx/1.14.2
< Date: Sat, 14 Nov 2020 23:43:59 GMT
< Content-Type: application/x-git-receive-pack-advertisement
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: huggingface-moon
<
* Connection #0 to host huggingface.co left intact
Counting objects: 4, done.
Delta compression using up to 80 threads.
Compressing objects: 100% (3/3), done.
* Couldn't find host huggingface.co in the .netrc file; using defaults
* Found bundle for host huggingface.co: 0x55acdab63f80 [can pipeline]
* Re-using existing connection! (#0) with host huggingface.co
* Connected to huggingface.co (192.99.39.165) port 443 (#0)
* Server auth using Basic with user 'agemagician'
> POST /Rostlab/prot_t5_xl_bfd/git-receive-pack HTTP/1.1
Host: huggingface.co
Authorization: Basic xxxxxx
User-Agent: git/2.17.1
Content-Type: application/x-git-receive-pack-request
Accept: application/x-git-receive-pack-result
Content-Length: 4
* upload completely sent off: 4 out of 4 bytes
< HTTP/1.1 200 OK
< Server: nginx/1.14.2
< Date: Sat, 14 Nov 2020 23:44:02 GMT
< Content-Type: application/x-git-receive-pack-result
< Content-Length: 0
< Connection: keep-alive
< X-Powered-By: huggingface-moon
<
* Connection #0 to host huggingface.co left intact
* Couldn't find host huggingface.co in the .netrc file; using defaults
* Found bundle for host huggingface.co: 0x55acdab63f80 [can pipeline]
* Re-using existing connection! (#0) with host huggingface.co
* Connected to huggingface.co (192.99.39.165) port 443 (#0)
* Server auth using Basic with user 'xxxxxx'
> POST /Rostlab/prot_t5_xl_bfd/git-receive-pack HTTP/1.1
Host: huggingface.co
Authorization: Basic xxxxxx
User-Agent: git/2.17.1
Accept-Encoding: gzip
Content-Type: application/x-git-receive-pack-request
Accept: application/x-git-receive-pack-result
Transfer-Encoding: chunked
Writing objects: 100% (4/4), 4.92 GiB | 23.17 MiB/s, done.
Total 4 (delta 2), reused 1 (delta 0)
* Signaling end of chunked upload via terminating chunk.
* Signaling end of chunked upload via terminating chunk.
* The requested URL returned error: 504 Gateway Time-out
* stopped the pause stream!
* Closing connection 0
error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504 Gateway Time-out
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```
## Expected behavior
I had an issue before when uploading big models like T5 (#7480).
It was marked as solved after moving to model versioning:
https://github.com/huggingface/transformers/pull/8324
However, @patrickvonplaten fixed some problems in the T5 weights:
https://github.com/huggingface/transformers/pull/8528
I tried to update the model, but it still doesn't work, as shown above.
I tried several tricks, like:
https://stackoverflow.com/questions/54061758/error-rpc-failed-http-504-curl-22-the-requested-url-returned-error-504-gatewa
But I could not solve it.
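One thing I haven't been able to rule out is whether the large files are actually routed through git-lfs. A sanity check along these lines (standard git-lfs commands; whether this is the culprit here is an assumption on my part):
```
# Show which patterns .gitattributes currently routes through LFS
git lfs track

# Make sure the weights go through LFS before committing
git lfs track "*.bin"
git add .gitattributes pytorch_model.bin

# Confirm the file is registered as an LFS object
git lfs ls-files
```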
Any ideas? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8543/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8542/comments | https://api.github.com/repos/huggingface/transformers/issues/8542/events | https://github.com/huggingface/transformers/issues/8542 | 743,077,108 | MDU6SXNzdWU3NDMwNzcxMDg= | 8,542 | Failed in predict function after converting xlnet model to onnx format | {
"login": "EricYangCn",
"id": 26324908,
"node_id": "MDQ6VXNlcjI2MzI0OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/26324908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EricYangCn",
"html_url": "https://github.com/EricYangCn",
"followers_url": "https://api.github.com/users/EricYangCn/followers",
"following_url": "https://api.github.com/users/EricYangCn/following{/other_user}",
"gists_url": "https://api.github.com/users/EricYangCn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EricYangCn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EricYangCn/subscriptions",
"organizations_url": "https://api.github.com/users/EricYangCn/orgs",
"repos_url": "https://api.github.com/users/EricYangCn/repos",
"events_url": "https://api.github.com/users/EricYangCn/events{/privacy}",
"received_events_url": "https://api.github.com/users/EricYangCn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I am facing a similar issue. Oddly, it seems to work when input length is 6. Could anyone help out here please? @EricYangCn, were you able to resolve this? Thank you.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 3.7.9
- PyTorch version: 1.7.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
- onnx 1.8.0
- onnxruntime 1.5.2
- transformers 3.5.1
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
TransfoXL/XLNet: @TevenLeScao
-->
TransfoXL/XLNet:
@TevenLeScao
## Information
Model I am using (XLNet):
The problem arises when using:
* [x] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Train a PyTorch model with XLNetForSequenceClassification (this succeeds).
2. Convert it to ONNX format with convert_graph_to_onnx.py, which emits some warnings:
python ../convert_graph_to_onnx.py --pipeline sentiment-analysis --framework pt --opset 12 ....
====== Converting model to ONNX ======
ONNX opset version set to: 12
Loading pipeline (model: /home/yhq/onnx/icd_model/9-4, tokenizer: /home/yhq/onnx/icd_model/9-4)
Using framework PyTorch: 1.7.0
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input token_type_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch'}
Found output output_1 with shape: {1: 'batch', 0: 'sequence'}
Found output output_2 with shape: {1: 'batch', 0: 'sequence'}
Found output output_3 with shape: {1: 'batch', 0: 'sequence'}
Found output output_4 with shape: {1: 'batch', 0: 'sequence'}
Found output output_5 with shape: {1: 'batch', 0: 'sequence'}
Found output output_6 with shape: {1: 'batch', 0: 'sequence'}
Found output output_7 with shape: {1: 'batch', 0: 'sequence'}
Found output output_8 with shape: {1: 'batch', 0: 'sequence'}
Found output output_9 with shape: {1: 'batch', 0: 'sequence'}
Found output output_10 with shape: {1: 'batch', 0: 'sequence'}
Found output output_11 with shape: {1: 'batch', 0: 'sequence'}
Found output output_12 with shape: {1: 'batch', 0: 'sequence'}
Ensuring inputs are in correct order
mems is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
/home/yhq/anaconda3/envs/convert/lib/python3.7/site-packages/torch/onnx/utils.py:1109: UserWarning: Provided key token_type_ids for dynamic axes is not a valid input/output name
warnings.warn("Provided key {} for dynamic axes is not a valid input/output name".format(key))
/home/yhq/anaconda3/envs/convert/lib/python3.7/site-packages/transformers/modeling_xlnet.py:1160: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
non_tgt_mask = -torch.eye(qlen).to(attn_mask)
/home/yhq/anaconda3/envs/convert/lib/python3.7/site-packages/transformers/modeling_utils.py:1673: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_tensor.shape == tensor_shape for input_tensor in input_tensors
3. The model I trained predicts well when loaded in PyTorch format, but it fails when loaded in ONNX format, with the following errors:
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:'Add_26' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:475 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 6 by 48
My code:
import onnxruntime, os, torch
from tqdm.auto import tqdm
from torch.utils.data import TensorDataset, SequentialSampler, DataLoader
from transformers import XLNetTokenizer

sess_options = onnxruntime.SessionOptions()
sess_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_options.intra_op_num_threads = 1
session = onnxruntime.InferenceSession(os.path.join(onnx_path, 'onnx_model.onnx'), sess_options)
tokenizer = XLNetTokenizer.from_pretrained(onnx_path)

input_ids = []
attention_masks = []
for name in name_list:
    encoded_dict = tokenizer.encode_plus(
        name,
        add_special_tokens=True,
        max_length=max_len,
        pad_to_max_length=True,
        return_attention_mask=True,
        return_tensors='pt',
    )
    input_ids.append(encoded_dict['input_ids'])
    attention_masks.append(encoded_dict['attention_mask'])

input_ids = torch.cat(input_ids, dim=0)
attention_mask = torch.cat(attention_masks, dim=0)

prediction_data = TensorDataset(input_ids, attention_mask)
prediction_sampler = SequentialSampler(prediction_data)
prediction_dataloader = DataLoader(prediction_data, sampler=prediction_sampler, batch_size=1)

for step in tqdm(prediction_dataloader, desc="Predicting"):
    step = tuple(t.detach().cpu().numpy() for t in step)
    ort_inputs = {'input_ids': step[0],
                  'attention_mask': step[1]}
    logits = session.run(None, ort_inputs)  # failed
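One workaround I'm considering, in case the traced graph baked in the sequence length of the dummy input used at export time (the "6 by 48" in the error message makes me suspect this): right-pad or truncate every batch to that exact length before calling the session. A rough sketch, where EXPORT_SEQ_LEN is an assumption about the export-time dummy input:
```python
import numpy as np

EXPORT_SEQ_LEN = 48  # assumption: the sequence length traced into the exported graph

def pad_to_export_len(arr, pad_value=0):
    # arr: (batch, seq_len) integer array; right-pad or truncate to EXPORT_SEQ_LEN
    batch, seq_len = arr.shape
    if seq_len >= EXPORT_SEQ_LEN:
        return arr[:, :EXPORT_SEQ_LEN]
    padded = np.full((batch, EXPORT_SEQ_LEN), pad_value, dtype=arr.dtype)
    padded[:, :seq_len] = arr
    return padded

# ort_inputs = {k: pad_to_export_len(v) for k, v in ort_inputs.items()}
```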
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8542/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8541/comments | https://api.github.com/repos/huggingface/transformers/issues/8541/events | https://github.com/huggingface/transformers/issues/8541 | 743,075,794 | MDU6SXNzdWU3NDMwNzU3OTQ= | 8,541 | Specify label name | {
"login": "exelents",
"id": 12846582,
"node_id": "MDQ6VXNlcjEyODQ2NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/12846582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/exelents",
"html_url": "https://github.com/exelents",
"followers_url": "https://api.github.com/users/exelents/followers",
"following_url": "https://api.github.com/users/exelents/following{/other_user}",
"gists_url": "https://api.github.com/users/exelents/gists{/gist_id}",
"starred_url": "https://api.github.com/users/exelents/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exelents/subscriptions",
"organizations_url": "https://api.github.com/users/exelents/orgs",
"repos_url": "https://api.github.com/users/exelents/repos",
"events_url": "https://api.github.com/users/exelents/events{/privacy}",
"received_events_url": "https://api.github.com/users/exelents/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | NONE | null | I'm trying to train a classification network on top of the T5 encoder, starting from this training code:
https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb
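Schematically, my situation looks like this (a sketch, not my actual code; `class_label` is a made-up field name, and I'm assuming `TrainingArguments.label_names` is the right hook for this):
```python
from transformers import Trainer, TrainingArguments

# Hypothetical: tell the Trainer which batch keys are labels, so it can
# match them against the corresponding entries of the model's output dict.
training_args = TrainingArguments(output_dir="out", label_names=["class_label"])
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)  # model/dataset defined elsewhere
```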
As I have a dedicated output for the label, I need to make the Trainer associate the label field from the batch with the corresponding field in my network's output dictionary (as in the sketch above), since they both represent the output class label. How can I do that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8541/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8540/comments | https://api.github.com/repos/huggingface/transformers/issues/8540/events | https://github.com/huggingface/transformers/pull/8540 | 743,066,469 | MDExOlB1bGxSZXF1ZXN0NTIxMDUxMTYz | 8,540 | Add model_max_length property on fast tokenizers | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this is handled in the base tokenizer class, Morgan.",
"I think I've got an issue in my dev env, something fucked up... "
] | 1,605 | 1,605 | 1,605 | MEMBER | null | Fast tokenizers don't have a `model_max_length` property, unlike their "slow" counterparts.
```python
>>> t.model_max_len
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'BertTokenizerFast' object has no attribute 'model_max_len'
```
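Conceptually the addition just mirrors the attribute the slow tokenizers already expose (sketch only; the backing field name in the fast tokenizers is an assumption here):
```python
class PreTrainedTokenizerFast:
    @property
    def model_max_length(self) -> int:
        # Mirror of the slow tokenizers' `model_max_length`; the private
        # `_model_max_length` storage field is hypothetical.
        return self._model_max_length
```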
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8540/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8540",
"html_url": "https://github.com/huggingface/transformers/pull/8540",
"diff_url": "https://github.com/huggingface/transformers/pull/8540.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8540.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/8539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8539/comments | https://api.github.com/repos/huggingface/transformers/issues/8539/events | https://github.com/huggingface/transformers/issues/8539 | 743,045,618 | MDU6SXNzdWU3NDMwNDU2MTg= | 8,539 | TFLongformer Error: TypeError: __init__() missing 1 required positional argument: 'last_hidden_state' | {
"login": "SapirWeissbuch",
"id": 44585792,
"node_id": "MDQ6VXNlcjQ0NTg1Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SapirWeissbuch",
"html_url": "https://github.com/SapirWeissbuch",
"followers_url": "https://api.github.com/users/SapirWeissbuch/followers",
"following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}",
"gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions",
"organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs",
"repos_url": "https://api.github.com/users/SapirWeissbuch/repos",
"events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}",
"received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @SapirWeissbuch - it's very hard for us to debug such long and user-specific examples that also require external files. \r\n\r\nI'd recommend that you either:\r\n\r\n1) Narrow down the error much more and give us a 10-liner to reproduce the error (also making sure that it is in fact an error and not a wrong pre-processing step)\r\nOR\r\n2) Ask your question on the forum https://discuss.huggingface.co/\r\nOR\r\n3) Make a google colab (also keeping it to an absolute minimum of lines) to reproduce your error which makes it easier for us to quickly debug the problem.\r\n\r\nWe are only 4 people maintaining the library and sadly don't find the time to debug such longer issues. Hope you understand and looking forward for a pin-pointed error description :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@SapirWeissbuch , I'm also facing this issue, were you able to resolve this? I get this error when running training for \"allenai/longformer-base-4096\" model with distributed training"
] | 1,605 | 1,658 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.0
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Longformer/Reformer: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFLongformer
The problem arises when using:
* [x] my own modified scripts: Scripts below.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: TriviaQA converted to SQuAD format
Hi, I'm getting an error when trying to fine-tune TFLongformer on TriviaQA using a simple architecture that is discussed [here](https://keras.io/examples/nlp/text_extraction_with_bert/) (I want to use TFLongformer instead of BERT and formatted-TriviaQA instead of SQuAD).
The error I'm getting is:
```
Traceback (most recent call last):
  File "model_for_issues.py", line 102, in <module>
    model = build_model()
  File "model_for_issues.py", line 65, in build_model
    embedding = encoder(input_ids, attention_mask=attention_mask).last_hidden_state
  File "path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 926, in __call__
    input_list)
  File "path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1117, in _functional_construction_call
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "path/to/project/env/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py", line 258, in wrapper
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    path/to/project/env/lib/python3.7/site-packages/transformers/modeling_tf_longformer.py:1760 call  *
        outputs = self.longformer(inputs, **kwargs)
    path/to/project/env/lib/python3.7/site-packages/transformers/modeling_tf_longformer.py:1509 call  *
        encoder_outputs = self.encoder(
    path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:926 __call__  **
        input_list)
    path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:1147 _functional_construction_call
        outputs)
    path/to/project/env/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:2568 _set_connectivity_metadata
        outputs = nest.pack_sequence_as(outputs, outputs_copy)
    path/to/project/env/lib/python3.7/site-packages/tensorflow/python/util/nest.py:570 pack_sequence_as
        return _pack_sequence_as(structure, flat_sequence, expand_composites)
    path/to/project/env/lib/python3.7/site-packages/tensorflow/python/util/nest.py:533 _pack_sequence_as
        return sequence_fn(structure, packed)
    path/to/project/env/lib/python3.7/site-packages/tensorflow/python/util/nest.py:152 _sequence_like
        d = instance_type()

    TypeError: __init__() missing 1 required positional argument: 'last_hidden_state'
```
_Remark: I changed the path to the working directory to "path/to/project" for my privacy_
## To reproduce
Steps to reproduce the behavior:
1. Convert TriviaQA data to SQuAD format using this script from Longformer's GitHub: https://github.com/allenai/longformer/blob/master/scripts/cheatsheet.txt#L29
2. Use this code to re-format the data to the format I work with:
```python
""" Usage:
TriviaQA_formatting.py [--in=INPUT_FILE] [--out=OUTPUT_FILE] [--debug]
"""
# External imports
import logging
import pdb
from pprint import pprint
from pprint import pformat
from docopt import docopt
from pathlib import Path
from tqdm import tqdm
import json
# Local imports
import numpy as np
#----
if __name__ == "__main__":
# Parse command line arguments
args = docopt(__doc__)
inp_fn = Path(args["--in"]) if args["--in"] else None
out_fn = Path(args["--out"]) if args["--out"] else Path("./tmp.out")
# Determine logging level
debug = args["--debug"]
if debug:
logging.basicConfig(level = logging.DEBUG)
else:
logging.basicConfig(level = logging.INFO)
# Start computation
data = []
with open(inp_fn, 'r') as f:
print("started loading")
data = json.load(f)["data"]
print("ended loading")
data_dict = {}
contexts = [entry["paragraphs"][0]["context"] for entry in tqdm(data)]
questions = [entry["paragraphs"][0]["qas"][0]["question"] for entry in tqdm(data)]
answer_texts = [entry["paragraphs"][0]["qas"][0]["aliases"] for entry in tqdm(data)]
start_indices = [None] * len(data)
end_indices = [None] * len(data)
for index, entry in enumerate(data):
# taking the first answer
answers = entry["paragraphs"][0]["qas"][0]["answers"]
start_indices[index] = answers[0]["answer_start"] if answers else -1
end_indices[index] = (answers[0]["answer_start"] + len(answers[0]["text"])) if answers else -1
data_dict = {
"context": contexts,
"question": questions,
"start_index": start_indices,
"end_index": end_indices,
"answer_texts": answer_texts,
}
with open(out_fn, 'w+') as f:
json.dump(data_dict, f)
# End
logging.info("DONE")
```
Scripts used to run this formatting code:
```
python TriviaQA_formatting.py --in=squad-wikipedia-train-4096.json --out=formatted_wiki_train_4096.json
```
3. Run this script for tokenization and training model:
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from transformers import TFLongformerModel, LongformerConfig, LongformerTokenizer, LongformerTokenizerFast
from longformer_encoding import *
import numpy as np
from tqdm import tqdm
import json


def encode_sentence(s, tokenizer):
    s = s + tokenizer.sep_token
    return list(tokenizer.encode(s))


def pad_to_max_length(t, max_len, padding_value):
    t = t[:, :max_len]
    paddings = [[0, 0], [0, max_len - tf.shape(t)[1]]]
    return tf.pad(t, paddings, 'CONSTANT', constant_values=padding_value)


def longformer_encode(texts1, texts2, answer_end, tokenizer, max_len=4096):
    num_examples = len(texts1)
    sentence1 = tf.ragged.constant([encode_sentence(tokenizer.cls_token + s, tokenizer)
                                    for s in tqdm(texts1)])
    sentence2 = tf.ragged.constant([encode_sentence(s, tokenizer)
                                    for s in tqdm(texts2)])

    input_word_ids = tf.concat([sentence1, sentence2], axis=-1)
    attention_mask = tf.ones_like(input_word_ids).to_tensor()

    type_s1 = tf.ones_like(sentence1)
    type_s2 = tf.zeros_like(sentence2)
    global_attention_mask = tf.concat([type_s1, type_s2], axis=-1).to_tensor()

    sentence2_start_index = [len(s1) for s1 in sentence1]

    # get indices of examples to ignore:
    valid_sample_indices = []
    for i in tqdm(range(num_examples)):
        if sentence2_start_index[i] + answer_end[i] <= max_len:
            valid_sample_indices.append(i)

    inputs = {
        'input_ids': pad_to_max_length(input_word_ids.to_tensor(), max_len, tokenizer.pad_token_id),
        'attention_mask': pad_to_max_length(attention_mask, max_len, 0),
        'global_attention_mask': pad_to_max_length(global_attention_mask, max_len, 0),
    }
    return inputs, valid_sample_indices


def read_data_file(fn):
    with open(fn, 'r') as f:
        data = json.load(f)
    answer_exists_slice = np.ndarray.flatten(np.argwhere(np.array(data["end_index"]) != -1))
    return {key: np.ndarray.tolist(np.array(value)[answer_exists_slice]) for (key, value) in data.items()}


def build_model(max_len=4096):
    config = LongformerConfig()
    LongformerModel = TFLongformerModel(config=config)
    encoder = LongformerModel.from_pretrained('allenai/longformer-base-4096', return_dict=True)

    input_ids = layers.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
    attention_mask = layers.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")
    global_attention_mask = layers.Input(shape=(max_len,), dtype=tf.int32, name="global_attention_mask")

    embedding = encoder(input_ids, attention_mask=attention_mask).last_hidden_state

    start_logits = layers.Dense(1, name="start_logit", use_bias=False)(embedding)
    start_logits = layers.Flatten()(start_logits)
    end_logits = layers.Dense(1, name="end_logit", use_bias=False)(embedding)
    end_logits = layers.Flatten()(end_logits)
    start_probs = layers.Activation(keras.activations.softmax)(start_logits)
    end_probs = layers.Activation(keras.activations.softmax)(end_logits)

    model = keras.Model(
        inputs=[input_ids, attention_mask],
        outputs=[start_probs, end_probs],
    )
    loss = keras.losses.SparseCategoricalCrossentropy(from_logits=False)
    optimizer = keras.optimizers.Adam(lr=5e-5)
    model.compile(optimizer=optimizer, loss=[loss, loss])
    model.summary()
    return model


# Preprocessing
train_fn = "data/formatted_wiki_train_4096.json"
data = read_data_file(train_fn)
print("One data example:\n Question: {}\ncontext: {}\nstart_index: {}".format(data['question'][0], data['context'][0][:200], data["start_index"][0]))

tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096')
inputs, valid_sample_indices = longformer_encode(data["question"], data["context"], data["end_index"], tokenizer, 4096)

valid_samples_slice = np.array(valid_sample_indices)
clean_inputs = {key: np.array(value)[valid_samples_slice] for (key, value) in inputs.items()}
clean_data = {key: np.array(value)[valid_samples_slice] for (key, value) in data.items()}
clean_data["encoded_input"] = clean_inputs

x_train = clean_data["encoded_input"]
y_train = [clean_data["start_index"], clean_data["end_index"]]

model = build_model()
model.fit(
    x_train,
    y_train,
    epochs=3,
    verbose=2,
    batch_size=1,
)
```
###### Training sample example:
```
One data example:
Question: Ownership of which worldwide publishing concern gave Conrad Black control of the Daily Telegraph?
context: Conrad Moffat Black , Baron Black of Crossharbour , ( born 25 August 1944 ) is a Canadian-born British former newspaper publisher and author . He is a non-affiliated life peer .
Black controlled H
start_index: 199
```
## Expected behavior
The model should start training.
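For what it's worth, one workaround I'm going to try (untested, and I'm not sure it addresses the root cause): avoid returning the output dataclass inside the Keras functional graph by disabling `return_dict` and indexing the returned tuple instead:
```python
encoder = TFLongformerModel.from_pretrained('allenai/longformer-base-4096', return_dict=False)
# With return_dict=False the call returns a plain tuple whose first
# element is the last hidden state.
embedding = encoder(input_ids, attention_mask=attention_mask)[0]
```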
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8539/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8538/comments | https://api.github.com/repos/huggingface/transformers/issues/8538/events | https://github.com/huggingface/transformers/issues/8538 | 743,022,346 | MDU6SXNzdWU3NDMwMjIzNDY= | 8,538 | TFBertModel not working at all with any type of model | {
"login": "brunopistone",
"id": 10196125,
"node_id": "MDQ6VXNlcjEwMTk2MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/10196125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brunopistone",
"html_url": "https://github.com/brunopistone",
"followers_url": "https://api.github.com/users/brunopistone/followers",
"following_url": "https://api.github.com/users/brunopistone/following{/other_user}",
"gists_url": "https://api.github.com/users/brunopistone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brunopistone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brunopistone/subscriptions",
"organizations_url": "https://api.github.com/users/brunopistone/orgs",
"repos_url": "https://api.github.com/users/brunopistone/repos",
"events_url": "https://api.github.com/users/brunopistone/events{/privacy}",
"received_events_url": "https://api.github.com/users/brunopistone/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Thanks for opening such a detailed issue!\r\n\r\nLet me ping @jplu, the TensorFlow master so that he may help you.",
"Hello!\r\n\r\nThis is because you are not updating your embedding size for `token_type_embeddings`. If you check carefully the config for the `dbmdz/bert-base-italian-xxl-cased` mode you will see:\r\n```\r\nBertConfig {\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 32102\r\n}\r\n```\r\nAnd `type_vocab_size` equals `2`. While in your example your are trying with a different value, that's why the error tells you that it cannot find any lookup in the range `[0:2)`.\r\n",
"Hi @jplu , thank you for your answer. Now it works!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I am trying this exact thing and i get an error on this line:\r\nnet = tf.keras.layers.Dense(512, activation='relu')(seq_output)\r\nInputs to a layer should be tensors. Got: last_hidden_state\r\nAny ideas?"
] | 1,605 | 1,617 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: macOS
- Python version: 3.7
- PyTorch version (GPU?): NO
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Bert "dbmdz/bert-base-italian-xxl-cased"
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
<---------------------------------------------->
Hi @LysandreJik, @stefan-it, @sgugger, I'm trying to use `dbmdz/bert-base-italian-xxl-cased` to create a Keras model for a classification task.
I've followed the documentation but I continue to receive the following error:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[5,0] = 102 is not in [0, 2)
[[node functional_1/bert/embeddings/token_type_embeddings/embedding_lookup (defined at /anaconda3/envs/profanity-detector/lib/python3.7/site-packages/transformers/modeling_tf_bert.py:186) ]] [Op:__inference_train_function_29179]
```
This is the model:
```
from transformers import TFBertModel, BertTokenizer

bert_model = TFBertModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")

input_ids = tf.keras.layers.Input(shape=(constants.MAX_SEQ_LENGTH,), dtype=tf.int32)
token_type_ids = tf.keras.layers.Input(shape=(constants.MAX_SEQ_LENGTH,), dtype=tf.int32)
attention_mask = tf.keras.layers.Input(shape=(constants.MAX_SEQ_LENGTH,), dtype=tf.int32)

seq_output, _ = bert_model({
    "input_ids": input_ids,
    "token_type_ids": token_type_ids,
    "attention_mask": attention_mask
})

pooling = tf.keras.layers.GlobalAveragePooling1D()(seq_output)
dropout = tf.keras.layers.Dropout(0.2)(pooling)
output = tf.keras.layers.Dense(constants.CLASSES, activation="softmax")(dropout)

model = tf.keras.Model(
    inputs=[input_ids, token_type_ids, attention_mask],
    outputs=[output]
)
model.compile(optimizer=tf.optimizers.Adam(lr=0.00001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```
My dataset is tokenized by this method:
```
def map_to_dict(self, input_ids, attention_masks, token_type_ids, labels):
    return {
        "input_ids": input_ids,
        "token_type_ids": token_type_ids,
        "attention_mask": attention_masks,
    }, labels

def tokenize_sequences(self, tokenizer, max_sequence_length, data, labels):
    try:
        token_ids = []
        token_type_ids = []
        attention_mask = []
        for sentence in data:
            bert_input = tokenizer.encode_plus(
                sentence,
                add_special_tokens=True,         # add [CLS], [SEP]
                max_length=max_sequence_length,  # max length of the text that can go to BERT
                truncation=True,
                pad_to_max_length=True,          # add [PAD] tokens
                return_attention_mask=True       # add attention mask to not focus on pad tokens
            )
            token_ids.append(bert_input["input_ids"])
            token_type_ids.append(bert_input["token_type_ids"])
            attention_mask.append(bert_input["attention_mask"])
        # Note: the slices below are positional (token_ids, token_type_ids, attention_mask, labels),
        # while map_to_dict declares (input_ids, attention_masks, token_type_ids, labels), so the
        # attention mask and the token type ids end up swapped in the mapped dict.
        return tf.data.Dataset.from_tensor_slices((token_ids, token_type_ids, attention_mask, labels)).map(self.map_to_dict)
    except Exception as e:
        stacktrace = traceback.format_exc()
        logger.error("{}".format(stacktrace))
        raise e

ds_train_encoded = tokenize_sequences(tokenizer, 512, X_train, y_train).shuffle(10000).batch(6)
```
X_train examples:
```
["Questo video Γ¨ davvero bellissimo", "La qualitΓ del video non Γ¨ proprio il massimo"......]
```
y_train examples:
```
[[1], [0]...]
```
I continue to receive the error described before.
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[5,0] = 102 is not in [0, 2)
[[node functional_1/bert/embeddings/token_type_embeddings/embedding_lookup (defined at /anaconda3/envs/profanity-detector/lib/python3.7/site-packages/transformers/modeling_tf_bert.py:186) ]] [Op:__inference_train_function_29179]
```
If I try to use TFBertForSequenceClassification, everything works fine (for this reason I'm ruling out tokenization problems).
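For reference, this is roughly the sequence-classification setup that does work for me (a minimal sketch; `ds_train_encoded` is built as above, and the loss wrapper reflects that the model returns logits):
```
from transformers import TFBertForSequenceClassification

clf = TFBertForSequenceClassification.from_pretrained(
    "dbmdz/bert-base-italian-xxl-cased", num_labels=2
)
clf.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
clf.fit(ds_train_encoded, epochs=1)
```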
Can you please provide a solution, or a well-documented guide for using the TFBertModel class inside a Keras model? I cannot find one.
Thank you
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8538/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8537/comments | https://api.github.com/repos/huggingface/transformers/issues/8537/events | https://github.com/huggingface/transformers/issues/8537 | 742,968,328 | MDU6SXNzdWU3NDI5NjgzMjg= | 8,537 | Add a new model ConvBert | {
"login": "RyanHuangNLP",
"id": 49582480,
"node_id": "MDQ6VXNlcjQ5NTgyNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49582480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanHuangNLP",
"html_url": "https://github.com/RyanHuangNLP",
"followers_url": "https://api.github.com/users/RyanHuangNLP/followers",
"following_url": "https://api.github.com/users/RyanHuangNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanHuangNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanHuangNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanHuangNLP/subscriptions",
"organizations_url": "https://api.github.com/users/RyanHuangNLP/orgs",
"repos_url": "https://api.github.com/users/RyanHuangNLP/repos",
"events_url": "https://api.github.com/users/RyanHuangNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanHuangNLP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
}
] | [
"there is one implement:https://github.com/JunnYu/ConvBert_huggingface",
"I have implemented this model in [https://github.com/gitabtion/ConvBert-PyTorch](https://github.com/gitabtion/ConvBert-PyTorch), and it pass the unittest right now. :heavy_check_mark:",
"> I have implemented this model in https://github.com/gitabtion/ConvBert-PyTorch, and it pass the unittest right now. heavy_check_mark\r\n\r\nhow about a conversion script? "
] | 1,605 | 1,612 | 1,612 | NONE | null | # π New model addition
Pre-trained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers a large memory footprint and computation cost. Although all its attention heads query the whole input sequence to generate the attention map from a global perspective, we observe that some heads only need to learn local dependencies, which means there is computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads and directly model local dependencies. The novel convolution heads, together with the remaining self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, the ConvBERT_base model achieves an 86.4 GLUE score, 0.7 higher than ELECTRA_base, while using less than 1/4 of the training cost.
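To make the core idea concrete, here is a rough, assumption-heavy sketch of a span-based dynamic convolution head in PyTorch (this illustrates the technique described in the abstract, not the authors' implementation; all names are made up):
```
import torch
import torch.nn as nn


class SpanDynamicConv(nn.Module):
    """Kernels are predicted from a local span of the input (instead of a
    single position) and then applied as a per-position convolution."""

    def __init__(self, hidden_size, kernel_size=9):
        super().__init__()
        self.kernel_size = kernel_size
        # summarize a local span around each position (depthwise conv)
        self.span = nn.Conv1d(
            hidden_size, hidden_size, kernel_size,
            padding=kernel_size // 2, groups=hidden_size,
        )
        # predict one convolution kernel per position from the span summary
        self.to_kernel = nn.Linear(hidden_size, kernel_size)

    def forward(self, x):  # x: (batch, seq_len, hidden_size)
        span = self.span(x.transpose(1, 2)).transpose(1, 2)
        kernels = torch.softmax(self.to_kernel(span), dim=-1)  # (b, s, k)
        pad = self.kernel_size // 2
        x_pad = nn.functional.pad(x, (0, 0, pad, pad))
        # sliding windows of the input: (b, s, k, hidden_size)
        windows = x_pad.unfold(1, self.kernel_size, 1).transpose(2, 3)
        return torch.einsum("bsk,bskh->bsh", kernels, windows)
```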
<!-- Important information -->
## Open source status
* [x] the model implementation is available: (https://github.com/yitu-opensource/ConvBert)
* [x] the model weights are available: (https://drive.google.com/drive/folders/1pSsPcQrGXyt1FB45clALUQf-WTNAbUQa)
* [x] who are the authors: (@zihangJiang @zhoudaquan)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8537/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 8,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8537/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8536/comments | https://api.github.com/repos/huggingface/transformers/issues/8536/events | https://github.com/huggingface/transformers/issues/8536 | 742,961,728 | MDU6SXNzdWU3NDI5NjE3Mjg= | 8,536 | Pretrain PEGASUS from scratch | {
"login": "EKebriaei",
"id": 15990021,
"node_id": "MDQ6VXNlcjE1OTkwMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/15990021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EKebriaei",
"html_url": "https://github.com/EKebriaei",
"followers_url": "https://api.github.com/users/EKebriaei/followers",
"following_url": "https://api.github.com/users/EKebriaei/following{/other_user}",
"gists_url": "https://api.github.com/users/EKebriaei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EKebriaei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EKebriaei/subscriptions",
"organizations_url": "https://api.github.com/users/EKebriaei/orgs",
"repos_url": "https://api.github.com/users/EKebriaei/repos",
"events_url": "https://api.github.com/users/EKebriaei/events{/privacy}",
"received_events_url": "https://api.github.com/users/EKebriaei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@patil-suraj or @patrickvonplaten can chime in if I'm wrong, but I believe we currently only have fine-tuning & distillation schemes for the BART-family models, no pre-training.",
"Hey @EKebriaei - yeah we sadly don't have any pre-training notebooks for pegasus yet. Are you looking for the summary specific pre-training of pegasus or just the BART-like denoising pre-training? ",
"> Hey @EKebriaei - yeah we sadly don't have any pre-training notebooks for pegasus yet. Are you looking for the summary specific pre-training of pegasus or just the BART-like denoising pre-training?\r\n\r\nI want to pre-train pegasus on a language other than English. ",
"Yeah, we don't have a script or good documentation for this yet.\r\n\r\ncc https://github.com/huggingface/transformers/issues/8594#issuecomment-731248819",
"> Yeah, we don't have a script or good documentation for this yet.\r\n> \r\n> cc [#8594 (comment)](https://github.com/huggingface/transformers/issues/8594#issuecomment-731248819)\r\n\r\nI have some dependency problems when compiling this: https://github.com/google-research/pegasus/blob/master/pegasus/ops/pretrain_parsing_ops.cc\r\nDo you have any comments that help?",
"This PR will enable a pretraining script: #8731",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"> Yeah, we don't have a script or good documentation for this yet.\r\n> \r\n> cc [#8594 (comment)](https://github.com/huggingface/transformers/issues/8594#issuecomment-731248819)\r\n\r\nCould we follow the same approach you (@patrickvonplaten) provided [here](https://github.com/huggingface/transformers/issues/5096#issuecomment-645860271) to pretrain BART for PEGASUS ? PEGASUS has also a GSG training objective on top of the BART-like denoising as detailed in the original [paper](https://arxiv.org/pdf/1912.08777.pdf). \r\nThe GSG work by masking the most important sentences according to ROUGE then the target are the missing sentences.\r\nSo my attempt by changing your code would be:\r\n\r\n```\r\nfrom transformers import PegasusTokenizer, PegasusForConditionalGeneration, PegasusConfig\r\n\r\ntok = PegasusTokenizer.from_pretrained(\"google/pegasus\")\r\nmodel = PegasusForConditionalGeneration(PegasusConfig())\r\n\r\ninput_string = [\"Pegasus is <mask_2> . <mask_1> it <mask_2> the model .\"\r\ndecoder_input_string = \"<s> It is pure white .\"\r\nlabels_string = \"It is pure white . <eos>\"\r\n\r\ninput_ids = tok(input_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\ndecoder_input_ids =tok(decoder_input_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\nlabels = tok(labels_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\n \r\nloss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]\r\n```\r\nDoes this look reasonable (the selection strategy of masked sentences will naturally need to be implemented)? @patrickvonplaten \r\n",
"@Skylixia - yes this looks reasonable to me! I guess in the original PEGASUS paper another masking loss was added on top of the encoder to predict the <mask_2> tokens, which would be difficult here (but should also be feasible). But this looks like the right approach to me!",
"Hi. I've been struggling with a pretty simple issue trying to get the above code to work.\r\n\r\nEssentially, the Pegasus tokenizer's eos is `</s>` (not `<eos>` as mentioned above) and it does not seem to have a bos symbol. So no matter what combination I try, I keep getting a ValueError as the lengths of the label and decoder inputs don't match.\r\n\r\nI tried to follow what happens in [BART](https://github.com/huggingface/transformers/issues/5096), but the following does not work: \r\n\r\n```\r\nfrom transformers import PegasusForConditionalGeneration, PegasusTokenizer\r\nmodel_name = 'google/pegasus-xsum'\r\ntokenizer = PegasusTokenizer.from_pretrained(model_name)\r\nmodel = PegasusForConditionalGeneration.from_pretrained(model_name)\r\n\r\ninput_string = [\"Pegasus is mythical . <mask_1> it names the model .\"]\r\ndecoder_input_string = [\"<s>It is pure white . \"]\r\nlabels_string = [\"It is pure white .</s>\"]\r\n\r\ninput_ids = tokenizer(input_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\ndecoder_input_ids = tokenizer(decoder_input_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\nlabels = tokenizer(labels_string, add_special_tokens=False, return_tensors=\"pt\").input_ids\r\nloss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]\r\n```\r\n\r\nIf I try to run this, I get `Expected input batch_size (10) to match target batch_size (7).` Complete stack trace:\r\n\r\n```\r\n---> 15 loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]\r\n 16 # for _ in range(1_000):\r\n 17 # loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0]\r\n\r\n/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 725 result = self._slow_forward(*input, **kwargs)\r\n 726 else:\r\n--> 727 result = self.forward(*input, **kwargs)\r\n 728 for hook in itertools.chain(\r\n 729 _global_forward_hooks.values(),\r\n\r\n/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1285 if labels is not None:\r\n 1286 loss_fct = CrossEntropyLoss()\r\n-> 1287 masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))\r\n 1288 \r\n 1289 if not return_dict:\r\n\r\n/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 725 result = self._slow_forward(*input, **kwargs)\r\n 726 else:\r\n--> 727 result = self.forward(*input, **kwargs)\r\n 728 for hook in itertools.chain(\r\n 729 _global_forward_hooks.values(),\r\n\r\n/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/loss.py in forward(self, input, target)\r\n 959 \r\n 960 def forward(self, input: Tensor, target: Tensor) -> Tensor:\r\n--> 961 return F.cross_entropy(input, target, weight=self.weight,\r\n 962 ignore_index=self.ignore_index, reduction=self.reduction)\r\n 963 \r\n\r\n/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)\r\n 2466 if size_average is not None or reduce 
is not None:\r\n 2467 reduction = _Reduction.legacy_get_string(size_average, reduce)\r\n-> 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n 2469 \r\n 2470 \r\n\r\n/home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)\r\n 2259 \r\n 2260 if input.size(0) != target.size(0):\r\n-> 2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'\r\n 2262 .format(input.size(0), target.size(0)))\r\n 2263 if dim == 2:\r\n\r\nValueError: Expected input batch_size (10) to match target batch_size (7).\r\n```",
"I have opened a new issue with complete detail (and a corrected example) here: https://github.com/huggingface/transformers/issues/11541",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"> Yeah, we don't have a script or good documentation for this yet.\r\n> \r\n> cc [#8594 (comment)](https://github.com/huggingface/transformers/issues/8594#issuecomment-731248819)\r\n\r\n@patrickvonplaten Any update on this? I am planning on researching abstractive summarization in a non-English language and the PEGASUS model seems to be a worthwhile model to pursue. It would be great if you could either direct me to any resources or suggest another model to pursue in my project. Thanks!"
] | 1,605 | 1,663 | 1,622 | NONE | null | I want to pre-train a PEGASUS model from scratch on a language other than English. Is there any way to do this using the Hugging Face APIs? The source code released by the authors is complicated to use for pre-training, and there is little documentation available on how to do this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8536/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8535/comments | https://api.github.com/repos/huggingface/transformers/issues/8535/events | https://github.com/huggingface/transformers/pull/8535 | 742,937,927 | MDExOlB1bGxSZXF1ZXN0NTIwOTY2OTY2 | 8,535 | [doc] typo fix | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Apparently, there is no agreement on the comma: https://www.dailywritingtips.com/comma-after-i-e-and-e-g/, especially after i.e. https://english.stackexchange.com/questions/16172/should-i-always-use-a-comma-after-e-g-or-i-e. \r\n\r\nTherefore it's probably the best to stick to whatever manual of style this projects prefers. And, perhaps, even starting a brief guide for the `transformers` manual of style, where you also include all those stylistic recommendations, such as, not documenting `None` as a default for optional objects, the item in question, etc.\r\n\r\nMy recommendation is that every time a style change is introduced, to change all docs at once, so that the diverse developers will most likely copy/observe how the text is written and follow suite when they create new docs. It's not hard to do:\r\n\r\n```\r\nfind . -type d -name \".git\" -prune -o -type f -exec perl -pi -e \"s|e\\.g\\. |e.g., |g; s|i\\.e\\. |i.e., |g;\" {} \\;\r\n```\r\nthis one fixes a missing comma.",
"Thanks @stas00 !",
"I had missed your comment. We can do the perl magic thing (though check me with privately before applying it as we are scheduling a few big changes today and tomorrow in preparation for the v4.0 so you don't want to do this in the middle of one ;-) ).\r\n\r\nAs for the documentation, we can add it to our [documentation guide](https://github.com/huggingface/transformers/tree/master/docs#writing-documentation---specification) (which already contains the not defaults to `None`).",
"> I had missed your comment. We can do the perl magic thing (though check me with privately before applying it as we are scheduling a few big changes today and tomorrow in preparation for the v4.0 so you don't want to do this in the middle of one ;-) ).\r\n\r\nYou can run it any time you know it's a good time. All credits go to perl ;)\r\n \r\n> As for the documentation, we can add it to our [documentation guide](https://github.com/huggingface/transformers/tree/master/docs#writing-documentation---specification) (which already contains the not defaults to `None`).\r\n\r\nThat works. Probably start a \"grammar style\" section.\r\n"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | s/e.g./i.e./ as what follows is not an example, but a "that is" statement + fix language
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8535/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8535",
"html_url": "https://github.com/huggingface/transformers/pull/8535",
"diff_url": "https://github.com/huggingface/transformers/pull/8535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8535.patch",
"merged_at": 1605531931000
} |
https://api.github.com/repos/huggingface/transformers/issues/8534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8534/comments | https://api.github.com/repos/huggingface/transformers/issues/8534/events | https://github.com/huggingface/transformers/issues/8534 | 742,864,462 | MDU6SXNzdWU3NDI4NjQ0NjI= | 8,534 | mBart prefix and suffix for language id | {
"login": "RQuispeC",
"id": 28014561,
"node_id": "MDQ6VXNlcjI4MDE0NTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/28014561?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RQuispeC",
"html_url": "https://github.com/RQuispeC",
"followers_url": "https://api.github.com/users/RQuispeC/followers",
"following_url": "https://api.github.com/users/RQuispeC/following{/other_user}",
"gists_url": "https://api.github.com/users/RQuispeC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RQuispeC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RQuispeC/subscriptions",
"organizations_url": "https://api.github.com/users/RQuispeC/orgs",
"repos_url": "https://api.github.com/users/RQuispeC/repos",
"events_url": "https://api.github.com/users/RQuispeC/events{/privacy}",
"received_events_url": "https://api.github.com/users/RQuispeC/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"doublechecking the paper, it is expected that the language id as suffix so there is not a bug in the code but a error in the inline documentation.",
"Hey @RQuispeC - thanks a lot for checking! Do you feel like opening a PR to fix the doc? That would be super helpful :-)",
"Just added the PR @patrickvonplaten :) ",
"Hi, I've ran into the same issue - but it seems that the documentation was correct, the target language tag should be the bos (and maybe also eos?). \r\neos is set to be the target language id here: https://github.com/pytorch/fairseq/blob/c8a0659be5cdc15caa102d5bbf72b872567c4859/fairseq/tasks/translation_from_pretrained_bart.py#L116\r\nbut then for inference, it is also used as bos by default, as far as I can tell:\r\nhttps://github.com/pytorch/fairseq/blob/c8a0659be5cdc15caa102d5bbf72b872567c4859/fairseq/sequence_generator.py#L173\r\nThe paper states \"A language id symbol <LID> is used as the initial token to predict the sentence. \"\r\nI get very strange predictions with the pretrained mBART model unless I set \"decoder_start_token_id\" to the target language id in model.generate().",
"Hi, I didn't check FAIR's code but if you check the figure 1 of the paper (\"Multilingual Denoising Pre-Training (mBART)\") you can see that the language id is at:\r\n* eos of input (source lang text)\r\n* bos of decoder (I guess this is what author's mean with \"A language id symbol is used as the initial token to predict the sentence. \")\r\n* eos of output (target lang text)\r\n\r\nThe documentation refers to input and output, but not the decoder . `decoder_start_token_id` sets the decoder input symbol so it's expected that not using it gives weird results.\r\nThat's what I understand, Am I missing something?",
"The tokenizer produces `X [eos, tgt_lang_code]` for the target text, but to be consistent with the pretrained model, it should be `[tgt_lang_code] X [tgt_lang_code] ` (refering to the situation where you have a target side text)? `decoder_start_token_id` is only for inference, not for training, right?\r\n At least that's what I understand from the paper/fairseq code.",
"`decoder_start_token_id` is used during training and testing, it's the first token of the decoder, I found this easier to understand with the figure 1 of the paper.\r\n\r\nWhere did you get the input used for the pretrained model?",
"I don't have acces to the input used for pretraining (that would be so much easier :) ), I'm just trying to understand the format from the paper and code. \r\nMaybe a stupid question, but how can I set `decoder_start_token_id` for training (in the forward pass of the module)? I could only find it as an argument of generate() (I'm using version 3.1.0).\r\nAlso I've noticed that the models seems to have problems in inference if I replace `eos` with the language token on the target side (it randomly stops generating mid sentence), so I guess `eos` has to be present for generate() to function properly (I did set `eos_token_id` to the language tag)? \r\n",
"Not stupit question :)\r\nYou can find an example of you to set source and target languages here \r\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_trainer.py#L222\r\n```\r\n model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.tgt_lang]\r\n```\r\nNot sure if it's available in version 3.1, I was using version 3.5\r\n\r\nThe caveat is that (at least in up to version 3.5) the hugging face implementation only allows 1 language for all your data in source and 1 language for all your data in the target, hence `decoder_start_token_id`, `tgt_lang_code` and `src_lang_code` are just one string. \r\nDepending in your problem this may be fine. For instance, if you are working on a translation then you are good to go with hugging face implementation because for translation you usually aim to translate one language (e.g. english) to another (e.g. spanish).\r\n\r\nIn my case I needed something more generic. I was working on a project that needed to train multiple languages at the same time, so my training data looked something like:\r\n\r\n* source\r\n\r\n| text | lang |\r\n|:--------- | ----------: |\r\n| my source text 1 | en_XX | \r\n| my source text 2 | es_XX | \r\n| my source text 3 | fr_FR | \r\n\r\n* target\r\n\r\n\r\n| text | lang |\r\n|:--------- | ----------: |\r\n| my target text 1 | en_XX|\r\n| my target text 2 | es_XX|\r\n| my target text 3 | fr_FR|\r\n\r\nIn this case `decoder_start_token_id`, `tgt_lang_code` and `src_lang_code` should be `['en_XX', 'es_XX', 'fr_FR']` but this is not supported by hugging face. I implemented a custom version of `MBartForConditionalGeneration` and `MBartTokenizer`.\r\n\r\nAbout the `eos` I have just used it as in the seq2seq example so I'm not sure about the behavior you mention.\r\n\r\n\r\n\r\n",
"Thank you for the explanations! There are major differences to the (old) version I'm using, so I think the best way for me is to update my code to work with the newest huggingface version. Hopefully, that will solve the problem :) But just in case, where would I find your custom versions of `MBartForConditionalGeneration` and `MBartTokenizer` ?",
"sorry for answering so late, I was really busy at work.\r\n\r\nI'm not sure if can open source it at this time but the main ideas was to check the flow of the parameters `tgt_lang_code` and `src_lang_code` then update the function calls to support vectors of strings instead of only strings.",
"I find this problem too\r\n```\r\nclass MyMBartTokenizer(MBartTokenizer):\r\n def set_tgt_lang_special_tokens(self, lang: str) -> None:\r\n \"\"\"Reset the special tokens to the target language setting. No prefix and suffix=[eos, tgt_lang_code].\"\"\"\r\n self.cur_lang_code = self.lang_code_to_id[lang]\r\n self.prefix_tokens = [self.cur_lang_code]\r\n self.suffix_tokens = [self.eos_token_id]\r\n```"
] | 1,605 | 1,635 | 1,606 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.5.1
### Who can help
mBART: @patrickvonplaten
documentation: @sgugger
## Information
It seems that there is an error or inconsistency between the [mbart tokenizer](https://github.com/huggingface/transformers/blob/55e8d0cea25be18b044523b30f4bef58fec63289/src/transformers/tokenization_mbart.py) and the inline comments/docs in the code.
comments on code explain
```
def build_inputs_with_special_tokens(
    self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
    """
    Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
    adding special tokens. An MBART sequence has the following format, where ``X`` represents the sequence:

    - ``input_ids`` (for encoder) ``X [eos, src_lang_code]``
    - ``decoder_input_ids``: (for decoder) ``[tgt_lang_code] X [eos]``
```
but `set_tgt_lang_special_tokens` treats `decoder_input_ids` as **X [eos][tgt_lang_code]** instead of **[tgt_lang_code] X [eos]**
```
def set_tgt_lang_special_tokens(self, lang: str) -> None:
    """Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos]."""
    self.cur_lang_code = self.lang_code_to_id[lang]
    self.prefix_tokens = []
    self.suffix_tokens = [self.eos_token_id, self.cur_lang_code]
```
Shouldn't it be:
```
def set_tgt_lang_special_tokens(self, lang: str) -> None:
    """Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos]."""
    self.cur_lang_code = self.lang_code_to_id[lang]
    self.prefix_tokens = [self.cur_lang_code]
    self.suffix_tokens = [self.eos_token_id]
```
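For reference, here is a quick way to inspect the current behavior (a minimal sketch using the method and attributes quoted above; the checkpoint name is just an example):
```
from transformers import MBartTokenizer

tok = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
tok.set_tgt_lang_special_tokens("ro_RO")

print(tok.prefix_tokens)  # currently [] -- no target language code as prefix
print(tok.suffix_tokens)  # currently [eos_token_id, lang_code_id]
```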
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8534/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8534/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8533/comments | https://api.github.com/repos/huggingface/transformers/issues/8533/events | https://github.com/huggingface/transformers/issues/8533 | 742,853,376 | MDU6SXNzdWU3NDI4NTMzNzY= | 8,533 | Bertabs example: index_select(): Expected dtype int64 for index | {
"login": "TheTimKiely",
"id": 34795732,
"node_id": "MDQ6VXNlcjM0Nzk1NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/34795732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheTimKiely",
"html_url": "https://github.com/TheTimKiely",
"followers_url": "https://api.github.com/users/TheTimKiely/followers",
"following_url": "https://api.github.com/users/TheTimKiely/following{/other_user}",
"gists_url": "https://api.github.com/users/TheTimKiely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheTimKiely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheTimKiely/subscriptions",
"organizations_url": "https://api.github.com/users/TheTimKiely/orgs",
"repos_url": "https://api.github.com/users/TheTimKiely/repos",
"events_url": "https://api.github.com/users/TheTimKiely/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheTimKiely/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! The `bert_abs` example is not maintained anymore, and should be moved to `examples/contrib/legacy`.\r\n\r\nThe recommended way of training sequence-to-sequence models is described in the `examples/seq2seq/README.md` file. What are you trying to do with `bertabs`, so that we may help you find what you need?",
"Hi!\r\nThanks for your response.\r\nI'm just starting to experiment with abstractive text summarization.\r\nIs this something I should look for in the Hugging Face tools and samples?\r\nThanks again,\r\nTim",
"I believe abstractive text summarization is implemented in the `seq2seq` examples, as the XSUM models were trained to do abstractive text summarization.\r\n\r\nHave you taken a look at the summarization examples in https://github.com/huggingface/transformers/tree/master/examples/seq2seq?\r\n\r\n@patil-suraj may also be of help.",
"Thanks again!\r\nIβll take a look at the seq2seq examples. \r\n-Tim",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu 18.04
- Python version: 3.8.0
- PyTorch version (GPU?): torch==1.7.0
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patil-suraj
## Information
Following the example in the seq2seq/bertabs readme.md.
I am getting this error:
```
  File "/code/tools/transformers/examples/seq2seq/bertabs/modeling_bertabs.py", line 919, in _fast_translate_batch
    alive_seq = torch.cat([alive_seq.index_select(0, select_indices), topk_ids.view(-1, 1)], -1)
RuntimeError: index_select(): Expected dtype int64 for index
```
In a debugger, I see that the 'select_indices' parameter is a tensor of floats.
I don't understand the beam mechanism, so I don't know where to start troubleshooting this.
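For what it's worth, the generic PyTorch fix for this class of error is to cast the index tensor to int64 before calling index_select -- a sketch of what I assume the patch would look like (untested):
```
# inside _fast_translate_batch, before the failing torch.cat call
select_indices = select_indices.long()  # index_select requires an int64 index
alive_seq = torch.cat([alive_seq.index_select(0, select_indices), topk_ids.view(-1, 1)], -1)
```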
Any help would be great!
-Tim | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8533/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8532/comments | https://api.github.com/repos/huggingface/transformers/issues/8532/events | https://github.com/huggingface/transformers/issues/8532 | 742,850,219 | MDU6SXNzdWU3NDI4NTAyMTk= | 8,532 | converting tensorflow checkpoint to pytorch | {
"login": "mchari",
"id": 30506151,
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchari",
"html_url": "https://github.com/mchari",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"repos_url": "https://api.github.com/users/mchari/repos",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,605 | 1,605 | 1,605 | NONE | null | @LysandreJik , I am trying to convert REALM checkpoints in google-research/language/ to a pytorch checkpoint.
One of the arguments to convert_bert_original_tf_checkpoint_to_pytorch.py is bert config.json file. I don't see this file in the model directory. Just wanted to confirm that I can use any bert_config.json ?
From the code, I see :
" help="The config json file corresponding to the pre-trained BERT model. \n"
"This specifies the model architecture.",
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8532/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8531/comments | https://api.github.com/repos/huggingface/transformers/issues/8531/events | https://github.com/huggingface/transformers/issues/8531 | 742,773,824 | MDU6SXNzdWU3NDI3NzM4MjQ= | 8,531 | [models website: files section] various issues/suggestions for a better UI | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"Our assumption was that git-lfs was pretty widespread, but it's an assumption we'd like to check in the coming weeks, and if not, explain/document better. I like the official website: https://git-lfs.github.com/\r\n\r\nLet's keep this issue open to find out if many users have issues/questions?\r\n\r\nOn your second bullet point, we just pushed an update where we display file sizes and download links: https://huggingface.co/facebook/bart-large/tree/main#\r\nlet me know if this helps.\r\n\r\n",
"> Our assumption was that git-lfs was pretty widespread, but it's an assumption we'd like to check in the coming weeks, and if not, explain/document better. I like the official website: https://git-lfs.github.com/\r\n\r\nI'm not attached to what you link to ;) I have never used it before.\r\n\r\nProbably the most ideal solution is for `git` to be made more smart and inform users about this website, rather than fail with `'lfs' is not a git command`\r\n\r\n> Let's keep this issue open to find out if many users have issues/questions?\r\n\r\nOf course. I renamed it to something more generic then. Turned the bullets into completion boxes.\r\n \r\n> On your second bullet point, we just pushed an update where we display file sizes and download links: https://huggingface.co/facebook/bart-large/tree/main#\r\n> let me know if this helps.\r\n\r\nThis is awesome! Love it!\r\n\r\nCan we add the download link from the file's page too in case the user missed the download button on the index page?\r\ni.e from https://huggingface.co/facebook/bart-large/blob/main/pytorch_model.bin\r\n\r\n- added an additional nice-to-have item in OP wrt `<title>`, \r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,605 | 1,616 | 1,616 | CONTRIBUTOR | null | Models website - files section: suggestions/issues:
1. [ ] `git lfs install` not working
I tried to download a model as instructed here:
https://huggingface.co/facebook/bart-large/tree/main#how-to-use
```
$ git lfs install
git: 'lfs' is not a git command. See 'git --help'.
The most similar commands are
last
ls
```
I had to do:
```
apt install git-lfs
```
and then it worked:
```
$ git lfs install
Updated git hooks.
Git LFS initialized.
```
Perhaps it needs a link to:
https://docs.github.com/en/free-pro-team@latest/github/managing-large-files/installing-git-large-file-storage
if apt doesn't have it or a user is on a different setup.
2. [ ] would it be possible to rename "Use in transformers" to "Use/Download" - it wasn't obvious to look there for download instructions. I was trying to click on the files.
3. [x] the fact that the files in the [listing](https://huggingface.co/facebook/bart-large/tree/main#) are clickable, yet some of them contain the final data while others contain just a reference with no way to get to the final data, is inconsistent and confusing. At the very least, the ones with a reference instead of data should link to instructions on how to get to the data.
4. [ ] `.gitattributes` is irrelevant to the user in that listing; it's a circumstantial file, not part of the model files, IMHO ;)
5. [ ] <title> is missing (uses generic title) - not optimal for bookmarking/email forwarding
@julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8531/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8530/comments | https://api.github.com/repos/huggingface/transformers/issues/8530/events | https://github.com/huggingface/transformers/pull/8530 | 742,764,600 | MDExOlB1bGxSZXF1ZXN0NTIwODMxMTE0 | 8,530 | Switch `return_dict` to `True` by default. | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"> You can still index with integers or a slice:\r\n> loss = outputs[0]\r\n\r\nDoesn't seem to be the case if I use fp16/apex (possibly any fp16):\r\n\r\n`finetune.py` currently fails with:\r\n\r\n```\r\n File \"finetune.py\", line 170, in training_step\r\n loss_tensors = self._step(batch)\r\n File \"finetune.py\", line 151, in _step\r\n lm_logits = outputs[0]\r\nKeyError: 0\r\n```\r\n\r\nOddly enough all finetune tests pass, and it works with fp32, but if I pass apex (which tests don't test) it fails. Any ideas how fp16 could make a difference? I can't test native amp due to the leak.\r\n\r\nThis may have something to do with PL too, since the call stack goes through PL:\r\n\r\nFull backtrace follows:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 444, in <module>\r\n main(args)\r\n File \"finetune.py\", line 411, in main\r\n trainer: pl.Trainer = generic_train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-finetune-fixes/examples/lightning_base.py\", line 398, in generic_train\r\n trainer.fit(model)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 469, in fit\r\n results = self.accelerator_backend.train()\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py\", line 64, in train\r\n results = self.train_or_test()\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/accelerator.py\", line 66, in train_or_test\r\n results = self.trainer.train()\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 521, in train\r\n self.train_loop.run_training_epoch()\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py\", line 539, in run_training_epoch\r\n batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py\", line 691, in run_training_batch\r\n self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py\", line 477, in optimizer_step\r\n self.trainer.accelerator_backend.optimizer_step(\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/accelerator.py\", line 114, in optimizer_step\r\n model_ref.optimizer_step(\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/core/lightning.py\", line 1406, in optimizer_step\r\n optimizer_closure()\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py\", line 681, in train_step_and_backward_closure\r\n result = self.training_step_and_backward(\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py\", line 770, in training_step_and_backward\r\n result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/training_loop.py\", line 324, in training_step\r\n training_step_output = self.trainer.accelerator_backend.training_step(args)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py\", line 72, in training_step\r\n output = self.__training_step(args)\r\n File 
\"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py\", line 80, in __training_step\r\n output = self.trainer.model.training_step(*args)\r\n File \"finetune.py\", line 172, in training_step\r\n loss_tensors = self._step(batch)\r\n File \"finetune.py\", line 153, in _step\r\n lm_logits = outputs[0]\r\nKeyError: 0\r\n```\r\n\r\nI will fix the program, but thought I'd share this strange case.\r\n\r\nI really should get away from finetune for awhile, it seems to be the doorway to some incessant work...\r\n\r\n",
"Fixed with:\r\n```\r\n- lm_logits = outputs[0]\r\n+ lm_logits = outputs[\"logits\"]\r\n```\r\ncan't use attributes either:\r\n\r\n`outputs.logits` fails:\r\n\r\n`AttributeError: 'dict' object has no attribute 'logits'`\r\n"
] | 1,605 | 1,605 | 1,605 | COLLABORATOR | null | # What does this PR do?
As part of the main changes we want to bring to Transformers v4, this PR switches the `return_dict` argument default from `False` to `True`. Most of the examples and documentation were already using this value (the PR removes `return_dict=True` in all those instances since this is now the default).
**New model outputs**
The new model outputs are dictionaries (instead of tuples) with a bit of added functionality: you can access elements by their keys, as attributes or even by index (to keep most of the backward compatibility). Here is an example:
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(**inputs, labels=labels)
```
The `outputs` object contains loss and logits and you can access them as keys:
```python
loss = outputs["loss"]
logits = outputs["logits"]
```
or as attributes.
```python
loss = outputs.loss
logits = outputs.logits
```
In an IDE you can thus use autocomplete to help.
You can still index with integers or a slice:
```python
loss = outputs[0]
logits = outputs[1]
loss, logits = outputs[:2]
```
but you can't unpack the tuple directly as was done before:
```python
loss, logits = outputs
```
will return "loss" and "logits" in `loss` and `logits` (like dictionaries).
**Known caveats**
TorchScript is incompatible with dict outputs (until PyTorch 1.7), so it doesn't work with these new outputs. The `return_dict` flag will therefore default to `False` if you use `torchscript=True` when building your model.
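A minimal sketch of that fallback (reusing the `inputs` from the example above):
```python
from transformers import BertModel

# torchscript=True switches return_dict back to False under the hood, so the
# model keeps returning plain tuples that torch.jit.trace can consume
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
outputs = model(**inputs)  # a plain tuple, not a ModelOutput subclass
```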
TF Saved models deal with dictionaries but not their subclasses, so if you save and load your model using `tf.keras.models.load_model`, it will output regular dictionaries. This means that you won't be able to index in the outputs with integers or attributes, only their string keys.
```python
tf.saved_model.save(model, tmpdirname)
model = tf.keras.models.load_model(tmpdirname)
outputs = model(inputs)
loss = outputs["loss"]
logits = outputs["logits"]
```
Apex used in optimization mode "O2" will also lose the output types of the model and return regular dictionaries. This means that you won't be able to index in the outputs with integers or attributes, only their string keys.
**Breaking change**
All code of the form
```python
loss, output = model(**inputs)
```
needs to be switched to
```python
loss, output = model(**inputs).to_tuple()
```
or use the key/attributes of the model output returned.
Alternatively, you can pass `return_dict=False` when creating your model to get regular tuples as outputs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8530/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8530",
"html_url": "https://github.com/huggingface/transformers/pull/8530",
"diff_url": "https://github.com/huggingface/transformers/pull/8530.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8530.patch",
"merged_at": 1605544980000
} |
https://api.github.com/repos/huggingface/transformers/issues/8529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8529/comments | https://api.github.com/repos/huggingface/transformers/issues/8529/events | https://github.com/huggingface/transformers/pull/8529 | 742,744,344 | MDExOlB1bGxSZXF1ZXN0NTIwODE0MjIz | 8,529 | Adding PrefixConstrainedLogitsProcessor | {
"login": "nicola-decao",
"id": 9703100,
"node_id": "MDQ6VXNlcjk3MDMxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9703100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicola-decao",
"html_url": "https://github.com/nicola-decao",
"followers_url": "https://api.github.com/users/nicola-decao/followers",
"following_url": "https://api.github.com/users/nicola-decao/following{/other_user}",
"gists_url": "https://api.github.com/users/nicola-decao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicola-decao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicola-decao/subscriptions",
"organizations_url": "https://api.github.com/users/nicola-decao/orgs",
"repos_url": "https://api.github.com/users/nicola-decao/repos",
"events_url": "https://api.github.com/users/nicola-decao/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicola-decao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten I think we are ready to merge now π",
"Hey @nicola-decao,\r\n\r\nThanks a lot for re-implementing your PR! A couple of things I think we should improve.\r\n\r\n1) I'm still not 100% sure how the API of `prefix_allowed_tokens_fn` would look like. Does it take one or two arguments? Also it would be very useful to correctly define the type hints in this case as I mentioned in the comment above\r\n\r\n2) Can we add a test for this logits processor? It should be relatively straight-forward in /home/patrick/hugging_face/transformers/tests/test_generation_logits_process.py. The test should only test the `prefix_allowed_tokens_fn` not the whole generate function. \r\n\r\n3) It would be awesome if the docstring could be a bit more explicit (*e.g.* including your paper and maybe even a tiny example)\r\n\r\n4) Can we delete the `src/.DS_Store ` file?\r\n\r\n5) (Not required for merge) I think we could speed up this function by just using torch.Tensor operations (see comment above), but I'm happy to keep this for a future PR.\r\n\r\nAlso, let me know if you need help! :-) ",
"> Great looks good to me now! @yjernite - do you want to take a final look as well?\r\n\r\nYup! Will take a look by EOD",
"LGTM, thanks for implementing this functionality!",
"Great work @nicola-decao ",
"> Great work @nicola-decao\r\n\r\nThanks :)"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
This pull request adds a new decoding strategy that constrains which tokens can be generated next, based on a user-provided callable. It is a new PR that fixes https://github.com/huggingface/transformers/pull/7784, since the generate function went through refactoring. @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8529/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8529",
"html_url": "https://github.com/huggingface/transformers/pull/8529",
"diff_url": "https://github.com/huggingface/transformers/pull/8529.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8529.patch",
"merged_at": 1605715586000
} |
https://api.github.com/repos/huggingface/transformers/issues/8528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8528/comments | https://api.github.com/repos/huggingface/transformers/issues/8528/events | https://github.com/huggingface/transformers/pull/8528 | 742,726,519 | MDExOlB1bGxSZXF1ZXN0NTIwNzk5NTgz | 8,528 | [T5] Fix load weights function | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Failing tests are unrelated",
"Thanks again @patrickvonplaten , you made my weekend π \r\n"
] | 1,605 | 1,605 | 1,605 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7791
The problem in 7791 was that the code that was used to convert mtf t5 to hf t5 was outdated and had a couple of bugs:
1) word embeddings were not tied -> random weight matrices were used instead
2) the tokenizer didn't add EOS which meant that the input_ids were wrong.
@agemagician I will merge this into master now so that you don't have to work on the hacky branch I did a while back. Your models should be convertible with the new code added here and then everything should work as expected.
The following code now produces the correct results:
```python
#!/usr/bin/env python3
from transformers import T5Tokenizer # noqa: E402
from transformers.convert_t5_original_tf_checkpoint_to_pytorch import ( # noqa: E402
convert_tf_checkpoint_to_pytorch,
)
from transformers.modeling_t5 import T5Config, T5ForConditionalGeneration # noqa: E402
import torch
path_to_tf_checkpoint = "t5_mesh_checkpoints"
config = T5Config.from_pretrained("t5-base")
config.d_ff = 2048
config.d_kv = 64
config.d_model = 512
config.num_decoder_layers = 6
config.num_layers = 6
config.num_heads = 8
config.vocab_size = 32128
config.tie_word_embeddings = True
config.save_pretrained(path_to_tf_checkpoint)
convert_tf_checkpoint_to_pytorch(path_to_tf_checkpoint, path_to_tf_checkpoint + "/config.json", path_to_tf_checkpoint)
input_txt = ["javascript documentation generation: function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"]
tok = T5Tokenizer.from_pretrained(path_to_tf_checkpoint)
model = T5ForConditionalGeneration.from_pretrained(path_to_tf_checkpoint, return_dict=True)
model.to("cuda")
input_ids = tok(input_txt, return_tensors="pt").input_ids
outputs = model.generate(input_ids.to("cuda"), num_beams=4)
print(tok.decode(outputs[0]))
```
=> gives "<pad> Returns true if the browser is a native element.</s> "
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
I checked that the new `convert_weights_function` works with previous mtf (t5==0.7.1) models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8528/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8528/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8528",
"html_url": "https://github.com/huggingface/transformers/pull/8528",
"diff_url": "https://github.com/huggingface/transformers/pull/8528.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8528.patch",
"merged_at": 1605295900000
} |
https://api.github.com/repos/huggingface/transformers/issues/8527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8527/comments | https://api.github.com/repos/huggingface/transformers/issues/8527/events | https://github.com/huggingface/transformers/pull/8527 | 742,679,005 | MDExOlB1bGxSZXF1ZXN0NTIwNzU3MzQ2 | 8,527 | Add bart-large-mnli model card | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"looks good!"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | # What does this PR do?
Adds a model card for facebook/bart-large-mnli. Since this model is currently the default for the zero-shot pipeline/widget, it adds an introduction to zero-shot text classification with references & example snippets.
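Since the card centers on zero-shot classification, here is a quick pipeline sketch of the approach it documents; the input text and candidate labels are illustrative:

```python
# Zero-shot classification via NLI, using the model this card describes.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "one day I will see the world",
    candidate_labels=["travel", "cooking", "dancing"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```
 | {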
"url": "https://api.github.com/repos/huggingface/transformers/issues/8527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8527/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8527",
"html_url": "https://github.com/huggingface/transformers/pull/8527",
"diff_url": "https://github.com/huggingface/transformers/pull/8527.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8527.patch",
"merged_at": 1605294446000
} |
https://api.github.com/repos/huggingface/transformers/issues/8526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8526/comments | https://api.github.com/repos/huggingface/transformers/issues/8526/events | https://github.com/huggingface/transformers/issues/8526 | 742,654,658 | MDU6SXNzdWU3NDI2NTQ2NTg= | 8,526 | Problem while pretraining MLM from scratch using Transformers | {
"login": "meysamgh",
"id": 34138742,
"node_id": "MDQ6VXNlcjM0MTM4NzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/34138742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meysamgh",
"html_url": "https://github.com/meysamgh",
"followers_url": "https://api.github.com/users/meysamgh/followers",
"following_url": "https://api.github.com/users/meysamgh/following{/other_user}",
"gists_url": "https://api.github.com/users/meysamgh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meysamgh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meysamgh/subscriptions",
"organizations_url": "https://api.github.com/users/meysamgh/orgs",
"repos_url": "https://api.github.com/users/meysamgh/repos",
"events_url": "https://api.github.com/users/meysamgh/events{/privacy}",
"received_events_url": "https://api.github.com/users/meysamgh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It looks like you're using Transformers v3.1.0. There have been quite q few improvements/bug fixes on Trainer since then. Could you check you still have the issue with the latest version?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,605 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes(Two GPUs, Nvidia P100)
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Trainer: **@sgugger**
-->
Trainer: @sgugger
## Information
Model I am using (RoBERTa):
The problem arises when using:
* [x] my own modified scripts: (give details below)
```python
config = RobertaConfig(vocab_size=30_000, max_position_embeddings=inputlen, num_attention_heads=12,
                       num_hidden_layers=6, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1,
                       initializer_range=0.2, intermediate_size=3072, type_vocab_size=1)
training_args = TrainingArguments(output_dir="/LAB_SHARED/Projects/011-Language_Model/Data/LanguageModel/BadgerBERT",
                                  overwrite_output_dir=True,
                                  num_train_epochs=10, do_train=True, do_eval=True, evaluate_during_training=True,
                                  per_gpu_train_batch_size=64, learning_rate=0.0004,
                                  gradient_accumulation_steps=32,
                                  logging_steps=128,
                                  warmup_steps=30000,
                                  weight_decay=0.01,
                                  eval_steps=128,
                                  save_steps=128,
                                  save_total_limit=2, prediction_loss_only=True)
trainer = Trainer(model=model, args=training_args, data_collator=data_collator,
                  train_dataset=dataset, eval_dataset=dataset1, prediction_loss_only=True)
```
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
Training masked language model
## To reproduce
Steps to reproduce the behavior:
1. Please use the following configuration
2. I used my own training dataset, but I guess there should be no difference. My input text file size is around 40GB
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
██████████████████████████████████████████████████████| 279691/279692 [5:22:01<00:00, 15.48it/s]
Traceback (most recent call last):
  File "CreateLanguageModel.py", line 84, in <module>
    trainer.train()
  File "**/lib64/python3.6/site-packages/transformers/trainer.py", line 785, in train
    torch.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
  File "**/lib64/python3.6/site-packages/torch/serialization.py", line 361, in save
    with _open_file_like(f, 'wb') as opened_file:
  File "**/lib64/python3.6/site-packages/torch/serialization.py", line 229, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "**/lib64/python3.6/site-packages/torch/serialization.py", line 210, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '**/checkpoint-128/optimizer.pt'
```
## Expected behavior
Instead of continuing training, it stops. The problem seems related to checkpointing: I don't know why it tries to open (save) `checkpoint-128` when it has already saved `checkpoint-512`!
Besides that, it saved `checkpoint-160` even though I never set any parameter to 160, so I don't know where that number came from.
I expected the code to report the evaluation loss periodically, save checkpoints, and finish training.
<!-- A clear and concise description of what you would expect to happen. -->
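One plausible reading of the traceback (an editor's assumption, not confirmed in this thread) is that checkpoint rotation via `save_total_limit=2` deleted `checkpoint-128` before a later step tried to write `optimizer.pt` into it. A defensive configuration to test, keeping everything else from the report unchanged, might look like:

```python
from transformers import TrainingArguments

# Hedged workaround sketch: disable checkpoint rotation while debugging the crash.
training_args = TrainingArguments(
    output_dir="/LAB_SHARED/Projects/011-Language_Model/Data/LanguageModel/BadgerBERT",
    overwrite_output_dir=True,
    num_train_epochs=10,
    do_train=True,
    do_eval=True,
    evaluate_during_training=True,
    per_gpu_train_batch_size=64,
    learning_rate=0.0004,
    gradient_accumulation_steps=32,
    logging_steps=128,
    warmup_steps=30000,
    weight_decay=0.01,
    eval_steps=128,
    save_steps=128,
    save_total_limit=None,  # keep all checkpoints instead of rotating old ones away
    prediction_loss_only=True,
)
```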
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8526/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8525/comments | https://api.github.com/repos/huggingface/transformers/issues/8525/events | https://github.com/huggingface/transformers/issues/8525 | 742,590,889 | MDU6SXNzdWU3NDI1OTA4ODk= | 8,525 | `TypeError: unhashable type: 'list'` when using DataCollatorForWholeWordMask | {
"login": "sikfeng",
"id": 9458918,
"node_id": "MDQ6VXNlcjk0NTg5MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9458918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sikfeng",
"html_url": "https://github.com/sikfeng",
"followers_url": "https://api.github.com/users/sikfeng/followers",
"following_url": "https://api.github.com/users/sikfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/sikfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sikfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sikfeng/subscriptions",
"organizations_url": "https://api.github.com/users/sikfeng/orgs",
"repos_url": "https://api.github.com/users/sikfeng/repos",
"events_url": "https://api.github.com/users/sikfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/sikfeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can reproduce, working on a fix right now.",
"So, `DataCollatorForWholeWordMask` has a few deisgn flaws (it only works for BERT for instance) and fixing it is not directly doable (basically what it tries to do should be done at the tokenization level). I will adapt the `run_mlm_wwm` example to stop using it and we will probably deprecate it afterward.\r\n\r\nFor your specific problem however, there is a fix, which is to remove the `return_tensors='pt'` from the tokenzier call.",
"This solves my problem, thanks!"
] | 1,605 | 1,605 | 1,605 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.0
- Platform: Linux-5.9.1-arch1-1-x86_64-with-arch
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.-->
@sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
This is the code I am running
```python
from transformers import BertTokenizer, BertForMaskedLM, AdamW, BertConfig, get_linear_schedule_with_warmup, pipeline, DataCollatorForWholeWordMask, DataCollatorForLanguageModeling
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=True)
sent = "The capital of France is Paris."
data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
encoded = tokenizer(
sent,
truncation = True,
add_special_tokens = True,
max_length = 64,
return_tensors = 'pt',
return_special_tokens_mask = True,
)
masked = data_collator([encoded])
print(masked)
```
It gives me the following error
```python
Traceback (most recent call last):
File "sadness.py", line 19, in <module>
masked = data_collator([encoded])
File "/home/csikfeng/.local/lib/python3.7/site-packages/transformers/data/data_collator.py", line 328, in __call__
token = self.tokenizer._convert_id_to_token(id)
File "/home/csikfeng/.local/lib/python3.7/site-packages/transformers/tokenization_bert.py", line 241, in _convert_id_to_token
return self.ids_to_tokens.get(index, self.unk_token)
TypeError: unhashable type: 'list'
```
But if instead I use `data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)` like this
```python
from transformers import BertTokenizer, BertForMaskedLM, AdamW, BertConfig, get_linear_schedule_with_warmup, pipeline, DataCollatorForWholeWordMask, DataCollatorForLanguageModeling
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case=True)
sent = "The capital of France is Paris."
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
encoded = tokenizer(
sent,
truncation = True,
add_special_tokens = True,
max_length = 64,
return_tensors = 'pt',
return_special_tokens_mask = True,
)
masked = data_collator([encoded])
print(masked)
```
I do not get any errors
```
{'input_ids': tensor([[[ 101, 10105, 12185, 10108, 63184, 10112, 10124, 24289, 10107, 119,
102]]]), 'token_type_ids': tensor([[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]]), 'attention_mask': tensor([[1]]), 'labels': tensor([[[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]]])}
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It should just perform whole word masking without errors.
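Applying the fix from the thread, the reproduction script with `return_tensors='pt'` removed (everything else unchanged) runs cleanly:

```python
from transformers import BertTokenizer, DataCollatorForWholeWordMask

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased", do_lower_case=True)
data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoded = tokenizer(
    "The capital of France is Paris.",
    truncation=True,
    add_special_tokens=True,
    max_length=64,
    return_special_tokens_mask=True,  # note: no return_tensors="pt" here
)
masked = data_collator([encoded])
print(masked)
```
 | {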
"url": "https://api.github.com/repos/huggingface/transformers/issues/8525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8525/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8524/comments | https://api.github.com/repos/huggingface/transformers/issues/8524/events | https://github.com/huggingface/transformers/issues/8524 | 742,565,829 | MDU6SXNzdWU3NDI1NjU4Mjk= | 8,524 | LayoutLM Token Classification not learning | {
"login": "AntPeixe",
"id": 21229140,
"node_id": "MDQ6VXNlcjIxMjI5MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/21229140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AntPeixe",
"html_url": "https://github.com/AntPeixe",
"followers_url": "https://api.github.com/users/AntPeixe/followers",
"following_url": "https://api.github.com/users/AntPeixe/following{/other_user}",
"gists_url": "https://api.github.com/users/AntPeixe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AntPeixe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntPeixe/subscriptions",
"organizations_url": "https://api.github.com/users/AntPeixe/orgs",
"repos_url": "https://api.github.com/users/AntPeixe/repos",
"events_url": "https://api.github.com/users/AntPeixe/events{/privacy}",
"received_events_url": "https://api.github.com/users/AntPeixe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Is there any update on this issue?",
"Hi there!\r\n\r\nI have been investigating the model by making [integration tests](https://github.com/NielsRogge/transformers/blob/e5431da34ab2d03d6114303f18fd70192c880913/tests/test_modeling_layoutlm.py#L318), and turns out it outputs the same tensors as the original repository on the same input data, so there are no issues (tested this both for the base model - `LayoutLMModel` as well as the models with heads on top - `LayoutLMForTokenClassification` and `LayoutLMForSequenceClassification`).\r\n\r\nHowever, the model is poorly documented in my opinion, I needed to first look at the original repository to understand everything. I made a demo notebook that showcases how to fine-tune HuggingFace's `LayoutLMForTokenClassification` on the FUNSD dataset (a sequence labeling task): https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb\r\n\r\nLet me know if this helps you!",
"I have experienced the same issue, I realized that model files from [here](https://huggingface.co/microsoft/layoutlm-base-uncased) are different than the weights in the original repo. I was using weights from the original repo and the model couldn't load them at the start of the training. So, I was starting from a random model instead of a pre-trained one. That's why it is not learning much in a down-stream task.\r\n\r\nI solved the issue by using model files from [huggingface](https://huggingface.co/microsoft/layoutlm-base-uncased)",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,605 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: in docker based on image: nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
- Python version: 3.7.9
- PyTorch version (GPU?): 1.5.1+cu92 (True)
- Tensorflow version (GPU?): 2.2.0-rc0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: <fill in>
## Information
Model I am using (Bert, XLNet ...): LayoutLMForTokenClassification
The problem arises when using: my own scripts
The tasks I am working on is: my own task
NER task. I've reproduced the implementation of Dataset, compute metrics (and other helper functions) as in the original repo [microsoft/layoutlm repo](https://github.com/microsoft/unilm/blob/master/layoutlm/examples/seq_labeling/run_seq_labeling.py)
When initially trying with the original repo and training script, the model managed to learn and provided reasonable results after very few epochs. After implementing it with Hugging Face, the model doesn't learn at all, even after many more epochs.
## To reproduce
Model loading and trainer configuration:
```python
config = LayoutLMConfig.from_pretrained(
    <path_layoutlm_base_uncased>,
    num_labels=<num_labels>,
    cache_dir=None,
)

model = LayoutLMForTokenClassification.from_pretrained(
    <path_layoutlm_base_uncased>,
    from_tf=bool(".ckpt" in <path_layoutlm_base_uncased>),
    config=config,
    cache_dir=None,
)

device = torch.device("cuda")
model.train().to(device)

training_args = TrainingArguments(
    output_dir=<pytorch_model_dir>,  # output directory
    do_train=True,
    do_eval=True,
    do_predict=False,
    evaluation_strategy=EvaluationStrategy.EPOCH,
    num_train_epochs=<epochs>,  # total # of training epochs
    per_device_train_batch_size=<batch_size>,  # batch size per device during training
    per_device_eval_batch_size=<batch_size>,  # batch size for evaluation
    weight_decay=<weight_decay>,  # strength of weight decay
    learning_rate=<learning_rate>,
    adam_epsilon=<adam_epsilon>,
    logging_dir=<profile_logs>,  # Tensorboard log directory
    logging_steps=0,  # it logs when running evaluation so no need to log on step interval
    save_steps=0,
    seed=seed,
    overwrite_output_dir=True,
    disable_tqdm=False,
    load_best_model_at_end=True,
    save_total_limit=10,
    fp16=True,
)

trainer = MetaMazeTrainer(
    model=model,  # the instantiated 🤗 Transformers model to be trained
    args=training_args,  # training arguments, defined above
    train_dataset=train_dataset,  # training dataset
    eval_dataset=test_dataset,  # evaluation dataset
    compute_metrics=compute_metrics,
)
```
## Expected behavior
Similar results to the original repo, given that the trainer receives the same parameters and the Dataset is identical after preprocessing.
Is this due to the ongoing integration of this model, or is the setup wrong?
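One way to narrow this down is a forward-pass sanity check against the hub weights (as suggested in the comments). The sketch below is an editor's illustration with dummy boxes and labels, not the reporter's code:

```python
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained(
    "microsoft/layoutlm-base-uncased", num_labels=5
)

encoding = tokenizer("invoice total 42.00", return_tensors="pt")
seq_len = encoding.input_ids.shape[1]
bbox = torch.tensor([[[10, 10, 100, 30]] * seq_len])  # dummy 0-1000 normalized box per token
labels = torch.zeros(1, seq_len, dtype=torch.long)    # dummy labels

outputs = model(**encoding, bbox=bbox, labels=labels, return_dict=True)
print(outputs.loss)  # should drop quickly when overfitting a tiny batch
```

If the loss refuses to decrease even on a handful of examples, the problem is in the weights or inputs rather than the Trainer configuration. | {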
"url": "https://api.github.com/repos/huggingface/transformers/issues/8524/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8524/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8523/comments | https://api.github.com/repos/huggingface/transformers/issues/8523/events | https://github.com/huggingface/transformers/issues/8523 | 742,506,205 | MDU6SXNzdWU3NDI1MDYyMDU= | 8,523 | Reformer model crashes during causal LM evaluation | {
"login": "qbeer",
"id": 24634931,
"node_id": "MDQ6VXNlcjI0NjM0OTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/24634931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qbeer",
"html_url": "https://github.com/qbeer",
"followers_url": "https://api.github.com/users/qbeer/followers",
"following_url": "https://api.github.com/users/qbeer/following{/other_user}",
"gists_url": "https://api.github.com/users/qbeer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qbeer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qbeer/subscriptions",
"organizations_url": "https://api.github.com/users/qbeer/orgs",
"repos_url": "https://api.github.com/users/qbeer/repos",
"events_url": "https://api.github.com/users/qbeer/events{/privacy}",
"received_events_url": "https://api.github.com/users/qbeer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Mmm, looks like the reformer model is outputing some `None`s, which it shouldn't do. Can make a fix for that in `Trainer` but the model itself should not do that. Looks like there is work for both of us @patrickvonplaten :-)"
] | 1,605 | 1,605 | 1,605 | NONE | null | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-47-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: -
### Who can help
I tried to dig into the code but could not find out why this is happening, so I am tagging @sgugger since this might be a `Trainer` related issue as well as @patrickvonplaten as I am using `ReformerWithLMHead`.
## Information
I am using `ReformerWithLMHead` with a custom dataset. I already set up the masked language modeling task, so I moved on to causal LM, but something odd happened. My setup is based on the official notebook from @patrickvonplaten and works fine for masked LM.
```python
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False
)
def compute_metrics(pred):
"""
pred.label_ids = (prediction_set_size, sequence_length)
pred.predictions = (prediction_set_size, sequence_length, vocab_size)
prob. dist. along vocab size
Since we do masked language modelling, most of the sequence is MASKED with -100
and only the non masked should be checked. :)
"""
non_masked_indices = (pred.label_ids != -100)
predictions = np.argmax(pred.predictions, axis=-1)
labels = pred.label_ids[non_masked_indices]
predictions = predictions[non_masked_indices]
return {"accuracy": np.mean(np.asarray(predictions == labels), dtype=np.float)}
trainer = Trainer(
model=model,
args=training_args,
compute_metrics=compute_metrics,
data_collator=data_collator,
train_dataset=dataset,
eval_dataset=eval_dataset,
prediction_loss_only=False)
trainer.train()
```
I set up the collator for the non-MLM task but kept the custom metric (also based on the official notebook) to calculate accuracy, since it should be the same as before (IMO). The tricky part: if I explicitly set `prediction_loss_only=False`, I get an error indicating that the `logits` could not be nested-detached:
```bash
File "src/lm/reformer_casual_lm.py", line 146, in <module>
trainer.train()
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 786, in train
self._maybe_log_save_evalute(tr_loss, model, trial, epoch)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 843, in _maybe_log_save_evalute
metrics = self.evaluate()
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 1251, in evaluate
output = self.prediction_loop(eval_dataloader, description="Evaluation")
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 1348, in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer.py", line 1452, in prediction_step
logits = nested_detach(logits)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "/home/qbeer/miniconda3/envs/nlp/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 67, in nested_detach
return tensors.detach()
AttributeError: 'NoneType' object has no attribute 'detach'
```
If I just delete the `prediction_loss_only=False` line, training runs, but my custom metric is never evaluated, since in the Trainer class the gathered labels and predictions are only non-`None` when this value is set to `False`:
```python
eval_loss = eval_losses_gatherer.finalize()
preds = preds_gatherer.finalize() if not prediction_loss_only else None
label_ids = labels_gatherer.finalize() if not prediction_loss_only else None
if self.compute_metrics is not None and preds is not None and label_ids is not None:
metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
```
## Expected behavior
I expect my custom metric to be evaluated and training not to crash randomly.
Thanks in advance.
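Until fixes land on both the model and `Trainer` sides, one stopgap (purely an assumption that the offending `None`s are optional nested outputs the metric does not need) is to wrap the model and filter them out before `Trainer` detaches the outputs:

```python
from torch import nn
from transformers import Trainer

def strip_nones(obj):
    # Recursively drop None entries from (possibly nested) output tuples.
    if isinstance(obj, (list, tuple)):
        return type(obj)(strip_nones(x) for x in obj if x is not None)
    return obj

class StripNones(nn.Module):
    """Hedged stopgap: forwards to the wrapped model, dropping None outputs."""

    def __init__(self, model):
        super().__init__()
        self.model = model
        self.config = model.config  # Trainer inspects model.config

    def forward(self, **inputs):
        return strip_nones(self.model(**inputs))

trainer = Trainer(
    model=StripNones(model),
    args=training_args,
    compute_metrics=compute_metrics,
    data_collator=data_collator,
    train_dataset=dataset,
    eval_dataset=eval_dataset,
    prediction_loss_only=False,
)
```

This assumes the model returns plain tuples (the 3.4.0 default); if `return_dict=True` is set, the filter would need to handle `ModelOutput` objects instead.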
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/8523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8523/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/8522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/8522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/8522/comments | https://api.github.com/repos/huggingface/transformers/issues/8522/events | https://github.com/huggingface/transformers/pull/8522 | 742,498,478 | MDExOlB1bGxSZXF1ZXN0NTIwNjE5ODMw | 8,522 | Update deepset/roberta-base-squad2 model card | {
"login": "brandenchan",
"id": 33759007,
"node_id": "MDQ6VXNlcjMzNzU5MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/33759007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brandenchan",
"html_url": "https://github.com/brandenchan",
"followers_url": "https://api.github.com/users/brandenchan/followers",
"following_url": "https://api.github.com/users/brandenchan/following{/other_user}",
"gists_url": "https://api.github.com/users/brandenchan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brandenchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brandenchan/subscriptions",
"organizations_url": "https://api.github.com/users/brandenchan/orgs",
"repos_url": "https://api.github.com/users/brandenchan/repos",
"events_url": "https://api.github.com/users/brandenchan/events{/privacy}",
"received_events_url": "https://api.github.com/users/brandenchan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Really cool!"
] | 1,605 | 1,605 | 1,605 | CONTRIBUTOR | null | Update model card since our v1 and v2 of the model are in this repo.
Note that accessing models doesn't seem to work when referencing a tag name (#8521).
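For reference, pinning a specific model version would look like the following, assuming a `transformers` release that supports the `revision` argument; the `"v2.0"` tag is a placeholder for whichever tag actually resolves (see #8521):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "deepset/roberta-base-squad2"
# "v2.0" is an assumed tag name; substitute the tag that exists on the hub.
model = AutoModelForQuestionAnswering.from_pretrained(model_name, revision="v2.0")
tokenizer = AutoTokenizer.from_pretrained(model_name, revision="v2.0")
```
 | {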
"url": "https://api.github.com/repos/huggingface/transformers/issues/8522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/8522/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/8522",
"html_url": "https://github.com/huggingface/transformers/pull/8522",
"diff_url": "https://github.com/huggingface/transformers/pull/8522.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/8522.patch",
"merged_at": 1605279507000
} |