url (stringlengths 62–66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76–80) | comments_url (stringlengths 71–75) | events_url (stringlengths 69–73) | html_url (stringlengths 50–56) | id (int64 377M–2.15B) | node_id (stringlengths 18–32) | number (int64 1–29.2k) | title (stringlengths 1–487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k–1.71k) | updated_at (int64 1.54k–1.71k) | closed_at (int64 1.54k–1.71k, ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0–234k, ⌀) | reactions (dict) | timeline_url (stringlengths 71–75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9429/comments | https://api.github.com/repos/huggingface/transformers/issues/9429/events | https://github.com/huggingface/transformers/issues/9429 | 780,132,135 | MDU6SXNzdWU3ODAxMzIxMzU= | 9,429 | Apache Hadoop (HDFS) File Loading from_pretrained | {
"login": "anninterpreter",
"id": 44267622,
"node_id": "MDQ6VXNlcjQ0MjY3NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/44267622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anninterpreter",
"html_url": "https://github.com/anninterpreter",
"followers_url": "https://api.github.com/users/anninterpreter/followers",
"following_url": "https://api.github.com/users/anninterpreter/following{/other_user}",
"gists_url": "https://api.github.com/users/anninterpreter/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anninterpreter/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anninterpreter/subscriptions",
"organizations_url": "https://api.github.com/users/anninterpreter/orgs",
"repos_url": "https://api.github.com/users/anninterpreter/repos",
"events_url": "https://api.github.com/users/anninterpreter/events{/privacy}",
"received_events_url": "https://api.github.com/users/anninterpreter/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,619 | 1,619 | NONE | null | # 🚀 Feature request
Load configuration and model files for **transformers.AutoConfig** and **transformers.AutoModelForSequenceClassification** using the **from_pretrained** function when given an HDFS file path.
## Motivation
When a file is not available locally, the library uses the **get_from_cache** function inside the **transformers/file_utils.py** file to try to download the model from the remote resource. But if no ETag is present in the header of the returned response, an OSError("Distant resource does not have an ETag, we won't be able to reliably ensure reproducibility.") exception is raised. For HDFS, this ETag validation shouldn't be treated as a mandatory requirement but as an optional one: either another mechanism should be used to ensure the reliability of the resource, or the ETag check should be made optional.
Please see below the code-snippet screenshot of the aforementioned file.

Additional information:
Apache Hadoop Version: 2.7.7, rc1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
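In the meantime, one possible workaround, sketched below with purely illustrative paths (the HDFS location and staging directory are assumptions), is to stage the saved model directory on local disk and point **from_pretrained** at the copy:
```python
import subprocess
import tempfile
from pathlib import Path

from transformers import AutoConfig, AutoModelForSequenceClassification

# Hypothetical HDFS directory previously written with save_pretrained()
hdfs_model_dir = "hdfs:///models/my-finetuned-model"

# Copy the whole directory to a local staging area with the Hadoop CLI
staging_dir = tempfile.mkdtemp()
subprocess.run(["hdfs", "dfs", "-get", hdfs_model_dir, staging_dir], check=True)

local_model_dir = str(Path(staging_dir) / Path(hdfs_model_dir).name)
config = AutoConfig.from_pretrained(local_model_dir)
model = AutoModelForSequenceClassification.from_pretrained(local_model_dir, config=config)
```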
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9429/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9429/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9428/comments | https://api.github.com/repos/huggingface/transformers/issues/9428/events | https://github.com/huggingface/transformers/pull/9428 | 780,063,242 | MDExOlB1bGxSZXF1ZXN0NTQ5OTg1MDg4 | 9,428 | Improve documentation coverage for Herbert | {
"login": "Qbiwan",
"id": 69753975,
"node_id": "MDQ6VXNlcjY5NzUzOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/69753975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qbiwan",
"html_url": "https://github.com/Qbiwan",
"followers_url": "https://api.github.com/users/Qbiwan/followers",
"following_url": "https://api.github.com/users/Qbiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Qbiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qbiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qbiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Qbiwan/orgs",
"repos_url": "https://api.github.com/users/Qbiwan/repos",
"events_url": "https://api.github.com/users/Qbiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qbiwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Still not letting me assign you @sgugger :("
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9035
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
--> @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9428/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9428",
"html_url": "https://github.com/huggingface/transformers/pull/9428",
"diff_url": "https://github.com/huggingface/transformers/pull/9428.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9428.patch",
"merged_at": 1609942424000
} |
https://api.github.com/repos/huggingface/transformers/issues/9427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9427/comments | https://api.github.com/repos/huggingface/transformers/issues/9427/events | https://github.com/huggingface/transformers/pull/9427 | 779,957,558 | MDExOlB1bGxSZXF1ZXN0NTQ5ODg3MDUw | 9,427 | Improve documentation coverage for Phobert | {
"login": "Qbiwan",
"id": 69753975,
"node_id": "MDQ6VXNlcjY5NzUzOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/69753975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qbiwan",
"html_url": "https://github.com/Qbiwan",
"followers_url": "https://api.github.com/users/Qbiwan/followers",
"following_url": "https://api.github.com/users/Qbiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Qbiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qbiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qbiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Qbiwan/orgs",
"repos_url": "https://api.github.com/users/Qbiwan/repos",
"events_url": "https://api.github.com/users/Qbiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qbiwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can't pin you for review @sgugger, so tagging you!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9035
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9427/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9427",
"html_url": "https://github.com/huggingface/transformers/pull/9427",
"diff_url": "https://github.com/huggingface/transformers/pull/9427.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9427.patch",
"merged_at": 1609945473000
} |
https://api.github.com/repos/huggingface/transformers/issues/9426 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9426/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9426/comments | https://api.github.com/repos/huggingface/transformers/issues/9426/events | https://github.com/huggingface/transformers/issues/9426 | 779,804,482 | MDU6SXNzdWU3Nzk4MDQ0ODI= | 9,426 | Is it possible to export a pytorch .pt file after finetuning a model? | {
"login": "farazk86",
"id": 33456896,
"node_id": "MDQ6VXNlcjMzNDU2ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/33456896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farazk86",
"html_url": "https://github.com/farazk86",
"followers_url": "https://api.github.com/users/farazk86/followers",
"following_url": "https://api.github.com/users/farazk86/following{/other_user}",
"gists_url": "https://api.github.com/users/farazk86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farazk86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farazk86/subscriptions",
"organizations_url": "https://api.github.com/users/farazk86/orgs",
"repos_url": "https://api.github.com/users/farazk86/repos",
"events_url": "https://api.github.com/users/farazk86/events{/privacy}",
"received_events_url": "https://api.github.com/users/farazk86/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just rename your `.bin` to `.pt`",
"Hi @julien-c \r\n\r\nSorry to come back to this but I am having trouble with this. As the .bin file also has a much larger accompanying ``optimizer`` file that I assume holds the weights.\r\n\r\nI am trying to deploy a fine tuned model to google cloud. And even when using a custom prediction routine to load the entire folder for distil GPT2 the folder size exceeds the limit of ``500MB``.\r\n\r\nIs there a way to export the fine tuned model to be used alone. Either export to a standalone pytorch model or tensorflow model.\r\n\r\nI searched, but could not find any documentation on this. Would appreciate any help on this, or even direct me towards relevant documentation that can help me.\r\n\r\nI'm using ``transformers==2.8.0``\r\n\r\nThank you",
"The optimizer file does not contain the weights of your model, but the state of the optimizer during your training.\r\n\r\nIf you do not plan on continuing training, then you can safely discard that file. You can have more information about [optimizers here](https://pytorch.org/docs/stable/optim.html)."
] | 1,609 | 1,610 | 1,609 | NONE | null | Hi,
Considering the trouble I am having with the tflite interpreter in issue https://github.com/huggingface/transformers/issues/9392, I was wondering if I would have better luck trying PyTorch Mobile, since the base models are PyTorch to begin with.
But to use the PyTorch converter I need a saved ``.pt`` file. The checkpoints saved during training are in ``.bin`` format. Is there any way to get an exported PyTorch ``.pt`` file from a checkpoint folder?
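A minimal sketch of one way to do this (checkpoint path, model class and dummy input are placeholders; the large optimizer file in the checkpoint folder holds optimizer state and is not needed for inference) is to trace the checkpoint into a standalone TorchScript file:
```python
import torch
from transformers import GPT2LMHeadModel

# Placeholder checkpoint folder produced during fine-tuning
model = GPT2LMHeadModel.from_pretrained("output/checkpoint-500", torchscript=True)
model.eval()

# Dummy input_ids just to trace the graph; shape and vocab range are illustrative
dummy_input_ids = torch.randint(0, 50257, (1, 16), dtype=torch.long)
traced = torch.jit.trace(model, dummy_input_ids)
traced.save("finetuned_model.pt")
```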
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9426/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9425/comments | https://api.github.com/repos/huggingface/transformers/issues/9425/events | https://github.com/huggingface/transformers/issues/9425 | 779,562,303 | MDU6SXNzdWU3Nzk1NjIzMDM= | 9,425 | [utils/get_modified_files.py] fails with a few PR checkout tools | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,609 | 1,610 | 1,610 | CONTRIBUTOR | null | I have noticed that when using [gh cli](https://github.com/cli/cli) to checkout a pr
```
git merge-base --fork-point master
```
fails, which breaks `utils/get_modified_files.py`
e.g.:
```
gh pr checkout 9423
python utils/get_modified_files.py
Traceback (most recent call last):
File "utils/get_modified_files.py", line 27, in <module>
fork_point_sha = subprocess.check_output("git merge-base --fork-point master".split()).decode("utf-8")
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/subprocess.py", line 512, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['git', 'merge-base', '--fork-point', 'master']' returned non-zero exit status 1.
```
So `make fixup` fails to check the modified files then.
It fails if I use `git-pr` too (this tool is from the git-extras package).
It works fine if I use the native `git pr`.
This needs to be investigated.
Until this is resolved, if you use those tools please use `make style` / `make quality`; `make fixup` should work just fine elsewhere.
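A possible direction for the fix, just as a sketch (the helper name is made up), is to fall back to a plain merge base when `--fork-point` cannot resolve:
```python
import subprocess

def get_fork_point_sha() -> str:
    # `--fork-point` relies on reflog entries that some PR-checkout tools
    # never create, so retry with a plain merge base before giving up.
    for cmd in (
        ["git", "merge-base", "--fork-point", "master"],
        ["git", "merge-base", "master", "HEAD"],
    ):
        try:
            return subprocess.check_output(cmd).decode("utf-8").strip()
        except subprocess.CalledProcessError:
            continue
    raise RuntimeError("Could not determine a fork point against master.")
```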
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9425/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9424/comments | https://api.github.com/repos/huggingface/transformers/issues/9424/events | https://github.com/huggingface/transformers/pull/9424 | 779,494,475 | MDExOlB1bGxSZXF1ZXN0NTQ5NDU5NDA1 | 9,424 | improve readme text to private models/versioning/api | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"👍 "
] | 1,609 | 1,609 | 1,609 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9424/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9424",
"html_url": "https://github.com/huggingface/transformers/pull/9424",
"diff_url": "https://github.com/huggingface/transformers/pull/9424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9424.patch",
"merged_at": 1609876966000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9423/comments | https://api.github.com/repos/huggingface/transformers/issues/9423/events | https://github.com/huggingface/transformers/pull/9423 | 779,461,294 | MDExOlB1bGxSZXF1ZXN0NTQ5NDI5MDM2 | 9,423 | Upgrade styler to better handle lists | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I suppose I need to stick to the common bullet format, as it couldn't handle this `\\d\\) ` style of bullets. Leading to this rewrite:\r\n```\r\n-1) Optimizer State Partitioning (stage 1)\r\n-2) Add Gradient Partitioning (stage 2)\r\n+1) Optimizer State Partitioning (stage 1) 2) Add Gradient Partitioning (stage 2)\r\n```\r\nThis is not a problem - will fix the style.",
"We also need a new line injector for bulleted lists in .rst checker pretty please.\r\n\r\n In .rst I had:\r\n\r\n```\r\nMiscellaneous notes:\r\n- DeepSpeed works with the PyTorch Trainer but not TF Trainer.\r\n- While DeepSpeed has a pip installable PyPI package, \r\n```\r\nthe style wrapper broke the bullets and made them into one paragraph/line.\r\n```\r\nMiscellaneous notes: - DeepSpeed works with the PyTorch Trainer but not TF Trainer. - While DeepSpeed has a pip installable PyPI package, \r\n```\r\n\r\nsame problem as with docstring - it's missing a new line again. Could we do the same fix for .rst to inject a new line before bullets if an unwary writer forgot to add one? \r\n\r\nThank you!\r\n\r\n",
"Mmm, the patch should be applied to the rst files too (can't link to the diff but it's line 384 of the last file in the diff shown by GitHub).",
"I re-based just in case, and no, it still doesn't insert the line. Here is the exact para:\r\n```\r\nMiscellaneous notes:\r\n* DeepSpeed works with the PyTorch Trainer but not TF Trainer.\r\n* While DeepSpeed has a pip installable PyPI package, it is highly recommended that it be `installed from source\r\n <https://github.com/microsoft/deepspeed#installation>`__ to best match your hardware and also to enable features like\r\n 1-bit Adam, which aren't available in the pypi distribution.\r\n```",
"Indeed, I made some stupid mistake, #9488 should fix this."
] | 1,609 | 1,610 | 1,609 | COLLABORATOR | null | # What does this PR do?
This PR upgrades the doc styling script to automatically add new lines before lists. This makes the script more robust as it will avoid reformatting those lists and make them appear properly once sphinx has done its thing.
In passing a few badly formatted docstrings/doc pages are fixed, just waiting for some input from @patrickvonplaten for the problems in LED/Longformer.
Fixes #9408
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9423/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9423",
"html_url": "https://github.com/huggingface/transformers/pull/9423",
"diff_url": "https://github.com/huggingface/transformers/pull/9423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9423.patch",
"merged_at": 1609937178000
} |
https://api.github.com/repos/huggingface/transformers/issues/9422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9422/comments | https://api.github.com/repos/huggingface/transformers/issues/9422/events | https://github.com/huggingface/transformers/issues/9422 | 779,286,834 | MDU6SXNzdWU3NzkyODY4MzQ= | 9,422 | [Announcement] Changing model type of Barthez | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Applied the change",
"Hi @patrickvonplaten. Sorry for the late reply.\r\n\r\nActually I tested the model with `BartForConditionalGeneration` and everything was working well. On the other hand, after the modification I am getting the following error: \r\n```\r\nUnrecognized configuration class for this kind of AutoModel: AutoModelForMaskedLM. Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig.\r\n```\r\n\r\nMaybe we only need to change `model_type` (even if I am not sure why) and not the architecture, because [mBART](https://huggingface.co/facebook/mbart-large-cc25/blob/main/config.json) itself is using `BartForConditionalGeneration`.\r\n\r\nWe still have the problem of the tokenizer when using AutoTokenizer:\r\n```\r\nTokenizer class BarthezTokenizer does not exist or is not currently imported.\r\n```\r\n\r\nIs it possible to force the api to import and use `BarthezTokenizer` instead of `AutoTokenizer`?",
"Hey @moussaKam,\r\n\r\nThanks for your answer! Yeah the `AutoTokenizer` is still a problem and actually showcases a deeper problem we're having for the `AutoTokenziers` in the lib. We'll need a new design, something like proposed here: https://github.com/huggingface/transformers/pull/9305 to fix this issue. It's on my Todo list.\r\n\r\n",
"Regarding the error with `AutoTokenizer` I cannot reproduce it :-/ Could you maybe provide code snippet showcasing the problem?",
"Hi @patrickvonplaten,\r\n\r\nHere's a snippet:\r\n```python\r\ntext_sentence = \"Paris est la capitale de la <mask>\"\r\nimport torch\r\n\r\nfrom transformers import (\r\n AutoTokenizer,\r\n BartForConditionalGeneration\r\n)\r\n\r\nbarthez_tokenizer = AutoTokenizer.from_pretrained(\"moussaKam/barthez\")\r\nbarthez_model = BartForConditionalGeneration.from_pretrained(\"moussaKam/barthez\")\r\n\r\ninput_ids = torch.tensor(\r\n [barthez_tokenizer.encode(text_sentence, add_special_tokens=True)]\r\n)\r\nmask_idx = torch.where(input_ids == barthez_tokenizer.mask_token_id)[1].tolist()[0]\r\n\r\npredict = barthez_model.forward(input_ids)[0]\r\n\r\nbarthez_tokenizer.decode(predict[:, mask_idx, :].topk(5).indices[0])\r\n```\r\n```\r\n----> 9 barthez_tokenizer = AutoTokenizer.from_pretrained(\"moussaKam/barthez\")\r\n 10 barthez_model = BartForConditionalGeneration.from_pretrained(\"moussaKam/barthez\")\r\n 11 \r\n\r\n~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers-4.1.1-py3.8.egg/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 357 \r\n 358 if tokenizer_class is None:\r\n--> 359 raise ValueError(\r\n 360 \"Tokenizer class {} does not exist or is not currently imported.\".format(tokenizer_class_candidate)\r\n 361 )\r\n\r\nValueError: Tokenizer class BarthezTokenizer does not exist or is not currently imported.\r\n```\r\nThe expected output (if we use BarthezTokenizer instead of AutoTokenizer):\r\n```\r\n'France culture francophonie gastronomie mode'\r\n```",
"Ok, @LysandreJik found a nice fix for the tokenizer. Regarding the model, I think from now on we should use `MBart` for Barthez since after the new release Bart is not compatible with Barthez anymore",
"However, there seems to be an issue remaining with the `BarthezTokenizer`, as the code shared by @moussaKam outputs the following in v4.1.0:\r\n```\r\nFrance culture francophonie gastronomie mode\r\n```\r\nbut outputs the following on `master`:\r\n```\r\nompeolin corporelleenfin1-1\r\n```\r\n\r\nIt also mentions the following:\r\n```\r\nSome weights of the model checkpoint at moussaKam/barthez were not used when initializing BartForConditionalGeneration: ['encoder.layer_norm.weight', 'encoder.layer_norm.bias', 'decoder.layer_norm.weight', 'decoder.layer_norm.bias']\r\n```",
"My bad, changing from `BartForConditionalGeneration` to `MBartForConditionalGeneration` fixes the issue.",
"Yeah, Barthez is the only model that is not longer compatible with Bart looking forward - we have to stick to MBart. But the model architecture corresponds 1-to-1 to MBart, so I think it's fine. Hope it's ok for you @moussaKam ",
"It's OK @patrickvonplaten if BARThez works well with `AutoModel`. Currently the shared code outputs (on the master): \r\n\r\n'France culture francophonie gastronomie mode' if we use `MBartForConditionalGeneration`\r\n'édappraiav comme' if we use `AutoModel`\r\n'ompeolin corporelleenfin1-1' if we use `BartForConditionalGeneration`",
"Ah yeah, so instead of `AutoModel`, you'll have to use `AutoModelForSeq2SeqLM`.\r\nAnd it should not work anymore on master with `BartForConditionalGeneration`, but only with `MBartForConditionalGeneration`. Is the output of `MBartForConditionalGeneration` correct/reasonable in your opinion? \r\n\r\n=> so the model classes to use in the future are `AutoModelForSeq2SeqLM` (as before) and `MBartForConditionalGeneration` (this worked before as well), but now `BartForConditionalGeneration` should not work anymore.\r\n\r\nIf you could verify that this is actually the case on master now, that would be super nice",
"yes the output is reasonable with `MBartForConditionalGeneration` and `AutoModelForSeq2SeqLM`.\r\n\r\nHowever we still have one last (I hope) problem when using `pipeline`. \r\nThe following code returns an error:\r\n```python\r\nfrom transformers import pipeline\r\n\r\npbase = pipeline(task=\"fill-mask\", model=\"moussaKam/barthez\")\r\nsrc_text = [\"Paris est la capitale de la <mask>\"]\r\nresults = [x[\"token_str\"] for x in pbase(src_text)]\r\n```\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-12-d7b2e5a78b7c> in <module>\r\n 1 from transformers import pipeline\r\n 2 \r\n----> 3 pbase = pipeline(task=\"fill-mask\", model=\"moussaKam/barthez\")\r\n 4 src_text = [\"Paris est la capitale de la <mask>\"]\r\n 5 results = [x[\"token_str\"] for x in pbase(src_text)]\r\n\r\n/datadisks/datadisk1/transformers/src/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)\r\n 403 )\r\n 404 \r\n--> 405 model = model_class.from_pretrained(model, config=config, revision=revision, **model_kwargs)\r\n 406 if task == \"translation\" and model.config.task_specific_params:\r\n 407 for key in model.config.task_specific_params:\r\n\r\n/datadisks/datadisk1/transformers/src/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 1123 pretrained_model_name_or_path, *model_args, config=config, **kwargs\r\n 1124 )\r\n-> 1125 raise ValueError(\r\n 1126 \"Unrecognized configuration class {} for this kind of AutoModel: {}.\\n\"\r\n 1127 \"Model type should be one of {}.\".format(\r\n\r\nValueError: Unrecognized configuration class <class 'transformers.models.mbart.configuration_mbart.MBartConfig'> for this kind of AutoModel: AutoModelForMaskedLM.\r\nModel type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig.\r\n```\r\n\r\nWe got the same error when using the the inference [api](https://huggingface.co/moussaKam/barthez?text=Paris+est+la+%3Cmask%3E+de+la+France.).",
"Ah yeah, that's something unrelated to the Bart Split PR I think. Do you mind opening a new issue where you can copy paste your code example from above? Feel free to tag me on it :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,618 | 1,618 | MEMBER | null | We are currently undergoing some major refactoring of Bart-like models as shown in: https://github.com/huggingface/transformers/pull/9343.
After the refactoring, the Barthez models would no longer work with the `AutoModel` and `AutoModelForSeq2SeqLM` classes, because Barthez actually corresponds more to the MBart model structure than to the Bart structure (compare to the PR in https://github.com/huggingface/transformers/pull/9343), yet the Barthez models have `bart` and `BartForConditionalGeneration` defined as their defaults.
In order to make the Barthez models work after merging the PR, the model type needs to be changed online to `mbart` for those models: https://huggingface.co/models?search=barthez . Since MBart is identical to Bart prior to merging the above PR, the change won't affect older versions.
I want to do the change soon, just wanted to ping you @moussaKam. Please do let me know if you are not happy with it or have any questions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9422/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9421/comments | https://api.github.com/repos/huggingface/transformers/issues/9421/events | https://github.com/huggingface/transformers/pull/9421 | 779,268,737 | MDExOlB1bGxSZXF1ZXN0NTQ5MjU0OTE2 | 9,421 | Store transformers version info when saving the model | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR stores the transformers version info in the model config. It makes debugging saved models from the model hub easier without affecting any actual functionality.
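For illustration, assuming the stored field is exposed on the loaded config (the attribute name shown here is an assumption), it could be inspected like this:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("path/to/saved_model")
# Prints the transformers version recorded at save time, or None if absent
print(getattr(config, "transformers_version", None))
```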
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9421/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9421",
"html_url": "https://github.com/huggingface/transformers/pull/9421",
"diff_url": "https://github.com/huggingface/transformers/pull/9421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9421.patch",
"merged_at": 1609947289000
} |
https://api.github.com/repos/huggingface/transformers/issues/9420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9420/comments | https://api.github.com/repos/huggingface/transformers/issues/9420/events | https://github.com/huggingface/transformers/issues/9420 | 779,245,705 | MDU6SXNzdWU3NzkyNDU3MDU= | 9,420 | Transformer models for semantic parsing | {
"login": "ayushjain1144",
"id": 28894174,
"node_id": "MDQ6VXNlcjI4ODk0MTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/28894174?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushjain1144",
"html_url": "https://github.com/ayushjain1144",
"followers_url": "https://api.github.com/users/ayushjain1144/followers",
"following_url": "https://api.github.com/users/ayushjain1144/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushjain1144/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushjain1144/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushjain1144/subscriptions",
"organizations_url": "https://api.github.com/users/ayushjain1144/orgs",
"repos_url": "https://api.github.com/users/ayushjain1144/repos",
"events_url": "https://api.github.com/users/ayushjain1144/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushjain1144/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ayushjain1144 \r\n\r\nThat's an interesting question, would be better if you ask it on the [forum](https://discuss.huggingface.co/) "
] | 1,609 | 1,609 | 1,609 | NONE | null | Hi! Thank you for your awesome work!
I want to perform semantic parsing. Unfortunately, I couldn't find any examples in the Hugging Face repo for that. Could you please let me know how I should proceed? I suppose I could use a Seq2Seq EncoderDecoder model like BERT2BERT and fine-tune it for semantic parsing. Or do you think there is a better way? For more context, I have natural-language grounding descriptions and I want to generate a logical parse tree from them. In the literature, there are a few tree-transformer-based techniques and a Seq2Tree technique, which I think Hugging Face does not support yet (or does it?).
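One possible starting point, purely as a sketch (the checkpoints and the made-up linearized logical form below are assumptions, not a recommendation), would be to treat the parse tree as a flat target string and fine-tune a BERT2BERT encoder-decoder on it:
```python
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Grounding description and an illustrative linearized logical form
src = "pick up the red block to the left of the box"
tgt = "( pick_up ( lambda x ( and ( red x ) ( block x ) ( left_of x box ) ) ) )"

input_ids = tokenizer(src, return_tensors="pt").input_ids
labels = tokenizer(tgt, return_tensors="pt").input_ids

# Train it as plain seq2seq over the linearized trees
outputs = model(input_ids=input_ids, decoder_input_ids=labels, labels=labels)
loss = outputs.loss
```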
Thanks :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9420/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9419/comments | https://api.github.com/repos/huggingface/transformers/issues/9419/events | https://github.com/huggingface/transformers/pull/9419 | 779,221,520 | MDExOlB1bGxSZXF1ZXN0NTQ5MjEyNDcy | 9,419 | New serving | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Humm looks like the quick test I have added to make sure that a saved model can be properly created is a bit too long for at least one of the models. @LysandreJik any idea how I can figure out for which model?",
"Ok, the test `test_saved_model_creation` is skipped if it needs more than 30sec to be executed for a model. For now these models are skipped:\r\nBART\r\nBlenderBot\r\nFunnel\r\nLongformer\r\nLxmert\r\nMarian\r\nMBart\r\nMobilebert\r\nPegasus\r\nT5\r\n\r\nLet's see if the test becomes faster once I will optimise these model like I did for BERT. LGTM!",
"Cool, maybe @sgugger can take a look as well :-) ",
"Oops forgot to rebase and then the changes for the LED model is missing, and also the changes in the Seq2Seq template. Please wait my next push before merging.",
"I should have addressed all the comments. The saved model creation tests are silent for the Seq2Seq models until I find a proper fix.",
"Great should we merge @jplu @LysandreJik @? - it's blocking the TF-Bart Split PR a bit. ",
"For me it is good to merge if there are no other comments :)",
"Cool merging then",
"I had some more comments actually! With the short names, most of the outputs of the serving methods fit on one line now. black does not put things back on the same line once it has split on several, so it's not fixed by the quality scripts.\r\n\r\nI also think it would make future maintenance easier to add the # Copied from comments for dupe code.",
"They are all on one line (when it is possible, which means not too many characters to fit in).\r\n\r\nI will open a PR to take care of adding the `#copied from` comments once I finish to fix the S2S models.",
"See comment above, and this is just one example, most of those now fit in one line with your last changes."
] | 1,609 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
This PR proposes a new way to create a saved model that can be properly served via TF Serving. The idea is to add a `serving` method that is used to create the expected saved model with a proper input signature. Currently the saved models are very limited:
- input sequence length limited to exactly 5 tokens
- input parameters limited to have only `input_ids`
- When `output_attentions` or `output_hidden_states` was set to True, the saved model output contained as many outputs as there were attentions or hidden states
This PR fixes these 3 issues. A new behavior is also introduced: when calling `model.save_pretrained(...)`, a saved model version is created at the same time as the `.h5` weights file.
The proposed logic allows anybody to define their own input signature simply by overriding the new `serving` method. For example, the default inputs for BERT are now `input_ids`, `attention_mask` and `token_type_ids`; if one wants to replace `input_ids` with `inputs_embeds`, a new model has to be created overriding the `serving` method like:
```
class CustomBertModel(TFBertModel):
@tf.function(
input_signature=[
{
"inputs_embeds": tf.TensorSpec((None, None, 768), tf.float32, name="inputs_embeds"),
"attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
"token_type_ids": tf.TensorSpec((None, None), tf.int32, name="token_type_ids"),
}
]
)
def serving(self, inputs):
output = self.call(inputs)
return self.serving_output(output)
model = CustomBertModel.from_pretrained("bert-base-cased")
model.save_pretrained("saving_path")
```
Slow/quick tests are passing.
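As a rough usage sketch (the saved-model sub-folder and the token ids below are assumptions, adjust them to the actual output of `save_pretrained`), the result can then be reloaded and queried through its serving signature:
```python
import tensorflow as tf

# Assumed layout: save_pretrained("saving_path") also wrote a SavedModel here
loaded = tf.saved_model.load("saving_path/saved_model/1")
serving_fn = loaded.signatures["serving_default"]

outputs = serving_fn(
    input_ids=tf.constant([[101, 7592, 999, 102]], dtype=tf.int32),
    attention_mask=tf.constant([[1, 1, 1, 1]], dtype=tf.int32),
    token_type_ids=tf.constant([[0, 0, 0, 0]], dtype=tf.int32),
)
```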
EDIT: ping @sgugger @patrickvonplaten and @LysandreJik for review. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9419/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9419",
"html_url": "https://github.com/huggingface/transformers/pull/9419",
"diff_url": "https://github.com/huggingface/transformers/pull/9419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9419.patch",
"merged_at": 1610016530000
} |
https://api.github.com/repos/huggingface/transformers/issues/9418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9418/comments | https://api.github.com/repos/huggingface/transformers/issues/9418/events | https://github.com/huggingface/transformers/pull/9418 | 778,892,447 | MDExOlB1bGxSZXF1ZXN0NTQ4OTE1NDY1 | 9,418 | New TF embeddings (cleaner and faster) | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I like this PR in general! \r\n\r\nJust wondering about two things:\r\n\r\n1) Do we need this `get_config` function?\r\n2) Not a huge fan of the `Add()` keras layer...does this really improve performance much?",
"Good point @LysandreJik! Basically here most of the models share the similar embedding computation that stay inside their respective file. What has been exported is just the specific computation, which means that `WordEmbeddings`, `PositionalEmbeddings` and `TokenTypeEmbeddings` are always the same doesn't matter who is using it.\r\n\r\nThe same logic that is currently applied to `TFSharedEmbeddings`.",
"> Just reviewed the general approach on one model for now and I have some questions before going further. If I understand correctly, the computation of the three different types of embeddings is split in three different ways to maximize the speedup but I wonder if it's documented from TF or just some tests on one particular setup. Before adding the extra complexity, I would like to be sure it brings a speedup on almost all possible environments (CPU, GPU, multi-GPU, TPU) without any loss in memory footprint (one-hot encoding the token type ids seems harmless, but we never know).\r\n\r\nI basically took example on the official implementation of Transformer encoder available in the Google Repo https://github.com/tensorflow/models/tree/master/official/nlp/keras_nlp . After having done several experiments (only on CPU and GPU though), I end up to extract from this an optimal version for each embedding.\r\n\r\n> As for putting those in modeling utils versus the model file, I agree with Lysandre that this breaks our philosophy of putting everything in each model file. I emitted the same reserves for TFSharedEmbeddings when it was introduced.\r\n\r\nI don't mind to copy/paste the same layers in all the concerned files if it is the recommended way. @sgugger @LysandreJik Will you be more confident if I create a version for each model and add the comment `# copied from ....` everytime it is a strong copy/paste?\r\n\r\n> I don't understand how it can be used above (line 420) in a tf.matmul if it's a layer and not a weight.\r\n\r\nNow the `get_input_embeddings` returns a `WordEmbeddings` layer that has a `word_embeddings` attribute. If you look at the Bert model for example, the layer `TFBertLMPredictionHead` takes a `WordEmbeddings` layer as `input_embeddings` and use the `WordEmbeddings.word_embeddings` attribute into the `tf.matmul`.",
"> Now the `get_input_embeddings` returns a `WordEmbeddings` layer that has a `word_embeddings` attribute. If you look at the Bert model for example, the layer `TFBertLMPredictionHead` takes a `WordEmbeddings` layer as `input_embeddings` and use the `WordEmbeddings.word_embeddings` attribute into the `tf.matmul`.\r\n\r\nSo this part confuses me. Why name `word_embeddings` the weights inside the `WordEmbeddings`? It causes so much headache when reading the code afterward as we keep seeing some `word_embeddings` attributes which might either be an embedding layer or a weight.\r\n\r\nAlso, how does the new organization not screw up pretrained weights? From what I understand, the old `world_embeddings` in the `BertEmbeddings` layer used to be a weight and now it's a layer with an added `world_embeddings` attribute?",
"> So this part confuses me. Why name word_embeddings the weights inside the WordEmbeddings? It causes so much headache when reading the code afterward as we keep seeing some word_embeddings attributes which might either be an embedding layer or a weight.\r\n\r\nI agree it is confusing, if you prefer it can be called `weight` such as in `TFSharedEmbeddings` I think it would be a more suitable name. This renaming will make easier the kind of checking (from the incoming PR on ebd resizing)\r\n```python\r\n def _get_word_embedding_weight(self, embedding_layer):\r\n if hasattr(embedding_layer, \"word_embeddings\"):\r\n return embedding_layer.word_embeddings\r\n elif hasattr(embedding_layer, \"weight\"):\r\n return embedding_layer.weight\r\n elif hasattr(embedding_layer, \"decoder\"):\r\n return embedding_layer.decoder\r\n else:\r\n # Here we build the word embeddings weights if not exists.\r\n # And then we retry to get the attribute once built.\r\n self(self.dummy_inputs)\r\n if hasattr(embedding_layer, \"word_embeddings\"):\r\n return embedding_layer.word_embeddings\r\n elif hasattr(embedding_layer, \"weight\"):\r\n return embedding_layer.weight\r\n elif hasattr(embedding_layer, \"decoder\"):\r\n return embedding_layer.decoder\r\n else:\r\n return None\r\n```\r\nNo more `word_embeddings` or `weight`, only `weight`. What do you think?\r\n\r\n> Also, how does the new organization not screw up pretrained weights? From what I understand, the old world_embeddings in the BertEmbeddings layer used to be a weight and now it's a layer with an added world_embeddings attribute?\r\n\r\nThis is because before we where using a [name score](https://www.tensorflow.org/api_docs/python/tf/name_scope) and not anymore in this PR. Let's say that defining a name scope or creating a layer represents the same thing. In both cases the weight is named `'tf_bert_model/bert/embeddings/word_embeddings/weight:0'` until now the `word_embeddings` part of the naming was because the embeddings was created in the context of `tf.name_scope(\"word_embeddings\"):` , in this PR it has the same name but because of the name of the new `WordEmbeddings` layer.",
"Yes, having only \"weight\" makes more sense to me, and it would make the code easier to read. Thanks for explaining why the name of the weight doesn't change for loading!",
"I found another advantage of these new embedding computation. It allows our models to be compiled in XLA_GPU and XLA_TPU which was not the case before. Small proof test on a machine with a GPU:\r\n```python\r\nfrom transformers import TFBertModel\r\nimport tensorflow as tf\r\n\r\nmodel = TFBertModel.from_pretrained(\"bert-base-cased\")\r\n\r\[email protected](experimental_compile=True)\r\ndef run():\r\n return model(model.dummy_inputs)\r\n\r\noutputs = run()\r\n```\r\nOn master fails with:\r\n```\r\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Trying to access resource _AnonymousVar4 located in device /job:localhost/replica:0/task:0/device:CPU:0 from device /job:localhost/replica:0/task:0/device:GPU:0 [Op:__inference_run_4637]\r\n```\r\n\r\nOn this PR works as expected. The reason is because the `tf.keras.layers.Embeddings` layers are initialized when the model is instanciated instead of being initialized at build time.",
"Now, each model has its own `WordEmbedding`, `TokenTypeEmbeddings` and `PositionEmbedding` layer in the model file decorated with the comment `#Copied from...` and the `words_embeddings` weights have been renamed into `weight` to make it more understandable and aligned with the name in `TFSharedEmbeddings`.",
"> LGTM in general. One thing I'm not 100% sure about is whether we really need to add keras layers like tf.keras.layers.Add() if we start doing this for the embeddings now, I'm wondering if we should do the same for all residual connections in the self-attention blocks\r\n\r\nIn the absolute, yes we should. In an ideal world, everytime TF proposes a function/layer for doing something we should use it, as it is part of the optimization process. I know and I understand that it might seems confusing and starts to diverge with what PT looks like."
] | 1,609 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR proposes a better implementation of the embedding layer for BERT-like TF models. Another benefit of this cleanup is better computational performance:
```
import cProfile

from transformers import TFBertForMaskedLM

model = TFBertForMaskedLM.from_pretrained("bert-base-cased")
cProfile.run("model(model.dummy_inputs)")
# current master
56150 function calls (55318 primitive calls) in 0.096 seconds
# with new embeddings implem
55732 function calls (54891 primitive calls) in 0.080 seconds
```
This new implementation should be compatible with the incoming rework of the resizing proposed in #9193. Similar work will be applied to `TFSharedEmbeddings` in an upcoming PR.
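For a rough feel of the change, here is a simplified sketch of the decomposed embedding computation (the class and argument names below are illustrative only, not the exact layers added by this PR; see the diff for the real implementation):
```python
import tensorflow as tf


class DecomposedEmbeddings(tf.keras.layers.Layer):
    """Sketch: word, position and token-type embeddings summed with a keras Add layer."""

    def __init__(self, vocab_size, type_vocab_size, max_position_embeddings, hidden_size, **kwargs):
        super().__init__(**kwargs)
        # the three embedding lookups are kept as separate keras layers created at init time
        self.word_embeddings = tf.keras.layers.Embedding(vocab_size, hidden_size, name="word_embeddings")
        self.position_embeddings = tf.keras.layers.Embedding(max_position_embeddings, hidden_size, name="position_embeddings")
        self.token_type_embeddings = tf.keras.layers.Embedding(type_vocab_size, hidden_size, name="token_type_embeddings")
        self.sum_embeddings = tf.keras.layers.Add()

    def call(self, input_ids, token_type_ids, position_ids):
        return self.sum_embeddings(
            [
                self.word_embeddings(input_ids),
                self.position_embeddings(position_ids),
                self.token_type_embeddings(token_type_ids),
            ]
        )
```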
All slow/quick tests pass.
EDIT: I don't know why GitHub has some issues pinning the reviewers, so pinging @LysandreJik @sgugger and @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9418/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9418",
"html_url": "https://github.com/huggingface/transformers/pull/9418",
"diff_url": "https://github.com/huggingface/transformers/pull/9418.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9418.patch",
"merged_at": 1611140893000
} |
https://api.github.com/repos/huggingface/transformers/issues/9417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9417/comments | https://api.github.com/repos/huggingface/transformers/issues/9417/events | https://github.com/huggingface/transformers/issues/9417 | 778,812,412 | MDU6SXNzdWU3Nzg4MTI0MTI= | 9,417 | shift_tokens_right in BART, FSMT incompatible with DataCollatorForLanguageModelling | {
"login": "jethrokuan",
"id": 1667473,
"node_id": "MDQ6VXNlcjE2Njc0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1667473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jethrokuan",
"html_url": "https://github.com/jethrokuan",
"followers_url": "https://api.github.com/users/jethrokuan/followers",
"following_url": "https://api.github.com/users/jethrokuan/following{/other_user}",
"gists_url": "https://api.github.com/users/jethrokuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jethrokuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jethrokuan/subscriptions",
"organizations_url": "https://api.github.com/users/jethrokuan/orgs",
"repos_url": "https://api.github.com/users/jethrokuan/repos",
"events_url": "https://api.github.com/users/jethrokuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jethrokuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @jethrokuan, we've merged a big BART PR yesterday just as a heads up, I think this might solve this problem for Bart -> could you check again?",
"@patrickvonplaten I think it does solve this problem: code looks good and runs fine, model results not great, but possibly a mistake of mine. Thanks!"
] | 1,609 | 1,610 | 1,610 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-4.14.81.bm.15-amd64-x86_64-with-debian-9.11
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
I'm trying to train Bart from scratch on a masked language modelling task. I understand that this is currently not supported by HF, but I'm working on it and would like to bring up certain "blockers" that currently prevent this.
The Bart shift_tokens_right implementation looks like this:
```
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int):
"""
Shift input ids one token to the right, and wrap the last non pad token (usually <eos>).
"""
prev_output_tokens = input_ids.clone()
assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined."
# replace possible -100 values in labels by `pad_token_id`
prev_output_tokens.masked_fill_(prev_output_tokens == -100, pad_token_id)
index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze()
prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].clone()
prev_output_tokens[:, 0] = decoder_start_tokens
return prev_output_tokens
```
The `shift_tokens_right` implementation assumes that anything filled with `-100` was a pad token, and uses that to find the index of the EOS token.
This is not always true. In `DataCollatorForLanguageModeling`, which is used in the example script, we see
https://github.com/huggingface/transformers/blob/748006c0b35d64cdee23a3cdc2107a1ce64044b5/src/transformers/data/data_collator.py#L303
This causes errors when trying to train a Bart model on language modelling.
```
Traceback (most recent call last):
File "math_explain/masked_lm.py", line 281, in <module>
main()
File "math_explain/masked_lm.py", line 236, in main
train_result = trainer.train()
File "/home/tiger/.local/lib/python3.7/site-packages/transformers/trainer.py", line 815, in train
tr_loss += self.training_step(model, inputs)
File "/home/tiger/.local/lib/python3.7/site-packages/transformers/trainer.py", line 1157, in training_step
loss = self.compute_loss(model, inputs)
File "/home/tiger/.local/lib/python3.7/site-packages/transformers/trainer.py", line 1181, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/tiger/.local/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1233, in forward
decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
File "/home/tiger/.local/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 75, in shift_tokens_right
decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze()
RuntimeError: index -1 is out of bounds for dimension 1 with size 213
```
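One way around this, sketched below, is to prepend a fixed decoder start token instead of wrapping the last non-pad token, so the shift no longer depends on where the pad/`-100` positions are (the `decoder_start_token_id` argument is an assumption for this sketch, not necessarily how the library will address it):
```python
import torch


def shift_tokens_right_sketch(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    """Shift input ids one token to the right, prepending a fixed decoder start token."""
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    # labels may contain -100 for ignored positions; map them back to the pad token
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```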
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9417/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9416/comments | https://api.github.com/repos/huggingface/transformers/issues/9416/events | https://github.com/huggingface/transformers/issues/9416 | 778,773,835 | MDU6SXNzdWU3Nzg3NzM4MzU= | 9,416 | Why was DataCollatorForNextSentencePrediction removed ? | {
"login": "dwarfer7634",
"id": 19330059,
"node_id": "MDQ6VXNlcjE5MzMwMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/19330059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwarfer7634",
"html_url": "https://github.com/dwarfer7634",
"followers_url": "https://api.github.com/users/dwarfer7634/followers",
"following_url": "https://api.github.com/users/dwarfer7634/following{/other_user}",
"gists_url": "https://api.github.com/users/dwarfer7634/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dwarfer7634/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwarfer7634/subscriptions",
"organizations_url": "https://api.github.com/users/dwarfer7634/orgs",
"repos_url": "https://api.github.com/users/dwarfer7634/repos",
"events_url": "https://api.github.com/users/dwarfer7634/events{/privacy}",
"received_events_url": "https://api.github.com/users/dwarfer7634/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"This class is not necessary anymore: it was the same as `DataCollatorForLanguageModeling` while keeping the `nsp_labels` but `DataCollatorForLanguageModeling` will keep any extra things (like `nsp_labels`) you pass to it. So you can just replace it with `DataCollatorForLanguageModeling`.",
"Thank you for your quick reply. \r\nDo you mean you just have to use TextDatasetForNextSentencePrediction before DataCollatorForLanguageModeling to conduct NSP?",
"Yes.",
"That makes sense.\r\nThank you very much for your help!"
] | 1,609 | 1,609 | 1,609 | NONE | null | # 🚀 Feature request
I want to ask a question about why DataCollatorForNextSentencePrediction was removed.
That class was implemented in the pull request below.
https://github.com/huggingface/transformers/pull/6572
It was so useful for me.
However, this feature is not included in the latest version.
Does anyone know why it was removed?
Or are there any alternative features?
## Motivation
I need the NSP feature to conduct complete pre-training.
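If the replacement is to pair `TextDatasetForNextSentencePrediction` with `DataCollatorForLanguageModeling`, a rough sketch would look like the following (the file path and model name are placeholders, and the exact keyword arguments and label field names should be checked against the installed version):
```python
from transformers import (
    BertForPreTraining,
    BertTokenizer,
    DataCollatorForLanguageModeling,
    TextDatasetForNextSentencePrediction,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# corpus.txt (placeholder): one sentence per line, blank lines between documents
dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer,
    file_path="corpus.txt",
    block_size=128,
)

# masks tokens for MLM and passes the NSP labels through untouched
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="output", num_train_epochs=1),
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()
```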
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9416/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9416/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9415/comments | https://api.github.com/repos/huggingface/transformers/issues/9415/events | https://github.com/huggingface/transformers/issues/9415 | 778,598,584 | MDU6SXNzdWU3Nzg1OTg1ODQ= | 9,415 | About Multi GPU | {
"login": "HyeyeonKoo",
"id": 43692697,
"node_id": "MDQ6VXNlcjQzNjkyNjk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43692697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyeyeonKoo",
"html_url": "https://github.com/HyeyeonKoo",
"followers_url": "https://api.github.com/users/HyeyeonKoo/followers",
"following_url": "https://api.github.com/users/HyeyeonKoo/following{/other_user}",
"gists_url": "https://api.github.com/users/HyeyeonKoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyeyeonKoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyeyeonKoo/subscriptions",
"organizations_url": "https://api.github.com/users/HyeyeonKoo/orgs",
"repos_url": "https://api.github.com/users/HyeyeonKoo/repos",
"events_url": "https://api.github.com/users/HyeyeonKoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyeyeonKoo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there, those kinds of questions should be asked on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only, so closing this.\r\n\r\nFor a quick answer (move the discussion to the forums for a longer discussion!), it's normal that each GPU takes a slightly different time to train as all CUDA operations are asynchronous and the program is launched twice in parallel to be executed by both. It's also normal to see two different losses as the loss is not gathered across devices during training, only the gradients.",
"Thank you for the answer. I will move this to forums."
] | 1,609 | 1,609 | 1,609 | NONE | null | ## Environment info
- `transformers` version: 3.5.0
- Platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-centos-7.3.1611-Core
- Python version: 3.6.4
- PyTorch version (GPU?): 1.7.0
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y
### Who can help
@LysandreJik, @sgugger
## Information
Model I am using (Bert, XLNet ...): RoBERTa
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. code
```
from transformers import RobertaConfig
config = RobertaConfig(
vocab_size=34492,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
type_vocab_size=1,
position_embedding_type="absolute"
)
from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained("tokenizer", max_len=512)
from transformers import RobertaForMaskedLM
model = RobertaForMaskedLM(config=config)
from datetime import datetime
from transformers import LineByLineTextDataset
train_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="train.txt",
block_size=tokenizer.max_len_single_sentence
)
eval_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="eval.txt",
block_size=tokenizer.max_len_single_sentence
)
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="output",
overwrite_output_dir=True,
do_train=True,
do_eval=True,
evaluation_strategy="epoch",
learning_rate=6e-4,
adam_beta1=0.9,
adam_beta2=0.98,
adam_epsilon=1e-6,
per_device_train_batch_size=200,
per_device_eval_batch_size=200,
num_train_epochs=14,
disable_tqdm=True
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
trainer.train()
```
2. log
```
True
0: 1
0: Tesla V100-PCIE-32GB
1: True
1: 1
1: Tesla V100-PCIE-32GB
0: build dataset : 0:00:07.309346
0: build dataset : 0:00:00.328179
1: build dataset : 0:00:07.945437
1: build dataset : 0:00:00.364413
0: {'loss': 10.5927734375, 'learning_rate': 0.0005999489187808615, 'epoch': 0.0011918951132300357}
1: {'loss': 10.57164478302002, 'learning_rate': 0.0005999489187808615, 'epoch': 0.0011918951132300357}
0: {'loss': 6.9972802049411325, 'learning_rate': 0.000574459390430785, 'epoch': 0.5959475566150179}
1: {'loss': 7.002058581503216, 'learning_rate': 0.000574459390430785, 'epoch': 0.5959475566150179}
0: {'eval_loss': 7.694924354553223, 'epoch': 1.0}
1: {'eval_loss': 7.6843767166137695, 'epoch': 1.0}
0: {'loss': 6.86546826171875, 'learning_rate': 0.0005489187808615699, 'epoch': 1.1918951132300357}
1: {'loss': 6.86378662109375, 'learning_rate': 0.0005489187808615699, 'epoch': 1.1918951132300357}
0: {'loss': 6.84359375, 'learning_rate': 0.0005233781712923548, 'epoch': 1.7878426698450536}
1: {'loss': 6.842060546875, 'learning_rate': 0.0005233781712923548, 'epoch': 1.7878426698450536}
0: {'eval_loss': 7.635812759399414, 'epoch': 2.0}
1: {'eval_loss': 7.633483409881592, 'epoch': 2.0}
0: {'loss': 6.812291015625, 'learning_rate': 0.0004978375617231397, 'epoch': 2.3837902264600714}
1: {'loss': 6.811927734375, 'learning_rate': 0.0004978375617231397, 'epoch': 2.3837902264600714}
0: {'loss': 6.8180390625, 'learning_rate': 0.0004722969521539247, 'epoch': 2.9797377830750893}
1: {'loss': 6.8180390625, 'learning_rate': 0.0004722969521539247, 'epoch': 2.9797377830750893}
0: {'eval_loss': 7.621339797973633, 'epoch': 3.0}
1: {'eval_loss': 7.620441436767578, 'epoch': 3.0}
0: {'loss': 6.8016015625, 'learning_rate': 0.0004467563425847096, 'epoch': 3.575685339690107}
1: {'loss': 6.8015546875, 'learning_rate': 0.0004467563425847096, 'epoch': 3.575685339690107}
0: {'eval_loss': 7.575932025909424, 'epoch': 4.0}
1: {'eval_loss': 7.5758209228515625, 'epoch': 4.0}
0: {'loss': 6.81323828125, 'learning_rate': 0.0004212157330154946, 'epoch': 4.171632896305125}
1: {'loss': 6.81312109375, 'learning_rate': 0.0004212157330154946, 'epoch': 4.171632896305125}
0: {'loss': 6.80004296875, 'learning_rate': 0.0003956751234462795, 'epoch': 4.767580452920143}
1: {'loss': 6.80001953125, 'learning_rate': 0.0003956751234462795, 'epoch': 4.767580452920143}
0: {'eval_loss': 7.579530715942383, 'epoch': 5.0}
1: {'eval_loss': 7.579504013061523, 'epoch': 5.0}
0: {'loss': 6.79704296875, 'learning_rate': 0.0003701345138770645, 'epoch': 5.363528009535161}
1: {'loss': 6.79696875, 'learning_rate': 0.0003701345138770645, 'epoch': 5.363528009535161}
0: {'loss': 6.796515625, 'learning_rate': 0.0003445939043078495, 'epoch': 5.959475566150179}
1: {'loss': 6.79640234375, 'learning_rate': 0.0003445939043078495, 'epoch': 5.959475566150179}
0: {'eval_loss': 7.59311580657959, 'epoch': 6.0}
1: {'eval_loss': 7.593157768249512, 'epoch': 6.0}
0: {'loss': 6.7975078125, 'learning_rate': 0.0003190532947386344, 'epoch': 6.5554231227651965}
1: {'loss': 6.7974375, 'learning_rate': 0.0003190532947386344, 'epoch': 6.5554231227651965}
0: {'eval_loss': 7.5591912269592285, 'epoch': 7.0}
1: {'eval_loss': 7.559223175048828, 'epoch': 7.0}
0: {'loss': 6.8036171875, 'learning_rate': 0.00029351268516941936, 'epoch': 7.151370679380214}
1: {'loss': 6.803546875, 'learning_rate': 0.00029351268516941936, 'epoch': 7.151370679380214}
0: {'loss': 6.79696875, 'learning_rate': 0.0002679720756002043, 'epoch': 7.747318235995232}
1: {'loss': 6.7969921875, 'learning_rate': 0.0002679720756002043, 'epoch': 7.747318235995232}
0: {'eval_loss': 7.575222492218018, 'epoch': 8.0}
1: {'eval_loss': 7.574929714202881, 'epoch': 8.0}
0: {'loss': 6.796890625, 'learning_rate': 0.00024243146603098925, 'epoch': 8.34326579261025}
1: {'loss': 6.7968515625, 'learning_rate': 0.00024243146603098925, 'epoch': 8.34326579261025}
0: {'loss': 6.788359375, 'learning_rate': 0.00021689085646177421, 'epoch': 8.939213349225268}
1: {'loss': 6.788375, 'learning_rate': 0.00021689085646177421, 'epoch': 8.939213349225268}
0: {'eval_loss': 7.567000389099121, 'epoch': 9.0}
1: {'eval_loss': 7.566658973693848, 'epoch': 9.0}
0: {'loss': 6.794640625, 'learning_rate': 0.00019135024689255915, 'epoch': 9.535160905840286}
1: {'loss': 6.7945859375, 'learning_rate': 0.00019135024689255915, 'epoch': 9.535160905840286}
0: {'eval_loss': 7.5506415367126465, 'epoch': 10.0}
1: {'eval_loss': 7.550570487976074, 'epoch': 10.0}
0: {'loss': 6.78496875, 'learning_rate': 0.0001658096373233441, 'epoch': 10.131108462455304}
1: {'loss': 6.7848984375, 'learning_rate': 0.0001658096373233441, 'epoch': 10.131108462455304}
0: {'loss': 6.7898984375, 'learning_rate': 0.00014026902775412904, 'epoch': 10.727056019070321}
1: {'loss': 6.7898203125, 'learning_rate': 0.00014026902775412904, 'epoch': 10.727056019070321}
0: {'eval_loss': 7.568336486816406, 'epoch': 11.0}
1: {'eval_loss': 7.568056583404541, 'epoch': 11.0}
0: {'loss': 6.79440625, 'learning_rate': 0.000114728418184914, 'epoch': 11.32300357568534}
1: {'loss': 6.7943984375, 'learning_rate': 0.000114728418184914, 'epoch': 11.32300357568534}
0: {'loss': 6.78665625, 'learning_rate': 8.918780861569896e-05, 'epoch': 11.918951132300357}
1: {'loss': 6.786703125, 'learning_rate': 8.918780861569896e-05, 'epoch': 11.918951132300357}
0: {'eval_loss': 7.579376220703125, 'epoch': 12.0}
1: {'eval_loss': 7.5791497230529785, 'epoch': 12.0}
0: {'loss': 6.79565625, 'learning_rate': 6.364719904648391e-05, 'epoch': 12.514898688915375}
1: {'loss': 6.795640625, 'learning_rate': 6.364719904648391e-05, 'epoch': 12.514898688915375}
0: {'eval_loss': 7.5773115158081055, 'epoch': 13.0}
1: {'eval_loss': 7.577144622802734, 'epoch': 13.0}
0: {'loss': 6.795859375, 'learning_rate': 3.810658947726885e-05, 'epoch': 13.110846245530393}
1: {'loss': 6.795796875, 'learning_rate': 3.810658947726885e-05, 'epoch': 13.110846245530393}
0: {'loss': 6.79365625, 'learning_rate': 1.2565979908053806e-05, 'epoch': 13.70679380214541}
1: {'loss': 6.793703125, 'learning_rate': 1.2565979908053806e-05, 'epoch': 13.70679380214541}
0: {'eval_loss': 7.550729751586914, 'epoch': 14.0}
0: {'epoch': 14.0}
0: train time : 2:00:32.638885
1: {'eval_loss': 7.550601482391357, 'epoch': 14.0}
1: {'epoch': 14.0}
1: train time : 2:00:54.112366
```
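For reference, a small sketch to check how many processes take part in the run and the resulting effective batch size (this assumes the script is launched with `torch.distributed.launch`; the batch size value is copied from the script above):
```python
import torch.distributed as dist

per_device_train_batch_size = 200  # value used in the script above

if dist.is_available() and dist.is_initialized():
    world_size = dist.get_world_size()
else:
    world_size = 1

# Each process computes its own loss; only gradients are all-reduced,
# so per-process logs can show different loss values while the model
# replicas stay synchronized.
effective_batch_size = per_device_train_batch_size * world_size
print(f"processes: {world_size}, effective batch size: {effective_batch_size}")
```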
## Expected behavior
Hello! I am not sure whether this is a bug, and I don't know where I can ask a question about this, so if it is not appropriate here, please tell me how I can get an answer.
I wrote the code above and I have two GPUs. I understand that transformers automatically allocates data to each GPU, so I don't need to set up anything in the code. However, the log looks as if each GPU trains a separate model. I expect each GPU to be trained on a shard of the data and the losses to be gathered. However, GPU0's loss and GPU1's loss have the same values. Also, there is no time difference between using two GPUs (7302 sec) and a single GPU (7252 sec). Is there anything I can do to reduce the training time with two GPUs? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9415/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9414/comments | https://api.github.com/repos/huggingface/transformers/issues/9414/events | https://github.com/huggingface/transformers/pull/9414 | 778,596,629 | MDExOlB1bGxSZXF1ZXN0NTQ4NjU0OTk1 | 9,414 | Fix link to Evaluate TAPAS Notebook | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9414/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9414",
"html_url": "https://github.com/huggingface/transformers/pull/9414",
"diff_url": "https://github.com/huggingface/transformers/pull/9414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9414.patch",
"merged_at": 1609922571000
} |
https://api.github.com/repos/huggingface/transformers/issues/9413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9413/comments | https://api.github.com/repos/huggingface/transformers/issues/9413/events | https://github.com/huggingface/transformers/pull/9413 | 778,595,312 | MDExOlB1bGxSZXF1ZXN0NTQ4NjUzOTAy | 9,413 | Fix link to Notebook to fine-tune TAPAS | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9413",
"html_url": "https://github.com/huggingface/transformers/pull/9413",
"diff_url": "https://github.com/huggingface/transformers/pull/9413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9413.patch",
"merged_at": 1609922693000
} |
https://api.github.com/repos/huggingface/transformers/issues/9412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9412/comments | https://api.github.com/repos/huggingface/transformers/issues/9412/events | https://github.com/huggingface/transformers/pull/9412 | 778,574,153 | MDExOlB1bGxSZXF1ZXN0NTQ4NjM1ODEy | 9,412 | [model parallel] add experimental warning | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR documents that model parallelism is experimental and can change at any moment, so that we are not committing to any APIs until we sorted this out and it appears to be stable.
This in particular applies to the device map which is far from being sorted out.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9412/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9412",
"html_url": "https://github.com/huggingface/transformers/pull/9412",
"diff_url": "https://github.com/huggingface/transformers/pull/9412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9412.patch",
"merged_at": 1609859133000
} |
https://api.github.com/repos/huggingface/transformers/issues/9411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9411/comments | https://api.github.com/repos/huggingface/transformers/issues/9411/events | https://github.com/huggingface/transformers/pull/9411 | 778,540,542 | MDExOlB1bGxSZXF1ZXN0NTQ4NjA4MDMw | 9,411 | [examples/text-classification] Fix a bug for using own regression dataset | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger \r\nThank you for checking and merging this PR!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR is to fix https://github.com/huggingface/transformers/issues/9393
Fix a bug in `run_glue.py` so that it can be used with our own datasets for regression tasks.
close #9393
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Thank you for checking the issue and giving the comment.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9411/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9411",
"html_url": "https://github.com/huggingface/transformers/pull/9411",
"diff_url": "https://github.com/huggingface/transformers/pull/9411.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9411.patch",
"merged_at": 1609852507000
} |
https://api.github.com/repos/huggingface/transformers/issues/9410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9410/comments | https://api.github.com/repos/huggingface/transformers/issues/9410/events | https://github.com/huggingface/transformers/issues/9410 | 778,508,425 | MDU6SXNzdWU3Nzg1MDg0MjU= | 9,410 | `pip install -e .[dev]` in Python 3.9.1+ fails because `jaxlib==0.1.55` cannot be found | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I retried to install `transformers [dev]` with Python 3.9.1. \r\nThe latest `git tag` in the cloned repository is `v4.2.1`.\r\n\r\nI assumed that the same error would occur, but in this time it failed in installing `tensorflow`. \r\n\r\n``` sh\r\n****@**** $ conda create -n transformers-py39-dev\r\nCollecting package metadata (current_repodata.json): done\r\nSolving environment: done\r\n\r\n## Package Plan ##\r\n\r\n environment location: ****/.pyenv/versions/anaconda3-2020.07/envs/transformers-py39-dev\r\n\r\n\r\n\r\nProceed ([y]/n)? y\r\n\r\nPreparing transaction: done\r\nVerifying transaction: done\r\nExecuting transaction: done\r\n#\r\n# To activate this environment, use\r\n#\r\n# $ conda activate transformers-py39-dev\r\n#\r\n# To deactivate an active environment, use\r\n#\r\n# $ conda deactivate\r\n\r\n****@**** $ conda activate transformers-py39-dev\r\n(transformers-py39-dev) ****@**** $ conda install pip\r\nCollecting package metadata (current_repodata.json): done\r\nSolving environment: done\r\n\r\n## Package Plan ##\r\n\r\n environment location: ****/.pyenv/versions/anaconda3-2020.07/envs/transformers-py39-dev\r\n\r\n added / updated specs:\r\n - pip\r\n\r\n\r\nThe following packages will be downloaded:\r\n\r\n package | build\r\n ---------------------------|-----------------\r\n setuptools-51.1.2 | py39h06a4308_4 743 KB\r\n ------------------------------------------------------------\r\n Total: 743 KB\r\n\r\nThe following NEW packages will be INSTALLED:\r\n\r\n _libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main\r\n ca-certificates pkgs/main/linux-64::ca-certificates-2020.12.8-h06a4308_0\r\n certifi pkgs/main/linux-64::certifi-2020.12.5-py39h06a4308_0\r\n ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.33.1-h53a641e_7\r\n libedit pkgs/main/linux-64::libedit-3.1.20191231-h14c3975_1\r\n libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2\r\n libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0\r\n libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0\r\n ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_1\r\n openssl pkgs/main/linux-64::openssl-1.1.1i-h27cfd23_0\r\n pip pkgs/main/linux-64::pip-20.3.3-py39h06a4308_0\r\n python pkgs/main/linux-64::python-3.9.1-hdb3f193_2\r\n readline pkgs/main/linux-64::readline-8.0-h7b6447c_0\r\n setuptools pkgs/main/linux-64::setuptools-51.1.2-py39h06a4308_4\r\n sqlite pkgs/main/linux-64::sqlite-3.33.0-h62c20be_0\r\n tk pkgs/main/linux-64::tk-8.6.10-hbc83047_0\r\n tzdata pkgs/main/noarch::tzdata-2020d-h14c3975_0\r\n wheel pkgs/main/noarch::wheel-0.36.2-pyhd3eb1b0_0\r\n xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0\r\n zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3\r\n```\r\n\r\n``` sh\r\n(transformers-py39-dev) ****@**** $ pwd\r\n****/workspace/Clone/transformers\r\n(transformers-py39-dev) ****@**** $ pip install -e \".[dev]\"\r\nObtaining file:///****/workspace/Clone/transformers\r\n Installing build dependencies ... done\r\n Getting requirements to build wheel ... done\r\n Preparing wheel metadata ... done\r\nERROR: Could not find a version that satisfies the requirement tensorflow>=2.3; extra == \"dev\" (from transformers[dev])\r\nERROR: No matching distribution found for tensorflow>=2.3; extra == \"dev\"\r\n```\r\n\r\nIs it possible that you have decided not to support Python 3.9+ at this time because of the compatibility with the libraries `transformers` depends on?\r\n\r\nI apologize if there is any misunderstanding.\r\n",
"You can use transformers without TensorFlow or FLAX installed, there is nothing in the code of transformers that is incompatible with Python 3.9. It looks like you want TensorFlow support for Python 3.9, which you should ask on the TensorFlow GitHub.",
"@sgugger \r\nThank you for your comment.\r\nExcuse me for making you confused. It seems that there was a lack of information in my explanation.\r\n\r\nIn this case, my aim is not to use transformers with TensorFlow or FLAX.\r\nWhat I'd like to do is install `transformers [dev]` to open PRs in the future, so I'm a bit confused about whether I can install it with Python 3.9+.\r\nI'm not familiar with installing a `[dev]` version software, so I opened this issue to ask if we can install the [dev] version transformers with Python 3.9+ and open PRs using it.\r\n\r\nI can use Python <= 3.8, so this question is not urgent.\r\nI apologize for making you confused.",
"You will be able to open PRs without installing `transformers [dev]`, it just mean you won't be able to run all the tests locally. \r\n`pip install transformers [torch, sentencepiece, tokenizers, testing, quality, ja, docs, sklearn, modelcreation]` might work to install all the depencies except TensorFlow and Flax (I just took all what is in dev and removed TensorFlow and Flax to create this command) but no guarantee.\r\n\r\nIf you're not an advanced user, I would recommend sticking with Python 3.6 to 3.8 while waiting for TensorFlow and Flax to support Python 3.9, as installing things with it might have some challenges :-)",
"Hi @sgugger,\r\n\r\nThank you for telling me how to install it!\r\n\r\nWhen I tried to open a PR before, the auto-formatting of the code didn't work properly (I think it was when I tried to open a PR in `datasets`, not in `transformer`), and I assumed that I had to use `[dev]` versions when I want to open a PR. \r\nNow I think that the matter was caused by my have not installed the proper version of `testing`, `quality`, and `docs` then.\r\n\r\nI would like to become an advanced user eventually, but not now, so I would like to use Python 3.6 to 3.8 for now.\r\n\r\nThanks again!\r\n",
"Yep getting the same error, \r\nOn a fresh 3.9 Python conda:\r\n`ERROR: No matching distribution found for jaxlib==0.1.55; extra == \"dev\"`",
"To fix it, I moved back to Python 3.8.8, then `pip install -e \".[dev]\"` worked fine"
] | 1,609 | 1,617 | 1,610 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.2.0dev0 (the error is during the installation)
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.9.1 (the error occurs) -> 3.8.0 (the error does not occur)
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
documentation: @sgugger
## Information
This is a report of a bug that I encountered while installing the [dev] version of `transformers`.
I tried to create a conda environment and install `transformers [dev]` with `pip install -e .[dev]`,
but it failed due to the `jaxlib` version.
## To reproduce
Git clone the forked `transformers` repository and update it so that it shows `This branch is even with huggingface:master.`
``` sh
$ git clone [email protected]:forest1988/transformers.git forest1988_transformers
$ cd forest1988_transformers/
$ git remote add upstream https://github.com/huggingface/transformers.git
$ git pull upstream main
$ git pull upstream master
$ git push origin master
```
Create a new conda env.
``` sh
$ conda create -n transformers-for-contribute
$ conda activate transformers-for-contribute
```
Then, try to install `transformers [dev]` by `pip install -e .[dev]`.
``` sh
(transformers-for-contribute) ****@**** $ conda install pip
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: ****/.pyenv/versions/anaconda3-2020.07/envs/transformers-for-contribute
added / updated specs:
- pip
The following packages will be downloaded:
package | build
---------------------------|-----------------
ca-certificates-2020.12.8 | h06a4308_0 121 KB
certifi-2020.12.5 | py39h06a4308_0 140 KB
openssl-1.1.1i | h27cfd23_0 2.5 MB
pip-20.3.3 | py39h06a4308_0 1.8 MB
python-3.9.1 | hdb3f193_2 18.1 MB
setuptools-51.0.0 | py39h06a4308_2 726 KB
wheel-0.36.2 | pyhd3eb1b0_0 33 KB
------------------------------------------------------------
Total: 23.4 MB
The following NEW packages will be INSTALLED:
_libgcc_mutex pkgs/main/linux-64::_libgcc_mutex-0.1-main
ca-certificates pkgs/main/linux-64::ca-certificates-2020.12.8-h06a4308_0
certifi pkgs/main/linux-64::certifi-2020.12.5-py39h06a4308_0
ld_impl_linux-64 pkgs/main/linux-64::ld_impl_linux-64-2.33.1-h53a641e_7
libedit pkgs/main/linux-64::libedit-3.1.20191231-h14c3975_1
libffi pkgs/main/linux-64::libffi-3.3-he6710b0_2
libgcc-ng pkgs/main/linux-64::libgcc-ng-9.1.0-hdf63c60_0
libstdcxx-ng pkgs/main/linux-64::libstdcxx-ng-9.1.0-hdf63c60_0
ncurses pkgs/main/linux-64::ncurses-6.2-he6710b0_1
openssl pkgs/main/linux-64::openssl-1.1.1i-h27cfd23_0
pip pkgs/main/linux-64::pip-20.3.3-py39h06a4308_0
python pkgs/main/linux-64::python-3.9.1-hdb3f193_2
readline pkgs/main/linux-64::readline-8.0-h7b6447c_0
setuptools pkgs/main/linux-64::setuptools-51.0.0-py39h06a4308_2
sqlite pkgs/main/linux-64::sqlite-3.33.0-h62c20be_0
tk pkgs/main/linux-64::tk-8.6.10-hbc83047_0
tzdata pkgs/main/noarch::tzdata-2020d-h14c3975_0
wheel pkgs/main/noarch::wheel-0.36.2-pyhd3eb1b0_0
xz pkgs/main/linux-64::xz-5.2.5-h7b6447c_0
zlib pkgs/main/linux-64::zlib-1.2.11-h7b6447c_3
Proceed ([y]/n)? y
Downloading and Extracting Packages
pip-20.3.3 | 1.8 MB | ################################################################################################################################################################# | 100%
ca-certificates-2020 | 121 KB | ################################################################################################################################################################# | 100%
python-3.9.1 | 18.1 MB | ################################################################################################################################################################# | 100%
certifi-2020.12.5 | 140 KB | ################################################################################################################################################################# | 100%
setuptools-51.0.0 | 726 KB | ################################################################################################################################################################# | 100%
wheel-0.36.2 | 33 KB | ################################################################################################################################################################# | 100%
openssl-1.1.1i | 2.5 MB | ################################################################################################################################################################# | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
(transformers-for-contribute) ****@**** $ pwd
****/workspace/Clone/forest1988_transformers
(transformers-for-contribute) ****@**** $ pip install -e ".[dev]"
Obtaining file:///****/workspace/Clone/forest1988_transformers
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
ERROR: Could not find a version that satisfies the requirement jaxlib==0.1.55; extra == "dev" (from transformers[dev])
ERROR: No matching distribution found for jaxlib==0.1.55; extra == "dev"
(transformers-for-contribute) ****@**** $ pip install jaxlib==0.1.55
ERROR: Could not find a version that satisfies the requirement jaxlib==0.1.55
ERROR: No matching distribution found for jaxlib==0.1.55
```
When I downgraded Python to 3.8 with `conda install python==3.8`, `pip install -e ".[dev]"` worked.
I tried other versions of python installed via conda:
- `conda install python==3.7` : OK
- `conda install python==3.9` : the same error occurs
## Expected behavior
Depending on the Python version we are using, the version of `jaxlib` pinned in [`setup.py`](https://github.com/huggingface/transformers/blob/master/setup.py) may be unavailable, which causes `pip install -e .[dev]` to fail.
For `transformers [dev]`, is it better not to use Python 3.9+ for now? (I apologize if I missed the explanation.)
If I change `jaxlib==0.1.55` to `jaxlib>=0.1.55` in `setup.py`, will it cause problems elsewhere? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9410/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9410/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9409/comments | https://api.github.com/repos/huggingface/transformers/issues/9409/events | https://github.com/huggingface/transformers/pull/9409 | 778,491,047 | MDExOlB1bGxSZXF1ZXN0NTQ4NTY5MTI5 | 9,409 | [trainer] group fp16 args together | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR proposes a purely cosmetic change that groups all the fp16 args together, so they are easier to manage and read.
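For illustration only, grouping the mixed-precision options side by side in a `TrainingArguments`-like dataclass might look like the sketch below; the exact field names and defaults are assumptions, not a copy of the PR's diff.

```python
# Hypothetical fragment of a TrainingArguments-like dataclass -- only a sketch
# of what "grouping the fp16 args together" means, not the actual source.
from dataclasses import dataclass, field

@dataclass
class TrainingArgumentsSketch:
    # ... unrelated arguments above ...

    # All mixed-precision (fp16) options kept adjacent for readability:
    fp16: bool = field(default=False, metadata={"help": "Use 16-bit (mixed) precision training."})
    fp16_opt_level: str = field(default="O1", metadata={"help": "AMP optimization level."})
    fp16_backend: str = field(default="auto", metadata={"help": "Backend to use for mixed precision."})

    # ... unrelated arguments below ...
```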
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9409",
"html_url": "https://github.com/huggingface/transformers/pull/9409",
"diff_url": "https://github.com/huggingface/transformers/pull/9409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9409.patch",
"merged_at": 1609857579000
} |
https://api.github.com/repos/huggingface/transformers/issues/9408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9408/comments | https://api.github.com/repos/huggingface/transformers/issues/9408/events | https://github.com/huggingface/transformers/issues/9408 | 778,452,777 | MDU6SXNzdWU3Nzg0NTI3Nzc= | 9,408 | [autoformatters] wrapping destroying items/lists | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The styling script does indeed break your list in this instance, which is kind of a bug that is a feature instead. Let me explain. If you use sphinx to convert this docstring to HTML, here is the result that it will produce:\r\n\r\nSo the styler is really only showing you in advance there is going to be a problem with your list when putting everything in the same paragraph.\r\n\r\nTo avoid the breaking (and properly rendering your list in the docs), you have to add a new empty line:\r\n```\r\n \"\"\"\r\n number of training steps is either \r\n\r\n 1. args.max_steps if --max_steps > 1 \r\n 2. else derive from dataset if we can get its size\r\n \"\"\"\r\n```\r\n\r\nI know it's a bit annoying for docstrings that are just there as internal documentation and not really designed to be shown in the main documentation, but the script can't guess which docstrings to check and which not...\r\n\r\n> I also am not sure why there is a need to merge lines when the writer meant them to be shorter.\r\n\r\nAgain, this will be shown as one paragraph in the actual documentation. If you want to keep lines separated, they need to have an extra new line in-between.",
"Thank you for explaining that it is the sphinx that is lacking.\r\n\r\nCould the autoformatter detect such situations and fix that so that it remains a list by inserting a new line, rather than unwrapping the whole thing?\r\n\r\nIf such a parser would be complicated we could make it easier by having a stricter format. Usually, in English a proper list is preceded by a colon as in:\r\n```\r\nHere is what you do:\r\n1. ....\r\n2. ....\r\n```\r\nSo `r':\\s*[\\r\\n]+\\s+(\\d+\\.|[\\-\\*] )'` would match 3 types of lists."
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # 🚀 Feature request
Would it be possible to make the auto-wrappers respect items/lists?
e.g. I ended up with:
```
"""
number of training steps is either 1. args.max_steps if --max_steps > 1 2. else derive from dataset if we can
get its size
"""
```
Not only is it broken, it's unreadable.
The original was:
```
"""
number of training steps is either
1. args.max_steps if --max_steps > 1
2. else derive from dataset if we can get its size
"""
```
Ideally it should not remove newlines before bullet markers (`*`/`-`) or numbered items such as `1.`.
I am also not sure why lines need to be merged when the writer meant them to be shorter. I understand shortening long lines, but why can't short lines be left alone, which would be the case in this example? It looks like the only way I can enforce readable content is to inject paragraphs.
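A minimal sketch of the idea (function names and the exact regex are assumptions, not the code of the repo's styling script): before merging a docstring line into the previous one, check whether it starts a bullet or numbered item and, if so, keep the line break.

```python
# Sketch of a list-aware line merge for a docstring re-wrapper.
# Illustrative only; this is not the actual style script used by the repository.
import re

# Matches lines that begin a list item: "1. foo", "- foo" or "* foo".
_LIST_ITEM = re.compile(r"^\s*(\d+\.|[-*])\s+")

def merge_lines(lines):
    """Join consecutive lines, but never absorb a list item into the previous line."""
    merged = []
    for line in lines:
        if merged and line.strip() and not _LIST_ITEM.match(line):
            merged[-1] = merged[-1].rstrip() + " " + line.strip()
        else:
            merged.append(line.rstrip())
    return merged

print(merge_lines([
    "number of training steps is either",
    "1. args.max_steps if --max_steps > 1",
    "2. else derive from dataset if we can get its size",
]))
# The three lines stay separate because the last two match the list pattern.
```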
Thank you!
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9408/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9407/comments | https://api.github.com/repos/huggingface/transformers/issues/9407/events | https://github.com/huggingface/transformers/pull/9407 | 778,419,886 | MDExOlB1bGxSZXF1ZXN0NTQ4NTEwNjk0 | 9,407 | Allow example to use a revision and work with private models | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"+1 on the `model_revision` part\r\n\r\nOn the `use_auth_token`, I was thinking we could try to implement auto-use of the token _iff_ the model is not public (i.e. send the token for non-existent models and private models, as the server doesn't make a difference for those two cases when unauthorized – you get a 404 in both cases – said differently if you don't have access to a model you shouldn't know whether it's an existing private model)\r\n\r\nThis will require some implementation changes in file_utils though so it might take a bit of time. \r\n\r\nIf you think it's helpful to expose this PR's manual option first, I'm ok with that.",
"@LysandreJik Yep totally right! I won't personally get around to adding the feature in file_utils/huggingface_hub in the next 2-3 weeks though, so maybe worth it to merge it like this in the meantime:)",
"I think it's important to provide the option right now to let the user play with their private models for those scripts. We can have that flag become `None` later on and default to the right thing when the implementation in `file_utils` permits it then remove it entirely a bit later.",
"sounds good",
"Sounds good!"
] | 1,609 | 1,609 | 1,609 | COLLABORATOR | null | # What does this PR do?
This PR adds the ability to:
- pick a particular revision for a model checkpoint
- use private models when the user is logged in
with the example scripts.
I only did `run_glue` as a proof of concept; I will duplicate the change to all examples and the new example template if this suits everyone.
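For illustration, here is roughly how a script can consume such options via `from_pretrained` (a hedged sketch; the model name and revision string are example values, not copied from this PR, and `use_auth_token=True` assumes a prior `transformers-cli login`):

```python
# Sketch: loading a specific revision of a (possibly private) checkpoint.
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-cased"   # could also be a private "username/model" repo
revision = "main"                # a branch name, tag, or commit sha

config = AutoConfig.from_pretrained(model_name, revision=revision, use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision, use_auth_token=True)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, revision=revision, use_auth_token=True
)
```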
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9407",
"html_url": "https://github.com/huggingface/transformers/pull/9407",
"diff_url": "https://github.com/huggingface/transformers/pull/9407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9407.patch",
"merged_at": 1609933763000
} |
https://api.github.com/repos/huggingface/transformers/issues/9406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9406/comments | https://api.github.com/repos/huggingface/transformers/issues/9406/events | https://github.com/huggingface/transformers/issues/9406 | 778,378,201 | MDU6SXNzdWU3NzgzNzgyMDE= | 9,406 | Unable to train xlnet with tensorflow | {
"login": "nicolas-ferland",
"id": 76973180,
"node_id": "MDQ6VXNlcjc2OTczMTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/76973180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicolas-ferland",
"html_url": "https://github.com/nicolas-ferland",
"followers_url": "https://api.github.com/users/nicolas-ferland/followers",
"following_url": "https://api.github.com/users/nicolas-ferland/following{/other_user}",
"gists_url": "https://api.github.com/users/nicolas-ferland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicolas-ferland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicolas-ferland/subscriptions",
"organizations_url": "https://api.github.com/users/nicolas-ferland/orgs",
"repos_url": "https://api.github.com/users/nicolas-ferland/repos",
"events_url": "https://api.github.com/users/nicolas-ferland/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicolas-ferland/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Can you try with master instead of the old `2.0.0` release? In order to know if the problem is still here or not.",
"By \"with master\", do you mean installed from source?\r\n\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers\r\npip install -e .",
"After installing from source transformers 4.2.0.dev0,\r\n\r\nI have this error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-2-1503c944af5c> in <module>\r\n----> 1 from transformers import AutoTokenizer, TFAutoModel\r\n\r\n~/transformers/src/transformers/__init__.py in <module>\r\n 38 \r\n 39 # Data\r\n---> 40 from .data import (\r\n 41 DataProcessor,\r\n 42 InputExample,\r\n\r\n~/transformers/src/transformers/data/__init__.py in <module>\r\n 18 \r\n 19 from .metrics import glue_compute_metrics, xnli_compute_metrics\r\n---> 20 from .processors import (\r\n 21 DataProcessor,\r\n 22 InputExample,\r\n\r\n~/transformers/src/transformers/data/processors/__init__.py in <module>\r\n 18 \r\n 19 from .glue import glue_convert_examples_to_features, glue_output_modes, glue_processors, glue_tasks_num_labels\r\n---> 20 from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features\r\n 21 from .utils import DataProcessor, InputExample, InputFeatures, SingleSentenceClassificationProcessor\r\n 22 from .xnli import xnli_output_modes, xnli_processors, xnli_tasks_num_labels\r\n\r\n~/transformers/src/transformers/data/processors/squad.py in <module>\r\n 22 \r\n 23 from ...file_utils import is_tf_available, is_torch_available\r\n---> 24 from ...models.bert.tokenization_bert import whitespace_tokenize\r\n 25 from ...tokenization_utils_base import BatchEncoding, PreTrainedTokenizerBase, TruncationStrategy\r\n 26 from ...utils import logging\r\n\r\n~/transformers/src/transformers/models/bert/__init__.py in <module>\r\n 43 \r\n 44 if is_tf_available():\r\n---> 45 from .modeling_tf_bert import (\r\n 46 TF_BERT_PRETRAINED_MODEL_ARCHIVE_LIST,\r\n 47 TFBertEmbeddings,\r\n\r\n~/transformers/src/transformers/models/bert/modeling_tf_bert.py in <module>\r\n 21 import tensorflow as tf\r\n 22 \r\n---> 23 from ...activations_tf import get_tf_activation\r\n 24 from ...file_utils import (\r\n 25 MULTIPLE_CHOICE_DUMMY_INPUTS,\r\n\r\n~/transformers/src/transformers/activations_tf.py in <module>\r\n 66 \"gelu\": tf.keras.layers.Activation(gelu),\r\n 67 \"relu\": tf.keras.activations.relu,\r\n---> 68 \"swish\": tf.keras.activations.swish,\r\n 69 \"silu\": tf.keras.activations.swish,\r\n 70 \"gelu_new\": tf.keras.layers.Activation(gelu_new),\r\n\r\nAttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'\r\n```\r\n\r\nIt's due to the line of code:\r\nfrom transformers import AutoTokenizer, TFAutoModel",
"The next release of transformers (from source) now requires TF >= 2.3",
"It seems to work now, but I have a lot of warnings. Should I be worried about any of them?\r\n\r\n```\r\nWARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb1e4e82280>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb1e4e82280>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nWARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fb1e4e82280>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nWARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in 
the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.\r\n```",
"They look ok for me!",
"I'm fine-tuning the model on my dataset and the accuracy is 0.02 after one epoch and it didn't really change during training. Also, it takes 7 hours per epoch. I'm wondering if the low accuracy might be due to the things mentioned in the warnings.\r\n\r\n`WARNING:tensorflow:Gradients do not exist for variables ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._3/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._4/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._5/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._6/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._7/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._8/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._9/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._10/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._11/rel_attn/seg_embed:0'] when minimizing the loss.`\r\nIf there are no gradients, it cannot learn. Do you know if it's really correct that all those layers have no gradient? I would expect the layers to have gradients.\r\n\r\n```\r\nWARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fa069898210>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nWARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fa069898210>> and will run it as-is.\r\n```\r\nThis warning seems to say that it is a bug worth mentioning to the TensorFlow team. 
Could it be the cause of the bad training time?\r\n\r\n`The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).`\r\nDo those parameters must be set? I have no set them or tried to modify them, I'm simply using the .fit method that all tensorflow 2 models have.\r\n",
"Also, do we need to use training=True somewhere? It's mentioned to use it when using the call of the model, but I'm calling .fit() rather than using the model's call, so I don't have this option as far as I know.\r\n\r\nI'm asking because the training doesn't seem to be working and I'm wondering if it could be the problem.",
"Unfortunately, no issues for me with `TFXLNetForSequenceClassification`, just test over the MRPC dataset and got around 0.93 accuracy on training and 0.83 accuracy on validation. The version I used is the master branch from source.\r\n\r\nThe issue might come from the way you are training the model. Are you using the same one that you shared in your first post?",
"yes, but some parameters are different. For example, I needed a batchsize of 1 to fit in memory. Even 2 crashes. Also, I'm using callbacks. But I don't think it can make that the model doesn't learn.\r\n\r\n # Save the model after each epoch.\r\n ModelCheckpoint_callback = tf.keras.callbacks.ModelCheckpoint(\r\n filepath = self.params['save_model_weight_filepath']+'_{epoch:02d}.hdf5', \r\n monitor='val_loss', verbose=0, save_best_only=False,\r\n save_weights_only=False, mode='auto', save_freq='epoch'\r\n )\r\n \r\n # Stop when val loss stops decreasing.\r\n EarlyStopping_callback = tf.keras.callbacks.EarlyStopping(\r\n monitor='val_loss', min_delta=self.params['min_delta'], patience=self.params['patience'], verbose=0,\r\n mode='auto', baseline=None, restore_best_weights=True\r\n\r\n history = self.clf.fit(x=padded_inputs, y=y,\r\n batch_size=1,\r\n epochs=40,\r\n verbose=1,\r\n validation_split=0.2,\r\n max_queue_size=10,\r\n workers=-1,\r\n use_multiprocessing=True,\r\n callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])\r\n\r\nAfter epoch 2, the accuracies and losses are loss: 6.7701 - accuracy: 0.0197 - val_loss: 11.5031 - val_accuracy: 0.0025\r\n\r\nEpoch 3 is still in process with loss: 6.7662 - accuracy: 0.0204\r\n\r\nIt doesn't seem to learn at all.\r\n\r\nAlso, I have this warning \r\n`WARNING:tensorflow:Callbacks method `on_test_batch_end` is slow compared to the batch time (batch time: 0.0089s vs `on_test_batch_end` time: 0.6593s). Check your callbacks.`\r\nbut none of my callbacks are used on batch_end, they are used on epoch ends, so infrequently and shouldn't affect the time too much.",
"Ok, from what I see in your script, the reason why your model don't learn anything is because the labels are not seen by the model which is normal with the way you set your dataset. The models in the lib have to be feed in a specific way, the data have to be a `Tuple(x, y)` where `x` can be either a list or a dict with tf.Tensor or np.ndarray, same for `y`. And then feed your model with:\r\n\r\n```python\r\nhistory = model.fit(\r\n train_dataset,\r\n epochs=3,\r\n)\r\n```\r\n\r\nYou can take example on how to do this in our examples or on our datasets website https://huggingface.co/docs/datasets/torch_tensorflow.html to know how to format your dataset.",
"I tried that (using a tuple with x and y, my x and y were already numpy arrays) and I got an error.\r\n\r\n```\r\n~/ticket-analysis-releasev3/ticket-analysis/src/model/xlnet/xlnet.py in fit(self, df)\r\n 181 workers=self.params['workers'],\r\n 182 use_multiprocessing=self.params['use_multiprocessing'],\r\n--> 183 callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])\r\n 184 self.history_df = pd.DataFrame({'epochs':history.epoch, 'loss': history.history['loss'], \r\n 185 'validation_loss': history.history['val_loss'], 'accuracy': history.history['accuracy'],\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)\r\n 106 def _method_wrapper(self, *args, **kwargs):\r\n 107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access\r\n--> 108 return method(self, *args, **kwargs)\r\n 109 \r\n 110 # Running inside `run_distribute_coordinator` already.\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)\r\n 1096 batch_size=batch_size):\r\n 1097 callbacks.on_train_batch_begin(step)\r\n-> 1098 tmp_logs = train_function(iterator)\r\n 1099 if data_handler.should_sync:\r\n 1100 context.async_wait()\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)\r\n 778 else:\r\n 779 compiler = \"nonXla\"\r\n--> 780 result = self._call(*args, **kwds)\r\n 781 \r\n 782 new_tracing_count = self._get_tracing_count()\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)\r\n 821 # This is the first call of __call__, so we have to initialize.\r\n 822 initializers = []\r\n--> 823 self._initialize(args, kwds, add_initializers_to=initializers)\r\n 824 finally:\r\n 825 # At this point we know that the initialization is complete (or less\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)\r\n 695 self._concrete_stateful_fn = (\r\n 696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access\r\n--> 697 *args, **kwds))\r\n 698 \r\n 699 def invalid_creator_scope(*unused_args, **unused_kwds):\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)\r\n 2853 args, kwargs = None, None\r\n 2854 with self._lock:\r\n-> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs)\r\n 2856 return graph_function\r\n 2857 \r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)\r\n 3211 \r\n 3212 self._function_cache.missed.add(call_context_key)\r\n-> 3213 graph_function = self._create_graph_function(args, kwargs)\r\n 3214 self._function_cache.primary[cache_key] = graph_function\r\n 3215 return graph_function, args, kwargs\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)\r\n 3073 arg_names=arg_names,\r\n 3074 override_flat_arg_shapes=override_flat_arg_shapes,\r\n-> 3075 capture_by_value=self._capture_by_value),\r\n 
3076 self._function_attributes,\r\n 3077 function_spec=self.function_spec,\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)\r\n 984 _, original_func = tf_decorator.unwrap(python_func)\r\n 985 \r\n--> 986 func_outputs = python_func(*func_args, **func_kwargs)\r\n 987 \r\n 988 # invariant: `func_outputs` contains only Tensors, CompositeTensors,\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)\r\n 598 # __wrapped__ allows AutoGraph to swap in a converted function. We give\r\n 599 # the function a weak reference to itself to avoid a reference cycle.\r\n--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n 601 weak_wrapped_fn = weakref.ref(wrapped_fn)\r\n 602 \r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)\r\n 971 except Exception as e: # pylint:disable=broad-except\r\n 972 if hasattr(e, \"ag_error_metadata\"):\r\n--> 973 raise e.ag_error_metadata.to_exception(e)\r\n 974 else:\r\n 975 raise\r\n\r\nValueError: in user code:\r\n\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function *\r\n return step_function(self, iterator)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:796 step_function **\r\n outputs = model.distribute_strategy.run(run_step, args=(data,))\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run\r\n return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica\r\n return self._call_for_each_replica(fn, args, kwargs)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica\r\n return fn(*args, **kwargs)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:789 run_step **\r\n outputs = model.train_step(data)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:757 train_step\r\n self.trainable_variables)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:2737 _minimize\r\n trainable_variables))\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:562 _aggregate_gradients\r\n filtered_grads_and_vars = _filter_grads(grads_and_vars)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1271 _filter_grads\r\n ([v.name for _, v in grads_and_vars],))\r\n\r\n ValueError: No gradients provided for any variable: ['tfxl_net_for_sequence_classification/transformer/mask_emb:0', 'tfxl_net_for_sequence_classification/transformer/word_embedding/weight:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/o:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._0/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_1/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_2/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._1/ff/layer_2/bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/q:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/k:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/v:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/o:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_r_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_s_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/r_w_bias:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/seg_embed:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/rel_attn/layer_norm/beta:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_norm/gamma:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_norm/beta:0', 
'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_1/kernel:0', 'tfxl_net_for_sequence_classification/transformer/layer_._2/ff/layer_1/bias:0', ... (the remaining newly initialized variables for transformer layers 2-11 follow the same pattern and are elided) ..., 'tfxl_net_for_sequence_classification/sequence_summary/summary/kernel:0', 'tfxl_net_for_sequence_classification/sequence_summary/summary/bias:0', 'tfxl_net_for_sequence_classification/logits_proj/kernel:0', 'tfxl_net_for_sequence_classification/logits_proj/bias:0'].\r\n```",
"I also tried with dataset, but I got this error\r\n\r\n```\r\nSome layers from the model checkpoint at xlnet-base-cased were not used when initializing TFXLNetForSequenceClassification: ['lm_loss']\r\n- This IS expected if you are initializing TFXLNetForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing TFXLNetForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome layers of TFXLNetForSequenceClassification were not initialized from the model checkpoint at xlnet-base-cased and are newly initialized: ['sequence_summary', 'logits_proj']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nEpoch 1/40\r\nWARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f21748f1210>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nWARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f21748f1210>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-6-db016531efec> in <module>\r\n 1 # Try to use tensorflow dataset\r\n 2 st = time.time()\r\n----> 3 model.fit(df)\r\n 4 print('time', time.time()-st)\r\n\r\n~/ticket-analysis-releasev3/ticket-analysis/src/model/xlnet/xlnet.py in fit(self, df)\r\n 187 workers=self.params['workers'],\r\n 188 use_multiprocessing=self.params['use_multiprocessing'],\r\n--> 189 callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])\r\n 190 self.history_df = pd.DataFrame({'epochs':history.epoch, 'loss': history.history['loss'], \r\n 191 'validation_loss': history.history['val_loss'], 'accuracy': history.history['accuracy'],\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)\r\n 106 def _method_wrapper(self, *args, **kwargs):\r\n 107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access\r\n--> 108 return method(self, *args, **kwargs)\r\n 109 \r\n 110 # Running inside `run_distribute_coordinator` already.\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in 
fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)\r\n 1096 batch_size=batch_size):\r\n 1097 callbacks.on_train_batch_begin(step)\r\n-> 1098 tmp_logs = train_function(iterator)\r\n 1099 if data_handler.should_sync:\r\n 1100 context.async_wait()\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)\r\n 778 else:\r\n 779 compiler = \"nonXla\"\r\n--> 780 result = self._call(*args, **kwds)\r\n 781 \r\n 782 new_tracing_count = self._get_tracing_count()\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)\r\n 821 # This is the first call of __call__, so we have to initialize.\r\n 822 initializers = []\r\n--> 823 self._initialize(args, kwds, add_initializers_to=initializers)\r\n 824 finally:\r\n 825 # At this point we know that the initialization is complete (or less\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)\r\n 695 self._concrete_stateful_fn = (\r\n 696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access\r\n--> 697 *args, **kwds))\r\n 698 \r\n 699 def invalid_creator_scope(*unused_args, **unused_kwds):\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)\r\n 2853 args, kwargs = None, None\r\n 2854 with self._lock:\r\n-> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs)\r\n 2856 return graph_function\r\n 2857 \r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)\r\n 3211 \r\n 3212 self._function_cache.missed.add(call_context_key)\r\n-> 3213 graph_function = self._create_graph_function(args, kwargs)\r\n 3214 self._function_cache.primary[cache_key] = graph_function\r\n 3215 return graph_function, args, kwargs\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)\r\n 3073 arg_names=arg_names,\r\n 3074 override_flat_arg_shapes=override_flat_arg_shapes,\r\n-> 3075 capture_by_value=self._capture_by_value),\r\n 3076 self._function_attributes,\r\n 3077 function_spec=self.function_spec,\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)\r\n 984 _, original_func = tf_decorator.unwrap(python_func)\r\n 985 \r\n--> 986 func_outputs = python_func(*func_args, **func_kwargs)\r\n 987 \r\n 988 # invariant: `func_outputs` contains only Tensors, CompositeTensors,\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)\r\n 598 # __wrapped__ allows AutoGraph to swap in a converted function. 
We give\r\n 599 # the function a weak reference to itself to avoid a reference cycle.\r\n--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds)\r\n 601 weak_wrapped_fn = weakref.ref(wrapped_fn)\r\n 602 \r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)\r\n 971 except Exception as e: # pylint:disable=broad-except\r\n 972 if hasattr(e, \"ag_error_metadata\"):\r\n--> 973 raise e.ag_error_metadata.to_exception(e)\r\n 974 else:\r\n 975 raise\r\n\r\nValueError: in user code:\r\n\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function *\r\n return step_function(self, iterator)\r\n /home/jovyan/transformers/src/transformers/models/xlnet/modeling_tf_xlnet.py:1452 call *\r\n transformer_outputs = self.transformer(\r\n /home/jovyan/transformers/src/transformers/models/xlnet/modeling_tf_xlnet.py:625 call *\r\n inputs[\"input_ids\"] = tf.transpose(inputs[\"input_ids\"], perm=(1, 0))\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper **\r\n return target(*args, **kwargs)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2107 transpose_v2\r\n return transpose(a=a, perm=perm, name=name, conjugate=conjugate)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper\r\n return target(*args, **kwargs)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2188 transpose\r\n return transpose_fn(a, perm, name=name)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py:11535 transpose\r\n \"Transpose\", x=x, perm=perm, name=name)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper\r\n attrs=attr_protos, op_def=op_def)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py:593 _create_op_internal\r\n compute_device)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:3485 _create_op_internal\r\n op_def=op_def)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1975 __init__\r\n control_input_ops, op_def)\r\n /home/jovyan/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1815 _create_c_op\r\n raise ValueError(str(e))\r\n\r\n ValueError: Dimension must be 3 but is 2 for '{{node tfxl_net_for_sequence_classification/transformer/transpose}} = Transpose[T=DT_INT32, Tperm=DT_INT32](IteratorGetNext, tfxl_net_for_sequence_classification/transformer/transpose/perm)' with input shapes: [1,11929,2000], [2].\r\n```\r\n\r\nFor dataset, I used this code\r\n```\r\n train_dataset = (padded_inputs, y)\r\n train_dataset, val_dataset = train_test_split(train_dataset, test_size=0.02)\r\n train_dataset = tf.data.Dataset.from_tensors(train_dataset)\r\n val_dataset = tf.data.Dataset.from_tensors(val_dataset)\r\n \r\n \r\n # Fit model\r\n self.clf = TFAutoModelForSequenceClassification.from_pretrained(\"xlnet-base-cased\", num_labels=self.n_label)\r\n loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\n self.clf.compile(optimizer='adam',loss=loss,metrics=['accuracy'])\r\n history = self.clf.fit(\r\n train_dataset, #x=padded_inputs, y=y,\r\n validation_data = val_dataset,\r\n batch_size=self.params['batchsize'],\r\n epochs=self.params['epochs'],\r\n verbose=1,\r\n 
#validation_split=self.params['validation_split'],\r\n max_queue_size=self.params['max_queue_size'],\r\n workers=self.params['workers'],\r\n use_multiprocessing=self.params['use_multiprocessing'],\r\n callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])\r\n```",
"This is still not ok. Here an example for MRPC, from which your can take inspiration from:\r\n```\r\nimport tensorflow as tf\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\r\ndataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True)\r\n\r\ndataset.set_format(type='numpy', columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])\r\nfeatures = {x: dataset[x] for x in ['input_ids', 'token_type_ids', 'attention_mask']}\r\ntfdataset = tf.data.Dataset.from_tensor_slices((features, dataset[\"label\"])).batch(1)\r\n```\r\n\r\nAnd your data must have the shape:\r\n```\r\n<BatchDataset shapes: ({input_ids: (None, 512), token_type_ids: (None, 512), attention_mask: (None, 512)}, (None,)), types: ({input_ids: tf.int64, token_type_ids: tf.int64, attention_mask: tf.int64}, tf.int32)>\r\n```\r\nHere it is a tuple, where the first element is a dict that has tensors (built from numpy arrays), and the label is a label id.",
"What kind of format or object is the dataset obtained from\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\n?\r\nI'm not loading a public dataset but using my own so I can't take this part from the code. Do you know how I can generate it from a input numpy array and a label numpy array?\r\n\r\nAlso, I don't think xlnet uses 'token_type_ids', 'attention_mask', should I use\r\ndataset.set_format(type='numpy', columns=['input_ids', 'label'])\r\nfeatures = {x: dataset[x] for x in ['input_ids']}\r\n",
"The example I gave is just to show you how should looks like your dataset. And yes XLNet can take both attention_mask and token_type_ids arguments. The steps are simple:\r\n1. Tokenize your dataset\r\n2. Create a tf.data.Dataset and format it to make it looking like I showed you: `({\"input_ids\": [[ex1],[ex2],...], \"attention_mask\":[[ex1],[ex2],...], \"token_type_ids\":[ex1],[ex2],...]}, [label_id_ex1, label_id_ex_2,...])`\r\n3. Run your training with `model.fit(training_dataset, epochs=3)`\r\n\r\nAnd that's it.\r\n\r\n",
"Okay, I'm using\r\n\r\n x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.02)\r\n tokenized_inputs = [xlnet_tokenizer.encode(text) for text in x_train.values.tolist()]\r\n max_length = max(1,min(np.array([len(inp) for inp in tokenized_inputs]).max(), self.params['MAX_LENGTH']))\r\n padded_inputs = (tf.keras.preprocessing.sequence.pad_sequences(tokenized_inputs, maxlen=max_length, \r\n value=0,\r\n padding='post', truncating='post',dtype='int32'))\r\n train_dataset = tf.data.Dataset.from_tensor_slices(({\"input_ids\":padded_inputs}, y_train)).batch(1)\r\n val_tokenized_inputs = [xlnet_tokenizer.encode(text) for text in x_val.values.tolist()]\r\n val_padded_inputs = (tf.keras.preprocessing.sequence.pad_sequences(val_tokenized_inputs, maxlen=max_length, \r\n value=0,\r\n padding='post', truncating='post',dtype='int32'))\r\n val_dataset = tf.data.Dataset.from_tensor_slices(({\"input_ids\":x_val}, y_val)).batch(1)\r\n print('dataset',train_dataset,val_dataset)\r\n history = self.clf.fit(\r\n train_dataset,\r\n validation_data = val_dataset,\r\n batch_size=self.params['batchsize'],\r\n epochs=self.params['epochs'],\r\n verbose=1,\r\n max_queue_size=self.params['max_queue_size'],\r\n workers=self.params['workers'],\r\n use_multiprocessing=self.params['use_multiprocessing'],\r\n callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])\r\n\r\nThe format is <BatchDataset shapes: ({input_ids: (None, 2000)}, (None,)), types: ({input_ids: tf.int32}, tf.int16)>\r\n\r\nShould it work?\r\n\r\nAlso, for the prediction, is it correct to use\r\npred_dataset = tf.data.Dataset.from_tensor_slices(({\"input_ids\":x_train})).batch(1)\r\nsince the model should not use the y at prediction time.\r\n\r\nI'm getting this error right now\r\n```\r\n---------------------------------------------------------------------------\r\nInternalError Traceback (most recent call last)\r\n<ipython-input-6-db016531efec> in <module>\r\n 1 # Try to use tensorflow dataset\r\n 2 st = time.time()\r\n----> 3 model.fit(df)\r\n 4 print('time', time.time()-st)\r\n\r\n~/ticket-analysis-releasev3/ticket-analysis/src/model/xlnet/xlnet.py in fit(self, df)\r\n 168 \r\n 169 # Fit model\r\n--> 170 self.clf = TFAutoModelForSequenceClassification.from_pretrained(\"xlnet-base-cased\", num_labels=self.n_label)\r\n 171 loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\n 172 self.clf.compile(optimizer='adam',loss=loss,metrics=['accuracy'])\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 384 return TFBertForSequenceClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)\r\n 385 elif 'xlnet' in pretrained_model_name_or_path:\r\n--> 386 return TFXLNetForSequenceClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)\r\n 387 elif 'xlm' in pretrained_model_name_or_path:\r\n 388 return TFXLMForSequenceClassification.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 266 \r\n 267 inputs = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])\r\n--> 268 ret = model(inputs, training=False) # build the network with dummy inputs\r\n 269 \r\n 270 assert os.path.isfile(resolved_archive_file), \"Error retrieving file 
{}\".format(resolved_archive_file)\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)\r\n 983 \r\n 984 with ops.enable_auto_cast_variables(self._compute_dtype_object):\r\n--> 985 outputs = call_fn(inputs, *args, **kwargs)\r\n 986 \r\n 987 if self._activity_regularizer:\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py in call(self, inputs, **kwargs)\r\n 911 \r\n 912 def call(self, inputs, **kwargs):\r\n--> 913 transformer_outputs = self.transformer(inputs, **kwargs)\r\n 914 output = transformer_outputs[0]\r\n 915 \r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)\r\n 983 \r\n 984 with ops.enable_auto_cast_variables(self._compute_dtype_object):\r\n--> 985 outputs = call_fn(inputs, *args, **kwargs)\r\n 986 \r\n 987 if self._activity_regularizer:\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py in call(self, inputs, attention_mask, mems, perm_mask, target_mapping, token_type_ids, input_mask, head_mask, training)\r\n 607 \r\n 608 ##### Positional encoding\r\n--> 609 pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz, dtype=dtype_float)\r\n 610 pos_emb = self.dropout(pos_emb, training=training)\r\n 611 \r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py in relative_positional_encoding(self, qlen, klen, bsz, dtype)\r\n 490 if self.clamp_len > 0:\r\n 491 fwd_pos_seq = tf.clip_by_value(fwd_pos_seq, -clamp_len, clamp_len)\r\n--> 492 pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz)\r\n 493 \r\n 494 return pos_emb\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py in positional_embedding(pos_seq, inv_freq, bsz)\r\n 437 @staticmethod\r\n 438 def positional_embedding(pos_seq, inv_freq, bsz=None):\r\n--> 439 sinusoid_inp = tf.einsum('i,d->id', pos_seq, inv_freq)\r\n 440 pos_emb = tf.concat([tf.sin(sinusoid_inp), tf.cos(sinusoid_inp)], axis=-1)\r\n 441 pos_emb = pos_emb[:, None, :]\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)\r\n 199 \"\"\"Call target, and fall back on dispatchers if there is a TypeError.\"\"\"\r\n 200 try:\r\n--> 201 return target(*args, **kwargs)\r\n 202 except (TypeError, ValueError):\r\n 203 # Note: convert_to_eager_tensor currently raises a ValueError, not a\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/ops/special_math_ops.py in einsum(equation, *inputs, **kwargs)\r\n 682 - number of inputs or their shapes are inconsistent with `equation`.\r\n 683 \"\"\"\r\n--> 684 return _einsum_v2(equation, *inputs, **kwargs)\r\n 685 \r\n 686 \r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/ops/special_math_ops.py in _einsum_v2(equation, *inputs, **kwargs)\r\n 1111 if ellipsis_label:\r\n 1112 resolved_equation = resolved_equation.replace(ellipsis_label, '...')\r\n-> 1113 return gen_linalg_ops.einsum(inputs, resolved_equation)\r\n 1114 \r\n 1115 # Send fully specified shapes to opt_einsum, since it cannot handle unknown\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/ops/gen_linalg_ops.py in einsum(inputs, equation, name)\r\n 1086 return _result\r\n 1087 except _core._NotOkStatusException as e:\r\n-> 1088 _ops.raise_from_not_ok_status(e, name)\r\n 1089 except _core._FallbackException:\r\n 1090 pass\r\n\r\n~/.local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in 
raise_from_not_ok_status(e, name)\r\n 6841 message = e.message + (\" name: \" + name if name is not None else \"\")\r\n 6842 # pylint: disable=protected-access\r\n-> 6843 six.raise_from(core._status_to_exception(e.code, message), None)\r\n 6844 # pylint: enable=protected-access\r\n 6845 \r\n\r\n/opt/conda/lib/python3.7/site-packages/six.py in raise_from(value, from_value)\r\n\r\nInternalError: Blas xGEMM launch failed : a.shape=[1,1,10], b.shape=[1,1,384], m=10, n=384, k=1 [Op:Einsum]\r\n```",
"This error means that your GPU doesn't have enough RAM to run an einsum operation. But yes, your dataset looks better. Still, you are not properly using the tokenizer, use a proper way to use it:\r\n```\r\nfrom transformers import XLNetTokenizer\r\ntokenizer = XLNetTokenizer.from_pretrained(\"xlnet-base-cased\")\r\ntokenizer(\"hello\")\r\n``` \r\nTo get tokenized data that looks like:\r\n```\r\n{'input_ids': [24717, 4, 3], 'token_type_ids': [0, 0, 2], 'attention_mask': [1, 1, 1]}\r\n```",
"I'm getting\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-1-4dd755fb81bd> in <module>\r\n 1 from transformers import XLNetTokenizer\r\n 2 tokenizer = XLNetTokenizer.from_pretrained(\"xlnet-base-cased\")\r\n----> 3 tokenizer(\"hello\")\r\n\r\nTypeError: 'XLNetTokenizer' object is not callable\r\n```",
"Which version of transformers are you using?",
"It went back to '2.0.0'. I don't know why but I'm trying to et 4.2.0 again.",
"Please stick to the 4.2.0 release :)",
"Do we need int64 or is int32 enough? I'm memory limited so anything that allow me to use less memory would help.",
"int32 is enough.",
"It is running again, but still not training.\r\n\r\nI'm using\r\n```\r\n x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=self.params['validation_split'])\r\n # train\r\n tokenized_inputs = xlnet_tokenizer(x_train.values.tolist(), padding=True, max_length=self.params['MAX_LENGTH'], truncation=True)\r\n numpy_inputs = {x:np.array(tokenized_inputs[x]) for x in tokenized_inputs.keys()}\r\n train_dataset = tf.data.Dataset.from_tensor_slices((numpy_inputs, y_train)).batch(1)\r\n # val\r\n tokenized_inputs = xlnet_tokenizer(x_val.values.tolist(), padding=True, max_length=self.params['MAX_LENGTH'], truncation=True)\r\n numpy_inputs = {x:np.array(tokenized_inputs[x]) for x in tokenized_inputs.keys()}\r\n val_dataset = tf.data.Dataset.from_tensor_slices((numpy_inputs, y_val)).batch(1)\r\n print(train_dataset,val_dataset)\r\n history = self.clf.fit(\r\n train_dataset,\r\n validation_data = val_dataset,\r\n batch_size=self.params['batchsize'],\r\n epochs=self.params['epochs'],\r\n verbose=1,\r\n max_queue_size=self.params['max_queue_size'],\r\n workers=self.params['workers'],\r\n use_multiprocessing=self.params['use_multiprocessing'],\r\n callbacks=[ModelCheckpoint_callback, EarlyStopping_callback])\r\n```\r\n\r\nThe shapes are\r\n`<BatchDataset shapes: ({input_ids: (None, 500), token_type_ids: (None, 500), attention_mask: (None, 500)}, (None,)), types: ({input_ids: tf.int64, token_type_ids: tf.int64, attention_mask: tf.int64}, tf.int16)> <BatchDataset shapes: ({input_ids: (None, 500), token_type_ids: (None, 500), attention_mask: (None, 500)}, (None,)), types: ({input_ids: tf.int64, token_type_ids: tf.int64, attention_mask: tf.int64}, tf.int16)>`\r\n\r\nThe results are:\r\nepochs | loss | validation_loss | accuracy | validation_accuracy\r\n-- | -- | -- | -- | --\r\n0 | 6.980901 | 7.149147 | 0.015823 | 0.047779\r\n1 | 7.054768 | 7.217787 | 0.015194 | 0.047779\r\n2 | 7.099029 | 7.474302 | 0.014880 | 0.047779\r\n3 | 7.145690 | 7.359528 | 0.015509 | 0.047779\r\n4 | 7.183013 | 7.395905 | 0.013937 | 0.005448\r\n5 | 7.210382 | 7.452353 | 0.016137 | 0.047779\r\n\r\n",
"Ok, now your data looks correct. Can you try with just:\r\n```\r\nself.clf.fit(train_dataset, epochs=self.params['epochs'])\r\n```\r\n\r\nIf it still not working, try to use multiple other model such as Bert and see if it is still the cases. ",
"It's not working with that, and it's not working with 'bert-base-uncased' either.\r\n\r\nThis is the output for 'bert-base-uncased'.\r\n```\r\nDownloading: 100%\r\n433/433 [00:02<00:00, 199B/s]\r\n\r\nDownloading: 100%\r\n232k/232k [00:00<00:00, 1.25MB/s]\r\n\r\nDownloading: 100%\r\n466k/466k [00:00<00:00, 1.32MB/s]\r\n\r\ndataset <BatchDataset shapes: ({input_ids: (None, 500), token_type_ids: (None, 500), attention_mask: (None, 500)}, (None,)), types: ({input_ids: tf.int32, token_type_ids: tf.int32, attention_mask: tf.int32}, tf.int16)> <BatchDataset shapes: ({input_ids: (None, 500), token_type_ids: (None, 500), attention_mask: (None, 500)}, (None,)), types: ({input_ids: tf.int32, token_type_ids: tf.int32, attention_mask: tf.int32}, tf.int16)>\r\nDownloading: 100%\r\n536M/536M [00:11<00:00, 45.4MB/s]\r\n\r\nAll model checkpoint layers were used when initializing TFBertForSequenceClassification.\r\n\r\nSome layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nEpoch 1/40\r\nWARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f0c5c70d210>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nWARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f0c5c70d210>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\n9543/9543 [==============================] - 1784s 187ms/step - loss: 6.9231 - accuracy: 0.0181\r\nEpoch 2/40\r\n9543/9543 [==============================] - 1451s 152ms/step - loss: 7.0684 - accuracy: 0.0153\r\nEpoch 3/40\r\n9543/9543 [==============================] - 1157s 121ms/step - loss: 7.1112 - accuracy: 0.0171\r\nEpoch 4/40\r\n9543/9543 [==============================] - 1160s 122ms/step - loss: 7.1445 - accuracy: 0.0170\r\nEpoch 5/40\r\n9543/9543 [==============================] - 1160s 122ms/step - loss: 7.1869 - accuracy: 0.0159\r\n```\r\n\r\n",
"Is there anything else that could cause the model not to learn?",
"The problem seems not to come from the models, I would guess that they might come from the way you build your data before the creation of the tf.data.Dataset or from the data themselves (sometime we cannot learn anything from the data, it happend). But I cannot be sure of anything without being able to reproduce the same behavior on my side sorry.",
"That's strange because I can get 32% accuracy on validation data using a baseline model that finds the closest text and predicts its label. So it should be possible to learn from the data."
] | 1,609 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: '2.0.0'
- Platform: jupyter notebook
- Python version: 3.7.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.1.0 GPU
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger, @TevenLeScao, @jplu
## Information
Model I am using (Bert, XLNet ...): XLNet
The problem arises when using:
my own modified scripts: (give details below)
The tasks I am working on is:
my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
# Imports needed to run this snippet: numpy, pandas, tensorflow and transformers
import numpy as np
import pandas as pd
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# I get my input, output from a dataframe. It's just a series of text and a series of
# integers representing classes.
x = df['description']
y_label = pd.Categorical(df['target'])
y_cat = y_label.categories
y = y_label.codes
n_label = len(y_cat)
# I use the tokenizer. Then convert it to a numpy array
xlnet_tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
train_tokenized_inputs = [xlnet_tokenizer.encode(text)
for text in x.values.tolist()]
# It needs to be at least 1 and no more than 2000
train_max_length = max(1,min(np.array([len(inp) for inp in train_tokenized_inputs]).max(), 2000))
train_padded_inputs = (tf.keras.preprocessing.sequence.pad_sequences(train_tokenized_inputs, maxlen=train_max_length,
value=0,
padding='post', truncating='post',dtype='int32'))
# I use the xlnet model
clf = TFAutoModelForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=n_label)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
clf.compile(optimizer='adam',loss=loss)
clf.fit(x=train_padded_inputs, y=y,
batch_size=32,
epochs=1,
verbose=1,
callbacks=None,
validation_split=0.2,
validation_data=None,
shuffle=True,
class_weight=None,
sample_weight=None,
initial_epoch=0,
steps_per_epoch=None,
validation_steps=None,
validation_freq=1,
max_queue_size=10,
workers=1,
use_multiprocessing=False,)
```
The error message is:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-68-c147be84f56e> in <module>
15 max_queue_size=10,
16 workers=1,
---> 17 use_multiprocessing=False,)
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
817 max_queue_size=max_queue_size,
818 workers=workers,
--> 819 use_multiprocessing=use_multiprocessing)
820
821 def evaluate(self,
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
233 max_queue_size=max_queue_size,
234 workers=workers,
--> 235 use_multiprocessing=use_multiprocessing)
236
237 total_samples = _get_total_number_of_samples(training_data_adapter)
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
550 batch_size=batch_size,
551 check_steps=False,
--> 552 steps=steps_per_epoch)
553 (x, y, sample_weights,
554 val_x, val_y,
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
2344 # First, we build the model on the fly if necessary.
2345 if not self.inputs:
-> 2346 all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y)
2347 is_build_called = True
2348 else:
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _build_model_with_inputs(self, inputs, targets)
2570 else:
2571 cast_inputs = inputs
-> 2572 self._set_inputs(cast_inputs)
2573 return processed_inputs, targets, is_dict_inputs
2574
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _set_inputs(self, inputs, outputs, training)
2657 kwargs['training'] = training
2658 try:
-> 2659 outputs = self(inputs, **kwargs)
2660 except NotImplementedError:
2661 # This Model or a submodel is dynamic and hasn't overridden
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
771 not base_layer_utils.is_in_eager_or_tf_function()):
772 with auto_control_deps.AutomaticControlDependencies() as acd:
--> 773 outputs = call_fn(cast_inputs, *args, **kwargs)
774 # Wrap Tensors in `outputs` in `tf.identity` to avoid
775 # circular dependencies.
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
235 except Exception as e: # pylint:disable=broad-except
236 if hasattr(e, 'ag_error_metadata'):
--> 237 raise e.ag_error_metadata.to_exception(e)
238 else:
239 raise
TypeError: in converted code:
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_xlnet.py:916 call *
output = self.sequence_summary(output)
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:773 __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:459 call *
output = self.first_dropout(output)
/opt/conda/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py:416 converted_call
return py_builtins.overload_of(f)(*args)
TypeError: 'NoneType' object is not callable
```
In addition, I tried to use TFTrainer in case I could solve my problem with it.
`from transformers import TFTrainer`
This gives the following error message:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-51-aece35bcf827> in <module>
----> 1 from transformers import TFTrainer
ImportError: cannot import name 'TFTrainer' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)
```
## Expected behavior
I expect the code to run and the model to be fine-tuned on my dataset.
I expect that I shouldn't need the TFTrainer, since the explanation on huggingface.co says the model is a standard TensorFlow 2 layer; still, I would expect to be able to import it.
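For reference, a minimal sketch of the input pipeline that the discussion in the comments converges on (a hypothetical example, assuming a transformers 4.x release; the texts, labels, `max_length`, batch size and epoch count below are illustrative placeholders, not part of the original report):
```python
import numpy as np
import pandas as pd
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Illustrative data; in the real setup x would come from df['description'] and y from the label codes.
x = pd.Series(["first ticket description", "second ticket description"])
y = np.array([0, 1], dtype=np.int32)
n_label = int(y.max()) + 1

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
# Calling the tokenizer directly returns input_ids, token_type_ids and attention_mask in one pass.
encodings = tokenizer(x.tolist(), padding=True, truncation=True, max_length=128)
features = {name: np.array(values, dtype=np.int32) for name, values in encodings.items()}

# model.fit expects ({"input_ids": ..., "token_type_ids": ..., "attention_mask": ...}, labels).
train_dataset = tf.data.Dataset.from_tensor_slices((features, y)).batch(2)

model = TFAutoModelForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=n_label)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer="adam", loss=loss, metrics=["accuracy"])
model.fit(train_dataset, epochs=1)
```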
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9406/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9405/comments | https://api.github.com/repos/huggingface/transformers/issues/9405/events | https://github.com/huggingface/transformers/issues/9405 | 778,370,772 | MDU6SXNzdWU3NzgzNzA3NzI= | 9,405 | Retrieval Collapse when fine-tuning RAG | {
"login": "JamesDeAntonis",
"id": 33379057,
"node_id": "MDQ6VXNlcjMzMzc5MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesDeAntonis",
"html_url": "https://github.com/JamesDeAntonis",
"followers_url": "https://api.github.com/users/JamesDeAntonis/followers",
"following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions",
"organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs",
"repos_url": "https://api.github.com/users/JamesDeAntonis/repos",
"events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe @patrickvonplaten or @lhoestq can chime in here.",
"It may very well be that this feature is not implemented. @ola13, @lhoestq do you have more insight here maybe?",
"The gradient does propagate to the question encoder. What configuration did you use ?",
"We used the default configuration and 'compressed' DPR. We are indeed seeing that the gradient is propagating to the question encoder; before training starts, a retriever retrieves related articles, but after training, the retriever universally retrieves the same few documents regardless of the query. That shows that the gradients are propagating, but not well.",
"Following up on this. Any thoughts? We are curious whether there is an issue with the implementation of DPR training in HF",
"Hi @JamesDeAntonis! Retrieval collapse is a problem we encountered in some setups and not necessarily caused by a bug in the retriever - basically what it means is that the passages retrieved at the beginning of the training are not useful enough so the models learns to ignore them.\r\n\r\nWe experienced collapse when training a RAG-Sequence model for FEVER, but we were successful with RAG-Token and RAG-Classifier. An option to move forward here could be:\r\n- try training a RAG-Token generative model (it'd be generating the labels)\r\n- share the classification code, maybe there's some issue there? Are you performing marginalization on top of the BART classification head logprobs?",
"Thanks for the response! Good to hear that you were able to train successfully. When you trained, did you use the two-label or three-label dataset? (we are currently using the three-label) I'm curious whether the inconclusive samples are contributing to the collapse.\r\n\r\n* We are using the final hidden state of RAG-token as input into a classification head, and the model properly trains with the generator and classifier heads unfrozen (just the retriever is frozen in this case). This gets to 72% accuracy, same as the paper. I think this implies that the generator and classifier head are configured properly.\r\n* We are indeed marginalizing on top of the BART classification head logprobs",
"Here is our classification head, mostly taken from HF:\r\n\r\n```{python}\r\nclass BartClassificationHead(Module):\r\n \"\"\"Head for sentence-level classification tasks.\"\"\"\r\n\r\n def __init__(\r\n self,\r\n input_dim: int,\r\n inner_dim: int,\r\n num_classes: int,\r\n pooler_dropout: float,\r\n **config_kwargs\r\n ):\r\n super().__init__(**config_kwargs)\r\n self.dense = Linear(input_dim, inner_dim)\r\n self.dropout = Dropout(p=pooler_dropout)\r\n self.out_proj = Linear(inner_dim, num_classes)\r\n\r\n def forward(self, hidden_states: torch.Tensor):\r\n hidden_states = self.dropout(hidden_states)\r\n hidden_states = self.dense(hidden_states)\r\n hidden_states = torch.tanh(hidden_states)\r\n hidden_states = self.dropout(hidden_states)\r\n hidden_states = self.out_proj(hidden_states)\r\n return hidden_states\r\n```",
"..and here is the high-level model code:\r\n\r\n```{python}\r\n # input_ids shape = (batch_size, 512)\r\n outputs = super().forward(input_ids=input_ids, attention_mask=attention_mask, **rag_kwargs)\r\n\r\n ### the following code is inspired by BartForSequenceClassification forward method\r\n # best practice for bart classification is to use the last hidden state\r\n # hidden.shape=(batch_size * n_documents, 300, 1024)\r\n hidden = outputs.generator_dec_hidden_states[-1] # last hidden state;\r\n #print (hidden)\r\n\r\n # eos_mask.shape = (batch_size * n_documents, 300)\r\n eos_mask = outputs.context_input_ids.eq(self.rag.generator.config.eos_token_id)\r\n\r\n if len(torch.unique(eos_mask.sum(1))) > 1:\r\n raise ValueError(\"All examples must have the same number of <eos> tokens.\")\r\n\r\n # pass along the hidden state at the eos token\r\n # (batch_size * n_documents, 1024)\r\n sentence_representation = hidden[eos_mask, :].view(hidden.size(0), -1, hidden.size(-1))[:, -1, :]\r\n\r\n # (batch_size * n_documents, 1, 3)\r\n document_level_logits = self.classification_head(sentence_representation)\r\n\r\n # finally, marginalize across all the retrieved documents\r\n # (batch_size, 1, 3)\r\n logits = self.marginalize(document_level_logits, outputs.doc_scores)\r\n\r\n # (batch_size, 3)\r\n logits = logits.squeeze(1)\r\n```",
"We were able to train RAG-Token and RAG-Classifier successfully both on 2-way and the 3-way variant of FEVER. One important thing to note though is that those were on our internal `fairseq` implementation.\r\n\r\n> try training a `RagToken` generative model (it'd be generating the labels)\r\n\r\nWhat I meant when suggesting to use `RagToken` would be to use it as-is, without a classification head - it might seem counterintuitive but the generative model is actually able to learn to generate the labels.\r\n\r\nAs for the classification implementation - what you're proposing is quite different from our implementation in `fairseq`. What happens currently in your implementation is that you marginalize twice - once inside the forward pass on `RagToken`, and then again after applying your classification head. What we do instead is the following: \r\n1) take the generator hidden states (not marginalized)\r\n2) apply BART-like classification head on top of that\r\n3) marginalize\r\n\r\nSo basically - you don't want to just add a `BartClassificationHead` on top of `RagToken` hidden states. You want to implement something similar to [`BartForSequenceClassification`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1254-L1346) - a `RagForSequenceClassification` of sorts, doing what I outlined above - if you're interested in implementing that I think it'd be a great contribution to the repo, cc @patrickvonplaten :)\r\n\r\nLet me know if this makes sense!\r\n\r\n",
"Thanks for the advice! I have a few questions\r\n\r\n(1) what makes you think we marginalize twice? `do_marginalize=False` in `RagToken` by default.\r\n(2) What's the difference between a `BartClassificationHead` on top of `Bart`, and `BartClassificationHead` on top of `RagToken`? Isn't the generator of `RagToken` simply `Bart` already, so the final hidden state of `RagToken` == final hidden state of `Bart`?\r\n(3) Did you use 'adam' as your optimizer",
"And yes, I would be delighted to contribute it to the repo :)",
"Hi there! I'm a colleague of @JamesDeAntonis and I just wanted to chime in and clarify that we are using RagTokenForGeneration only as a neat wrapper to have a `self.rag` variable if we ever needed it. During training we take `outputs.generator_dec_hidden_states[-1]` as posted in the code above. Then we proceed with the 3 steps you listed above, essentially ending up with a SequenceClassification head from the rag generator hidden state outputs. We don't interact with the generation aspect at all, as you correctly identified.",
"> (1) what makes you think we marginalize twice? do_marginalize=False in RagToken by default.\r\n\r\nHey James and @suriyakode, I didn't realize that was the default configuration, in such case indeed your implementation should be equivalent to what I was suggesting.\r\n\r\nAnd yes, we did use Adam as optimizer.\r\n\r\nIn such case I don't see anything obvious unfortunately. What accuracy do you get with the collapse?",
"Bummer :/\r\n\r\nWith frozen retriever, we achieved 72% accuracy, then unfroze the retriever and the accuracy fell to 68% after collapse\r\n\r\nTo be clear, you used `RagToken`, aka the current `RagTokenForGeneration` object that was implemented in HF? My original thought was that something could be wrong with the gradients or something in this specific implementation",
"@JamesDeAntonis all of my FEVER experiments were done on `fairseq`, but I have been able to replicate RAG paper results training HF `RagToken` models on Natural Questions, which gives me some level of confidence in the implementation.",
"Ok, thanks. What was your learning rate?",
"It was 1e-05 for training the classifier.",
"I also noticed an issue with the finetuning script. I ended up printing `doc_scores` while using the RAG finetuning scripts and saw that there was no gradient. Is there no gradient passed to the question encoder from the generator?",
"The `doc_scores` is supposed to have gradients. And the gradients are propagated to the weights of the RAG question encoder.",
"I cloned the transformers repo and pip installed transformers using the repo cloned to verify. I printed `doc_scores` in `RagModel` and got the following for what `doc_scores` was:\r\n\r\nI don't see gradients in the tensor.\r\n",
"Is there a fix to the `doc_scores` gradient?",
"It looks like there's no grad_fn on your `doc_scores`. Are the weights of the question encoder updated during finetuning ?",
"Nope the weights aren't updated.",
"Though should there even be a `grad_fn` when it's running \"Validation sanity check\" (as pictured above)?",
"Good catch @dblakely indeed during validation it makes sense to not have any gradient",
"Thanks for the clarification! I continued printing after and got a `grad_fn` in the tensor.",
"@ola13 is there any code that we can see regarding the internal `fairseq` implementation of RAG and the training you did with it? I don't think there's any RAG in the public `fairseq` repo, but would be useful for me to be able to compare the two implementations",
"@ola13 I have a couple of questions on this\r\n(1) how many docs did you retrieve?\r\n(2) regarding the `RagForFeverClassification` idea, what is meant by \"[we] first regenerate the claim\" in section C of the RAG paper's appendix? Does that mean that we should only provide tokens from the claim as `decoder_input_ids` in the generator? Does it mean we should run multiple passes of the generator? Curious what the correct interpretation is",
"Hi @JamesDeAntonis - sorry just noticed your previous commend, the `fairseq` implementation is not available publicly. \r\n\r\nRegarding your latest questions:\r\n1) we used 5 docs at training time and tried evaluation with up to 50 docs\r\n2) This idea is adapted from BART - https://arxiv.org/pdf/1910.13461.pdf - section 3.3 or Figure 3.a - in case of RAG, we don't copy all contextualized input, just the claim tokens. Since BART model will copy all input to `decoder_input_ids` if you just leave it at `None` ([here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1135-L1140)) you can try adding extra logic to only pass claim tokens to the BART decoder with RAG (that may mean explicitly setting `decoder_input_ids` like you mention). This does not require multiple runs of the generator."
] | 1,609 | 1,682 | 1,620 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: latest production version
- Platform:
- Python version: 3.8
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
RAG: @patrickvonplaten, @lhoestq
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Fine-tune RAG on FEVER dataset
2. Notice that the same documents are retrieved every time
We are trying to fine-tune RAG on the FEVER dataset. We follow the steps in the paper (we think) to a 'T' (RAG plus BART classifier head). However, when we fine-tune, retrieval "collapses" (a term used in the paper) so that all queries retrieve the same irrelevant documents. As a sanity check, we fine-tuned with a frozen retriever, and achieved similar results (72%) to what the paper achieves with frozen retriever. Thus, it appears that perhaps there is a bug in HF's implementation of the retriever (and its gradients) that is causing this. Alternatively, perhaps there is an obvious mistake in our config of the retriever. Do you have any insights into this? Thanks!
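For reference, a minimal, hypothetical sketch of a gradient-path sanity check (not the actual fine-tuning setup; it assumes a recent transformers 4.x release, the public `facebook/rag-token-base` checkpoint with its dummy index, and the claim/label strings below are placeholders):
```python
from transformers import RagRetriever, RagTokenForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-base", retriever=retriever)
model.train()  # grad_fn only appears in training mode, not during a no-grad validation pass

# prepare_seq2seq_batch builds input_ids/attention_mask from the question encoder tokenizer
# and labels from the generator tokenizer.
batch = tokenizer.prepare_seq2seq_batch(
    ["placeholder claim to verify"], ["SUPPORTS"], return_tensors="pt"
)
outputs = model(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    labels=batch["labels"],
)

print("doc_scores has grad_fn:", outputs.doc_scores.grad_fn is not None)
outputs.loss.sum().backward()
qe_has_grads = any(
    p.grad is not None and p.grad.abs().sum() > 0
    for p in model.rag.question_encoder.parameters()
)
print("question encoder received gradients:", qe_has_grads)
```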
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9405/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9404/comments | https://api.github.com/repos/huggingface/transformers/issues/9404/events | https://github.com/huggingface/transformers/pull/9404 | 778,341,957 | MDExOlB1bGxSZXF1ZXN0NTQ4NDQ1NDA4 | 9,404 | Add head_mask/decoder_head_mask for BART | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Dear @patrickvonplaten and the rest of HuggingFace group.\r\n\r\nI implemented the concept of `head_mask` from BERT into BART so that the internal of decoder-encoder-like models can be studied as well. However, as this is my very first attempt to contribute to such a large-scale open-source project, I have been a bit struggling to pass the tests. Would you be, please, able to guide me what everything needs to be done in this case in order to achieve a valid pull request?\r\n\r\nThank you very much for all your time in advance. I really do appreciate it.",
"Hi @stancld - thanks a lot for pinging me! I'm happy to help you here :-) I think you're PR is a nice addition. Sadly, we did many changes to Bart recently (see https://github.com/huggingface/transformers/pull/9343) so that you'll probably have to rebase your PR to the current version of master. ",
"After that I'm happy to get the tests passing together!",
"Hi @patrickvonplaten, the model should be rebased according to the commit #9343 at this moment. :) I'll be more than happy to finish this PR with you. Thanks a lot in advance :) ",
"@stancld, please do let me know if you're stuck and need help or if your PR is ready for review, just ping me here :-)",
"Hi @patrickvonplaten, I would like to bring an update after the weekend off.\r\n\r\nFirst of all, I would like to apologise for a bit of messy PR, as I was initially struggling with on my local (I'll do better next time).\r\nRegarding this PR: To pass all the tests, `head_mask` and `decoder_head_mask` is now implemented for the following PyTorch BART-based models:\r\n\r\n- **BART**,\r\n- **MBart**,\r\n- **Blenderbot**,\r\n- **BlenderbotSmall**,\r\n- **Marian**,\r\n- **Pegasus**.\r\n\r\nBesides, I think some additional tests for head_mask for these models might be desired to implement, but I leave this decision up to you. In any case, please, let me know what it needs to do to complete this PR.\r\n",
"@patrickvonplaten I think this PR is ready for review. I've currently resolved one conflict arose last night after a commit to `master` and now I've been tracking changes on my local and everything still seems to be working.",
"Hey @stancld,\r\n\r\nThis is a super nice PR. It's very clean and that without any help - awesome!\r\n\r\nI think there are 3 things we should change/add:\r\n\r\n1) I think we should change the order of the forward args of all `...Model` and `...ForConditionalGeneration` as explained above. This a) means that there is no breaking change in the way Bart is used with torchscript and it's the better option IMO as well since the first 4 args should always be `input_ids, attention_mask, decoder_input_ids, decoder_attention_mask` for EncDec models\r\n\r\n2) Let's try to remove all \"hard-coded\" model names in the common tests. I've commented above. We don't really need to test torchscript with head_mask and for the signature it'd be better to change it according to 1)\r\n\r\n3) It would be awesome if you could a `if model.config.is_encoder_decoder` part to the `test_headmasking` test in `test_modeling_common.py` that tests headmasking correctly for Seq2Seq models. To enable this test for all Bart-like models you'll have to set `test_head_masking` to True in `BartModelTest` and others. One thing we'll have to adapt in the test is we should change the line:\r\n```\r\nattentions = outputs[-1]\r\n```\r\n\r\nto \r\n```python\r\nattentions = outputs.attetions\r\n```\r\n\r\nfor the `model.config.is_encoder_decoder is False` case and to \r\n\r\n```python\r\nencoder_attentions = outputs.encoder_attentions\r\ndecoder_attentions = outputs.decoder_attentions\r\n```\r\n\r\nfor the other case.\r\n\r\nI can also help you with 3) in case you're stuck.\r\nReally impressed by how clean the PR is! Think there is not much left to do. 1) and 2) are very easy changes and 3) will require a bit more time, but should be fine as well.",
"Hey @patrickvonplaten, thanks a lot for your thorough feedback. I believe to come back later today with a new commit fixing the listed issues :)",
"Hey @patrickvonplaten, this PR is again ready for review after making some changes according to your notes above. The one problem at this moment is that BART-like models do not satisfy one condition in `test_headmasking`:\r\n```\r\nself.assertNotEqual(attentions[1][..., 0, :, :].flatten().sum().item(), 0.0).\r\n```\r\n\r\nI am not sure whether the formula for masking attention heads (in BART-like models) is implemented correctly. Now, if `head_mask` in the test case is specified as\r\n```\r\nhead_mask = torch.ones(\r\n self.model_tester.num_hidden_layers,\r\n self.model_tester.num_attention_heads,\r\n device=torch_device,\r\n)\r\nhead_mask[0, 0] = 0\r\nhead_mask[-1, :-1] = 0\r\n```\r\nthen `outputs.encoder_attentions[1][..., :, :, :]` or `outputs.decoder_attentions[1][..., :, :, :]` equals tensor of `0.0` for all examples over all heads but the last one. This is not the case, however, for **non**-encoder-decoder models with `attentions[1][..., :, :, :]`. Do you have any idea where the problem can be?\r\n\r\nAnyway, I hope we will solve this issue and merge this PR. :) ",
"I made some mistakes during updating my branch, which resulted in the problem with tracking files not edited actually by myself. I find this quite inconvenient and I have failed to repair this issue so far. Therefore, I've created a new (clean) branch, which might be found here https://github.com/stancld/transformers/tree/head_mask_for_bart_new.\r\n\r\nIf you, @patrickvonplaten, were okay with that, I would close this PR (after resolving those rather minor issues raised in our discussion above) and create a new one from the new branch referenced above to make everything nice and clean before an eventual merge. ",
"@stancld absolutely! Feel free to close this PR and open a new one :-) This happens to me all the time as well ",
"We can just link this closed PR to the new PR to have a reference to the discussion we had",
"@patrickvonplaten - Great, you can find a newly open PR at #9569 :) "
] | 1,609 | 1,610 | 1,610 | CONTRIBUTOR | null | Description:
This PR adds `head_mask` and `decoder_head_mask` to the PyTorch BART implementation, following the existing BERT implementation.
Motivation:
According to HuggingFace's website, "There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”)." This PR makes it possible to mask attention heads in the encoder and the decoder exactly as for BERT, and thus creates an opportunity to study the importance of attention heads in encoder-decoder BERT-like models; a sketch of the intended usage is included below.
Reviewer: @patrickvonplaten
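
For illustration, a short sketch of the intended usage (the `head_mask`/`decoder_head_mask` arguments are the ones introduced by this PR; the checkpoint and input sentence are only examples):

```python
# Sketch: masking individual attention heads in a BART encoder/decoder.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
inputs = tokenizer("Studying the inner workings of seq2seq transformers.", return_tensors="pt")

# One row per layer, one column per head: 1.0 keeps a head, 0.0 masks it out.
head_mask = torch.ones(model.config.encoder_layers, model.config.encoder_attention_heads)
decoder_head_mask = torch.ones(model.config.decoder_layers, model.config.decoder_attention_heads)
head_mask[0, 0] = 0.0  # disable the first head of the first encoder layer

outputs = model(**inputs, head_mask=head_mask, decoder_head_mask=decoder_head_mask)
```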
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9404/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9404/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9404",
"html_url": "https://github.com/huggingface/transformers/pull/9404",
"diff_url": "https://github.com/huggingface/transformers/pull/9404.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9404.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9403/comments | https://api.github.com/repos/huggingface/transformers/issues/9403/events | https://github.com/huggingface/transformers/pull/9403 | 778,336,461 | MDExOlB1bGxSZXF1ZXN0NTQ4NDQwNjM5 | 9,403 | added head_mask/decoder_head_mask for BART | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | Description:
This PR adds `head_mask` and `decoder_head_mask` to the PyTorch BART implementation, following the existing BERT implementation.
Motivation:
According to HuggingFace's website, "There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”)." This PR makes it possible to mask attention heads in the encoder and the decoder exactly as for BERT, and thus creates an opportunity to study the importance of attention heads in encoder-decoder BERT-like models.
Reviewer: @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9403",
"html_url": "https://github.com/huggingface/transformers/pull/9403",
"diff_url": "https://github.com/huggingface/transformers/pull/9403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9403.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9402/comments | https://api.github.com/repos/huggingface/transformers/issues/9402/events | https://github.com/huggingface/transformers/pull/9402 | 778,154,833 | MDExOlB1bGxSZXF1ZXN0NTQ4Mjk0MjU2 | 9,402 | Bump notebook from 6.1.4 to 6.1.5 in /examples/research_projects/lxmert | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] | closed | false | null | [] | [
"Thanks dependabot!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | Bumps [notebook](https://github.com/jupyter/jupyterhub) from 6.1.4 to 6.1.5.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/jupyter/jupyterhub/commits">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9402/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9402",
"html_url": "https://github.com/huggingface/transformers/pull/9402",
"diff_url": "https://github.com/huggingface/transformers/pull/9402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9402.patch",
"merged_at": 1609772527000
} |
https://api.github.com/repos/huggingface/transformers/issues/9401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9401/comments | https://api.github.com/repos/huggingface/transformers/issues/9401/events | https://github.com/huggingface/transformers/pull/9401 | 778,133,316 | MDExOlB1bGxSZXF1ZXN0NTQ4Mjc3MDE4 | 9,401 | Put back LXMert example | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Anyway, I doubt Lxmert is currently not supported for customized/ personal datasets (nlp+images, images are harder to prepare). See issues relevant to feature extraction in [Lxmert](https://github.com/airsplay/lxmert#faster-r-cnn-feature-extraction), for example, [issue#79](https://github.com/airsplay/lxmert/issues/79), [issue#86](https://github.com/airsplay/lxmert/issues/86)."
] | 1,609 | 1,609 | 1,609 | COLLABORATOR | null | # What does this PR do?
During the example reorganization, LXMert seems to have slipped through the cracks and was accidentally deleted. This PR puts it back.
Fixes #9309 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9401/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9401",
"html_url": "https://github.com/huggingface/transformers/pull/9401",
"diff_url": "https://github.com/huggingface/transformers/pull/9401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9401.patch",
"merged_at": 1609772348000
} |
https://api.github.com/repos/huggingface/transformers/issues/9400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9400/comments | https://api.github.com/repos/huggingface/transformers/issues/9400/events | https://github.com/huggingface/transformers/issues/9400 | 778,098,563 | MDU6SXNzdWU3NzgwOTg1NjM= | 9,400 | Generate Function - Manual decoder_input_ids Error (Bart, Pegasus) | {
"login": "marcoabrate",
"id": 43387597,
"node_id": "MDQ6VXNlcjQzMzg3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43387597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcoabrate",
"html_url": "https://github.com/marcoabrate",
"followers_url": "https://api.github.com/users/marcoabrate/followers",
"following_url": "https://api.github.com/users/marcoabrate/following{/other_user}",
"gists_url": "https://api.github.com/users/marcoabrate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcoabrate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcoabrate/subscriptions",
"organizations_url": "https://api.github.com/users/marcoabrate/orgs",
"repos_url": "https://api.github.com/users/marcoabrate/repos",
"events_url": "https://api.github.com/users/marcoabrate/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcoabrate/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,610 | 1,610 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Google Colab
- Python version: 3.6.9
### Who can help
@patrickvonplaten
## To reproduce
Link to the forum discussion: [https://discuss.huggingface.co/t/rewriting-generate-function-for-manual-decoder-input/3034/3](https://discuss.huggingface.co/t/rewriting-generate-function-for-manual-decoder-input/3034/3)
Steps to reproduce the behavior:
```python
!pip install transformers==4.1.1
!pip install sentencepiece
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
# OR
'''
from transformers import PegasusTokenizer, PegasusForConditionalGeneration
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')
model = PegasusForConditionalGeneration.from_pretrained('google/pegasus-large')
'''
text = "this is a sample text"
input_ids = tokenizer(text, return_tensors="pt").input_ids
decoder_input_ids = tokenizer("<s> Anatomy is", return_tensors="pt", add_special_tokens=False).input_ids
output = model.generate(input_ids, decoder_input_ids=decoder_input_ids, num_beams=4, num_return_sequences=4)
print("With decoder_input_ids num_beams=4", tokenizer.batch_decode(output, skip_special_tokens=True))
output = model.generate(input_ids, num_beams=4, num_return_sequences=4)
print("Without decoder_input_ids num_beams=4", tokenizer.batch_decode(output, skip_special_tokens=True))
```
Error:
```
TypeError Traceback (most recent call last)
<ipython-input-38-271e60997201> in <module>()
2 decoder_input_ids = tokenizer("<s> Anatomy is", return_tensors="pt", add_special_tokens=False).input_ids
3
----> 4 output = model.generate(input_ids, decoder_input_ids=decoder_input_ids, num_beams=4, num_return_sequences=4)
5
6 print("With decoder_input_ids num_beams=4", tokenizer.batch_decode(output, skip_special_tokens=True))
2 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
24 def decorate_context(*args, **kwargs):
25 with self.__class__():
---> 26 return func(*args, **kwargs)
27 return cast(F, decorate_context)
28
/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, **model_kwargs)
610 pad_token_id=pad_token_id,
611 eos_token_id=eos_token_id,
--> 612 **model_kwargs,
613 )
614
/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py in beam_search(self, input_ids, beam_scorer, logits_processor, max_length, pad_token_id, eos_token_id, **model_kwargs)
1041
1042 while cur_len < max_length:
-> 1043 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
1044
1045 outputs = self(**model_inputs, return_dict=True)
TypeError: prepare_inputs_for_generation() got multiple values for argument 'decoder_input_ids'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9400/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9399/comments | https://api.github.com/repos/huggingface/transformers/issues/9399/events | https://github.com/huggingface/transformers/issues/9399 | 778,094,416 | MDU6SXNzdWU3NzgwOTQ0MTY= | 9,399 | How to use Longformer for summarization | {
"login": "chetanambi",
"id": 37707687,
"node_id": "MDQ6VXNlcjM3NzA3Njg3",
"avatar_url": "https://avatars.githubusercontent.com/u/37707687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chetanambi",
"html_url": "https://github.com/chetanambi",
"followers_url": "https://api.github.com/users/chetanambi/followers",
"following_url": "https://api.github.com/users/chetanambi/following{/other_user}",
"gists_url": "https://api.github.com/users/chetanambi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chetanambi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chetanambi/subscriptions",
"organizations_url": "https://api.github.com/users/chetanambi/orgs",
"repos_url": "https://api.github.com/users/chetanambi/repos",
"events_url": "https://api.github.com/users/chetanambi/events{/privacy}",
"received_events_url": "https://api.github.com/users/chetanambi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Perhaps this will help: https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16",
"`LongformerEncoderDecoder` will be added to the lib once #9278 is merged! It can be used for summarization or any other seq2seq task.",
"@patil-suraj, @christianversloot I tried this model [longformer2roberta](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16). It is actually giving better summary than Pegasus (reddit-tifu). I will be waiting for LongformerEncoderDecoder to be added to the library.\r\n\r\nI just have one question - how many input tokens longformer2roberta model supports? I believe it's 2048. Could you please confirm,",
"`longformer2roberta` should support 4096 tokens.\r\n\r\nAnd LED is now on master!",
"@patil-suraj Awesome! I am trying LED but getting below error. Could you please take a look?\r\n\r\n```\r\nfrom transformers import LEDForConditionalGeneration, LEDTokenizer\r\nmodel_name = 'allenai/led-base-16384'\r\ntokenizer = LEDTokenizer.from_pretrained(model_name) \r\nmodel = LEDTokenizer.from_pretrained(model_name)\r\nbatch = tokenizer.prepare_seq2seq_batch(article, truncation=True, padding='longest', return_tensors=\"pt\").to(torch_device)\r\ntranslated = model.generate(**batch)\r\ntgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n```\r\n\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-33-d4827d7b5770> in <module>()\r\n 5 model = LEDTokenizer.from_pretrained(model_name)\r\n 6 batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors=\"pt\").to(torch_device)\r\n----> 7 translated = model.generate(**batch)\r\n 8 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)\r\n\r\nAttributeError: 'LEDTokenizer' object has no attribute 'generate'",
"```python\r\nmodel = LEDTokenizer.from_pretrained(model_name)\r\n```\r\nhere you are assigning tokenizer as the model, it should be \r\n```python\r\nmodel = LEDForConditionalGeneration.from_pretrained(model_name)\r\n```",
"@patil-suraj Thank you! My bad. I have corrected the typo. Below code seems to be working fine.\r\n\r\n```\r\nfrom transformers import LEDForConditionalGeneration, LEDTokenizer\r\nmodel_name = 'allenai/led-base-16384'\r\ntokenizer = LEDTokenizer.from_pretrained(model_name) \r\nmodel = LEDForConditionalGeneration.from_pretrained(model_name)\r\ninput_ids = tokenizer(src_text, return_tensors=\"pt\").input_ids\r\noutput_ids = model.generate(input_ids)\r\noutput = tokenizer.decode(output_ids[0], skip_special_tokens=True)\r\n```\r\n\r\nI am getting very short summary of just 20 tokens from the above code. So I was looking for the default values for below parameters for LED model. I could not find it in [config.json](https://huggingface.co/allenai/led-base-16384/blob/main/config.json) file. For the **longformer2roberta** model I found these values in [config.json](https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16/blob/main/config.json). Could you please let me know where can I find these values for LED model.\r\n\r\n- num_beams\r\n- no_repeat_ngram_size,\r\n- early_stopping,\r\n- length_penalty,\r\n- min_length,\r\n- max_length",
"@patil-suraj Any inputs/suggestions here?",
"Hi, \r\nI have a question about the `LEDForConditionalGeneration` forward args. \r\nThe `decoder_input_ids` has a comment that `decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids to the right, following the paper.`. \r\nForm the forward method in `LEDForConditionalGeneration`, i can see that when not assigning the `decoder_input_ids` in the forward method of `LEDForConditionalGeneration` object , the `decoder_input_ids` will be generated by [shifting the `labels` value one token to right in the forward method](https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337). \r\n\r\nSo my question is if i want to explictly pass the `decoder_input_ids` to the forward method, do i need to explictly shift it one token as the [code](https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337) shows before the forward pass?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,619 | 1,619 | NONE | null | Hi - Do you have sample code on how to use Longformer for summarization tasks? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9399/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9398/comments | https://api.github.com/repos/huggingface/transformers/issues/9398/events | https://github.com/huggingface/transformers/issues/9398 | 778,085,869 | MDU6SXNzdWU3NzgwODU4Njk= | 9,398 | trainer.predict() returns different values from model.logits | {
"login": "connectlym",
"id": 25912288,
"node_id": "MDQ6VXNlcjI1OTEyMjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/25912288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connectlym",
"html_url": "https://github.com/connectlym",
"followers_url": "https://api.github.com/users/connectlym/followers",
"following_url": "https://api.github.com/users/connectlym/following{/other_user}",
"gists_url": "https://api.github.com/users/connectlym/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connectlym/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connectlym/subscriptions",
"organizations_url": "https://api.github.com/users/connectlym/orgs",
"repos_url": "https://api.github.com/users/connectlym/repos",
"events_url": "https://api.github.com/users/connectlym/events{/privacy}",
"received_events_url": "https://api.github.com/users/connectlym/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\nI have the same problem different results using `model()` vs `trainer.predict()`.",
"> Hi,\r\n> I have the same problem different results using `model()` vs `trainer.predict()`.\r\n\r\nThanks for replying. I assumed this feature was made for other usages so I ended up using `model()`. Anyway, I would like to seek an answer for sure.\r\n\r\nReading the new issue's template I guess @sgugger could help us here. I'm sorry to disturb you, could you please give some details on `Trainer.predict()` here?",
"I solved it by returning to 4.0.1, here both methods return the same results. \r\n\r\nBut I still got a problem, before saving the model (so just at the end of the finetuning) with `TrainingArguments(..., load_best_model_at_end=True)` the `trainer.predict()` still differs from `model()`. But after reloading the model with `from_pretrained` with transformers==4.0.1 both methods are equal. So I guess the `trainer.predict()` does really load the best model at the end of the training.",
"I'm unsure of what the problem is since the code you indicate is not reproducible (what is `model`, `load_pt_data()` etc.). On my side, using an installation form source on current master, here is what I get. First I instantiate a model and tokenizer and preprocess some data (with padding to be able to batch):\r\n```\r\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\")\r\ntexts = [\"Hello there!\", \"This is another text\"]\r\ntokenized_texts = tokenizer(texts, padding=True)\r\n```\r\nThen I create a Dataset class to be able to feed my `tokenized_texts` to `Trainer`:\r\n```\r\nclass SimpleDataset:\r\n def __init__(self, tokenized_texts):\r\n self.tokenized_texts = tokenized_texts\r\n \r\n def __len__(self):\r\n return len(self.tokenized_texts[\"input_ids\"])\r\n \r\n def __getitem__(self, idx):\r\n return {k: v[idx] for k, v in self.tokenized_texts.items()}\r\n\r\ntest_dataset = SimpleDataset(tokenized_texts)\r\n```\r\nThen predicting through `Trainer` like this:\r\n```\r\ntrainer = Trainer(model=model)\r\npredictions = trainer.predict(test_dataset)\r\npredictions.predictions\r\n```\r\nreturns this:\r\n```\r\narray([[-0.68212456, 0.07081275],\r\n [-0.59134895, 0.16735002]], dtype=float32)\r\n```\r\nand predicting directly with the model:\r\n```\r\nimport torch\r\n\r\nmodel.eval()\r\npt_inputs = {k: torch.tensor(v).to(trainer.args.device) for k, v in tokenized_texts.items()}\r\nwith torch.no_grad():\r\n output = model(**pt_inputs)\r\noutput.logits.cpu().numpy()\r\n```\r\ngives me the exact same result.\r\n\r\nMake sure that you preprocess your inputs the same way in both instances, and when using the model directly, that it is in evaluation mode.",
"> pt_inputs = {k: torch.tensor(v).to(trainer.args.device) for k, v in tokenized_texts.items()}\r\n> with torch.no_grad():\r\n> output = model(**pt_inputs)\r\n> output.logits.cpu().numpy()\r\n\r\nHi, thanks for your answers! I think the reason I'm having different results is I did not use `model.eval()` but I only had <1000 lines of test data to predict. \r\n\r\nThank you so much! :)",
"> I'm unsure of what the problem is since the code you indicate is not reproducible (what is `model`, `load_pt_data()` etc.). On my side, using an installation form source on current master, here is what I get. First I instantiate a model and tokenizer and preprocess some data (with padding to be able to batch):\r\n> \r\n> ```\r\n> from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n> model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\")\r\n> texts = [\"Hello there!\", \"This is another text\"]\r\n> tokenized_texts = tokenizer(texts, padding=True)\r\n> ```\r\n> \r\n> Then I create a Dataset class to be able to feed my `tokenized_texts` to `Trainer`:\r\n> \r\n> ```\r\n> class SimpleDataset:\r\n> def __init__(self, tokenized_texts):\r\n> self.tokenized_texts = tokenized_texts\r\n> \r\n> def __len__(self):\r\n> return len(self.tokenized_texts[\"input_ids\"])\r\n> \r\n> def __getitem__(self, idx):\r\n> return {k: v[idx] for k, v in self.tokenized_texts.items()}\r\n> \r\n> test_dataset = SimpleDataset(tokenized_texts)\r\n> ```\r\n> \r\n> Then predicting through `Trainer` like this:\r\n> \r\n> ```\r\n> trainer = Trainer(model=model)\r\n> predictions = trainer.predict(test_dataset)\r\n> predictions.predictions\r\n> ```\r\n> \r\n> returns this:\r\n> \r\n> ```\r\n> array([[-0.68212456, 0.07081275],\r\n> [-0.59134895, 0.16735002]], dtype=float32)\r\n> ```\r\n> \r\n> and predicting directly with the model:\r\n> \r\n> ```\r\n> import torch\r\n> \r\n> model.eval()\r\n> pt_inputs = {k: torch.tensor(v).to(trainer.args.device) for k, v in tokenized_texts.items()}\r\n> with torch.no_grad():\r\n> output = model(**pt_inputs)\r\n> output.logits.cpu().numpy()\r\n> ```\r\n> \r\n> gives me the exact same result.\r\n> \r\n> Make sure that you preprocess your inputs the same way in both instances, and when using the model directly, that it is in evaluation mode.\r\n\r\nI have a more question that how can I load the model without using \"from_pretrained\" \r\n\r\nBecause I have some custom for the the model, nn.Model, it does not inherent from \"PreTrainedModel\", so I can't load it using \"from_pretrained\"\r\n"
] | 1,609 | 1,618 | 1,610 | NONE | null | Hi dear authors!
When I was using my **fine-tuned bert model** to do the sequence classification task, I found the values returned by `trainer.predict(test_dataset)` were very different from what I got from `model(**test_encodings)`. I did not find messages describing what the `predictions` actually are in the documents, so I'm not seeing what `trainer.predict()` returns. Could you please help me explain a bit more?
Here are some of my codes
- predicts with `model(**test_encodings)`
```python
def _predict_with_np(text_a, text_b, tokenizer, model):
scores = [0, 1]
encoded_input = tokenizer((text_a, text_b),
truncation=True, padding=True,
return_tensors="pt")
output = model(**encoded_input)
logit = output.logits[0]
softmax_score = F.softmax(logit,dim=-1)
score = scores[torch.argmax(softmax_score)]
return score, logit, softmax_score
def predict_with_np(params, printing=False, NUM=0):
texts = load_data()
print("===> Loading fine-tuned model and tokenizer...")
model = BertForSequenceClassification.from_pretrained(TUNED_MODEL)
tokenizer = BertTokenizer.from_pretrained(TUNED_TOKENIZER)
print("===> Classifying...")
for i, text in enumerate(texts):
if NUM > 0 and i > NUM:
break
text_a, text_b = text
text_a = text_a.strip()
text_b = text_b.strip()
score, logit, softmax_score = _predict_with_np(text_a, text_b, tokenizer, model)
```
- predicts with `trainer.predict()`
```python
def predict_with_hf(params):
test_dataset = load_pt_data()
print("===> Loading Model and Training Arguments...")
model = BertForSequenceClassification.from_pretrained(TUNED_MODEL)
training_args = TrainingArguments(
run_name=params.run_name,
disable_tqdm=True,
fp16=params.fp16,
gradient_accumulation_steps=params.gradient_accumulation_steps,
do_train=False,
do_eval=False,
do_predict=True,
output_dir=params.output_dir,
)
print("===> Predicting...")
trainer = Trainer(
model=model,
args=training_args,
eval_dataset=test_dataset
)
results = {}
logger.info("*** Predict ***")
result = trainer.predict(test_dataset)
output_pred_file = os.path.join(training_args.output_dir, "pred_results.txt")
with open(output_pred_file, "w") as writer:
logger.info("***** Pred results *****")
for pred in result.predictions:
logger.info(" predictions = %s", pred)
writer.write("predictions = %s\n" % pred)
```
With these two versions, I got outputs like this:
```
text_a: "Don't worry. I'll take care of it."
text_b: "Why so long?"
score: 0
logits: [0.3749077320098877, -0.15262120962142944]
softmax: [0.6289066076278687, 0.37109342217445374]
predictions: [-0.04395686 0.29134133]
```
I've read several lines of code inside `src/trainer.py`, so I guess the predictions are supposed to be logits. However, they are quite different from the logits I get here.
Am I calculating things in the wrong way, or are the predictions designed to be something else?
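
For completeness, a minimal like-for-like check (model in eval mode, gradients disabled, the same tokenization in both paths) would look roughly like this:

```python
# Sketch of a fair comparison between model(...) and trainer.predict(...):
# evaluation mode, no gradients, identical preprocessing.
import torch
import torch.nn.functional as F

model.eval()
with torch.no_grad():
    encoded = tokenizer((text_a, text_b), truncation=True, padding=True, return_tensors="pt")
    logits = model(**encoded).logits

print(logits)  # should match the corresponding row of trainer.predict(test_dataset).predictions
print(F.softmax(logits, dim=-1))
```
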
Thanks for reading my long questions! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9398/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9397/comments | https://api.github.com/repos/huggingface/transformers/issues/9397/events | https://github.com/huggingface/transformers/issues/9397 | 778,023,255 | MDU6SXNzdWU3NzgwMjMyNTU= | 9,397 | CUDA runtime error during benchmarking | {
"login": "serkansulun",
"id": 26304981,
"node_id": "MDQ6VXNlcjI2MzA0OTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26304981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/serkansulun",
"html_url": "https://github.com/serkansulun",
"followers_url": "https://api.github.com/users/serkansulun/followers",
"following_url": "https://api.github.com/users/serkansulun/following{/other_user}",
"gists_url": "https://api.github.com/users/serkansulun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/serkansulun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serkansulun/subscriptions",
"organizations_url": "https://api.github.com/users/serkansulun/orgs",
"repos_url": "https://api.github.com/users/serkansulun/repos",
"events_url": "https://api.github.com/users/serkansulun/events{/privacy}",
"received_events_url": "https://api.github.com/users/serkansulun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | Running `transformers/examples/benchmarking/run_benchmark.py` with any type of model, with multi-processing gives the following error:
```
1 / 1
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCGeneral.cpp line=54 error=3 : initialization error
cuda runtime error (3) : initialization error at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCGeneral.cpp:54
cuda runtime error (3) : initialization error at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCGeneral.cpp:54
Traceback (most recent call last):
File "run_benchmark.py", line 47, in <module>
main()
File "run_benchmark.py", line 43, in main
benchmark.run()
File "/home/dock/.conda/envs/torch/lib/python3.7/site-packages/transformers/benchmark/benchmark_utils.py", line 709, in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
ValueError: too many values to unpack (expected 2)
```
It looks like the `self.inference_memory` function is returning the string `N/A`. Everything works fine when the `no_multi_processing` option is selected.
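
For reference, the working single-process run can be reproduced with something along these lines (the argument name for disabling multi-processing is an assumption and may differ between versions — please check `PyTorchBenchmarkArguments` or `run_benchmark.py --help`):

```python
# Sketch of a single-process benchmark run; `multi_process=False` is an assumed
# argument name (some versions expose a `no_multi_process` flag instead).
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["distilgpt2"],
    batch_sizes=[1],
    sequence_lengths=[64],
    multi_process=False,
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```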
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Ubuntu 18.04.1 LTS
- Python version: 3.7.9
- PyTorch version (GPU?): 1.2.0 with GPU support
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: I guess so
### Who can help
@patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): GPT2, DistilGPT2
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9397/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9396/comments | https://api.github.com/repos/huggingface/transformers/issues/9396/events | https://github.com/huggingface/transformers/issues/9396 | 777,978,221 | MDU6SXNzdWU3Nzc5NzgyMjE= | 9,396 | run_glue.py with XLNet model on CoLA dataset reaches 0 accuracy | {
"login": "yonatanbitton",
"id": 26148975,
"node_id": "MDQ6VXNlcjI2MTQ4OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/26148975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonatanbitton",
"html_url": "https://github.com/yonatanbitton",
"followers_url": "https://api.github.com/users/yonatanbitton/followers",
"following_url": "https://api.github.com/users/yonatanbitton/following{/other_user}",
"gists_url": "https://api.github.com/users/yonatanbitton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonatanbitton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonatanbitton/subscriptions",
"organizations_url": "https://api.github.com/users/yonatanbitton/orgs",
"repos_url": "https://api.github.com/users/yonatanbitton/repos",
"events_url": "https://api.github.com/users/yonatanbitton/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonatanbitton/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"Hi, did you fix this issue? I also got this problem when using BART-large."
] | 1,609 | 1,642 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@TevenLeScao
## Information
Model I am using: XLNet
The problem arises when using:
* The official example scripts of `run_glue.py`
The tasks I am working on is:
* an official GLUE: CoLA
## To reproduce
Steps to reproduce the behavior:
I am using the "run_glue" cmd as described here: https://github.com/huggingface/transformers/tree/master/examples/text-classification
`python run_glue.py --task_name cola --model_name_or_path xlnet-base-cased --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --do_predict --overwrite_output_dir`
These are the results I get:
```
[p]$ cat res_cola_xlnet.txt
eval_loss = 0.612945020198822
eval_matthews_correlation = 0.0
epoch = 3.0
```
## Expected behavior
Results > 0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9396/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9395/comments | https://api.github.com/repos/huggingface/transformers/issues/9395/events | https://github.com/huggingface/transformers/issues/9395 | 777,868,857 | MDU6SXNzdWU3Nzc4Njg4NTc= | 9,395 | wrong output for Bert-larged-uncased | {
"login": "Twsschx",
"id": 45708562,
"node_id": "MDQ6VXNlcjQ1NzA4NTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45708562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Twsschx",
"html_url": "https://github.com/Twsschx",
"followers_url": "https://api.github.com/users/Twsschx/followers",
"following_url": "https://api.github.com/users/Twsschx/following{/other_user}",
"gists_url": "https://api.github.com/users/Twsschx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Twsschx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Twsschx/subscriptions",
"organizations_url": "https://api.github.com/users/Twsschx/orgs",
"repos_url": "https://api.github.com/users/Twsschx/repos",
"events_url": "https://api.github.com/users/Twsschx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Twsschx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @Twsschx \r\n\r\nIn the newer version of transformers, PyTorch models have outputs that are instances of subclasses of `ModelOutput`,\r\nto access it as a tuple, you can use slicing, or to get a particular tensor, just provide the key to the output class.\r\n\r\ni.e \r\n```python3\r\nlast_hidden_states, pooling_output =transformer_model(input_ids_tensor, attention_mask_tensor, segment_ids_tensor)[:] # slice\r\n```\r\n\r\nor \r\n```python3\r\noutput =transformer_model(input_ids_tensor, attention_mask_tensor, segment_ids_tensor)\r\nlast_hidden_state = output[\"last_hidden_states\"]\r\npooling_output = output[\"pooler_output\"]\r\n```\r\n\r\nAnd if you want the models to output tuple like the previous versions, then pass `return_dict=False` to `forward`\r\n\r\n```python3\r\nlast_hidden_states, pooling_output = transformer_model(**inputs, return_dict=False)\r\n```\r\n\r\nYou can find more about output classes in this [doc](https://huggingface.co/transformers/main_classes/output.html).",
"Thanks. It helps me a lot!",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | I am running Bert for the pytorch version:
```python
from transformers import BertConfig, BertTokenizer, BertModel

config_class, model_class, tokenizer_class = (BertConfig, BertModel, BertTokenizer)
transformer_config = config_class.from_pretrained(pretrained_model + "/bert_config.json")
tokenizer = tokenizer_class.from_pretrained(pretrained_model, do_lower_case=True)
transformer_model = model_class.from_pretrained(pretrained_model, config=transformer_config)

last_hidden_states, pooling_output = transformer_model(input_ids_tensor, attention_mask_tensor, segment_ids_tensor)
```
The output of `transformer_model` -- `last_hidden_states` -- should be a Tensor, but the result is the string `'last_hidden_state'`. What's wrong here?
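
For reference, a minimal sketch of getting the tensors back from the returned object (its keys are `last_hidden_state` and `pooler_output`):

```python
# The model returns a ModelOutput object by default; unpacking it iterates over its
# string keys, which is why the string 'last_hidden_state' shows up. Access the
# attributes instead, or pass return_dict=False to get the old tuple behaviour.
outputs = transformer_model(input_ids_tensor, attention_mask_tensor, segment_ids_tensor)
last_hidden_states = outputs.last_hidden_state   # (batch, seq_len, hidden_size) tensor
pooling_output = outputs.pooler_output           # (batch, hidden_size) tensor

# equivalent tuple-style call:
last_hidden_states, pooling_output = transformer_model(
    input_ids_tensor, attention_mask_tensor, segment_ids_tensor, return_dict=False
)
```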
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9395/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9394/comments | https://api.github.com/repos/huggingface/transformers/issues/9394/events | https://github.com/huggingface/transformers/pull/9394 | 777,690,098 | MDExOlB1bGxSZXF1ZXN0NTQ3OTE3MjM1 | 9,394 | Simplify marian distillation script | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | Simplify marian distillation script, by adding a suggested MAX_LEN and using finetune.py directly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9394/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9394",
"html_url": "https://github.com/huggingface/transformers/pull/9394",
"diff_url": "https://github.com/huggingface/transformers/pull/9394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9394.patch",
"merged_at": 1609739485000
} |
https://api.github.com/repos/huggingface/transformers/issues/9393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9393/comments | https://api.github.com/repos/huggingface/transformers/issues/9393/events | https://github.com/huggingface/transformers/issues/9393 | 777,660,030 | MDU6SXNzdWU3Nzc2NjAwMzA= | 9,393 | `run_glue.py` fails when using my own dataset of regression task | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"This is the correct fix indeed (though we can group this with the previous test with `elif data_args.task_name is None and not is_regression`)! Thanks for flagging this, do you want to open a PR with the fix you found?",
"@sgugger \r\nThank you for checking this issue and giving the comment.\r\nI'd love to open a PR. \r\nI'm sorry but could you please wait for a while? I think I can open it by the end of the week.",
"Thanks for the PR!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
examples/token-classification: @stefan-it
(Excuse me if I'm asking someone who is not in charge. I couldn't find `examples/text-classification` in the list.)
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
It seems that an error occurs when I use `run_glue.py` with my own dataset for a regression task.
``` sh
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--model_name_or_path bert-base-cased \
--train_file ****.csv \
--validation_file ****.csv \
--do_train \
--do_eval \
--max_seq_length 64 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs **** \
--logging_steps **** \
--save_steps **** \
--save_total_limit **** \
--output_dir ****/v4.1.1/****
```
An example of the train/valid CSV file is as below:
``` csv
id,label,sentence1
__id_as_string__,3.0,__string__
```
Sorry for the lack of details. I use this heavily masked notation to take into account the licensing of the dataset.
You can see that the columns contain `label` and `sentence1`, and the value of `label` is `float`.
I confirmed that `is_regression` is `True` in this case.
The error message says:
``` sh
Traceback (most recent call last):
File "run_glue.py", line 419, in <module>
main()
File "run_glue.py", line 293, in main
label_to_id = {v: i for i, v in enumerate(label_list)}
UnboundLocalError: local variable 'label_list' referenced before assignment
```
It seems that the case `data_args.task_name is None` and `is_regression is True` has not been considered in the example.
Excuse me if I misunderstand something.
https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py#L277
```
if (
model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id
and data_args.task_name is not None
and is_regression
):
# Some have all caps in their config, some don't.
label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()}
if list(sorted(label_name_to_id.keys())) == list(sorted(label_list)):
label_to_id = {i: label_name_to_id[label_list[i]] for i in range(num_labels)}
else:
logger.warn(
"Your model seems to have been trained with labels, but they don't match the dataset: ",
f"model labels: {list(sorted(label_name_to_id.keys()))}, dataset labels: {list(sorted(label_list))}."
"\nIgnoring the model labels as a result.",
)
elif data_args.task_name is None:
label_to_id = {v: i for i, v in enumerate(label_list)}
```
When I modified the last two lines as shown below, I was able to proceed to the next step.
May I ask whether this is the correct way to avoid the error?
```
elif data_args.task_name is None:
# No definition for 'data_args.task_name is None' and 'is_regression is True'?
if not is_regression:
label_to_id = {v: i for i, v in enumerate(label_list)}
```
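An alternative that groups this with the previous test would look roughly like this (a sketch only; the exact fix that gets adopted may differ):

```
elif data_args.task_name is None and not is_regression:
    label_to_id = {v: i for i, v in enumerate(label_list)}
```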
## Expected behavior
`run_glue.py` can be used with a custom dataset for a regression task.
"url": "https://api.github.com/repos/huggingface/transformers/issues/9393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9393/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9392/comments | https://api.github.com/repos/huggingface/transformers/issues/9392/events | https://github.com/huggingface/transformers/issues/9392 | 777,651,721 | MDU6SXNzdWU3Nzc2NTE3MjE= | 9,392 | Model inputs and outputs are ``None`` when converting fine-tuned gpt2 to Tensorflow? | {
"login": "farazk86",
"id": 33456896,
"node_id": "MDQ6VXNlcjMzNDU2ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/33456896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farazk86",
"html_url": "https://github.com/farazk86",
"followers_url": "https://api.github.com/users/farazk86/followers",
"following_url": "https://api.github.com/users/farazk86/following{/other_user}",
"gists_url": "https://api.github.com/users/farazk86/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farazk86/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farazk86/subscriptions",
"organizations_url": "https://api.github.com/users/farazk86/orgs",
"repos_url": "https://api.github.com/users/farazk86/repos",
"events_url": "https://api.github.com/users/farazk86/events{/privacy}",
"received_events_url": "https://api.github.com/users/farazk86/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @farazk86,\r\n\r\nthanks for your issue! I've sadly never worked with tflite so not really sure how to best help you here. Maybe @jplu (or @LysandreJik) ? ",
"> Hey @farazk86,\r\n> \r\n> thanks for your issue! I've sadly never worked with tflite so not really sure how to best help you here. Maybe @jplu (or @LysandreJik) ?\r\n\r\nThanks @patrickvonplaten but the ``print(model.inputs)`` line is before the tflite conversion. For starters, I wanted to convert the fine-tuned distilgpt2 from pytorch to tensorflow and then to tensorflow lite.\r\n",
"Hello @farazk86 \n\nIt is normal that the inputs/outputs are not set when using from_pretrained because they are not explicitely given when the model is built. You have to create yourself a model by setting them.",
"> Hello @farazk86\r\n> \r\n> It is normal that the inputs/outputs are not set when using from_pretrained because they are not explicitely given when the model is built. You have to create yourself a model by setting them.\r\n\r\nThanks for your reply.\r\n\r\nIs my method for converting the pretrained model to tflite wrong? I followed the code and explanation mentioned here: https://towardsdatascience.com/on-device-machine-learning-text-generation-on-android-6ad940c00911",
"I would also like to add that when I convert the model to tflite, using the above code I get the following warnings\r\n\r\n```\r\nWARNING:absl:Found untraced functions such as wte_layer_call_fn, wte_layer_call_and_return_conditional_losses, wpe_layer_call_fn, wpe_layer_call_and_return_conditional_losses, dropout_layer_call_fn while saving (showing 5 of 380). These functions will not be directly callable after loading.\r\nWARNING:absl:Found untraced functions such as wte_layer_call_fn, wte_layer_call_and_return_conditional_losses, wpe_layer_call_fn, wpe_layer_call_and_return_conditional_losses, dropout_layer_call_fn while saving (showing 5 of 380). These functions will not be directly callable after loading.\r\n```\r\n does this help identify the issue?",
"These warnings are not directly related to your issue, you can safely ignore them for now and the tutorial you linked is wrong for the TFLite creation part.\r\n\r\nUnfortunately, the current state of the TF models in transformers are not fully compliant with TFLite so, I suggest to do not push to far the conversion. It is in our plans to have a better compliancy, but we don't know when yet.\r\n\r\nYou use the following piece of code to create your TFLite model:\r\n```python\r\nfrom transformers import TFGPT2LMHeadModel\r\nimport tensorflow as tf\r\n\r\nbase_model = TFGPT2LMHeadModel.from_pretrained(\"gpt2\")\r\ninput_ids = tf.keras.layers.Input((128, ), batch_size=1, dtype=tf.int32, name='input_ids')\r\nattention_mask = tf.keras.layers.Input((128, ), batch_size=1, dtype=tf.int32, name='attention_mask')\r\ninputs = {\"input_ids\": input_ids, \"attention_mask\": attention_mask}\r\noutput = base_model(inputs)\r\nmodel = tf.keras.models.Model(inputs=inputs, outputs=output)\r\nconverter = tf.lite.TFLiteConverter.from_keras_model(model)\r\nconverter.experimental_new_converter = True\r\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\r\nconverter.inference_input_type = tf.float32\r\nconverter.target_spec.supported_ops = [tf.lite.OpsSet.SELECT_TF_OPS]\r\ntflite_quant_model = converter.convert()\r\n\r\nwith open(\"model.tflite\", \"wb\") as f:\r\n f.write(tflite_quant_model)\r\n```\r\n\r\nWith this piece of code you should be able to convert your model into a TFLite one. Note also that the current TF models are not compliant with float16 so you have to keep with float32.",
"Thank you, but with the code above I am getting the following error: \r\n\r\n```\r\nRegular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. \r\nNode number 0 (FlexIdentity) failed to prepare.\r\n```\r\n\r\nI even tried with ``tf-nightly`` as a google search of this error suggested that the nightly has flex delegate support. But still got the above error.\r\n\r\nThe android tflite interpreter I am using works fine with all the models presented here: https://github.com/huggingface/tflite-android-transformers/tree/master/gpt2#change-the-model . They are even quantized.\r\n\r\nWould it be possible for me to train the models using a previous version of transformers? Which version was used at the time of writing the article and for providing the above tflite models? Would it help to change to that version?\r\n\r\nP.S. I would like to add that I am currently using the April 21st 2020 git version: ``!git checkout b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8`` as I needed the ``line-by-line`` parameter during training.\r\n\r\nThank you\r\n",
"To make this work you have to use the current release of TF and Transformers, not below.",
"> To make this work you have to use the current release of TF and Transformers, not below.\r\n\r\nhmm, that would make sense why your code did not work. The reason I was using the April 21st version is because I needed the ``line-by-line`` parameter during fine tuning. Does the current version and ``run_clm.py`` have ``line-by-line`` support?",
"I don't think the new `run_clm.py` still supports `line-by-line`. Better to open a new issue to discuss of this.",
"> To make this work you have to use the current release of TF and Transformers, not below.\r\n\r\nSo I am now using the current version of transformers and tf and fine-tuned a model using ``run_clm.py`` and used your above code to convert that model to tflite but still got the same error:\r\n\r\n```\r\nRegular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. \r\nNode number 0 (FlexIdentity) failed to prepare.\r\n```\r\n\r\nWhen converting this model, I got a lot of messages in console, \r\n\r\n```\r\nTensorflow version: 2.4.0\r\nWARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f2df19a0660>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: <cyfunction Socket.send at 0x7f2e091e6e58> is not a module, class, method, function, traceback, frame, or code object\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nWARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7f2df19a0660>> and will run it as-is.\r\nPlease report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\r\nCause: <cyfunction Socket.send at 0x7f2e091e6e58> is not a module, class, method, function, traceback, frame, or code object\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:AutoGraph could not transform <function wrap at 0x7f2e06b7a8c8> and will run it as-is.\r\nCause: while/else statement not yet supported\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING: AutoGraph could not transform <function wrap at 0x7f2e06b7a8c8> and will run it as-is.\r\nCause: while/else statement not yet supported\r\nTo silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters 
`output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:absl:Found untraced functions such as wte_layer_call_and_return_conditional_losses, wte_layer_call_fn, wpe_layer_call_and_return_conditional_losses, wpe_layer_call_fn, 
dropout_layer_call_and_return_conditional_losses while saving (showing 5 of 385). These functions will not be directly callable after loading.\r\nWARNING:absl:Found untraced functions such as wte_layer_call_and_return_conditional_losses, wte_layer_call_fn, wpe_layer_call_and_return_conditional_losses, wpe_layer_call_fn, dropout_layer_call_and_return_conditional_losses while saving (showing 5 of 385). These functions will not be directly callable after loading.\r\nINFO:tensorflow:Assets written to: /tmp/tmpus6vmwet/assets\r\nINFO:tensorflow:Assets written to: /tmp/tmpus6vmwet/assets\r\nThe parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nThe parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nINFO:absl:Using new converter: If you encounter a problem please file a bug. You can opt-out by setting experimental_new_converter=False\r\n```\r\ncomparatively, a model fine tuned using the old v2.4 version of transformers did not generate any such messages:",
"Those are just warnings and expected messages, this is ok. Can you try with usual `gpt2` models? If it works, the issue is certainly coming from your model. Otherwise, we will check deeper what is going wrong. Nevertheless, TFLite compliancy is not our priority for now, so if we have to fix something it will certainly be postponed to a bit later in the coming months.",
"Hi, I'm getting the same error as above when using the default pretrained gpt2.",
"Humm with Transformers from source and TF 2.4 I get no errors. What is your env?",
"> Humm with Transformers from source and TF 2.4 I get no errors. What is your env?\r\n\r\nI'm trying to run on android using a flutter tflite interpreter. and yes, I was also considering that maybe the fault is not in the converted model but the interpreter. \r\n\r\nTo confirm this I wanted to use the python interpreter, if I can get an output here then that means that the converted models are fine and its an issue with the flutter interpreter.\r\n\r\nBut when using the python tflite interpreter, everytime I invoke the model tflite generated model using your above provided code, my colab runtime crashes. \r\n\r\nI'm using current version of transformers and TF2.4\r\n\r\nThe same interpreter works for the tflite models provided here and produces output in the expected shape : https://github.com/huggingface/tflite-android-transformers/tree/master/gpt2#change-the-model\r\n\r\nBelow is the code I am using for tflite interpreter\r\n\r\n```python\r\nfrom transformers import *\r\nimport tensorflow as tf\r\nimport numpy as np\r\n\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n\r\n# start with their provided model, the -O will change downloaded file name to model.tflite\r\n\r\n# !wget -O model.tflite https://s3.amazonaws.com/models.huggingface.co/bert/distilgpt2-64.tflite\r\n\r\n# Encode random strings\r\nsentance = (\"\"\"You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist. You are John, a punk living in the futuristic city of Zail. You have a small xore blaster hidden in you jacket and a holoband on your wrist.\"\"\")\r\nreview_token = tokenizer.encode(sentance)\r\nprint(len(review_token))\r\nreview_token = np.array(review_token, dtype=np.int32)\r\nreview_token = review_token[:128]\r\nreview_token = np.expand_dims(review_token, axis=0) # unsqueeze to add the batch dimension\r\nprint(sentance)\r\nprint(review_token)\r\n\r\ntflite_interpreter = tf.lite.Interpreter(model_path='/content/model.tflite')\r\ntflite_interpreter.allocate_tensors()\r\n\r\ninput_details = tflite_interpreter.get_input_details()\r\noutput_details = tflite_interpreter.get_output_details()\r\n\r\nprint(\"== Input details ==\")\r\nprint(\"name:\", input_details[0]['name'])\r\nprint(\"shape:\", input_details[0]['shape'])\r\nprint(\"type:\", input_details[0]['dtype'])\r\n\r\nprint(\"\\n== Output details ==\")\r\nprint(\"name:\", output_details[0]['name'])\r\nprint(\"shape:\", output_details[0]['shape'])\r\nprint(\"type:\", output_details[0]['dtype'])\r\n\r\ntflite_interpreter.set_tensor(input_details[0]['index'], review_token)\r\n\r\ntflite_interpreter.invoke()\r\n\r\ntflite_model_predictions = tflite_interpreter.get_tensor(output_details[0]['index'])\r\nprint(\"Prediction results shape:\", tflite_model_predictions.shape)\r\n```\r\n",
"Ok, thanks for sharing, I will check this once I can dedicate some time.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | Hi,
I've fine-tuned a distilgpt2 model on my own text using ``run_language_modeling.py``, and it's working fine after training; the ``run_generation.py`` script produces the expected results.
Now I want to convert this to a TensorFlow Lite model and did so using the following:
```python
from transformers import *
CHECKPOINT_PATH = '/content/drive/My Drive/gpt2_finetuned_models/checkpoint-2500'
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.save_pretrained(CHECKPOINT_PATH)
model = TFGPT2LMHeadModel.from_pretrained(CHECKPOINT_PATH, from_pt=True)
```
But I don't think I'm doing this right because, after conversion, when I write
```python
print(model.inputs)
print(model.outputs)
```
I get
```
None
None
```
But I still went ahead with the TFLite conversion using:
```python
import tensorflow as tf
input_spec = tf.TensorSpec([1, 64], tf.int32)
model._set_inputs(input_spec, training=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# FP16 quantization:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("/content/gpt2-fp16.tflite", "wb").write(tflite_model)
```
But this does not work, and when using the generated ``tflite`` model I get the error:
> tensorflow/lite/kernels/kernel_util.cc:249 d1 == d2 || d1 == 1 || d2 == 1 was not true.
I'm sure this has something to do with my model not converting properly and getting ``None`` for its inputs/outputs.
Does anyone have any idea how to fix this?
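One possible direction (a sketch only, not a confirmed fix; the sequence length of 64 is illustrative and `CHECKPOINT_PATH` is the same path defined above) is to wrap the loaded model with explicit Keras `Input` layers so that `.inputs`/`.outputs` are defined before conversion:

```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

base_model = TFGPT2LMHeadModel.from_pretrained(CHECKPOINT_PATH, from_pt=True)

# Explicit Keras Input layers give the wrapper model defined .inputs/.outputs
input_ids = tf.keras.layers.Input((64,), batch_size=1, dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.layers.Input((64,), batch_size=1, dtype=tf.int32, name="attention_mask")
outputs = base_model({"input_ids": input_ids, "attention_mask": attention_mask})
wrapper = tf.keras.models.Model(
    inputs={"input_ids": input_ids, "attention_mask": attention_mask}, outputs=outputs
)

print(wrapper.inputs)   # no longer None
print(wrapper.outputs)

converter = tf.lite.TFLiteConverter.from_keras_model(wrapper)
converter.target_spec.supported_ops = [tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
```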
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9392/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9391/comments | https://api.github.com/repos/huggingface/transformers/issues/9391/events | https://github.com/huggingface/transformers/issues/9391 | 777,579,543 | MDU6SXNzdWU3Nzc1Nzk1NDM= | 9,391 | Similar usage of `past_key_values` in CausalLM and Seq2SeqLM | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hey @forest1988,\r\n\r\nIn order to fully use `BartDecoder` separately from `BartModel` as a `BartForCausalLM` model, we're still waiting on this PR: #9128.\r\nAnd again, you're very correct in your assessment that the behavior between `BartDecoder` and `GPT2` is not fully aligned here. IMO, we should change GPT2's cache format from a single tensor of `[2, batch_size, ...,]` to `tuple([batch_size, ...])` => If you're keen feel free to open a PR for it! Actually, we could make this a \"Good second issue\" here. \r\n\r\nSo to answer your question, no you should not contact the 2 tensors in `self_attn_past_key_value` to a single tensor, but we should rather change the code in GPT2 slightly to also have a tuple of 2 tensors instead of one tensor. \r\nIn GPT2, we create a new tensor at each iteration when using `use_cache` here: https://github.com/huggingface/transformers/blob/d944966b19a4d6860bddc7cdc1ba928ca8a0da91/src/transformers/models/gpt2/modeling_gpt2.py#L235 => this is a bit unnecessary IMO. When the inputs are getting longer allocating new memory for `key` and `value` can actually lead to a small slow-down IMO. If instead we would just use a tuple => `present = (key, value)` we would not allocate new memory. \r\n\r\nSo 1) As soon as #9128 is merged you can use `BartForCausalLM` the same way as `GPT2` without having to change anything.\r\n2) Let's see if someone is interested in tackling this \"inconsistency\" issue in GPT2. This \"First good issue\" should replace this line: https://github.com/huggingface/transformers/blob/d944966b19a4d6860bddc7cdc1ba928ca8a0da91/src/transformers/models/gpt2/modeling_gpt2.py#L235\r\nwith \r\n```python\r\npresent = (key.transpose(-2, -1), value))\r\n```\r\n(I think it should actually be that simple)\r\n",
"Hi @patrickvonplaten, \r\n\r\nThank you for answering this issue!\r\nI'm sorry I haven't checked the PR https://github.com/huggingface/transformers/pull/9128 before creating this issue. I'll check it!\r\n\r\nAnd, thanks for telling me your opinion about the need to change GPT2's cache format from a single tensor to a tuple of 2 tensors.\r\nI'd love to open a PR, but I'm afraid I don't have enough time now.\r\nI will work on it as soon as I find the time, but of course, if someone else who is interested in the same issue would like to work on it, I would appreciate that!\r\n\r\nI now understand that I should not contact the 2 tensors in `self_attn_past_key_value` to a single tensor, but rather the code in GPT2 should be changed.\r\n\r\nI'm looking forward to seeing the PR https://github.com/huggingface/transformers/pull/9128 is merged.\r\nAlso, I would like to think about how to avoid contacting the 2 tensors in `self_attn_past_key_value` for what I am currently working on.\r\n\r\nThank you so much!\r\n",
"Hi,\r\n\r\nI've started fixing the issue but failed in some tests.\r\n\r\n```\r\n========================================================================================= short test summary info ==========================================================================================\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_sample_generate - AttributeError: 'tuple' object has no attribute 'index_select'\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_generate - AttributeError: 'tuple' object has no attribute 'index_select'\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_gpt2_gradient_checkpointing - TypeError: CheckpointFunctionBackward.forward: expected Tensor or tuple of Tensor (got tuple) for return value 1\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_group_beam_search_generate - AttributeError: 'tuple' object has no attribute 'index_select'\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_parallel_beam_search - AttributeError: 'tuple' object has no attribute 'index_select'\r\nFAILED tests/test_trainer_distributed.py::TestTrainerDistributed::test_trainer - OSError: [Errno 12] Cannot allocate memory\r\n================================================================== 6 failed, 4626 passed, 834 skipped, 642 warnings in 1766.16s (0:29:26) ==================================================================\r\n```\r\n\r\nI think this may be related to the `index_select` used in [generation_utils.py](\r\nhttps://github.com/huggingface/transformers/blob/143289dcf759a663c03317e30167e89ee6d86588/src/transformers/generation_utils.py).\r\n\r\nI will continue to look into this when I have more time.",
"Hi @patrickvonplaten,\r\n\r\nNow that I have time, I'm thinking about how to fix this issue so that the testing part works well.\r\n\r\nI think where to modify in `generation_utils.py` is here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/generation_utils.py#L506-L517\r\n\r\n`past` is taken from `past_key_values` or other output variations in:\r\nhttps://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/generation_utils.py#L477-L489\r\n\r\nThen, it is treated like this:\r\nhttps://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/generation_utils.py#L1654-L1655\r\n\r\nCan I modify #L506-L517 so that `past` is replaced from `Tuple[torch.Tensor]` to `Tuple[Tuple[torch.Tensor]]`,\r\nor should I consider other output variations, `output.mem` and `outputs.past_buckets_states`?\r\n\r\nThank you.\r\n",
"Hey @forest1988,\r\n\r\nThanks for you in-detail code snippet. I think the easiest solution would be to open a PR showcasing the required changes :-) I think you're right `Tuple[torch.Tensor]` should indeed be `Tuple[Tuple[torch.Tensor]]`.\r\n\r\nWould you be interested in opening a PR so that we can add the fixes there? ",
"Hi @patrickvonplaten,\r\n\r\nThank you for your comment! \r\nAfter doing some additions to change `Tuple[torch.Tensor]` to `Tuple[Tuple[torch.Tensor]]`, I would like to open a PR and ask you all to add fixes.\r\nI'll open the PR in a few days!",
"I'm sorry to keep you waiting. I have opened a PR #9596 for this issue.\r\nI marked the PR as WIP because it has not yet been resolved.\r\nI will continue to look into this issue myself in the future and any advice would be greatly appreciated."
] | 1,609 | 1,611 | 1,611 | CONTRIBUTOR | null | # 🚀 Feature request
It seems GPT-2 and BartDecoder have different formats for `past_key_values`.
In GPT-2, `past_key_values` is explained as below:
(the explanation is from https://huggingface.co/transformers/model_doc/gpt2.html#gpt2model)
```
(parameters)
past_key_values (List[torch.FloatTensor] of length config.n_layers) – Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed.
(returns)
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
```
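As a concrete illustration of how this cache is normally consumed in GPT-2 for sequential decoding (a minimal sketch, not taken from the codebase):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("Hello, my dog", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(input_ids, use_cache=True)
    # per layer: a tensor of shape (2, batch_size, num_heads, seq_len, head_dim)
    past = out.past_key_values

    # Only the new token is fed in; the cached keys/values stand in for the prefix.
    next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    out = model(next_token, past_key_values=past, use_cache=True)
```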
In BartDecoder and its inner BartDecoderLayer, `past_key_values` is explained and treated as below:
(the explanation is from https://huggingface.co/transformers/model_doc/bart.html#bartmodel)
```
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers with each tuple having 2 tuples each of which has 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) –
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
```
`v4.1.1 modeling_bart`
https://github.com/huggingface/transformers/blob/v4.1.1/src/transformers/models/bart/modeling_bart.py
``` python
# in BartDecoder
for idx, decoder_layer in enumerate(self.layers):
# add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
if output_hidden_states:
all_hidden_states += (hidden_states,)
dropout_probability = random.uniform(0, 1)
if self.training and (dropout_probability < self.layerdrop):
continue
past_key_value = past_key_values[idx] if past_key_values is not None else None
hidden_states, layer_self_attn, present_key_value, layer_cross_attn = decoder_layer(
hidden_states,
attention_mask=combined_attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
past_key_value=past_key_value,
output_attentions=output_attentions,
)
```
``` python
# in BartDecoderLayer
# Self Attention
# decoder uni-directional self-attention cached key/values tuple is at positions 1,2
self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
# add present self-attn cache to positions 1,2 of present_key_value tuple
hidden_states, self_attn_weights, present_key_value = self.self_attn(
hidden_states=hidden_states,
past_key_value=self_attn_past_key_value,
attention_mask=attention_mask,
output_attentions=output_attentions,
)
hidden_states = F.dropout(hidden_states, p=self.dropout, training=self.training)
hidden_states = residual + hidden_states
if not self.normalize_before:
hidden_states = self.self_attn_layer_norm(hidden_states)
# Cross-Attention Block
cross_attn_present_key_value = None
cross_attn_weights = None
if encoder_hidden_states is not None:
residual = hidden_states
if self.normalize_before:
hidden_states = self.encoder_attn_layer_norm(hidden_states)
# cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple
cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
hidden_states=hidden_states,
key_value_states=encoder_hidden_states,
attention_mask=encoder_attention_mask,
past_key_value=cross_attn_past_key_value,
output_attentions=output_attentions,
)
hidden_states = F.dropout(hidden_states, p=self.dropout, training=self.training)
hidden_states = residual + hidden_states
if not self.normalize_before:
hidden_states = self.encoder_attn_layer_norm(hidden_states)
# add cross-attn to positions 3,4 of present_key_value tuple
present_key_value = present_key_value + cross_attn_present_key_value
```
## Motivation
It seems that one of the aims of the refactoring of Bart by @patrickvonplaten https://github.com/huggingface/transformers/pull/8900 is "Allow to use BartEncoder and BartDecoder separately from the BartModel".
I appreciate this very much and would love to treat `BartDecoder` the same way as `gpt2`, but I feel that the difference in how `past_key_values` is handled is a barrier.
In `gpt2`, each `past_key_value` in `past_key_values` is a `torch.Tensor` of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head).
However, in `Bart`, each `past_key_value` in `past_key_values` is a `Tuple[torch.Tensor]`, and the `self_attn` part is not a single tensor but 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head).
If we want to handle `self_attn_past_key_value` in `Bart` like the one in `gpt2`, is concatenating the 2 tensors in `past_key_value` the right way?
Or is there another correct way to handle it?
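To make the question concrete, the naive conversion I have in mind would be something like this (purely illustrative; the dummy shapes just follow the documentation quoted above):

```python
import torch

batch_size, num_heads, seq_len, head_dim = 1, 12, 7, 64
key = torch.randn(batch_size, num_heads, seq_len, head_dim)
value = torch.randn(batch_size, num_heads, seq_len, head_dim)

# Bart-style self-attention cache: a tuple of 2 tensors
self_attn_past_key_value = (key, value)

# Naive GPT-2-style cache: a single stacked tensor of shape
# (2, batch_size, num_heads, seq_len, head_dim)
gpt2_style = torch.stack(self_attn_past_key_value, dim=0)
```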
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9391/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9390/comments | https://api.github.com/repos/huggingface/transformers/issues/9390/events | https://github.com/huggingface/transformers/pull/9390 | 777,579,109 | MDExOlB1bGxSZXF1ZXN0NTQ3ODM2MzM1 | 9,390 | [trainer] self.model_wrapped + _model_unwrap | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This should be good now, @sgugger - thanks a lot for all the suggestions!",
"@LysandreJik GitHub Reviewers is down, so tagging you instead."
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR adds:
* [x] adds `self.model_wrapped` - to have access to the outermost module regardless of how many times it was wrapped (e.g. under DeepSpeed there is a double wrapping `DDP(Deepspeed(Transformers Model))`)
* [x] makes sure that `self.model` is always set to the normal model
* [x] fixes a bug where under `model_parallel` `self.model` was not set (twice)!
* [x] simplifies the `model_init` checking logic
* [x] replaces `_actual_model`, which couldn't handle multiple wrapping levels, with `_model_unwrap`, which can, and integrates it (a minimal sketch follows below)
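A minimal sketch of what such a recursive unwrapping helper could look like (illustrative only; the actual implementation in this PR may differ):

```python
import torch.nn as nn

def _model_unwrap(model: nn.Module) -> nn.Module:
    # Wrappers such as DistributedDataParallel (and the DeepSpeed engine)
    # expose the wrapped model as `.module`; peel them off recursively.
    if hasattr(model, "module"):
        return _model_unwrap(model.module)
    return model
```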
Please ignore the small mentions of DeepSpeed; this PR is split off from https://github.com/huggingface/transformers/pull/9211 to get all the non-DeepSpeed-related changes into a separate review and make things a bit easier on the reviewers, as suggested by @sgugger. This PR was made by copying from the other PR and manually removing all the added DeepSpeed code.
If possible, let's get it in ASAP so that I can rebase and we can move on with the DeepSpeed PR. Thank you very much!
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9390/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9390",
"html_url": "https://github.com/huggingface/transformers/pull/9390",
"diff_url": "https://github.com/huggingface/transformers/pull/9390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9390.patch",
"merged_at": 1609933812000
} |
https://api.github.com/repos/huggingface/transformers/issues/9389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9389/comments | https://api.github.com/repos/huggingface/transformers/issues/9389/events | https://github.com/huggingface/transformers/pull/9389 | 777,578,223 | MDExOlB1bGxSZXF1ZXN0NTQ3ODM1Njgx | 9,389 | [trainer] self.model_wrapped + _model_unwrap | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR adds:
* [x] self.wrapped
https://github.com/huggingface/transformers/pull/9211
in progress | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9389",
"html_url": "https://github.com/huggingface/transformers/pull/9389",
"diff_url": "https://github.com/huggingface/transformers/pull/9389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9389.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9388/comments | https://api.github.com/repos/huggingface/transformers/issues/9388/events | https://github.com/huggingface/transformers/issues/9388 | 777,564,229 | MDU6SXNzdWU3Nzc1NjQyMjk= | 9,388 | Conditional Generation using input_embeds instead of input_ids | {
"login": "frankgandiao",
"id": 31948011,
"node_id": "MDQ6VXNlcjMxOTQ4MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/31948011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankgandiao",
"html_url": "https://github.com/frankgandiao",
"followers_url": "https://api.github.com/users/frankgandiao/followers",
"following_url": "https://api.github.com/users/frankgandiao/following{/other_user}",
"gists_url": "https://api.github.com/users/frankgandiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankgandiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankgandiao/subscriptions",
"organizations_url": "https://api.github.com/users/frankgandiao/orgs",
"repos_url": "https://api.github.com/users/frankgandiao/repos",
"events_url": "https://api.github.com/users/frankgandiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankgandiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @frankgandiao,\r\n\r\nNote that Encoder-Decoder models usually require both `input_ids` and `decoder_input_ids`. Bart is special in a sense that it can automatically create the `decoder_input_ids` from the `input_ids` if you **don't** provide the `decoder_input_ids`. However, the model is not able to automatically create the `decoder_inputs_embeds` from the `inputs_embeds` if you provide only the `inputs_embeds` => to solve your problem you should provide the `decoder_inputs_embeds` as well. What you could do is the following: \r\n\r\n```python\r\nfrom transformers.models.mbart.modeling_mbart.py import shift_tokens_right\r\n\r\ninput_ids = tokenizer(text, return_tensors='pt')['input_ids']\r\ndecoder_input_ids = shift_tokens_right(input_ids, tokenizer.pad_token_id)\r\n\r\ninputs_embeds = model.get_input_embeddings()(input_ids).squeeze()\r\ndecoder_inputs_embeds = model.get_input_embeddings()(decoder_input_ids).squeeze()\r\n\r\nmodel(inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds)\r\n```",
"Hey @patrickvonplaten, \r\n\r\nThanks for your reply! That makes sense and the problem is resolved!\r\n"
] | 1,609 | 1,609 | 1,609 | NONE | null | Hi @patrickvonplaten!
When using `inputs_embeds` instead of `input_ids` as input to the BartForConditionalGeneration model, I am not able to get a result. Could you please take a look? The same code works with GPT2 in place of Bart. Thanks!
Here is the script
```
import torch
from transformers import BartForConditionalGeneration, BartTokenizer
model_path = "facebook/bart-large"
model = BartForConditionalGeneration.from_pretrained(model_path, output_hidden_states=True)
tokenizer = BartTokenizer.from_pretrained(model_path)
text = "I disapprove of what you <mask> , but"
input_ids = tokenizer.encode_plus(text, return_tensors='pt')['input_ids']
with torch.no_grad():
x = model.get_input_embeddings()(input_ids).squeeze()
model(inputs_embeds = x)
```
Here is the Traceback I got
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-21-e17be5532b10> in <module>()
18 x = model.get_input_embeddings()(input_ids).squeeze()
19
---> 20 model(inputs_embeds = x)
21
4 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1244 output_attentions=output_attentions,
1245 output_hidden_states=output_hidden_states,
-> 1246 return_dict=return_dict,
1247 )
1248 lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1079 # -> is this used for backward compatibility
1080 if decoder_input_ids is None and decoder_inputs_embeds is None:
-> 1081 decoder_input_ids = shift_tokens_right(input_ids, self.config.pad_token_id)
1082
1083 output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
/usr/local/lib/python3.6/dist-packages/transformers/models/bart/modeling_bart.py in shift_tokens_right(input_ids, pad_token_id)
67 Shift input ids one token to the right, and wrap the last non pad token (usually <eos>).
68 """
---> 69 prev_output_tokens = input_ids.clone()
70
71 assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined."
AttributeError: 'NoneType' object has no attribute 'clone'
```
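For reference, a sketch of a possible workaround — providing the decoder inputs explicitly instead of relying on their automatic creation from `input_ids` (illustrative only; the helper import path may differ between versions):

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer
from transformers.models.bart.modeling_bart import shift_tokens_right

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")

input_ids = tokenizer("I disapprove of what you <mask> , but", return_tensors="pt").input_ids
decoder_input_ids = shift_tokens_right(input_ids, model.config.pad_token_id)

with torch.no_grad():
    inputs_embeds = model.get_input_embeddings()(input_ids)
    decoder_inputs_embeds = model.get_input_embeddings()(decoder_input_ids)
    outputs = model(inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds)
```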
- `transformers` version: 4.1.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9388/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9387/comments | https://api.github.com/repos/huggingface/transformers/issues/9387/events | https://github.com/huggingface/transformers/issues/9387 | 777,540,791 | MDU6SXNzdWU3Nzc1NDA3OTE= | 9,387 | Where is the impact when output_attentions=True? | {
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If `output_attentions=True` memory consumption should increase significantly (for large `sequence_length`) since we now store all attentions of size (`batch_size`, `num_heas`, `sequence_length`, `sequence_length`). This is less significant in training since the stored activations for training consume most RAM anyways. Speed should not be really affected by this.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | Is there any impact regarding performance (training/fine-tuning time, GPU memory, batch size, etc.) when `output_attentions=True`?
```python
self.bert_encoder = BertModel.from_pretrained(
hparams.architecture, # "bert-base-uncased"
output_attentions=True)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9386/comments | https://api.github.com/repos/huggingface/transformers/issues/9386/events | https://github.com/huggingface/transformers/pull/9386 | 777,522,803 | MDExOlB1bGxSZXF1ZXN0NTQ3Nzk2NjYw | 9,386 | replace apex.normalization.FusedLayerNorm with torch.nn.LayerNorm | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging since it's blocking #9343 ."
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR proposes to drop `apex.normalization.FusedLayerNorm` in favor of faster `torch.nn.LayerNorm`.
1. For performance and background details please see the discussions in https://github.com/huggingface/transformers/issues/9377
2. It's also needed for https://github.com/huggingface/transformers/pull/9384 since `apex.normalization.FusedLayerNorm` corrupts data under model parallel https://github.com/NVIDIA/apex/issues/1022
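For illustration, here is a minimal hedged sketch of the kind of swap being proposed (not the exact diff from this PR; `hidden_size` is an assumed placeholder):

```python
import torch
from torch.nn import LayerNorm  # used instead of apex.normalization.FusedLayerNorm

hidden_size = 1024  # placeholder value, not tied to any particular checkpoint
layer_norm = LayerNorm(hidden_size, eps=1e-5)

hidden_states = torch.randn(2, 8, hidden_size)  # (batch, seq_len, hidden)
normalized = layer_norm(hidden_states)  # same math, native PyTorch kernel
print(normalized.shape)  # torch.Size([2, 8, 1024])
```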
Fixes: #9377
@LysandreJik, @sgugger, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9386/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9386",
"html_url": "https://github.com/huggingface/transformers/pull/9386",
"diff_url": "https://github.com/huggingface/transformers/pull/9386.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9386.patch",
"merged_at": 1609783209000
} |
https://api.github.com/repos/huggingface/transformers/issues/9385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9385/comments | https://api.github.com/repos/huggingface/transformers/issues/9385/events | https://github.com/huggingface/transformers/pull/9385 | 777,519,130 | MDExOlB1bGxSZXF1ZXN0NTQ3Nzk0MTI1 | 9,385 | [logging] autoflush | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR proposes to:
* auto-flush `transformers` logging
When logging is used to trace signals from different parts of the code, possibly mixed with print debugging, auto-flushing helps keep all the logging events synchronized.
I don't think this change will introduce any performance impacts.
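For context, here is a hedged sketch (not the actual change in this PR) of the kind of manual flushing that is otherwise needed to keep `print` debug output and logger output in order; auto-flushing removes the need for it:

```python
import logging
import sys

handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("[%(funcName)s] %(message)s"))
logger = logging.getLogger("demo")  # stand-in for the transformers logger
logger.addHandler(handler)
logger.setLevel(logging.INFO)

print("print: about to run the model", flush=True)  # explicit flush keeps ordering
logger.info("log: inside the model")
handler.flush()  # with auto-flush this manual call becomes unnecessary
```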
If it helps someone, here is the code I used to sync `transformers` logging with various other debug prints.
I was porting Bart to MP and needed to verify that the device switching happens correctly, so I added a bunch of `logger.info` calls inside `modeling_bart.py` and also had some other helpers `print` debug messages which weren't logger-based:
```
# auto flush std streams
from sys import stdout, stderr
def stdout_write_flush(args, w=stdout.write): w(args); stdout.flush()
def stderr_write_flush(args, w=stderr.write): w(args); stderr.flush()
stdout.write = stdout_write_flush
stderr.write = stderr_write_flush
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
import logging
import transformers.utils.logging
import transformers.models.bart.modeling_bart
# I wanted a shorter simpler format
handlers = transformers.utils.logging._get_library_root_logger().handlers
for handler in handlers:
formatter = logging.Formatter("[%(funcName)s] %(message)s")
handler.setFormatter(formatter)
transformers.models.bart.modeling_bart.logger.setLevel(transformers.logging.INFO)
# then all the model creation and generate() goes next
```
@LysandreJik, @sgugger, @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9385/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9385",
"html_url": "https://github.com/huggingface/transformers/pull/9385",
"diff_url": "https://github.com/huggingface/transformers/pull/9385.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9385.patch",
"merged_at": 1609837077000
} |
https://api.github.com/repos/huggingface/transformers/issues/9384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9384/comments | https://api.github.com/repos/huggingface/transformers/issues/9384/events | https://github.com/huggingface/transformers/pull/9384 | 777,517,730 | MDExOlB1bGxSZXF1ZXN0NTQ3NzkzMjU0 | 9,384 | [model parallelism] Bart goes parallel | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"That looks great! Model parallelism would be very nice for Bart. We should coordinate here a bit with all the open PRs. I'm also more or less done with the big \"split-bart-into-separate-models\" PR: https://github.com/huggingface/transformers/pull/9343.\r\nThink the merge conflicts can become a bit painful here :D. \r\n\r\nI'd propose the following:\r\n-Merge: #9347, #9386 (they should be pretty trivial to merge)\r\n-Rebase and Merge the big Bart refactor (https://github.com/huggingface/transformers/pull/9343)\r\n-Discuss/Merge the \"new\" model parallel design: #9316 and #9323\r\n-Rebase and Discuss/Merge this PR",
"Is this PR ready for review? There's a lot of comments that were probably here for debugging purposes. Let me know if you want a review or if we should come back to it after #9347 and #9386 have been merged.",
"It's very ready, functionality/concept-wise. It's not ready 100% commentary, debug traces, etc. but that's very unimportant until the rest is sorted out, since there are multiple overlapping PRs happening.\r\n\r\nBecause of the holidays there is a lot of new code which is all inter-dependent and unreviewed and then there is a huge change to merge of #9343.\r\n\r\nSo I think it's the best to review it as it is - sort things out and then once everybody is happy with the logic, and #9343 I will most likely have to do a new PR anyway.\r\n\r\nBut I need your feedback that what I did is correct.\r\n\r\nThink of it as an ongoing code design and generalization PR.\r\n\r\nThanks. ",
"@patrickvonplaten, your plan works for me. Please ping me when #9343 is in and you're happy with the outcome so that it'll be safe to add MP w/o creating conflicts. Thank you.\r\n\r\nBut as I commented above this blocking event doesn't have to interefere with this PR's review - we are just not going to merge it, but it should proceed as normal and I will take all the agreed upon changes to the new PR once the dust around Bart split settled down. \r\n",
"@stas00 @LysandreJik @patrickvonplaten This PR introduces a `device_map` that is not backwards compatible with 4.1.0. We have to do that at some point (as @stas00 discovered), but let's not have three different versions. We really need to make sure that we have consensus on the final form of the `device_map` that will work for all models going forward or we will have to change it again when model parallelization is generalized and some of its functionality is placed in `PreTrainedModel`. Have you tested this on `gpt2`, @stas00 and is the code generalizable to models that don't have decoder architectures and can store their attention blocks in attributes like `self.h`?\r\n\r\nHas everyone read [this comment](https://github.com/huggingface/transformers/pull/9323#issuecomment-753518885)? Are we all on board for the plan to generalize model parallelism? Don't have to implement it now, but we need to make sure we've thought through any changes that affect user experience and backward compatibility.\r\n\r\nSorry, I'm in the middle of moving so not keeping close track of all the traffic and could easily have missed something. Also, this content is spread across several PRs, so sometimes I'm getting confused.",
"@alexorona, I'm basically leaving all the old code in place, so that gpt2 works as is and t5 as is, so this PR only impacts Bart. And in any case it doesn't look like this PR will be merged since Bart went through a split https://github.com/huggingface/transformers/pull/9343, which isn't finalized yet and I will need to re-do it anyway. But it's no problem, since I know what to do. And see the end of this comment - the whole MP implementation might need to be redesigned altogether.\r\n\r\nSince there are so many moving parts, it's very difficult to manage things and definitely makes things difficult for reviewers. \r\n\r\nSo my intention was to merge each of the new things separately, while keeping the old code working and then to start integrating things in bit. The holidays made things pile up, but since the HF team is back I trust in the next few days we will form a plan.\r\n\r\nImportant notes:\r\n\r\n1. MP is new here and should be clearly marked as an experimental feature. Which means device maps are not fixed and can change at any moment. https://github.com/huggingface/transformers/pull/9412\r\n\r\n What we could commit to is having the default device map work - i.e users don't supply any device map and then it just works.\r\n\r\n That's why I propose we start with each model implementing its own device map format (while sharing bits with common code where possible) and then over time we will find a common format.\r\n\r\n If the HF team wants to allocate time then we need to sit down, look at all the models and decide on the format ahead of time. If I'm not mistaken it looks like at the moment it's just @alexorona and I that mostly understand what's going on, so it'd be great to have someone from HF to get on top of MP. I'd be happy to sit down with that person and explain what I learned in the last few weeks in person. It's not complicated.\r\n\r\n2. As it was just pointed out https://github.com/pytorch/pytorch/issues/49961#issuecomment-754342632 this implementation is highly inefficient since it doesn't take advantage of the idle gpus, so we might have to scratch a big part of it and re-implement it using PP or something similar. The current implementation just uses extra gpus to expand available memory, but doesn't take advantage of the extra hardware.\r\n\r\nUntil then we have deepspeed integration [almost ready](https://github.com/huggingface/transformers/pull/9211) and `sharded_ddp` should be available in the next few days, so users will have excellent ways to fit huge transformers models on limited hardware already. So let's not rush with MP here and think.\r\n\r\n ",
"From what I understand, model parallelism as it's currently implemented is a naive implementation of what it's supposed to do: offer more memory so that bigger models may be trained using memory of several devices instead of a single device. It is indeed inefficient as devices as idle while others compute, so there's definitely a way of making it more efficient.\r\n\r\n@stas00, @alexorona, if you could walk us through what you have learned so that @patrickvonplaten, @sgugger and myself can understand the different options available, that would be great.\r\n\r\nSome thoughts to structure our approach towards MP:\r\n\r\n- You mention pipeline parallelism (PP) as a way to be more efficient than model parallelism (MP), as the idle devices can be used while other compute. This intuitively seems like an algorithm to set up during training, do you think we would have to modify the models themselves like what is currently done with model parallelism?\r\n- As noted by @sgugger and approved by @patrickvonplaten and myself, working on the MP API of the current models (GPT-2 and T5) is a better test-bed than trying to make it work for all models all at once. Most models are similar, and finding a generic approach (if possible!) should be feasible with just these two models for now.\r\n- You're right that we should not rush it, and take our time to understand what we can do best for both inference and training.",
"@LysandreJik No, it's not a naïve implementation of model parallelism. In addition to **data parallelism** and **model parallelism**, there is **pipeline parallelism**, which is the next level of complexity along with **zero redundancy**. Model parallelism allows us to train bigger models on GPU. Pipeline parallelism + model parallelism would allow us to train these large models faster because the GPUs are not idle. I really think the next step is to make sure model parallelism is generalized and rely on a library -- probably deepspeed -- to implement pipeline parallelism and zero redundancy. deepspeed has something called **3D parallelism**, which I believe is a form of pipeline parallelism. @stas00 is that correct?\r\n\r\nFrom my understanding, deepspeed has three major enhancements:\r\n\r\n- 3D parallelism\r\n- zero-redundancy that reduces the GPU memory footprint of a given module\r\n- some support for clusters, but I'm hazy on the details\r\n\r\n**Practical feature implications:** We can currently train t5-11b -- I believe the largest model in the library -- in a practical and affordable amount of time on the newest cloud instances. There are three benefits to pursuing pipeline parallelism and zero redundancy libraries:\r\n\r\n- Users could train large models faster\r\n- Users could train large models on more modest hardware\r\n- We would be prepared for the eventual release of even larger models in the 20 billion and potentially up to 100 billion parameter range",
"Some notes following up to the raised issues:\r\n\r\n- I need to study and experiment before I'm able to answer a lot of the questions you have been asking. For example one of the important questions @alexorona asks is whether the idling GPUs can be utilized to a high capacity by integrating other libraries like deepspeed. I will be able to answer that once I try that.\r\n\r\n- The \"naive\" part @LysandreJik referred to is that, say, you spread the model over 8 gpus - 7 gpus will be idling most of the time, so it'd a terribly expensive training as you would be paying per gpu and not per its utilization. So while the current solution works there must be a more efficient ways to do that. One much more complex solution suggested here: https://github.com/pytorch/pytorch/issues/49961#issuecomment-754306157 is with the RPC mechanism. Again, I don't have any experience with it, so I will eventually get to try it and comment back.\r\n\r\n- DeepSpeed's solution to huge model size is ZeRO - while it says it can support models implementing MP, it says it's not needed since we have a working solution (100B param model w/o needing MP) and my experiments showed that with sharded DDP on my weird hardware setup I can fit 3x more data, and with DeepSpeed 3-5x, and that's just with some default config.\r\n\r\n- We are on the same page wrt to making things working on a few models - t5, gpt2 and bart is ready too. Note that Bart is a better candidate than t5 because it can be asymmetrical wrt encoder/decoder-size - so it's slighly more complex (but not by much). We were discussing a specific issue of `device_map` design, which requires us to look at all models. But that's where it can stop.\r\n\r\nMy plan is to finish the DeepSpeed integration - almost there and then look into Pipelines next.\r\n\r\nOf course, nobody needs to wait for me, I'd be just as happy for others to experiment and teach me instead ;)\r\n\r\nI commented on the current design so that the HF team better understand what we have here:\r\nhttps://github.com/huggingface/transformers/issues/8771#issuecomment-755113545\r\nLet's keep the design discussion focused in one thread, otherwise we are all over multiple threads... doesn't matter which - just pick one... If you have questions or need for clarifications please don't hesitate to ask.\r\n",
"I rebased on https://github.com/huggingface/transformers/pull/9343 so now it's no longer possible to develop anything on Bart - the check fails because it wants all copy-cats to be the same:\r\n```\r\npython utils/check_copies.py\r\nTraceback (most recent call last):\r\n File \"utils/check_copies.py\", line 305, in <module>\r\n check_copies(args.fix_and_overwrite)\r\n File \"utils/check_copies.py\", line 166, in check_copies\r\n raise Exception(\r\nException: Found the following copy inconsistencies:\r\n- src/transformers/models/pegasus/modeling_pegasus.py: copy does not match models.bart.modeling_bart.BartAttention at line 141\r\n- src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartAttention at line 140\r\n- src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 275\r\n- src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 331\r\n- src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartAttention at line 124\r\n- src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 259\r\n- src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 315\r\n- src/transformers/models/mbart/modeling_mbart.py: copy does not match models.bart.modeling_bart.BartAttention at line 133\r\n- src/transformers/models/blenderbot/modeling_blenderbot.py: copy does not match models.bart.modeling_bart.BartAttention at line 126\r\nRun `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them.\r\nmake: *** [Makefile:25: extra_quality_checks] Error 1\r\n```\r\n\r\nHow do I move forward with my work then? I suppose the only way to proceed is to drop Bart and use one of the derivatives? So Bart isn't going MP...\r\n\r\n@patrickvonplaten, @sgugger ",
"That's also why we should pause the BART PR for MP and make sure the general API is solid enough. Any change in BART will impact all related models (that was true before the split, since the other models were subclasses) so the same PR will need to do BART/Pegasus/mBART/marian etc. And probably the ses2seq template. So better make sure we're happy with the design on a model independent from the others like GPT-2 or T5 first :-) ",
"> I rebased on #9343 so now it's no longer possible to develop anything on Bart - the check fails because it wants all copy-cats to be the same:\r\n> \r\n> ```\r\n> python utils/check_copies.py\r\n> Traceback (most recent call last):\r\n> File \"utils/check_copies.py\", line 305, in <module>\r\n> check_copies(args.fix_and_overwrite)\r\n> File \"utils/check_copies.py\", line 166, in check_copies\r\n> raise Exception(\r\n> Exception: Found the following copy inconsistencies:\r\n> - src/transformers/models/pegasus/modeling_pegasus.py: copy does not match models.bart.modeling_bart.BartAttention at line 141\r\n> - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartAttention at line 140\r\n> - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 275\r\n> - src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 331\r\n> - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartAttention at line 124\r\n> - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 259\r\n> - src/transformers/models/blenderbot_small/modeling_blenderbot_small.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 315\r\n> - src/transformers/models/mbart/modeling_mbart.py: copy does not match models.bart.modeling_bart.BartAttention at line 133\r\n> - src/transformers/models/blenderbot/modeling_blenderbot.py: copy does not match models.bart.modeling_bart.BartAttention at line 126\r\n> Run `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them.\r\n> make: *** [Makefile:25: extra_quality_checks] Error 1\r\n> ```\r\n> \r\n> How do I move forward with my work then? I suppose the only way to proceed is to drop Bart and use one of the derivatives? So Bart isn't going MP...\r\n> \r\n> @patrickvonplaten, @sgugger\r\n\r\nI agree with @sgugger that it would be better to just work on TF and GPT2 until we have a solid API for now...But in general the idea is to implement the feature in Bart and then run `make fix-copies` and all other models are updated automatically. In case you add a lot of code to Bart (outside of `BartAttention`) it can very well be that this code has to be manually copied inside the other models as well (happy to help then :-) )",
"And big sorry for making this PR so much harder for you now! But that Bart split had to happen sooner or later",
"> And big sorry for making this PR so much harder for you now! But that Bart split had to happen sooner or later\r\n\r\nSurprisingly, the rebasing was super-simple. So it wasn't a hurdle at all.",
"1. Bart and t5 aren't exactly the same, so in order to generalize a variety of models is needed. \r\n2. And this PR is much further ahead than t5, albeit I can spend more time merging it back into t5.\r\n\r\nIf I switch to one of the original subclasses, say, MBart, and work with it instead - will the copy-checker complain just the same?",
"> If I switch to one of the original subclasses, say, MBart, and work with it instead - will the copy-checker complain just the same?\r\n\r\nI'm afraid so, unless you remove all `# Copied from` comments, but that defeats the purpose.",
"Understood. thank you!\r\n\r\nIt sounds like this change will make future development of the bart family somewhat painful. Since the developer will have to constantly sync multiple files with their new development and it won't help the reviewers since now there will be multiple duplicated diffs.\r\n\r\nIt'd be much more useful to run the check/sync periodically or at will, rather than enforcing them on each `make style`, IMO. I guess time will tell.",
"Thinking more about the situation - the thing is - this PR works - I put a ton of work into it - users can start using MP with the Bart family yesterday, e.g. with `--model_parallel` flag in trainer - we don't have to expose the unstable device map and using the internal default device map is sufficient for most simple uses. And if we change to a different more efficient implementation down the road - it'd be totally transparent to the users. And if it's not trainer, they can just use `model.parallelize()` without the device map, or use the device map but know it may change down the road.\r\n\r\nI'd just need to enable `self.is_parallelizable` that was just added and clean up a bit.\r\n\r\nBut it's your call.",
"> e.g. with --model_parallel flag in trainer\r\n\r\nThat's one of the thing to clean up: this flag is not necessary with the current API: we can detect if a model is parallelized and avoid a confusion with the name. I'm not saying we should throw this PR in the thrash, just that it should be paused until we have had time to do all clean up we want.",
">> e.g. with --model_parallel flag in trainer\r\n>\r\n> That's one of the thing to clean up: this flag is not necessary with the current API: we can detect if a model is parallelized and avoid a confusion with the name. \r\n\r\nDo tell more? Are you planning to launch MP just because a model supports it? It sounds that you are considering dropping the `--model_parallel` cl arg in trainer\r\n\r\nOr are we talking about different things?\r\n\r\n> I'm not saying we should throw this PR in the thrash, just that it should be paused until we have had time to do all clean up we want.\r\n\r\ntldr; \r\n1. **I'm fine with hitting the pause button as you suggested.**\r\n2. this is a fully functional implementation - so **you actually can send users to this PR branch if they want to use MP with Bart** (as the family has been cast out after I rebased on master, it will require quite some work to re-add it to other Bart-like models).\r\n\r\nthe full story:\r\n\r\nThe issue is simple. Is that things are complicated. This PR overlaps with https://github.com/huggingface/transformers/pull/9323 - both have multiple changes and improvements, and I have already documented and commented on each one of the changes in both PRs, well actually 3 PRs (this one too https://github.com/huggingface/transformers/pull/9316), so leaving such code spread out over several PRs is a recipe for a huge mess down the road. It all came to be as I was working over the holidays and wasn't getting feedback (No complaints, I'm just explaining how it came to be.). As a result of it I was working on new changes but with Bart so that I could see how to generalize better. Not knowing what you'd decide I tried to leave the existing code without any API changes, hence the separate independent PRs.\r\n\r\nThe bottom line is this. Regardless of whether the current implementation is efficient or not, it works. And any future more efficient implementation will use the same API on the user-side (or perhaps something more complicated) - at the moment its just one command to turn the feature on. \r\n\r\nSo you actually can send users to this PR branch if they want to use MP with Bart-only.\r\n\r\nSo the other approach I can take is to merge parts of this PR into t5-mp PR https://github.com/huggingface/transformers/pull/9323, but it'll be again a lot of work and nobody has even looked at any of those PRs...\r\n\r\nBut then we are talking about perhaps finding a more efficient solution, and perhaps deepspeed will render a lot of it pointless anyway... (Alex thinks not.) So why waste reviewers' time... makes sense not to.\r\n\r\nSo yes, let's freeze this up and I go back to work on deepspeed.\r\n\r\nI have convinced myself it's the right thing to do and you got to hear my inner talk.\r\n\r\nJust remember it's totally functional in case someone needs it.\r\n\r\nThank you for reading.\r\n\r\n",
"As t5 MP is broken in the trainer, I needed to see if it was the same with my Bart MP port - but it works:\r\n\r\n```\r\nrm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 2 --n_val 2 --n_test 2 --do_predict --task summarization --data_dir xsum --model_parallel \r\n```\r\n\r\nSo with this PR you **can** use `--model_parallel` automatically with out trainer scripts with Bart models. ",
"As I was trying to see if I can find a way to utilize the idling GPUs, I run these benchmarks - haven't found anything useful yet, but the interesting finding is that while we get a huge performance hit with evaluation and beam size > 1, actually the training time is faster than non-MP version, despite all the data copying \r\n\r\nThis PR beats master on training time almost by half 8.6sec vs 15.8 sec, but of course it has 2 gpus vs 1 gpus!!! But it beats even the DDP solution 10.6sec by 20%!\r\n\r\nSo perhaps there is something good in here we just need to understand why is it faster than DDP.\r\n\r\nUnfortunately I have an uneven GPUs setup, so it's hard to get very useful benchmarks. Perhaps someone with 2 identical GPUs could re-run these and report back.\r\n\r\nFor posterity here are the results I'm getting with 1x 8gb and 1x 24gb gpus:\r\n\r\n```\r\n# w/o MP w/o DDP\r\n\r\n\r\nrm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 200 --n_val 200 --task summarization --data_dir xsum\r\n\r\n2021-01-10 16:57:43 | INFO | __main__ | train_runtime = 15.8407\r\n2021-01-10 16:58:02 | INFO | __main__ | val_runtime = 19.0772\r\n\r\n# w/o MP w/ DDP\r\n\r\n\r\nrm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 200 --n_val 200 --task summarization --data_dir xsum\r\n\r\n2021-01-10 16:58:42 | INFO | __main__ | train_runtime = 10.6299\r\n2021-01-10 16:58:53 | INFO | __main__ | val_runtime = 11.4454\r\n\r\n# w/ MP w/o DDP\r\n\r\nrm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 200 --n_val 200 --model_parallel --task summarization --data_dir xsum\r\n\r\n2021-01-10 16:49:00 | INFO | __main__ | train_runtime = 8.6264\r\n2021-01-10 16:51:14 | INFO | 
__main__ | val_runtime = 134.0955\r\n\r\nruntime is very slow due to beam search (==4).\r\n\r\nsame w/ --eval_beams 1\r\n\r\n2021-01-10 16:56:10 | INFO | __main__ | train_runtime = 8.657\r\n2021-01-10 16:56:41 | INFO | __main__ | val_runtime = 31.4318\r\n\r\n\r\n# w/ MP w/ DDP\r\n\r\nrm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/distilbart-xsum-6-6 --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 200 --n_val 200 --model_parallel --task summarization --data_dir xsum\r\n\r\nthis doesn't work: can't mix this implementation of MP w/ DDP\r\n\r\nAssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device GPU modules, but got device_ids [0], output_device 0, and module parameters {device(type='cuda', index=0), device(type='cuda', index=1)}.\r\n``` ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"too long. closing.",
"Hello, @stas00 is there any update on BART based model parallelism? also about model.parallelize() for BlenderBot? Thanks. ",
"This line of work has been abandoned as it's highly inefficient. Please use DeeepSpeed which works with any model https://huggingface.co/docs/transformers/main/main_classes/deepspeed"
] | 1,609 | 1,654 | 1,622 | CONTRIBUTOR | null | This PR implements model parallelism (MP) in Bart.
This is the latest incarnation of the generalization of MP in `transformers`, based on @alexorona's original work. I have done some of it already in https://github.com/huggingface/transformers/pull/9323 and this PR builds upon that one. The order in which things get merged is slightly complicated, but this PR is independent and can be merged on its own.
For reviewers I propose to read things in this order:
1. https://github.com/huggingface/transformers/pull/9316
2. https://github.com/huggingface/transformers/pull/9323
3. this PR
4. Additional important design discussions https://github.com/huggingface/transformers/issues/8771
If all is in agreement, I propose:
1. ☐ merging this PR first,
2. ☐ then I'll backport the new code from this PR to https://github.com/huggingface/transformers/pull/9323 and we merge that.
3. ☐ then we handle gpt2, which I haven't touched yet. Perhaps @alexorona could help there if his time permits, or one of us will.
4. ☐ complete Bart's other heads (can be item 3) and `deparallelize` - the latter is not really needed in practice, so we will handle those once the dust around the design settles.
5. ☐ add Bart to trainer's supported for `--model_parallel` flags
6. ☐ write tests for `model_parallel_utils.py`
7. ☐ meanwhile we can polish the concept of device maps which will require a review of all architectures `transformers` has implemented.
Actually first we need to merge smaller bits:
1. https://github.com/huggingface/transformers/pull/9347
2. https://github.com/huggingface/transformers/pull/9386
---------
So this PR:
* [x] Implements MP in Bart based on discussions in all of the threads/PRs listed above. Only `BartForConditionalGeneration` at the moment while we are sorting out the API. But the bulk of the work is done, since `BartModel` has everything in place.
* [x] switches to the concept of `main_device` rather than `(first|last)_device` so the first device of encoder becomes the main_device and almost everything happens there (`embeddings`, `lm_head`, etc), and other devices are used exclusively for encoder and decoder purposes.
* [x] switches to a more explicit `device_map` that can support non-symmetrical models (different number of layers in encoder and decoder). It can also handle different types of maps. See the demo at the end this post for details.
* [x] further improves the magical `to()` functions that can operate on any type of variable except opaque objects. They can be used to put the inputs on the correct devices either automatically via a `forward` decorator or explicitly inside `forward` (see the sketch after this list). We could use either or both.
* [x] adds a bunch of debug functions that make it easy to trace device IDs of variables, params and whole layers.
* [x] further improves the device map validation function
* [x] improves tests
* [x] needs to remove apex.normalization.FusedLayerNorm as it's buggy under MP (corrupts data) per https://github.com/huggingface/transformers/issues/9377 - a dedicated removal PR is https://github.com/huggingface/transformers/pull/9386
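As referenced in the checklist above, here is a simplified, hedged sketch of what such a recursive device-mover can look like (an illustration only, not the exact helper added in this PR):

```python
import torch

def to_device(obj, device):
    """Recursively move any tensors inside dicts/lists/tuples to `device`."""
    if torch.is_tensor(obj):
        return obj.to(device)
    if isinstance(obj, dict):
        return {k: to_device(v, device) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_device(v, device) for v in obj)
    return obj  # opaque objects are returned unchanged

# hypothetical usage: move a whole batch to the encoder's first device
batch = {"input_ids": torch.ones(2, 8, dtype=torch.long), "labels": [torch.zeros(2, 8)]}
batch = to_device(batch, "cpu")  # e.g. "cuda:0" in a real MP setup
```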
Here is a quick demo (you will need 2 gpus to run it):
```
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
#mname = "sshleifer/tinier_bart"
mname = "sshleifer/distilbart-xsum-6-6"
model = BartForConditionalGeneration.from_pretrained(mname)
tokenizer = BartTokenizer.from_pretrained(mname)
sentences = ["I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder."]
inputs = tokenizer(sentences, max_length=1024, return_tensors='pt', truncation="longest_first")
device_maps_flat = {
"sshleifer/tinier_bart": {
"encoder": {0: [0, 1] },
"decoder": {1: [0] },
},
"sshleifer/distilbart-xsum-6-6": {
"encoder": {0: [0, 1, 2, 3, 4, 5] },
"decoder": {1: [0, 1, 2, 3, 4, 5] },
},
}
device_maps_split = {
"sshleifer/tinier_bart": {
"encoder": {0: [0],
1: [1],
},
"decoder": {1: [0] },
},
"sshleifer/distilbart-xsum-6-6": {
"encoder": {0: [0, 1, 2],
1: [3, 4, 5],
},
"decoder": {0: [0, 1, 2],
1: [3, 4, 5],
},
},
}
# 3 different ways (2 different device maps and 1 autogenerated device map)
model.parallelize() # autogenerated
#model.parallelize(device_maps_flat[mname])
#model.parallelize(device_maps_split[mname])
inputs = inputs.to("cuda:0")
# Generate Summary
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=25, early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
# prints: [" I'm sitting in a room where I'm waiting for something to happen."]
```
You can see from the demo that when calling `model.parallelize` you can skip the `device_map` arg altogether and the model will generate the right one. Or you can provide one that:
1. gives some gpus exclusively to encoder and others to decoder
2. splits the model horizontally so that the encoder uses all gpus and so does the decoder
In either case, the model transparently handles all the remappings.
Note, the user still needs to put the data on the `main_device`, so perhaps that will eventually stop being hardcoded via:
```
# inputs = inputs.to("cuda:0")
inputs = inputs.to(model.main_device)
```
As we have been discussing elsewhere, the device map format is not yet stable. So I propose we document it as unstable, but users can rely on the autogenerated device map, which requires no input from the user (i.e. calling `model.parallelize()`) - if it changes, it will happen transparently for the user.
Also note that with Trainer-based scripts, like `finetune_trainer.py`, the user has no way to supply such a device map at the moment, so in effect the model generates the map on the fly, as in the paragraph above.
Fixes: #8344
@LysandreJik, @patrickvonplaten, @sgugger, @alexorona | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9384/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9384/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9384",
"html_url": "https://github.com/huggingface/transformers/pull/9384",
"diff_url": "https://github.com/huggingface/transformers/pull/9384.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9384.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9383/comments | https://api.github.com/repos/huggingface/transformers/issues/9383/events | https://github.com/huggingface/transformers/issues/9383 | 777,484,807 | MDU6SXNzdWU3Nzc0ODQ4MDc= | 9,383 | [Marian] Doc says `config.add_bias_logits=True`, but config has `config.add_bias_logits=False` | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think what's going on is that `config.add_final_bias_logits` is unused.\r\n\r\nIn modeling_bart.py line 148 we call\r\n\r\n```python\r\nself.register_buffer(\"final_logits_bias\", torch.zeros((1, self.model.shared.num_embeddings)))\r\n```\r\nregardless of the config, and then if it's in the state dict it will get loaded by `from_pretrained`.\r\n\r\nI do think that `final_bias_logits` is in the marian state dict, as this line would have `KeyError`'d during conversion otherwise: https://github.com/sshleifer/transformers_fork/blob/121ec9dced3d068352078e7c3523ecd66830e39e/src/transformers/models/marian/convert_marian_to_pytorch.py#L461-L461 \r\n\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | MEMBER | null | **Question**:
In the docs, it is written that Marian (contrary to Bart) has `config.add_bias_logits=True`: https://huggingface.co/transformers/model_doc/marian.html#implementation-notes. But when looking into the code:
https://github.com/huggingface/transformers/blob/b01f451ca38695c60175b34d245997ef4d18231d/src/transformers/models/marian/configuration_marian.py#L25 Marian has the exact same default config as Bart and also Marian's config files online have `config.add_bias_logits=False` - see:
https://huggingface.co/Helsinki-NLP/opus-mt-en-de/resolve/main/config.json
@sshleifer @patil-suraj
Is the documentation not up-to-date anymore? Because all the slow tests are passing....
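For what it's worth, a quick way to double-check the shipped value (a small sketch; it just reads the linked config.json):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-en-de")
print(config.add_bias_logits)  # False, matching the online config.json
```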
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9382/comments | https://api.github.com/repos/huggingface/transformers/issues/9382/events | https://github.com/huggingface/transformers/pull/9382 | 777,475,473 | MDExOlB1bGxSZXF1ZXN0NTQ3NzY0NDY0 | 9,382 | [docs] Fix TF base model examples: outputs.last_hidden_states -> state | {
"login": "ck37",
"id": 50770,
"node_id": "MDQ6VXNlcjUwNzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/50770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ck37",
"html_url": "https://github.com/ck37",
"followers_url": "https://api.github.com/users/ck37/followers",
"following_url": "https://api.github.com/users/ck37/following{/other_user}",
"gists_url": "https://api.github.com/users/ck37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ck37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ck37/subscriptions",
"organizations_url": "https://api.github.com/users/ck37/orgs",
"repos_url": "https://api.github.com/users/ck37/repos",
"events_url": "https://api.github.com/users/ck37/events{/privacy}",
"received_events_url": "https://api.github.com/users/ck37/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in the examples of TensorFlow-based base models, in which the returned `last_hidden_state` attribute of the model output is incorrectly listed as "last_hidden_states".
Fixes #9376
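As a quick illustration of the corrected attribute name (a hedged sketch mirroring the doc examples, not part of the doc change itself):

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)

# singular "last_hidden_state", not "last_hidden_states"
last_hidden_state = outputs.last_hidden_state
```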
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@julien-c @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9382/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9382/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9382",
"html_url": "https://github.com/huggingface/transformers/pull/9382",
"diff_url": "https://github.com/huggingface/transformers/pull/9382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9382.patch",
"merged_at": 1609606697000
} |
https://api.github.com/repos/huggingface/transformers/issues/9381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9381/comments | https://api.github.com/repos/huggingface/transformers/issues/9381/events | https://github.com/huggingface/transformers/pull/9381 | 777,468,806 | MDExOlB1bGxSZXF1ZXN0NTQ3NzU5Nzc0 | 9,381 | [Docs] `past_key_values` return a tuple of tuple as a default | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR corrects the docs regarding `past_key_values`. `past_key_values` should always be of type `Tuple[...]`.
Fixes #9380
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9381/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9381/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9381",
"html_url": "https://github.com/huggingface/transformers/pull/9381",
"diff_url": "https://github.com/huggingface/transformers/pull/9381.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9381.patch",
"merged_at": 1609599307000
} |
https://api.github.com/repos/huggingface/transformers/issues/9380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9380/comments | https://api.github.com/repos/huggingface/transformers/issues/9380/events | https://github.com/huggingface/transformers/issues/9380 | 777,455,936 | MDU6SXNzdWU3Nzc0NTU5MzY= | 9,380 | BartModel's `past_key_values` seems to have different explanations in input_doc and output_doc | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/forest1988/subscriptions",
"organizations_url": "https://api.github.com/users/forest1988/orgs",
"repos_url": "https://api.github.com/users/forest1988/repos",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/forest1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @forest1988,\r\n\r\nThanks for your issue! You're 100% correct. The docs need to be updated here! The output is actually never a list, it should always be a `Tuple(Tuple(torch.FloatTensor))` - I'll make a PR afterward. \r\nAnd in Bart, `past_key_values` always consists of `selt_attn_present_key_value` and `cross_attn_present_key_value`.",
"Hi @patrickvonplaten,\r\n\r\nThank you for your quick response to this issue!\r\nThe update of the docs and your answer to my question -- what `past_key_values` consists of -- are very helpful for me!\r\n",
"Hi @patrickvonplaten,\r\n\r\nExcuse me for my frequent questions.\r\nI created a new issue https://github.com/huggingface/transformers/issues/9391, in which I ask your help about the `past_key_values` in Bart (Seq2SeqLM) and GPT-2 (CausalLM).\r\n\r\nI think it is not an error, but a feature request.\r\n\r\nIf you could check it out when you have time, it would be greatly appreciated."
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Bart: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises in [the document](https://huggingface.co/transformers/model_doc/bart.html) of BartModel and BartForConditionalGeneration
## To reproduce
Thank you for kindly answering my question https://github.com/huggingface/transformers/issues/9298.
I'm now trying to use Bart in transformers v4.1.1.
I'd like to make use of `past_key_values`, which seems to have been the major change of the refactoring https://github.com/huggingface/transformers/pull/8900,
but I am a bit confused about the type and shape of it.
About the input of the `forward` function, it is explained as:
```
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers with each tuple having 2 tuples each of which has 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) –
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
```
About the output, it is explained as:
```
past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) – List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)).
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding.
```
I think it would be natural if the input `past_key_values` and the output `past_key_values` had the same format, so that the output can be used as the input in the next step.
If my understanding is correct, the document of the input is generated with `BART_INPUTS_DOCSTRING`, and the output is from `Seq2SeqModelOutput`.
```
@add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING)
@add_code_sample_docstrings(
tokenizer_class=_TOKENIZER_FOR_DOC,
checkpoint="facebook/bart-large",
output_type=Seq2SeqModelOutput,
config_class=_CONFIG_FOR_DOC,
```
I'm sorry if I'm wrong, but maybe the `Seq2SeqModelOutput` documentation hasn't been updated for the refactoring?
(When I look at the [git log](https://github.com/huggingface/transformers/commits/88ef8893cd649cc2b4adb9885aba88c750118cff/src/transformers/modeling_outputs.py), I cannot find the related commit.)
I apologize if the difference in input/output format is due to some intention.
If you don't mind, I'd like to ask one more question.
In the refactoring of Bart, the `BartDecoderLayer` (renamed from `DecoderLayer`) seems to be updated as below:
``` python
# make sure decoder uni-directional self-attn at 1st position and cross-attn at 2nd position.
present_key_value = (self_attn_present_key_value, cross_attn_present_key_value)
return (
hidden_states,
self_attn_weights,
present_key_value,
cross_attn_weights,
)
```
And in the `BartDecoder`, cache is updated as below:
``` python
if use_cache:
next_decoder_cache += (present_key_value,)
...
next_cache = next_decoder_cache if use_cache else None
```
Does it mean the Bart (and other Seq2Seq Language Models) have both `self_attn_present_key_value` and `cross_attn_present_key_value` in `past_key_values`?
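To make the question concrete, here is a minimal sketch (assuming the `facebook/bart-large` checkpoint) that just inspects what the forward pass actually returns for `past_key_values`, rather than relying on the docstring:
``` python
import torch
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartModel.from_pretrained("facebook/bart-large")

inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, use_cache=True)

past = outputs.past_key_values
print(type(past), len(past))  # one entry per decoder layer
print(type(past[0]))          # each entry bundles the cached self-attn and cross-attn key/value states
```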
## Expected behavior
Maybe the document of `Seq2SeqModelOutput` needs to be updated.
I apologize if the difference in the input/output explanations is due to some intention. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9379/comments | https://api.github.com/repos/huggingface/transformers/issues/9379/events | https://github.com/huggingface/transformers/pull/9379 | 777,454,243 | MDExOlB1bGxSZXF1ZXN0NTQ3NzQ5Nzcx | 9,379 | Improve documentation coverage for Bertweet | {
"login": "Qbiwan",
"id": 69753975,
"node_id": "MDQ6VXNlcjY5NzUzOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/69753975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qbiwan",
"html_url": "https://github.com/Qbiwan",
"followers_url": "https://api.github.com/users/Qbiwan/followers",
"following_url": "https://api.github.com/users/Qbiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Qbiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qbiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qbiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Qbiwan/orgs",
"repos_url": "https://api.github.com/users/Qbiwan/repos",
"events_url": "https://api.github.com/users/Qbiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qbiwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @Qbiwan!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9035
@sgugger added docs for Bertweet
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9379/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9379",
"html_url": "https://github.com/huggingface/transformers/pull/9379",
"diff_url": "https://github.com/huggingface/transformers/pull/9379.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9379.patch",
"merged_at": 1609783979000
} |
https://api.github.com/repos/huggingface/transformers/issues/9378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9378/comments | https://api.github.com/repos/huggingface/transformers/issues/9378/events | https://github.com/huggingface/transformers/pull/9378 | 777,454,030 | MDExOlB1bGxSZXF1ZXN0NTQ3NzQ5NjE4 | 9,378 | [Docs] Tokenizer Squad 2.0 example | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you for the pr. \r\nPlease also update the documentation here: https://huggingface.co/transformers/custom_datasets.html#qa-squad\r\nLine-> end_positions[-1] = encodings.char_to_token(i, answers[i]['answer_end'] + 1)\r\nto -> end_positions[-1] = tokenizer.model_max_length",
"So what is the state of this issue? What version of processing script should we use?"
] | 1,609 | 1,633 | 1,609 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9326
This PR fixes the docs. I ran the following code from (https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0) to see whether the Squad tokenization works as expected.
Concatenated code from examples:
```python
#!/usr/bin/env python3
import json
from pathlib import Path
from transformers import DistilBertTokenizerFast
def read_squad(path):
path = Path(path)
with open(path, 'rb') as f:
squad_dict = json.load(f)
contexts = []
questions = []
answers = []
for group in squad_dict['data']:
for passage in group['paragraphs']:
context = passage['context']
for qa in passage['qas']:
question = qa['question']
for answer in qa['answers']:
contexts.append(context)
questions.append(question)
answers.append(answer)
return contexts, questions, answers
train_contexts, train_questions, train_answers = read_squad('train-v2.0.json')
val_contexts, val_questions, val_answers = read_squad('dev-v2.0.json')
def add_end_idx(answers, contexts):
for answer, context in zip(answers, contexts):
gold_text = answer['text']
start_idx = answer['answer_start']
end_idx = start_idx + len(gold_text)
# sometimes squad answers are off by a character or two – fix this
if context[start_idx:end_idx] == gold_text:
answer['answer_end'] = end_idx
elif context[start_idx-1:end_idx-1] == gold_text:
answer['answer_start'] = start_idx - 1
answer['answer_end'] = end_idx - 1 # When the gold label is off by one character
elif context[start_idx-2:end_idx-2] == gold_text:
answer['answer_start'] = start_idx - 2
answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters
add_end_idx(train_answers, train_contexts)
add_end_idx(val_answers, val_contexts)
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)
val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True)
def add_token_positions(encodings, answers):
start_positions = []
end_positions = []
for i in range(len(answers)):
start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))
# if None, the answer passage has been truncated
if start_positions[-1] is None:
start_positions[-1] = tokenizer.model_max_length
if end_positions[-1] is None:
end_positions[-1] = tokenizer.model_max_length
encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)
```
Then I checked that the tokenization is correct with this helper function for a couple of ids:
```python
def show_answer(idx):
print("Tokenized", tokenizer.decode(train_encodings['input_ids'][idx][train_encodings['start_positions'][idx]: train_encodings['end_positions'][idx]]))
print("Real", train_answers[idx]['text'])
```
It turns out that the tokenization was almost always incorrect:
1) The standard case should not be:
```python
encodings.char_to_token(i, answers[i]['answer_end'] - 1)
```
, but
```python
encodings.char_to_token(i, answers[i]['answer_end'])
```
2) It might happen that `char_to_token` points to a space character which has no corresponding token and is therefore `None`. In this case the character after the space should be used.
The fix proposed in the PR corrects this behavior.
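Put together, a hedged sketch of what `add_token_positions` looks like with the fix described above (it reuses the `tokenizer` and `encodings` defined earlier in the example; the truncation fallback mirrors the original snippet):
```python
def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end']))

        # if start position is None, the answer passage has been truncated
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length

        # if end position is None, char_to_token hit a space: take the character after it instead
        if end_positions[-1] is None:
            end_positions[-1] = encodings.char_to_token(i, answers[i]['answer_end'] + 1)

    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
```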
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9378/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9378",
"html_url": "https://github.com/huggingface/transformers/pull/9378",
"diff_url": "https://github.com/huggingface/transformers/pull/9378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9378.patch",
"merged_at": 1609777650000
} |
https://api.github.com/repos/huggingface/transformers/issues/9377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9377/comments | https://api.github.com/repos/huggingface/transformers/issues/9377/events | https://github.com/huggingface/transformers/issues/9377 | 777,413,099 | MDU6SXNzdWU3Nzc0MTMwOTk= | 9,377 | replacing apex.normalization.FusedLayerNorm with torch.nn.LayerNorm | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm good with changing to `torch.nn.LayerNorm`. At @stas00 - do you know what the advantage of `apex.normalization.FusedLayerNorm` is supposed to be? Why did we add `apex.normalization.FusedLayerNorm` in the first place?",
"Prior to about a year ago, `apex.normalization.FusedLayerNorm` was faster than `torch.nn.LayerNorm`, but then the former got ported to native `torch.nn.LayerNorm`, and now the native appears to be faster - at least the 2 cards I have experimented with. I checked with pt-1.4 .. pt-1.8.dev\r\n\r\nIf you have other than gtx-1070/rtx-3090 cards which I benchmarked with please run that benchmark and see if it stands true for other cards: https://github.com/pytorch/pytorch/issues/37713#issuecomment-753434842\r\nit only takes a few seconds if you have apex installed already. To install apex:\r\n\r\n```\r\ngit clone https://github.com/NVIDIA/apex\r\ncd apex\r\nrm -rf build\r\npip install --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" .\r\n```\r\n\r\nThe benchmark measures and reports a total run time, so the smaller the numbers the faster it is.\r\n\r\nIf you do run the benchmarks please post your results at https://github.com/pytorch/pytorch/issues/37713 so that it can be seen whether it's safe to drop `apex.normalization.FusedLayerNorm` based on hard data and not anecdotal info.\r\n\r\nThank you."
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | It seems that time has arrived to drop `apex.normalization.FusedLayerNorm` in favor of `torch.nn.LayerNorm`
1. the latter was ported more than a year ago from apex https://github.com/pytorch/pytorch/pull/27634 (around pt-1.4)
2. it's faster than the apex according to my benchmarks https://github.com/pytorch/pytorch/issues/37713#issuecomment-753434842 (**33% faster on rtx-3090!**, 10% faster on gtx-1070)
**but note:** this same benchmark run here https://github.com/pytorch/fairseq/issues/2012#issuecomment-622607286 on V100 reports the opposite - that the native is slower (pt-1.5). So it might help to run this very quick benchmark on other cards and compare. In particular if you have access to V100 please report back the findings at this thread: https://github.com/pytorch/pytorch/issues/37713
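For anyone who wants a quick sanity check before running the full benchmark, here is a rough timing sketch along the same lines (this is not the linked benchmark itself; it assumes apex is installed and a CUDA device is available):
```python
import time
import torch
from apex.normalization import FusedLayerNorm

hidden, batch, seq = 1024, 32, 512
x = torch.randn(batch, seq, hidden, device="cuda")

for name, norm in [("torch.nn.LayerNorm", torch.nn.LayerNorm(hidden).cuda()),
                   ("apex.FusedLayerNorm", FusedLayerNorm(hidden).cuda())]:
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(1000):
        out = norm(x)  # forward only; smaller total time = faster
    torch.cuda.synchronize()
    print(f"{name}: {time.time() - start:.3f}s")
```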
The main reason for this need is that `apex.normalization.FusedLayerNorm` is buggy (corrupts memory) when it comes to switching devices, which is done a lot under Model Parallel. https://github.com/NVIDIA/apex/issues/1022
With `apex.normalization.FusedLayerNorm`, things fail a lot under MP, and the workaround requires sticking `torch.cuda.set_device(id)` in many, many places :( Since this overload is used at the model's init time, it's not possible to avoid it under MP, as the latter only gets activated after the model's init.
I will use that workaround if it turns out that apex is still faster on some important-to-consider hardware. And, of course, in that case please report back to the pytorch team so that they can fix it. Otherwise, apex support is pretty much gone, and it's just a matter of time before apex becomes unusable.
The models that need that change are bart/fsmt/prophetnet.
@patrickvonplaten, @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9376/comments | https://api.github.com/repos/huggingface/transformers/issues/9376/events | https://github.com/huggingface/transformers/issues/9376 | 777,323,655 | MDU6SXNzdWU3NzczMjM2NTU= | 9,376 | [docs] TFRobertaModel example: last_hidden_states -> last_hidden_state | {
"login": "ck37",
"id": 50770,
"node_id": "MDQ6VXNlcjUwNzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/50770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ck37",
"html_url": "https://github.com/ck37",
"followers_url": "https://api.github.com/users/ck37/followers",
"following_url": "https://api.github.com/users/ck37/following{/other_user}",
"gists_url": "https://api.github.com/users/ck37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ck37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ck37/subscriptions",
"organizations_url": "https://api.github.com/users/ck37/orgs",
"repos_url": "https://api.github.com/users/ck37/repos",
"events_url": "https://api.github.com/users/ck37/events{/privacy}",
"received_events_url": "https://api.github.com/users/ck37/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"looks like this is what you're looking for: https://github.com/huggingface/transformers/blob/ae333d04b29a25be1a70eaccd6260c294c243c5b/src/transformers/file_utils.py#L842-L855",
"Hey @ck37,\r\n\r\nThanks for your issue! Yes, this typo should be corrected -> it would be great if you could open a PR :-) "
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This is a documentation error on the currently deployed version of https://huggingface.co/transformers/
### Who can help
examples/distillation: @VictorSanh
documentation: @sgugger
## Information
Model I am using: TFRoberta
The problem arises when using:
* [x] the official example scripts
## To reproduce
Steps to reproduce the behavior:
1. View the code example at https://huggingface.co/transformers/model_doc/roberta.html#tfrobertamodel
2. `last_hidden_states = outputs.last_hidden_states` should be `last_hidden_states = outputs.last_hidden_state`
The current incorrect spelling will yield an error. I apologize that I was not able to find that line in the repo, otherwise I would submit a PR.
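For reference, a minimal corrected snippet (assuming the `roberta-base` checkpoint):
```python
from transformers import RobertaTokenizer, TFRobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = TFRobertaModel.from_pretrained("roberta-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)

last_hidden_states = outputs.last_hidden_state  # note the singular "state"
```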
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9375/comments | https://api.github.com/repos/huggingface/transformers/issues/9375/events | https://github.com/huggingface/transformers/pull/9375 | 777,285,914 | MDExOlB1bGxSZXF1ZXN0NTQ3NjI4Njkx | 9,375 | Fix Typo | {
"login": "vanche",
"id": 10228650,
"node_id": "MDQ6VXNlcjEwMjI4NjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/10228650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vanche",
"html_url": "https://github.com/vanche",
"followers_url": "https://api.github.com/users/vanche/followers",
"following_url": "https://api.github.com/users/vanche/following{/other_user}",
"gists_url": "https://api.github.com/users/vanche/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vanche/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vanche/subscriptions",
"organizations_url": "https://api.github.com/users/vanche/orgs",
"repos_url": "https://api.github.com/users/vanche/repos",
"events_url": "https://api.github.com/users/vanche/events{/privacy}",
"received_events_url": "https://api.github.com/users/vanche/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
I fixed a typo in the comment.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9375/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9375",
"html_url": "https://github.com/huggingface/transformers/pull/9375",
"diff_url": "https://github.com/huggingface/transformers/pull/9375.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9375.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9374/comments | https://api.github.com/repos/huggingface/transformers/issues/9374/events | https://github.com/huggingface/transformers/issues/9374 | 777,281,096 | MDU6SXNzdWU3NzcyODEwOTY= | 9,374 | How do I handle class imbalance for text data when using pretrained models like BERT? | {
"login": "nikhil6041",
"id": 42090593,
"node_id": "MDQ6VXNlcjQyMDkwNTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/42090593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikhil6041",
"html_url": "https://github.com/nikhil6041",
"followers_url": "https://api.github.com/users/nikhil6041/followers",
"following_url": "https://api.github.com/users/nikhil6041/following{/other_user}",
"gists_url": "https://api.github.com/users/nikhil6041/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikhil6041/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikhil6041/subscriptions",
"organizations_url": "https://api.github.com/users/nikhil6041/orgs",
"repos_url": "https://api.github.com/users/nikhil6041/repos",
"events_url": "https://api.github.com/users/nikhil6041/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikhil6041/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi!\r\n\r\nYou could try replacing the CrossEntropy loss with [this Dice Loss](https://github.com/fursovia/self-adj-dice), which may help you with the imbalance issues. In the paper linked in the repo they explain the design process. I have tried it with mixed results, although for my (skewed) dataset, weighting the CrossEntropy loss with the inverse frequency of each category has worked best.\r\n\r\nLet me know if it works for you 👍🏻 \r\n\r\nAs a last resort, you could try undersampling category 1 to match the second, and maybe combine this with a weighted loss as well.",
"Thanks @viantirreau for your suggestions I actually tried to use the dice loss as well as class weights with crossentropy loss and the results I got from the crossentropyloss was actually better than what I am getting with the dice loss , however both of them fails to detect the categories 3-5 . I will try to do undersampling as the last resort however speaking of the class weights I have used the sklearn's compute_class_weight for getting my class_weights as follows:\r\n```\r\nfrom sklearn.utils.class_weight import compute_class_weight\r\n\r\n#compute the class weights\r\nclass_wts = compute_class_weight('balanced', np.unique(train_labels), train_labels)\r\n\r\n```\r\nCan you suggest any other workaround other than this strategy , I have came to know that neural networks tends to ignore class weights through an answer on one of the stackexchange sites . \r\n",
"Hi!\r\n\r\nI think it shouldn't make any difference, using my method returns exactly the same as Sklearn's `compute_class_weight`, but normalized so as to add up to 1. \r\n\r\nUsing these class counts,\r\n\r\n> Category 1 10000\r\n> Category 2 2000\r\n> Category 3 400\r\n> Category 4 300 \r\n> Category 5 100\r\n\r\n I get the following weights for the respective categories `array([0.00608519, 0.03042596, 0.15212982, 0.20283976, 0.60851927])`. \r\nAnother nice (and extreme) experiment you could try is to over-emphasize the weights for the underrepresented classes, for example using something like `array([0.001, 0.001, 0.001, 0.001, 0.996])`, just as a sanity check to confirm the optimizer learns something about category 5.\r\nI would also start testing the model's predictions on the training data first (it should overfit), and only then try to measure its generalization abilities on a held out development set. Maybe your gradients are not backpropagating to the first layers, your learning rate is way too big or you need some warmup steps.\r\n\r\nLet me know if any of this works :)\r\nGood luck!",
"Hi , @viantirreau Thanks for your suggestions. I did try it by reducing the class weights for majority classes and emphasizing the weights for minority class say category 5 , I found out that my neural network is still not able to learn anything about those classes , I have tried it with learning rates 1e-5,2e-5,5e-5 and warmup step of 1000 however no improvement is still being made on it . Any optimization strategies for the hyperparameters you can suggest?",
"You're welcome! \r\nMmh interesting, what Transformers model are you using? Also, from what pretrained checkpoint are you initializing it? Are you sure there are no warnings like 'missing parameters, initializing from scratch'?\r\n\r\nI have faced some vanishing gradient problems in the past that manifest as an unexplainable \"preference\" for a class, so I'd make sure that your gradients are alright. I find [this](https://gist.github.com/viantirreau/ec591a428a5c0112bd8fa84f70968574) code snippet pretty useful to diagnose the gradient flow by plotting its values across each layer. If you use Weights&Biases as a logging tool, you can `watch` the model and create even nicer plots in their dashboard. A warmup strategy was crucial in my experience to eliminate the gradient problems.\r\n\r\nAlso, if you are manually adjusting the attention masks or some of the model inputs, make sure to not pass ignore_index in some/all of the inputs. Some prints will help in making sure that the model inputs are as expected.\r\nAnother idea I'd test is to completely eliminate your categories 1 and 2 from the training examples, and see if the same phenomenon happens to the most common class by then (should be category 3). Try this alone and see if including the inverse frequency weights in the loss helps in any way.\r\n\r\nGood luck!\r\nGood luck!",
"Hi , @viantirreau sorry for the delay in response I haven't received any warnings as such . I am using the bert transformer with bert-base-multilingual-cased as the checkpoint , I was trying to first build a custom model from the final output layer of the BERT model in order to accomodate the class imbalance issue . I haven't tried the weights and biases yet will surely check it out. I will try your other suggestions and will let you know about it. Thanks for your suggestions.\r\n",
"Hi , @viantirreau sorry for the delay in response .I finally figured out the reason behind the performance degradation it was because I was freezing the base layers and only fine tuning one extra layer which I added to the base model. Since, the model had the data imbalance issue already into it ,it was being biased towards the majority samples . It however performed much better when I unfreezed the base layers , however on the cost of additional gpu training time.",
"Hi, @nikhil6041. I'm glad you figured it out. Thanks for reaching back with your experience and solution! 🙌🏻 ",
"I am actually having the same problem you experienced. I am building a multi-label multi-class classification Bert/distilbert model and encountered the same issue with my 20 classes. Of course the data is imbalanced, and like you I thought I had locked down the base layers but I realized I hadn't and that model performed slight better with the imbalanced data than the locked down model. I could not figure out why other than knowing imbalanced data is a big deal. Unfortunately, the data set I have is extremely small so that is also probably playing a big role. @viantirreau and @nikhil6041, one method I have seen used is a weighted cost function like adacost. Has anyone had any success implementing this with Distilbert? I can provide more details or open a new ticket but this seemed very closely related.",
"Hi @johnrodriguez190380 have you tried KL divergence loss function ? Try to use it once. Also there have been certain instances where the usage of weighted cost function doesn't help much. I don't remember the paper which pointed out this thing but I read it somewhere in a stackoverflow answer. If in case the dataset you are using is really small you can try some data augmentation techniques , you can also use this [](https://github.com/makcedward/nlpaug )repo it maybe helpful for u i guess. Let me know if anyone of this serves ur usecase.",
"Hi nikhil, how are you? maybe can you share your code/colab/repo to see how you solve the issue? ",
"Hi @nikhil6041 could you please share the script you wrote for changing the loss? I really appreciate it if you can share it with us!",
"Hey @sasu08 and @un-lock-me sorry for the late response I used the sadice loss for this one however it didnt solve my problem fully however there are certain other ways you can possibly try for this thing try to use text augmentations (there are various ways for it like using synonyms ,back translation to name a few , there is one library named nlpaug which might come in handy for both of you , have a look at it. For the sadice loss part you can have a look at my repo [](https://github.com/nikhil6041/OLI-and-Meme-Classification) , here is the link to the nlpaug library [](https://github.com/makcedward/nlpaug). Hope it helps!!",
"For the Sadie loss I could not find it in the repo could you please share the link here?",
"@un-lock-me Here it is [sadice loss] https://github.com/fursovia/self-adj-dice ",
"Hi @nikhil6041 can you please share your notebook? ",
"Hi @pratikchhapolika you can find all my notebooks in this [repo](https://github.com/nikhil6041/OLI-and-Meme-Classification)",
"@nikhil6041 thanks for the helpful repo. I was running the code on Google Colab and I used the provided dataset.\r\nHowever, I am getting this error in the Training Loop. \r\nTypeError: forward() got an unexpected keyword argument 'token_type_ids' \r\nWhat can be the issue? Thanks "
] | 1,609 | 1,698 | 1,615 | NONE | null | I have a skewed dataset consisting of samples of the form:
```
Category 1 10000
Category 2 2000
Category 3 400
Category 4 300
Category 5 100
```
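For concreteness, a hedged sketch of the class-weighted loss setup referenced below, using the counts from the table above (the weights follow the same inverse-frequency formula as sklearn's `compute_class_weight('balanced', ...)`; the logits and labels are stand-ins):
```python
import torch
from torch import nn

counts = torch.tensor([10000., 2000., 400., 300., 100.])
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency class weights

loss_fct = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(8, 5)              # stand-in for the model's classification logits
labels = torch.randint(0, 5, (8,))      # stand-in for a batch of gold labels
loss = loss_fct(logits, labels)
```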
The dataset consists of text labeled into one of the five categories. I am trying to use pretrained models like BERT for the classification task, but the model fails to identify categories 3-5. I have tried applying class weights in the loss criterion; it doesn't help much, although it gives better performance compared to simple fine-tuning of the pretrained models. I have come to know about SMOTE and other methods for handling class imbalance issues, but since most transformer models expect text inputs which are later tokenized by their respective tokenizers, I am not able to do any kind of oversampling. If there is a workaround for this, I would be interested to know about it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9374/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9374/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9373/comments | https://api.github.com/repos/huggingface/transformers/issues/9373/events | https://github.com/huggingface/transformers/issues/9373 | 777,275,417 | MDU6SXNzdWU3NzcyNzU0MTc= | 9,373 | how to evaluate models on SUPER_GLUE benchmark? | {
"login": "YoungTimmy",
"id": 39907234,
"node_id": "MDQ6VXNlcjM5OTA3MjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39907234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YoungTimmy",
"html_url": "https://github.com/YoungTimmy",
"followers_url": "https://api.github.com/users/YoungTimmy/followers",
"following_url": "https://api.github.com/users/YoungTimmy/following{/other_user}",
"gists_url": "https://api.github.com/users/YoungTimmy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YoungTimmy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YoungTimmy/subscriptions",
"organizations_url": "https://api.github.com/users/YoungTimmy/orgs",
"repos_url": "https://api.github.com/users/YoungTimmy/repos",
"events_url": "https://api.github.com/users/YoungTimmy/events{/privacy}",
"received_events_url": "https://api.github.com/users/YoungTimmy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | Hi, I am trying to evaluate models on the SUPER_GLUE benchmark.
However, I can load the SUPER_GLUE dataset from Transformers, but I can't find any metrics for this benchmark.
Is there any script like _**superglue_metrics.py**_ that can evaluate models on superglue?
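One possible approach I have been considering (a guess on my part — this assumes the `datasets` library ships a per-task `super_glue` metric, not a transformers-side script):
```python
from datasets import load_dataset, load_metric

task = "boolq"
dataset = load_dataset("super_glue", task)
metric = load_metric("super_glue", task)

# dummy predictions against the validation labels, just to show the call pattern
references = dataset["validation"]["label"]
predictions = [0] * len(references)
print(metric.compute(predictions=predictions, references=references))
```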
thanks a lot! :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9373/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9372/comments | https://api.github.com/repos/huggingface/transformers/issues/9372/events | https://github.com/huggingface/transformers/issues/9372 | 777,137,329 | MDU6SXNzdWU3NzcxMzczMjk= | 9,372 | Why does datasets get imported when running "from transformers.models.roberta.tokenization_roberta_fast import RobertaTokenizerFast" | {
"login": "vgoklani",
"id": 180487,
"node_id": "MDQ6VXNlcjE4MDQ4Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/180487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vgoklani",
"html_url": "https://github.com/vgoklani",
"followers_url": "https://api.github.com/users/vgoklani/followers",
"following_url": "https://api.github.com/users/vgoklani/following{/other_user}",
"gists_url": "https://api.github.com/users/vgoklani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vgoklani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vgoklani/subscriptions",
"organizations_url": "https://api.github.com/users/vgoklani/orgs",
"repos_url": "https://api.github.com/users/vgoklani/repos",
"events_url": "https://api.github.com/users/vgoklani/events{/privacy}",
"received_events_url": "https://api.github.com/users/vgoklani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"hi @vgoklani \r\n\r\n`datasets` is not required for tokenizers, so it's unlikely to get this error when just importing the tokenizer. Are you running any examples scripts? because those require `datasets` lib",
"Hi @patil-suraj Happy New Year!\r\n\r\nHere is the stack trace:\r\n\r\n root@b5d80f9670ea:~/src# ipython\r\n Python 3.8.5 (default, Sep 4 2020, 07:30:14)\r\n Type 'copyright', 'credits' or 'license' for more information\r\n IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\n In [1]: from transformers.models.roberta.tokenization_roberta_fast import RobertaTokenizerFast\r\n 2021-01-01 12:03:22.340215: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\r\n ---------------------------------------------------------------------------\r\n ImportWarning Traceback (most recent call last)\r\n <ipython-input-1-2758fed1e79c> in <module>\r\n ----> 1 from transformers.models.roberta.tokenization_roberta_fast import RobertaTokenizerFast\r\n\r\n /opt/conda/lib/python3.8/site-packages/transformers/__init__.py in <module>\r\n 32 absl.logging._warn_preinit_stderr = False\r\n 33\r\n ---> 34 from . import dependency_versions_check\r\n 35\r\n 36 # Configuration\r\n\r\n /opt/conda/lib/python3.8/site-packages/transformers/dependency_versions_check.py in <module>\r\n 32 if pkg == \"tokenizers\":\r\n 33 # must be loaded here, or else tqdm check may fail\r\n ---> 34 from .file_utils import is_tokenizers_available\r\n 35\r\n 36 if not is_tokenizers_available():\r\n\r\n /opt/conda/lib/python3.8/site-packages/transformers/file_utils.py in <module>\r\n 101\r\n 102 try:\r\n --> 103 import datasets # noqa: F401\r\n 104\r\n 105 # Check we're not importing a \"datasets\" directory somewhere\r\n\r\n /opt/conda/lib/python3.8/site-packages/datasets/__init__.py in <module>\r\n 51\r\n 52 if int(pyarrow.__version__.split(\".\")[1]) < 16 and int(pyarrow.__version__.split(\".\")[0]) == 0:\r\n ---> 53 raise ImportWarning(\r\n 54 \"To use `datasets`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition.\\n\"\r\n 55 \"If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.\"\r\n\r\n ImportWarning: To use `datasets`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition.\r\n If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.\r\n\r\n\r\n---\r\n\r\nAn older version of pyarrow was installed, but regardless, this happens immediately after the import. Upgrading pyarrow makes this warning disappear, but regardless, this shouldn't happen.\r\n",
"cc @sgugger ",
"This is because transformers imports all optional dependencies (like datasets) during its init. There will be some work to avoid doing that in the coming weeks.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | When running
from transformers.models.roberta.tokenization_roberta_fast import RobertaTokenizerFast
I get this warning:
ImportWarning: To use `datasets`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.
Why is datasets getting imported, when we import the tokenizer?
Thanks! :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9372/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9371/comments | https://api.github.com/repos/huggingface/transformers/issues/9371/events | https://github.com/huggingface/transformers/issues/9371 | 777,097,353 | MDU6SXNzdWU3NzcwOTczNTM= | 9,371 | Excessive GPU-GPU communication with GPT2 making multi-GPU training slow? | {
"login": "moyix",
"id": 34380,
"node_id": "MDQ6VXNlcjM0Mzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/34380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moyix",
"html_url": "https://github.com/moyix",
"followers_url": "https://api.github.com/users/moyix/followers",
"following_url": "https://api.github.com/users/moyix/following{/other_user}",
"gists_url": "https://api.github.com/users/moyix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moyix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moyix/subscriptions",
"organizations_url": "https://api.github.com/users/moyix/orgs",
"repos_url": "https://api.github.com/users/moyix/repos",
"events_url": "https://api.github.com/users/moyix/events{/privacy}",
"received_events_url": "https://api.github.com/users/moyix/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 2604155188,
"node_id": "MDU6TGFiZWwyNjA0MTU1MTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Benchmarks",
"name": "Benchmarks",
"color": "2DF372",
"default": false,
"description": "Issues related to Memory regressions in tests and scripts"
},
{
"id": 2690307185,
"node_id": "MDU6TGFiZWwyNjkwMzA3MTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Performance",
"name": "Performance",
"color": "207F32",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Not an answer to your issue/question, but have you tried running in distributed training (DDP), which is the recommended way of running over multiple GPUs: https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision\r\n\r\nWould be curious to see the same with/without NVLink experiment there.",
"Hmm, I don't have much experience using torch.distributed. I tried just running the existing script with `python -m torch.distributed.launch --nproc_per_node 2 train.py`, but it runs out of GPU memory almost immediately, so I assume I'm doing something wrong.\r\n\r\nIf you have a link to some documentation that explains how to set up the training script so that it can be used with torch.distributed, I can give that a try.",
"The command you posted \"should\" work.\r\n\r\n@sgugger might have links to better content when he's back, but the PyTorch tutorials are pretty good: https://pytorch.org/tutorials/beginner/dist_overview.html#data-parallel-training\r\n\r\nYour initial experiment is using `DataParallel` (not `DistributedDataParallel`) under the hood.",
"OK, I got around to spending some more time with this today. I realized that the `run_language_modeling.py` script can do everything my script was doing, and it uses DDP by default (Note: looking at the most recent version on git, I see that `run_language_modeling.py` has been replaced by `run_clm.py`. However, after trying to upgrade transformers to that version, it seems to no longer use the GPU for reasons I don't have time to debug.).\r\n\r\nSo now I'm just using that, with:\r\n\r\n```\r\npython -m torch.distributed.launch --nproc_per_node 2 \\\r\n ~/git/transformers/examples/language-modeling/run_language_modeling.py \\\r\n --model_type gpt2 \\\r\n --config_name ./csrc_config \\\r\n --tokenizer_name ./csrc_tokenizer \\\r\n --fp16 --fp16_opt_level O3 \\\r\n --do_train --output_dir csrc_output \\\r\n --per_device_train_batch_size 4 \\\r\n --train_data_file plainsrc_all.txt --block_size 128\r\n```\r\n\r\nFor single GPU I drop the `torch.distributed.launch` and use `CUDA_VISIBLE_DEVICES=1`, to disable NVLINK I use `NCCL_P2P_DISABLE=1` as before. The `--block_size 128` argument is to match the default from my training script (without it I run out of GPU RAM).\r\n\r\nResults:\r\n\r\nModel | Block Size | GPUs | NVLINK | ETA | Perf\r\n------|------------|------|--------|-----|-----\r\nSmall | 512 | 2GPU | No | 17:08:12 | 4.75it/s\r\nSmall | 512 | 2GPU | Yes | 10:24:20 | 7.79it/s\r\nSmall | 512 | 1GPU | N/A | 18:37:17 | 8.74it/s\r\nMedium | 512 | 2GPU | No | 43:07:49 | 1.89it/s\r\nMedium | 512 | 2GPU | Yes | 26:19:09 | 3.09it/s\r\nMedium | 512 | 1GPU | N/A | 45:36:37 | 3.57it/s\r\nSmall | 128 | 2GPU | No | 48:12:05 | 6.75it/s\r\nSmall | 128 | 2GPU | Yes | 21:26:31 | 15.17it/s\r\nSmall | 128 | 1GPU | N/A | 30:54:41 | 21.06it/s\r\nMedium | 128 | 2GPU | No | 118:43:09 | 2.74it/s\r\nMedium | 128 | 2GPU | Yes | 51:55:58 | 6.27it/s\r\nMedium | 128 | 1GPU | N/A | 74:02:16 | 8.79it/s\r\nLarge | 128 | 2GPU | No | 239:19:44 | 1.36it/s\r\nLarge | 128 | 2GPU | Yes | 102:17:18 | 3.18it/s\r\nLarge | 128 | 1GPU | N/A | 143:34:42 | 4.54it/s\r\n\r\nSo the general observation is that for block size 512, two GPUs without NVLink are about the same performance as a single GPU. For block size 128, two GPUs without NVLink are typically quite a bit *slower* than a single GPU.\r\n\r\nIt doesn't seem like DistributedDataParallel helps with this issue, in other words.\r\n",
"I think @sgugger has experience with multi-GPU, and works on the example scripts, pinging him!",
"A friend was linking me to this issue. Thank you for your work on this benchmark! It is some interesting data. I still believe the poor performance could be a hardware issue though.\r\n\r\nAs far as I know, RTX 3090 GPUs have peer-to-peer access disable, or in other words, you cannot transfer memory from GPU to GPU on these GPUs. All data is first routed through the CPU, which is often slow because the CPU buffers are not pinned, meaning that memory transfers are _synchronous_. So in my eyes, slow performance without NVLink is a hardware issue in this case. It would be curious, though, if these numbers would be similar for peer-to-peer enabled GPUs. Do you have access to such a GPU?",
"You're thinking of something like P2P over PCIe? You're right that NVIDIA has disabled that for the 3090s. The only other hardware I have access to is our HPC cluster, which has RTX8000s and V100s (non-NVLINKed); I believe both show similar slowdowns.\r\n\r\nOne thing I have been looking into is whether using something like DeepSpeed will help. I got their Megatron-LM example working and it does much better at scaling to two at least GPUs without NVLINK using the 1-bit Adam optimizer. I'm still waiting for my HPC job to get scheduled to confirm that it scales well there too. If that works then presumably something like what's being done for the t5-3b model here would help? https://github.com/huggingface/transformers/issues/8771",
"If you confirm you have the same results for the RTX 8000 that would rule out any GPU issue. It could still be a hardware issue with PCIe lanes. There is a bandwidth test I believe among the NVIDIA samples that come with CUDA with which you can test the available bandwidth to/from GPUs. If this shows good numbers it should be purely an issue of software or network architecture.",
"OK, I'll give this a try. Our HPC cluster is a bit busy so it may be a while before I can get a slot on the RTX 8000 nodes.",
"I managed to get some time on a node with 4x V100s. For the Large model, it gets 3.83s/it with an ETA of 1248:01:43 (!).\r\n\r\nHere's the output of p2pBandwidthLatencyTest on the V100 system:\r\n\r\n```\r\n[bd52@gv02 p2pBandwidthLatencyTest]$ ./p2pBandwidthLatencyTest \r\n[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]\r\nDevice: 0, Tesla V100-PCIE-32GB, pciBusID: 6, pciDeviceID: 0, pciDomainID:0\r\nDevice: 1, Tesla V100-PCIE-32GB, pciBusID: 2f, pciDeviceID: 0, pciDomainID:0\r\nDevice: 2, Tesla V100-PCIE-32GB, pciBusID: 86, pciDeviceID: 0, pciDomainID:0\r\nDevice: 3, Tesla V100-PCIE-32GB, pciBusID: d8, pciDeviceID: 0, pciDomainID:0\r\nDevice=0 CAN Access Peer Device=1\r\nDevice=0 CAN Access Peer Device=2\r\nDevice=0 CAN Access Peer Device=3\r\nDevice=1 CAN Access Peer Device=0\r\nDevice=1 CAN Access Peer Device=2\r\nDevice=1 CAN Access Peer Device=3\r\nDevice=2 CAN Access Peer Device=0\r\nDevice=2 CAN Access Peer Device=1\r\nDevice=2 CAN Access Peer Device=3\r\nDevice=3 CAN Access Peer Device=0\r\nDevice=3 CAN Access Peer Device=1\r\nDevice=3 CAN Access Peer Device=2\r\n\r\n***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.\r\nSo you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.\r\n\r\nP2P Connectivity Matrix\r\n D\\D 0 1 2 3\r\n 0 1 1 1 1\r\n 1 1 1 1 1\r\n 2 1 1 1 1\r\n 3 1 1 1 1\r\nUnidirectional P2P=Disabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 2 3 \r\n 0 768.57 11.42 11.52 11.53 \r\n 1 11.39 770.46 11.50 11.53 \r\n 2 11.42 11.43 771.22 11.45 \r\n 3 11.42 11.43 11.44 769.70 \r\nUnidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)\r\n D\\D 0 1 2 3 \r\n 0 767.06 9.93 9.68 9.49 \r\n 1 9.93 769.33 9.33 9.50 \r\n 2 9.87 9.35 769.70 10.05 \r\n 3 9.66 9.68 9.92 770.08 \r\nBidirectional P2P=Disabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 2 3 \r\n 0 771.22 15.98 16.04 16.16 \r\n 1 16.00 773.51 16.11 16.07 \r\n 2 15.90 15.99 772.75 15.83 \r\n 3 16.05 16.01 15.85 772.55 \r\nBidirectional P2P=Enabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 2 3 \r\n 0 770.84 18.72 18.41 18.07 \r\n 1 18.52 772.94 18.82 18.30 \r\n 2 18.41 18.16 771.80 19.13 \r\n 3 18.40 17.99 18.94 771.22 \r\nP2P=Disabled Latency Matrix (us)\r\n GPU 0 1 2 3 \r\n 0 1.89 14.77 14.42 14.59 \r\n 1 14.52 1.91 15.50 15.50 \r\n 2 15.53 15.42 1.87 14.44 \r\n 3 14.76 14.71 14.51 1.82 \r\n\r\n CPU 0 1 2 3 \r\n 0 2.52 8.33 8.61 8.55 \r\n 1 8.20 2.49 8.50 8.49 \r\n 2 8.30 8.29 2.61 8.69 \r\n 3 8.41 8.36 8.74 2.56 \r\nP2P=Enabled Latency (P2P Writes) Matrix (us)\r\n GPU 0 1 2 3 \r\n 0 1.86 1.60 1.65 1.64 \r\n 1 1.59 1.91 1.64 1.65 \r\n 2 1.65 1.63 1.88 1.58 \r\n 3 1.65 1.64 1.59 1.82 \r\n\r\n CPU 0 1 2 3 \r\n 0 2.51 2.05 2.02 2.02 \r\n 1 2.14 2.54 2.04 2.02 \r\n 2 2.28 2.18 2.61 2.18 \r\n 3 2.32 2.19 2.24 2.73 \r\n\r\nNOTE: The CUDA Samples are not meant for performance measurements. 
Results may vary when GPU Boost is enabled.\r\n```\r\n\r\nAnd for comparison, here's the dual 3090 w/NVLINK system:\r\n\r\n```\r\n[P2P (Peer-to-Peer) GPU Bandwidth Latency Test]\r\nDevice: 0, GeForce RTX 3090, pciBusID: 1, pciDeviceID: 0, pciDomainID:0\r\nDevice: 1, GeForce RTX 3090, pciBusID: 21, pciDeviceID: 0, pciDomainID:0\r\nDevice=0 CAN Access Peer Device=1\r\nDevice=1 CAN Access Peer Device=0\r\n\r\n***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure.\r\nSo you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases.\r\n\r\nP2P Connectivity Matrix\r\n D\\D 0 1\r\n 0 1 1\r\n 1 1 1\r\nUnidirectional P2P=Disabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 \r\n 0 831.56 11.25 \r\n 1 11.33 831.12 \r\nUnidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s)\r\n D\\D 0 1 \r\n 0 810.85 52.77 \r\n 1 52.85 832.89 \r\nBidirectional P2P=Disabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 \r\n 0 812.31 16.55 \r\n 1 16.75 838.03 \r\nBidirectional P2P=Enabled Bandwidth Matrix (GB/s)\r\n D\\D 0 1 \r\n 0 821.29 101.41 \r\n 1 101.80 835.34 \r\nP2P=Disabled Latency Matrix (us)\r\n GPU 0 1 \r\n 0 1.59 33.13 \r\n 1 20.55 1.48 \r\n\r\n CPU 0 1 \r\n 0 2.89 8.85 \r\n 1 8.81 2.85 \r\nP2P=Enabled Latency (P2P Writes) Matrix (us)\r\n GPU 0 1 \r\n 0 1.59 1.43 \r\n 1 1.40 1.47 \r\n\r\n CPU 0 1 \r\n 0 2.93 2.45 \r\n 1 2.39 2.90 \r\n```",
"Thank you - these data are very valuable! It also shows that no hardware problem exists. It seems you could confirm poor performance on the V100 which makes it very likely that you can also reproduce performance issues with the RTX 8000. With that, it seems the only option is that it is an issue with the combination of parallelism and network architecture. ",
"Great benchmarks! Thank you for sharing the data, @moyix \r\n\r\nDo you have the same benchmarks for V100s too - just one set is enough (1 vs 2).\r\n\r\nAlso, why are you running comparison benchmarks on such huge number of items? Running enough items so that runtime is around a few minutes should be plenty to see the difference. Or is it that you were aborting these early and just recording the projected ETA and it/s from tqdm? `e.g. --max_steps 1000`\r\n\r\nHere are some ideas that may address your issue\r\n\r\n1. If I understand things right 3090 won't work at full capacity until we get pytorch w/ cuda-11.2\r\nhttps://github.com/pytorch/pytorch/issues/50232\r\nI don't know the nuances yet, but could it be that the communication channel is limited with cuda-11.0?\r\n\r\n That's why I wanted to see the results from VT100\r\n\r\n2. In one place it was suggested to check how your GPUs are inter-connected with help of:\r\n ```\r\n nvidia-smi topo -m\r\n ```\r\n that's do this check with NVLink disconnected.\r\n\r\n3. Also are sure your both GPUs running on the same speed PCIx (e.g. 8x if it's a consumer MB)? It must be, but just checking. I suppose doing a single GPU test on the other GPU would show if it's somehow on a slow PCIx slot. But I'd just test to rule that out. Should you get a slower outcome doing the same test on the 2nd gpu would explain the situation.\r\n",
"OK, so here is my benchmark with the same tool.\r\n\r\n**edit**: my initial benchmark had a bug in it as pointed out by @sgugger as one has to tweak `--max_steps` if changed to more gpus - I'm proposing to change that and have a way to have a fixed dataset truncation regardless of the number of gpus used. https://github.com/huggingface/transformers/issues/9801\r\n\r\nSo for 1 gpu, I had to double `--max_steps` to get the same number of items. The rest of this comment has been updated to reflect the corrected state:\r\n\r\nHardware 2x TITAN RTX 24GB each + NVlink\r\n\r\n|type| time secs |\r\n|----|-----|\r\n| 1: | 204 |\r\n| 2:DP w/ NVlink| 110 |\r\n| 2:DDP w/ NVlink| 101 |\r\n| 2:DDP w/o NVlink | 131 |\r\n\r\nI get the same bus report w/ and w/o NCCL_P2P_DISABLE=1 - I don't think `nvidia-smi` respects this env var:\r\n\r\n```\r\nNCCL_P2P_DISABLE=1 nvidia-smi topo -m\r\n\r\n GPU0 GPU1 CPU Affinity NUMA Affinity\r\nGPU0 X NV2 0-23 N/A\r\nGPU1 NV2 X 0-23 N/A\r\n```\r\n\r\nbut clearly the runtime is much slower w/o the NVlink as the benchmark shows, so pytorch/cuda does respect it.\r\n\r\nAnalysis:\r\n\r\n1. DP is ~10% slower than DDP w/ NVlink, but ~15% faster than DDP w/o NVlink\r\n2. DDP w/ NVLink doubles the speed of single gpu, so the communication overheard is almost nill in this particular experiment \r\n\r\nHere is the full benchmark code and outputs:\r\n\r\n```\r\n# 1 gpu\r\n\r\nrm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0 python run_clm.py --model_name_or_path gpt2 \\\r\n--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir \\\r\n/tmp/test-clm --per_device_train_batch_size 4 --max_steps 400\r\n\r\n{'train_runtime': 204.8202, 'train_samples_per_second': 1.953, 'epoch': 0.69}\r\n\r\n# DP\r\n\r\nrm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python run_clm.py --model_name_or_path gpt2 \\\r\n--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir \\\r\n/tmp/test-clm --per_device_train_batch_size 4 --max_steps 200\r\n\r\n{'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69}\r\n\r\n# DDP\r\n\r\nrm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \\\r\nrun_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \\\r\n--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200\r\n\r\n{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}\r\n\r\n# DDP w/o NVlink\r\n\r\nrm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \\\r\n--nproc_per_node 2 run_clm.py --model_name_or_path gpt2 --dataset_name wikitext \\\r\n--dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm \\\r\n--per_device_train_batch_size 4 --max_steps 200\r\n\r\n{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}\r\n```",
"Yes, apologies for the confusion; the ETA numbers above are from aborting early (after a few minutes) and noting the ETA. I actually did compile PyTorch from source with CUDA 11.2 and it doesn't seem to have changed the results (although I don't know if there are further changes PyTorch will make to take full advantage of 11.2).\r\n\r\nYour benchmark code is much more self-contained than mine, so I will give your benchmarks a shot with the RTX8000 and V100 nodes on our cluster, but it will probably be a few days before I can get time there as the ICML deadline is very close :)\r\n\r\nHere's nvidia-smi -m topo for the 3090 machine:\r\n\r\n```\r\nnvidia-smi topo -m\r\n GPU0 GPU1 CPU Affinity NUMA Affinity\r\nGPU0 X NV4 0-31 N/A\r\nGPU1 NV4 X 0-31 N/A\r\n```",
"Note that the timing compare 200 training steps, so the numbers you reported wrong @stas00 in the sense that 2 GPUs have covered 400 samples instead of 200. Training on the full dataset would therefore go twice as fast as with one GPU.",
"This is correct - that my report was incorrect. Thank you for validating my concern in https://github.com/huggingface/transformers/issues/9801, @sgugger \r\n\r\nThat's why I'm asking for a less confusing way to truncate the dataset.\r\n\r\nI need to find an easy-way to do it so I don't have to be in full thinking capacity if I do it late at night which was the case last night.\r\n\r\nI will revisit my benchmark with corrections hopefully today. \r\n\r\nBut it doesn't change the fact that nvlink gives 30% faster performance.\r\n",
"> Yes, apologies for the confusion; the ETA numbers above are from aborting early (after a few minutes) and noting the ETA. \r\n\r\nThat's what I guessed - I am glad you didn't waste all that electricity to run these to completion! It was a smart move, since you waited a few minutes.\r\n\r\n> I actually did compile PyTorch from source with CUDA 11.2 and it doesn't seem to have changed the results (although I don't know if there are further changes PyTorch will make to take full advantage of 11.2).\r\n\r\nOh, thank you for validating that! \r\n\r\nBuilding pytorch from source is hard! Hat off to you!\r\n\r\nYes, we don't know whether everything has been put in place for 11.2 support. \r\n\r\n> Your benchmark code is much more self-contained than mine, so I will give your benchmarks a shot with the RTX8000 and V100 nodes on our cluster, but it will probably be a few days before I can get time there as the ICML deadline is very close :)\r\n\r\nplease note that I corrected a mistake in my benchmark as kindly pointed out by @sgugger:\r\nhttps://github.com/huggingface/transformers/issues/9371#issuecomment-767323420\r\n \r\n> Here's nvidia-smi -m topo for the 3090 machine:\r\n> \r\n> ```\r\n> nvidia-smi topo -m\r\n> GPU0 GPU1 CPU Affinity NUMA Affinity\r\n> GPU0 X NV4 0-31 N/A\r\n> GPU1 NV4 X 0-31 N/A\r\n> ```\r\n\r\nLooks very similar. Do you know what exactly:\r\n```\r\n NV# = Connection traversing a bonded set of # NVLinks\r\n```\r\nmeans? is NV4 better than NV2? since I get NV2. Why do you have 4? As I can see you only have 2 gpus.\r\n\r\n",
"According to [this table](https://docs.nvidia.com/datacenter/nvtags/0.1/nvtags-user-guide/index.html#supported-link-names) NV4 means \"Connection traversing a bonded set of 4 NVLinks\".\r\n\r\nThere are some more details in the [GA102 whitepaper](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf):\r\n\r\n> GA102 GPUs utilize NVIDIA’s third-generation NVLink interface, which includes four x4 links, with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink. ",
"Super! Thank you for that insight, @moyix!\r\n\r\nI started compiling performance/scalability notes here: https://github.com/huggingface/transformers/issues/9824\r\n\r\nI summarized the useful insights from this thread. If you get a chance to validate the GPU inter-connectivity section that would be great!\r\n\r\nAnd if you have other insights to contribute I'm all ears. If you don't have time/inspiration to write something complete even a stab would be great and then over time we will fill it out with details and benchmarks.\r\n\r\nThe idea is to discuss in-depth the different hardware/software nuances to speed up training and fit larger models.\r\n\r\nThank you!",
"Very nice, I will take a look at it!\r\n\r\nWhile I am waiting for HPC time, I ran your benchmark script on the 3090 system while varying two parameters: the model size (gpt2, gpt2-medium, and gpt2-large) and the block size (128, 256, 512).\r\n\r\nThe script:\r\n```\r\nfor MODEL in gpt2 gpt2-medium gpt2-large; do\r\n for BLOCK_SIZE in 128 256 512 ; do \r\n # Skip gpt2-large at block size 512 due to memory constraints\r\n if [ $MODEL = \"gpt2-large\" ] && [ $BLOCK_SIZE -eq 512 ] ; then continue ; fi\r\n # 1 gpu\r\n\r\n rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0 python run_clm.py --model_name_or_path $MODEL \\\r\n --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir \\\r\n /tmp/test-clm --per_device_train_batch_size 4 --max_steps 400 --block_size $BLOCK_SIZE 2>&1 > /tmp/clm_bench.log\r\n result=$(grep train_runtime /tmp/clm_bench.log)\r\n echo $MODEL $BLOCK_SIZE \"1GPU\" $result >> clm_bench_results.log\r\n\r\n # DP\r\n\r\n rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python run_clm.py --model_name_or_path $MODEL \\\r\n --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir \\\r\n /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 --block_size $BLOCK_SIZE 2>&1 > /tmp/clm_bench.log\r\n\r\n result=$(grep train_runtime /tmp/clm_bench.log)\r\n echo $MODEL $BLOCK_SIZE \"DP\" $result >> clm_bench_results.log\r\n\r\n # DDP\r\n\r\n rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node 2 \\\r\n run_clm.py --model_name_or_path $MODEL --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \\\r\n --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 --block_size $BLOCK_SIZE 2>&1 > /tmp/clm_bench.log\r\n\r\n result=$(grep train_runtime /tmp/clm_bench.log)\r\n echo $MODEL $BLOCK_SIZE \"DDP\" $result >> clm_bench_results.log\r\n\r\n # DDP w/o NVlink\r\n\r\n rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \\\r\n --nproc_per_node 2 run_clm.py --model_name_or_path $MODEL --dataset_name wikitext \\\r\n --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm \\\r\n --per_device_train_batch_size 4 --max_steps 200 --block_size $BLOCK_SIZE 2>&1 > /tmp/clm_bench.log\r\n\r\n result=$(grep train_runtime /tmp/clm_bench.log)\r\n echo $MODEL $BLOCK_SIZE \"DDP_no_NV\" $result >> clm_bench_results.log\r\n done\r\ndone\r\n```\r\n\r\nAnd the results:\r\n\r\n```\r\ngpt2 128 1GPU {'train_runtime': 19.5621, 'train_samples_per_second': 20.448, 'epoch': 0.09}\r\ngpt2 128 DP {'train_runtime': 16.6426, 'train_samples_per_second': 12.017, 'epoch': 0.09}\r\ngpt2 128 DDP {'train_runtime': 13.5368, 'train_samples_per_second': 14.775, 'epoch': 0.09}\r\ngpt2 128 DDP_no_NV {'train_runtime': 30.0181, 'train_samples_per_second': 6.663, 'epoch': 0.09}\r\ngpt2 256 1GPU {'train_runtime': 30.423, 'train_samples_per_second': 13.148, 'epoch': 0.17}\r\ngpt2 256 DP {'train_runtime': 22.6101, 'train_samples_per_second': 8.846, 'epoch': 0.17}\r\ngpt2 256 DDP {'train_runtime': 18.6943, 'train_samples_per_second': 10.698, 'epoch': 0.17}\r\ngpt2 256 DDP_no_NV {'train_runtime': 35.4208, 'train_samples_per_second': 5.646, 'epoch': 0.17}\r\ngpt2 512 1GPU {'train_runtime': 58.0856, 'train_samples_per_second': 6.886, 'epoch': 0.34}\r\ngpt2 512 DP {'train_runtime': 37.6376, 'train_samples_per_second': 5.314, 'epoch': 0.34}\r\ngpt2 512 DDP {'train_runtime': 32.3616, 'train_samples_per_second': 6.18, 'epoch': 0.34}\r\ngpt2 
512 DDP_no_NV {'train_runtime': 49.1999, 'train_samples_per_second': 4.065, 'epoch': 0.34}\r\ngpt2-medium 128 1GPU {'train_runtime': 49.3823, 'train_samples_per_second': 8.1, 'epoch': 0.09}\r\ngpt2-medium 128 DP {'train_runtime': 40.5947, 'train_samples_per_second': 4.927, 'epoch': 0.09}\r\ngpt2-medium 128 DDP {'train_runtime': 33.4365, 'train_samples_per_second': 5.981, 'epoch': 0.09}\r\ngpt2-medium 128 DDP_no_NV {'train_runtime': 74.9924, 'train_samples_per_second': 2.667, 'epoch': 0.09}\r\ngpt2-medium 256 1GPU {'train_runtime': 79.6724, 'train_samples_per_second': 5.021, 'epoch': 0.17}\r\ngpt2-medium 256 DP {'train_runtime': 56.0446, 'train_samples_per_second': 3.569, 'epoch': 0.17}\r\ngpt2-medium 256 DDP {'train_runtime': 47.7543, 'train_samples_per_second': 4.188, 'epoch': 0.17}\r\ngpt2-medium 256 DDP_no_NV {'train_runtime': 89.3616, 'train_samples_per_second': 2.238, 'epoch': 0.17}\r\ngpt2-medium 512 1GPU {'train_runtime': 152.6255, 'train_samples_per_second': 2.621, 'epoch': 0.34}\r\ngpt2-medium 512 DP {'train_runtime': 92.4563, 'train_samples_per_second': 2.163, 'epoch': 0.34}\r\ngpt2-medium 512 DDP {'train_runtime': 82.1935, 'train_samples_per_second': 2.433, 'epoch': 0.34}\r\ngpt2-medium 512 DDP_no_NV {'train_runtime': 124.1163, 'train_samples_per_second': 1.611, 'epoch': 0.34}\r\ngpt2-large 128 1GPU {'train_runtime': 98.5939, 'train_samples_per_second': 4.057, 'epoch': 0.09}\r\ngpt2-large 128 DP {'train_runtime': 79.2193, 'train_samples_per_second': 2.525, 'epoch': 0.09}\r\ngpt2-large 128 DDP {'train_runtime': 65.7918, 'train_samples_per_second': 3.04, 'epoch': 0.09}\r\ngpt2-large 128 DDP_no_NV {'train_runtime': 152.2178, 'train_samples_per_second': 1.314, 'epoch': 0.09}\r\ngpt2-large 256 1GPU {'train_runtime': 154.5437, 'train_samples_per_second': 2.588, 'epoch': 0.17}\r\ngpt2-large 256 DP {'train_runtime': 106.7075, 'train_samples_per_second': 1.874, 'epoch': 0.17}\r\ngpt2-large 256 DDP [out of memory]\r\ngpt2-large 256 DDP_no_NV [out of memory]\r\ngpt2-large 512 1GPU [out of memory]\r\ngpt2-large 512 DP [out of memory]\r\ngpt2-large 512 DDP [out of memory]\r\ngpt2-large 152 DDP_no_NV [out of memory]\r\n```\r\n\r\nOne thing that I find interesting is that the behavior I originally observed where training on a single GPU could be slower than on multiple GPUs without NVLink only seems to be true for small block sizes like 128 or (sometimes) 256. So my hypothesis is that with smaller block sizes it is effectively using smaller batches and therefore synchronizing between GPUs more often?\r\n\r\nAs soon as I can get some time on our HPC I can update this with numbers for the 4xRTX8000 and the 4xV100, although the NVLink rows will no longer be applicable (since I don't have access to a machine with those cards in NVLink/NVSwitch configuration).",
"Awesome! Thank you for more benchmarks, @moyix\r\n\r\nLet's apply some magic to your log:\r\n```\r\nperl -lne 'BEGIN{ print qq[|model|block|type|runtime|sample/sec|]; print \"|-\" x 5, \"|\"} $d=qr/([\\d\\.]+)/; m|^(\\S+) $d (\\S+) ..train_runtime.. $d, .train_samples_per_second.. $d| && print qq[|$1|$2|$3|$4|$5|]' log.txt\r\n```\r\nbut let's round it up to make reading easier:\r\n```\r\nperl -lne 'BEGIN{ print qq[|model|block|type|runtime|sample/sec|]; print \"|-\" x 5, \"|\"} $d=qr/([\\d\\.]+)/; m|^(\\S+) $d (\\S+) ..train_runtime.. $d, .train_samples_per_second.. $d| && print qq[|$1|$2|$3|] . int($4). \"|\". sprintf(\"%0.1f\", $5).\"|\"' log.txt\r\n```\r\n|model|block|type|runtime|sample/sec|\r\n|-|-|-|-|-|\r\n|gpt2|128|1GPU|19|20.4|\r\n|gpt2|128|DP|16|12.0|\r\n|gpt2|128|DDP|13|14.8|\r\n|gpt2|128|DDP_no_NV|30|6.7|\r\n|gpt2|256|1GPU|30|13.1|\r\n|gpt2|256|DP|22|8.8|\r\n|gpt2|256|DDP|18|10.7|\r\n|gpt2|256|DDP_no_NV|35|5.6|\r\n|gpt2|512|1GPU|58|6.9|\r\n|gpt2|512|DP|37|5.3|\r\n|gpt2|512|DDP|32|6.2|\r\n|gpt2|512|DDP_no_NV|49|4.1|\r\n|gpt2-medium|128|1GPU|49|8.1|\r\n|gpt2-medium|128|DP|40|4.9|\r\n|gpt2-medium|128|DDP|33|6.0|\r\n|gpt2-medium|128|DDP_no_NV|74|2.7|\r\n|gpt2-medium|256|1GPU|79|5.0|\r\n|gpt2-medium|256|DP|56|3.6|\r\n|gpt2-medium|256|DDP|47|4.2|\r\n|gpt2-medium|256|DDP_no_NV|89|2.2|\r\n|gpt2-medium|512|1GPU|152|2.6|\r\n|gpt2-medium|512|DP|92|2.2|\r\n|gpt2-medium|512|DDP|82|2.4|\r\n|gpt2-medium|512|DDP_no_NV|124|1.6|\r\n|gpt2-large|128|1GPU|98|4.1|\r\n|gpt2-large|128|DP|79|2.5|\r\n|gpt2-large|128|DDP|65|3.0|\r\n|gpt2-large|128|DDP_no_NV|152|1.3|\r\n|gpt2-large|256|1GPU|154|2.6|\r\n|gpt2-large|256|DP|106|1.9|\r\n\r\nDoing a quick scan it's clear that as the model grows in size and the block in its size they results start to diverge more and more, though proportions don't change much. Probably could pipe this to convert into relative sizes and then it'd very clear.\r\n\r\n> my hypothesis is that with smaller block sizes it is effectively using smaller batches and therefore synchronizing between GPUs more often?\r\n\r\nIt certainly has less data to communicate to the other gpus with smaller blocks",
"ok, a quick hack to add ratios relative to 1gpu, so now it's easier to see the comparison.\r\n```\r\nperl -lne 'BEGIN{ print qq[|model|block|type|runtime|sample/sec|ratios]; print \"|-\" x 6, \"|\"} $d=qr/([\\d\\.]+)/; if (m|^(\\S+) $d (\\S+) ..train_runtime.. $d, .train_samples_per_second.. $d|) {if($3==\"1GPU\") {$s=$4; print \"| \" x 6, \"|\"}; print qq[|$1|$2|$3|] . int($4). \"|\". sprintf(\"%0.1f\", $5).\"|\".sprintf(\"%0.1f\", $4/$s).\"|\"}' log.txt\r\n```\r\n\r\nSo I added a new column runtime `ratios` and each 4 rows get recalculated wrt to their first runtime entry with 1gpu.\r\n\r\nedit: someone asked to explain the ratio and why the runtime is faster for DDP, but samples per second is smaller.\r\n\r\nHere is a puzzle to solve:\r\n\r\n1. one cake eater eats the cake at 60 sec/cake\r\n2. now a second cake eater joins and who eats at the same speed as the first one, but now after every bite they have to shout \"ML rocks\", which slows down both of them, so they are now eating 20% slower than when alone\r\n\r\nWill one cake eater finish the cake faster than two of them?\r\n\r\n(the answer is after the table, so you don't see it right away)\r\n\r\n|model|block|type|runtime|sample/sec|ratios\r\n|-|-|-|-|-|-|\r\n| | | | | | |\r\n|gpt2|128|1GPU|19|20.4|1.0|\r\n|gpt2|128|DP|16|12.0|0.9|\r\n|gpt2|128|DDP|13|14.8|0.7|\r\n|gpt2|128|DDP_no_NV|30|6.7|1.5|\r\n| | | | | | |\r\n|gpt2|256|1GPU|30|13.1|1.0|\r\n|gpt2|256|DP|22|8.8|0.7|\r\n|gpt2|256|DDP|18|10.7|0.6|\r\n|gpt2|256|DDP_no_NV|35|5.6|1.2|\r\n| | | | | | |\r\n|gpt2|512|1GPU|58|6.9|1.0|\r\n|gpt2|512|DP|37|5.3|0.6|\r\n|gpt2|512|DDP|32|6.2|0.6|\r\n|gpt2|512|DDP_no_NV|49|4.1|0.8|\r\n| | | | | | |\r\n|gpt2-medium|128|1GPU|49|8.1|1.0|\r\n|gpt2-medium|128|DP|40|4.9|0.8|\r\n|gpt2-medium|128|DDP|33|6.0|0.7|\r\n|gpt2-medium|128|DDP_no_NV|74|2.7|1.5|\r\n| | | | | | |\r\n|gpt2-medium|256|1GPU|79|5.0|1.0|\r\n|gpt2-medium|256|DP|56|3.6|0.7|\r\n|gpt2-medium|256|DDP|47|4.2|0.6|\r\n|gpt2-medium|256|DDP_no_NV|89|2.2|1.1|\r\n| | | | | | |\r\n|gpt2-medium|512|1GPU|152|2.6|1.0|\r\n|gpt2-medium|512|DP|92|2.2|0.6|\r\n|gpt2-medium|512|DDP|82|2.4|0.5|\r\n|gpt2-medium|512|DDP_no_NV|124|1.6|0.8|\r\n| | | | | | |\r\n|gpt2-large|128|1GPU|98|4.1|1.0|\r\n|gpt2-large|128|DP|79|2.5|0.8|\r\n|gpt2-large|128|DDP|65|3.0|0.7|\r\n|gpt2-large|128|DDP_no_NV|152|1.3|1.5|\r\n| | | | | | |\r\n|gpt2-large|256|1GPU|154|2.6|1.0|\r\n|gpt2-large|256|DP|106|1.9|0.7|\r\n\r\nand the answer to the puzzle posted at the beginning of this comment: 2 cake eaters will eat the cake faster together despite the slowdown, because they only have half a cake to finish each!\r\n\r\nSame here, while each of the GPUs in the DDP assembly performs slower due to the gradient syncing, but because it has to consume half the samples, overall the assembly will train faster.\r\n\r\nFurther, this benchmark is just for 2 GPUs\r\n\r\nSo going from 1GPU to 2GPUs, you create the overhead, and so you get some loss in performance, and some gain\r\n\r\nWhen you go from 2GPUs to 4GPUs (on the same node), it's pure performance doubling.\r\ni.e. 4GPUs will perform disproportionally faster than 2GPUs over 1 GPU.\r\n\r\n- 1 GPU has no inter-gpu communication to do\r\n- 2+ gpus have to average gradients\r\n\r\nso they add this overhead, but then they can parallelize the processing so the overhead becomes almost negligible as the number of GPUs grows\r\n\r\nThe next problem is once you outgrow a single node. So the next issue is inter-node connects, which can be blazing fast (Infiniband) or super-slow (ethernet hub). 
So to scale from 8GPUs to 10 (for 8-gpu node), you first lose performance, because now the inter-node connection is the slow component that slows everything down. But as you add more nodes, again that overhead becomes less and less significant.\r\n\r\nOf course when working with multi-node one often uses other parallelization techniques than DDP, so it's PP or TP (https://huggingface.co/transformers/parallelism.html#concepts), and there one generally performs TP only inside a node, and PP and DP over nodes.\r\n\r\n**It'd be amazing if someone re-did this table for 1, 2, 4 gpus, then 1, 2, 4 nodes.**",
"OK, now we have some extensive benchmarks for the RTX8000 machine. This machine does not have NVLink, but it apparently can do P2P GPU-GPU communication via the PCI bus. However, this seems to be quite slow – slower, in fact, than disabling P2P altogether.\r\n\r\nHere's `nvidia-smi topo -m`:\r\n\r\n```\r\n GPU0 GPU1 GPU2 GPU3 mlx5_0 CPU Affinity NUMA Affinity\r\nGPU0 X SYS SYS SYS SYS 0-7 0-1\r\nGPU1 SYS X SYS SYS SYS 0-7 0-1\r\nGPU2 SYS SYS X SYS SYS 0-7 0-1\r\nGPU3 SYS SYS SYS X SYS 0-7 0-1\r\nmlx5_0 SYS SYS SYS SYS X \r\n\r\nLegend:\r\n\r\n X = Self\r\n SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)\r\n NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node\r\n PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)\r\n PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)\r\n PIX = Connection traversing at most a single PCIe bridge\r\n NV# = Connection traversing a bonded set of # NVLinks\r\n```\r\n\r\nI used the script from before (slightly expanded) and set `max-steps` to 800 for the single GPU case, 400 for two GPUs, and 200 for 4 GPUs. Here are the benchmarks (long!):\r\n\r\n|model|block|type|runtime|sample/sec|ratios\r\n|-|-|-|-|-|-|\r\n| | | | | | |\r\n|gpt2|128|1GPU|67|11.9|1.0|\r\n|gpt2|128|DP_2GPU|530|0.8|7.9|\r\n|gpt2|128|DDP_2GPU|350|1.1|5.2|\r\n|gpt2|128|DDP_no_P2P_2GPU|119|3.3|1.8|\r\n|gpt2|128|DP_4GPU|243|0.8|3.6|\r\n|gpt2|128|DDP_4GPU|159|1.3|2.4|\r\n|gpt2|128|DDP_no_P2P_4GPU|88|2.3|1.3|\r\n| | | | | | |\r\n|gpt2|256|1GPU|113|7.0|1.0|\r\n|gpt2|256|DP_2GPU|582|0.7|5.1|\r\n|gpt2|256|DDP_2GPU|376|1.1|3.3|\r\n|gpt2|256|DDP_no_P2P_2GPU|142|2.8|1.3|\r\n|gpt2|256|DP_4GPU|313|0.6|2.8|\r\n|gpt2|256|DDP_4GPU|174|1.1|1.5|\r\n|gpt2|256|DDP_no_P2P_4GPU|102|1.9|0.9|\r\n| | | | | | |\r\n|gpt2|512|1GPU|215|3.7|1.0|\r\n|gpt2|512|DP_2GPU|694|0.6|3.2|\r\n|gpt2|512|DDP_2GPU|426|0.9|2.0|\r\n|gpt2|512|DDP_no_P2P_2GPU|192|2.1|0.9|\r\n|gpt2|512|DP_4GPU|454|0.4|2.1|\r\n|gpt2|512|DDP_4GPU|201|1.0|0.9|\r\n|gpt2|512|DDP_no_P2P_4GPU|124|1.6|0.6|\r\n| | | | | | |\r\n|gpt2-medium|128|1GPU|183|4.4|1.0|\r\n|gpt2-medium|128|DP_2GPU|1476|0.3|8.0|\r\n|gpt2-medium|128|DDP_2GPU|863|0.5|4.7|\r\n|gpt2-medium|128|DDP_no_P2P_2GPU|280|1.4|1.5|\r\n|gpt2-medium|128|DP_4GPU|653|0.3|3.6|\r\n|gpt2-medium|128|DDP_4GPU|375|0.5|2.0|\r\n|gpt2-medium|128|DDP_no_P2P_4GPU|193|1.0|1.1|\r\n| | | | | | |\r\n|gpt2-medium|256|1GPU|306|2.6|1.0|\r\n|gpt2-medium|256|DP_2GPU|1600|0.2|5.2|\r\n|gpt2-medium|256|DDP_2GPU|919|0.4|3.0|\r\n|gpt2-medium|256|DDP_no_P2P_2GPU|339|1.2|1.1|\r\n|gpt2-medium|256|DP_4GPU|814|0.2|2.7|\r\n|gpt2-medium|256|DDP_4GPU|401|0.5|1.3|\r\n|gpt2-medium|256|DDP_no_P2P_4GPU|218|0.9|0.7|\r\n| | | | | | |\r\n|gpt2-medium|512|1GPU|573|1.4|1.0|\r\n|gpt2-medium|512|DP_2GPU|1884|0.2|3.3|\r\n|gpt2-medium|512|DDP_2GPU|1053|0.4|1.8|\r\n|gpt2-medium|512|DDP_no_P2P_2GPU|472|0.8|0.8|\r\n|gpt2-medium|512|DP_4GPU|1177|0.2|2.1|\r\n|gpt2-medium|512|DDP_4GPU|462|0.4|0.8|\r\n|gpt2-medium|512|DDP_no_P2P_4GPU|278|0.7|0.5|\r\n| | | | | | |\r\n|gpt2-large|128|1GPU|402|2.0|1.0|\r\n|gpt2-large|128|DP_2GPU|3181|0.1|7.9|\r\n|gpt2-large|128|DDP_2GPU|1760|0.2|4.4|\r\n|gpt2-large|128|DDP_no_P2P_2GPU|565|0.7|1.4|\r\n|gpt2-large|128|DP_4GPU|1361|0.1|3.4|\r\n|gpt2-large|128|DDP_4GPU|717|0.3|1.8|\r\n|gpt2-large|128|DDP_no_P2P_4GPU|349|0.6|0.9|\r\n| | | | | | 
|\r\n|gpt2-large|256|1GPU|642|1.2|1.0|\r\n|gpt2-large|256|DP_2GPU|3440|0.1|5.4|\r\n|gpt2-large|256|DDP_2GPU|1882|0.2|2.9|\r\n|gpt2-large|256|DDP_no_P2P_2GPU|686|0.6|1.1|\r\n|gpt2-large|256|DP_4GPU|1673|0.1|2.6|\r\n|gpt2-large|256|DDP_4GPU|770|0.3|1.2|\r\n|gpt2-large|256|DDP_no_P2P_4GPU|403|0.5|0.6|\r\n| | | | | | |\r\n|gpt2-large|512|1GPU|1168|0.7|1.0|\r\n|gpt2-large|512|DP_2GPU|3947|0.1|3.4|\r\n|gpt2-large|512|DDP_2GPU|2145|0.2|1.8|\r\n|gpt2-large|512|DDP_no_P2P_2GPU|952|0.4|0.8|\r\n|gpt2-large|512|DP_4GPU|2303|0.1|2.0|\r\n|gpt2-large|512|DDP_4GPU|902|0.2|0.8|\r\n|gpt2-large|512|DDP_no_P2P_4GPU|531|0.4|0.5|\r\n| | | | | | |\r\n|gpt2-xl|128|1GPU|770|1.0|1.0|\r\n|gpt2-xl|128|DP_2GPU|6391|0.1|8.3|\r\n|gpt2-xl|128|DDP_2GPU|3396|0.1|4.4|\r\n|gpt2-xl|128|DDP_no_P2P_2GPU|751|0.5|1.0|\r\n|gpt2-xl|128|DP_4GPU|2588|0.1|3.4|\r\n|gpt2-xl|128|DDP_4GPU|1356|0.1|1.8|\r\n|gpt2-xl|128|DDP_no_P2P_4GPU|635|0.3|0.8|\r\n| | | | | | |\r\n|gpt2-xl|256|1GPU|1210|0.7|1.0|\r\n|gpt2-xl|256|DP_2GPU|6826|0.1|5.6|\r\n|gpt2-xl|256|DP_4GPU|3130|0.1|2.6|\r\n",
"Thank you for doing this immense work, @moyix!\r\n\r\nFrom a quick look it appears the model size doesn't matter, but the block-size makes a big difference to a faster outcome with the various DDP approaches - the larger the block the more benefits one gets, and for small blocks the performance is terrible.",
"@JJack0812, your issue report won't get addresses here as we are talking about a totally different topic in this thread - I'd say post a separate issue - may be under pytorch or transformers, but first study [existing tickets](https://www.google.com/search?q=RuntimeError%3A+NCCL+error+in%3A+%2Fpytorch%2Ftorch%2Flib%2Fc10d%2FProcessGroupNCCL.cpp), e.g.: [this one](https://github.com/pytorch/pytorch/issues/39388) ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I have met the same problem, thanks for the answers"
] | 1,609 | 1,694 | 1,614 | NONE | null | Summary: on a multi-GPU system, training GPT2 seems to scale poorly unless a very fast GPU-GPU interconnect like NVLink is available. In particular, without NVLink using two GPUs is *slower* than using just one GPU.
## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-5.8.0-rc7-custom-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201214+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No?
- Hardware: 2 x NVIDIA RTX 3090 w/NVLink
### Who can help
Maybe @LysandreJik or @patrickvonplaten ?
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The script is a pretty basic example of training a medium-size GPT2 from scratch. The script is here: https://panda.moyix.net/~moyix/train_csrc.py
The dataset and tokenized vocab:
* Dataset: https://panda.moyix.net/~moyix/plainsrc_all.txt.gz (718M, gzipped)
* Vocab: https://panda.moyix.net/~moyix/csrc_vocab.tar.gz
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Training a GPT2 language model on C source code.
## To reproduce
Run with only one GPU: `CUDA_VISIBLE_DEVICES=0 python train_csrc.py`
Run with two GPUs, NVLink disabled: `NCCL_P2P_DISABLE=1 python train_csrc.py`
Run with two GPUs and NVLink enabled: `python train_csrc.py`
Here is some benchmarking I did with my dataset on transformers 3.3.1 and 4.1.1 (note the difference in ETA is just because 3.3.1 only seems to report the ETA for the current epoch):
Version|NVLINK|GPUs|ETA|Perf
--------|--------|-----|-----|-----
4.1.1 | Yes | 2GPU | 419:52:28 | 1.94it/s
4.1.1 | No | 2GPU | 1025:06:27 | 1.26s/it
4.1.1 | N/A | 1GPU | 599:14:57 | 2.72it/s
3.3.1 | Yes | 2GPU | 83:46:51 | 1.94it/s
3.3.1 | No | 2GPU | 204:54:22 | 1.26s/it
3.3.1 | N/A | 1GPU | 119:02:34 | 2.73it/s
You can see that using two GPUs is actually slower than using a single GPU, unless NVLink is available (599 hours for 1 GPU vs 1025 hours for two GPUs). So presumably there is a large amount of GPU-GPU communication going on?
## Expected behavior
Scaling should be roughly linear with the number of GPUs. Unfortunately I am not very familiar with the implementation details of GPT2 in Huggingface, but others report roughly linear scaling with Transformer models like BERT so it should work here as well: https://towardsdatascience.com/training-bert-at-a-university-eedcf940c754
Although I have a system with NVLink at home, this issue is still affecting me because I would like to be able to run this on the university HPC cluster, where most nodes do not have NVLink. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9371/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 5
} | https://api.github.com/repos/huggingface/transformers/issues/9371/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9370/comments | https://api.github.com/repos/huggingface/transformers/issues/9370/events | https://github.com/huggingface/transformers/issues/9370 | 776,942,988 | MDU6SXNzdWU3NzY5NDI5ODg= | 9,370 | Custom train/validation file not supported in run_qa.py | {
"login": "BatMrE",
"id": 48859022,
"node_id": "MDQ6VXNlcjQ4ODU5MDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/48859022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BatMrE",
"html_url": "https://github.com/BatMrE",
"followers_url": "https://api.github.com/users/BatMrE/followers",
"following_url": "https://api.github.com/users/BatMrE/following{/other_user}",
"gists_url": "https://api.github.com/users/BatMrE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BatMrE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BatMrE/subscriptions",
"organizations_url": "https://api.github.com/users/BatMrE/orgs",
"repos_url": "https://api.github.com/users/BatMrE/repos",
"events_url": "https://api.github.com/users/BatMrE/events{/privacy}",
"received_events_url": "https://api.github.com/users/BatMrE/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @BatMrE \r\n\r\nThe syntax of the command is wrong, there should be no spaces around `=` or you can also just remove the `=`\r\n\r\nSo it should be either like this\r\n```bash\r\npython run_qa.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --train_file=train-v1.1.json \\\r\n --validation_file=dev-v1.1.json \\\r\n```\r\n\r\nor this\r\n```bash\r\npython run_qa.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --train_file train-v1.1.json \\\r\n --validation_file dev-v1.1.json \\\r\n```",
"removing the spaces worked for me, thoe I'm still not able to run that script getting:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_qa.py\", line 469, in <module>\r\n main()\r\n File \"run_qa.py\", line 252, in main\r\n answer_column_name = \"answers\" if \"answers\" in column_names else column_names[2]\r\nIndexError: list index out of range\r\n\r\n```\r\n\r\nNote: I am using official training and dev json file to run the script\r\nplease see if someone can help.\r\n@patrickvonplaten / @stas00 / @vasudevgupta7",
"@sgugger might know this.",
"I had the same problem, here's what I found. \r\n\r\nIf you read through the script, you'll see it uses the `datasets.load_dataset()` function to load your data (line 211). As commented in the script check out [https://huggingface.co/docs/datasets/loading_datasets.html](https://huggingface.co/docs/datasets/loading_datasets.html) to learn more. \r\n\r\nI noticed it doesn't natively support squad style json files. \r\nHowever you can: \r\n- Use one of the supported formats;\r\n- create your own dataset loading script or [adapt an existing loading script](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference);\r\n- or use the [squad.py loading script](https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py).\r\n\r\nYou'll have to adapt the run_qa.py script a bit to use your loading script.\r\n",
"@Jos1988 \r\nI am bit confused in how to use squad.py file for conversion of data\r\nI have tried this \r\n`dataset = load_dataset('squad', ...)`\r\n",
"@BatMrE download the [squad.py](https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py) script and change the first few lines of _split_generators function(see the code below) to make `dl_manager` use your local QA dataset files instead of downloading the squad data. `self.config.data_files` uses the data_files you pass to `load_dataset` function.\r\n\r\n```python\r\n def _split_generators(self, dl_manager):\r\n if not self.config.data_files:\r\n raise ValueError(\r\n f\"At least one data file must be specified, but got data_files={self.config.data_files}\"\r\n )\r\n downloaded_files = dl_manager.download_and_extract(self.config.data_files)\r\n ......\r\n\r\n```\r\n\r\nAftter you do the above changes, just load your dataset using:\r\n\r\n```python\r\ndataset = load_dataset(<path to changed squad.py dataloader>, data_files={'train': <train-path>, 'validation': <validation-path>})\r\n```\r\nThe `data_files` contains the paths to your local train and dev QA datasets which are in squad format",
"I have made all the expected changes\r\n\r\n- Made changes in squad.py file\r\n- datasets = load_dataset('squad.py', data_files={'train': 'train_custom.json', 'validation': 'dev_custom.json'})\r\n\r\npassing my custom file (which is different from orignal squad v1 files)\r\n**Note : code hits the custom file as if I pass irrelevant name it will throw error of file not found**\r\n\r\nI am getting the expected results but it is exactly same as the result I get on running\r\n```\r\npython run_qa.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --dataset_name squad \\\r\n --do_train \\\r\n --do_eval \\\r\n --per_device_train_batch_size 12 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir /tmp/debug_squad/\r\n```\r\nmy current run script:\r\n```\r\n\r\npython run_qa.py \\\r\n --model_name_or_path bert-base-uncased \\\r\n --train_file=train_custom.json \\\r\n --validation_file=dev_custom.json \\\r\n --do_train \\\r\n --do_eval \\\r\n --per_device_train_batch_size 16 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir/tmp/debug_squad2/\r\n```\r\nalso my _split_generators function in squad.py :\r\n```\r\n def _split_generators(self, dl_manager):\r\n if not self.config.data_files:\r\n raise ValueError(\r\n f\"At least one data file must be specified, but got data_files={self.config.data_files}\"\r\n )\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n\r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": downloaded_files[\"train\"]}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={\"filepath\": downloaded_files[\"dev\"]}),\r\n ]\r\n```\r\n\r\nhow is it possible to get exactly same results for custom and official script, can someone please recommend something..\r\n\r\n@gowtham1997 @Jos1988",
"@BatMrE\r\nInstead of `downloaded_files = dl_manager.download_and_extract(_URLS)` , use `downloaded_files = dl_manager.download_and_extract(self.config.data_files)`\r\n\r\n`downloaded_files = dl_manager.download_and_extract(_URLS)` downloads the squad dataset from _URLS specified in the squad.py loader file. You should instead use the local dataset files passed with `config.data_files`\r\n",
"Thanks @gowtham1997 ,\r\nI have done some hardcoding in squad.py file to send my custom data files\r\n```\r\n_URLS = {\r\n \"train\": \"train_custom.json\",\r\n \"dev\": \"dev_custom.json\",\r\n}\r\n```\r\nJust one more thing.. I am able to use any custom data made on top of squad version 1, but I am not able to use squad version 2. As I am aware we need to use run_squad.py for squad version 2 and not run_qa, can some one add some comments on it.\r\n\r\n",
"@sgugger can you lend a hand here?\r\n\r\nI have ran into the same problem but with a different error\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py\", line 434, in incomplete_dir\r\n yield tmp_dir\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py\", line 553, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py\", line 897, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/tqdm/std.py\", line 1130, in __iter__\r\n for obj in iterable:\r\n File \"/home/abashir/.cache/huggingface/modules/datasets_modules/datasets/json/fb88b12bd94767cb0cc7eedcd82ea1f402d2162addc03a37e81d4f8dc7313ad9/json.py\", line 75, in _generate_tables\r\n parse_options=self.config.pa_parse_options,\r\n File \"pyarrow/_json.pyx\", line 247, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/GW/Health-Corpus/work/UMLS/transformers/examples/question-answering/run_qa.py\", line 495, in <module>\r\n main()\r\n File \"/GW/Health-Corpus/work/UMLS/transformers/examples/question-answering/run_qa.py\", line 222, in main\r\n datasets = load_dataset(extension, data_files=data_files, field=\"data\")\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py\", line 483, in download_and_prepare\r\n self._save_info()\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/contextlib.py\", line 130, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/site-packages/datasets/builder.py\", line 440, in incomplete_dir\r\n shutil.rmtree(tmp_dir)\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/shutil.py\", line 498, in rmtree\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"/home/abashir/anaconda3/envs/mpi/lib/python3.7/shutil.py\", line 496, in rmtree\r\n os.rmdir(path)\r\nOSError: [Errno 39] Directory not empty: '/home/abashir/.cache/huggingface/datasets/json/default-43dfe5d134316dba/0.0.0/fb88b12bd94767cb0cc7eedcd82ea1f402d2162addc03a37e81d4f8dc7313ad9.incomplete'\r\n```\r\n\r\n\r\nWhen tried the above fixes. altering the datasers line with loading the `squad.py` altered script I run into\r\n```\r\n30a174f57e692deb3b377336683/squad.py\", line 106, in _split_generators\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={\"filepath\": downloaded_files[\"dev\"]}),\r\nKeyError: 'dev'\r\n\r\n```",
"@thomwolf ",
"@abdallah197 \r\nCan you please share the script you are trying, and squad.py file changes you have done ",
"After using modified `squad.py` and converting data to JSON. It loads the data without error but when it starts training I got the following error message. @gowtham1997 \r\n\r\n```\r\n[INFO|trainer.py:837] 2021-03-04 01:19:16,915 >> ***** Running training *****\r\n[INFO|trainer.py:838] 2021-03-04 01:19:16,915 >> Num examples = 14842\r\n[INFO|trainer.py:839] 2021-03-04 01:19:16,916 >> Num Epochs = 5\r\n[INFO|trainer.py:840] 2021-03-04 01:19:16,916 >> Instantaneous batch size per device = 16\r\n[INFO|trainer.py:841] 2021-03-04 01:19:16,916 >> Total train batch size (w. parallel, distributed & accumulation) = 48\r\n[INFO|trainer.py:842] 2021-03-04 01:19:16,916 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:843] 2021-03-04 01:19:16,916 >> Total optimization steps = 1550\r\n\r\n 0%| | 0/1550 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"/okyanus/users/ctantug/transformers/examples/question-answering/run_qa.py\", line 507, in <module>\r\n main()\r\n File \"/okyanus/users/ctantug/transformers/examples/question-answering/run_qa.py\", line 481, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py\", line 940, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py\", line 1304, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py\", line 1334, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 161, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 171, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 86, in parallel_apply\r\n output.reraise()\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/_utils.py\", line 428, in reraise\r\n raise self.exc_type(msg)\r\nValueError: Caught ValueError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 61, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py\", line 1793, in forward\r\n start_logits, end_logits = logits.split(1, dim=-1)\r\nValueError: too many values to unpack (expected 2)\r\n```",
"> After using modified `squad.py` and converting data to JSON. It loads the data without error but when it starts training I got the following error message. @gowtham1997\r\n> \r\n> ```\r\n> [INFO|trainer.py:837] 2021-03-04 01:19:16,915 >> ***** Running training *****\r\n> [INFO|trainer.py:838] 2021-03-04 01:19:16,915 >> Num examples = 14842\r\n> [INFO|trainer.py:839] 2021-03-04 01:19:16,916 >> Num Epochs = 5\r\n> [INFO|trainer.py:840] 2021-03-04 01:19:16,916 >> Instantaneous batch size per device = 16\r\n> [INFO|trainer.py:841] 2021-03-04 01:19:16,916 >> Total train batch size (w. parallel, distributed & accumulation) = 48\r\n> [INFO|trainer.py:842] 2021-03-04 01:19:16,916 >> Gradient Accumulation steps = 1\r\n> [INFO|trainer.py:843] 2021-03-04 01:19:16,916 >> Total optimization steps = 1550\r\n> \r\n> 0%| | 0/1550 [00:00<?, ?it/s]Traceback (most recent call last):\r\n> File \"/okyanus/users/ctantug/transformers/examples/question-answering/run_qa.py\", line 507, in <module>\r\n> main()\r\n> File \"/okyanus/users/ctantug/transformers/examples/question-answering/run_qa.py\", line 481, in main\r\n> train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py\", line 940, in train\r\n> tr_loss += self.training_step(model, inputs)\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py\", line 1304, in training_step\r\n> loss = self.compute_loss(model, inputs)\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py\", line 1334, in compute_loss\r\n> outputs = model(**inputs)\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 161, in forward\r\n> outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 171, in parallel_apply\r\n> return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 86, in parallel_apply\r\n> output.reraise()\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/_utils.py\", line 428, in reraise\r\n> raise self.exc_type(msg)\r\n> ValueError: Caught ValueError in replica 0 on device 0.\r\n> Original Traceback (most recent call last):\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 61, in _worker\r\n> output = module(*input, **kwargs)\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py\", line 1793, in forward\r\n> start_logits, end_logits = logits.split(1, dim=-1)\r\n> ValueError: too many values to unpack (expected 2)\r\n> ```\r\n\r\nSolved it. Turns out I have to change the config file to have only two labels (one for the first sentence and one for the second).",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,619 | 1,619 | NONE | null | **Environment info**
transformers version: 4.0.1
Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.10
Python version: 3.8.5
PyTorch version (GPU?): 1.7.1+cu110 (True)
Tensorflow version (GPU?): not installed (NA)
Using GPU in script?: yes
I am trying to pass a custom dataset or a modified SQuAD dataset (in valid SQuAD format) using the parameters
--train_file = train-v1.1.json \
--validation_file = dev-v1.1.json \
but it does not work for me.
from the official documentation, **https://github.com/huggingface/transformers/tree/master/examples/question-answering**
this script runs fine:
```
python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
but if I use the below script:
```
python run_qa.py \
--model_name_or_path bert-base-uncased \
--train_file = train-v1.1.json \
--validation_file = dev-v1.1.json \
--do_train \
--do_eval \
--per_device_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /data1/debug_squad1/
```
**for data files:** train-v1.1.json, dev-v1.1.json (or train.csv, dev.csv)
error:
```
2020-12-31 12:00:59.821145: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2020-12-31 12:00:59.821182: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "run_qa.py", line 469, in <module>
main()
File "run_qa.py", line 159, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/media/data2/anaconda/envs/bertQA-env/lib/python3.8/site-packages/transformers/hf_argparser.py", line 135, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 16, in __init__
File "run_qa.py", line 142, in __post_init__
assert extension in ["csv", "json"], "`train_file` should be a csv or a json file."
AssertionError: `train_file` should be a csv or a json file.
```
The train_file and validation_file arguments are valid parameters in the run_qa.py file.
Can someone please help with how we can train on a specific dataset? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9370/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9369/comments | https://api.github.com/repos/huggingface/transformers/issues/9369/events | https://github.com/huggingface/transformers/pull/9369 | 776,927,069 | MDExOlB1bGxSZXF1ZXN0NTQ3MzMyMjc3 | 9,369 | TF >= 2.3 cleaning | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
The minimal TF version has recently been fixed to >=2.3; this PR removes all the <2.3 calls, mostly replacing experimental features with their stable counterparts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9369/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9369",
"html_url": "https://github.com/huggingface/transformers/pull/9369",
"diff_url": "https://github.com/huggingface/transformers/pull/9369.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9369.patch",
"merged_at": 1609837106000
} |
https://api.github.com/repos/huggingface/transformers/issues/9368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9368/comments | https://api.github.com/repos/huggingface/transformers/issues/9368/events | https://github.com/huggingface/transformers/pull/9368 | 776,917,178 | MDExOlB1bGxSZXF1ZXN0NTQ3MzI0NDc4 | 9,368 | Fix utils on Windows | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes `check_repo` for Windows execution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9368/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9368",
"html_url": "https://github.com/huggingface/transformers/pull/9368",
"diff_url": "https://github.com/huggingface/transformers/pull/9368.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9368.patch",
"merged_at": 1609773735000
} |
https://api.github.com/repos/huggingface/transformers/issues/9367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9367/comments | https://api.github.com/repos/huggingface/transformers/issues/9367/events | https://github.com/huggingface/transformers/pull/9367 | 776,881,276 | MDExOlB1bGxSZXF1ZXN0NTQ3Mjk2ODE2 | 9,367 | Add-support-for-examples-scripts-to-run-on-sagemaker | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@sgugger's 1st point is a good point and we should probably either:\r\n- recommend that users `git clone` the same version of `transformers` as is installed in the DLC image.\r\n- or even find a way to bundle the scripts themselves (or their capabilities) in the image itself, kinda like what I was suggesting before:\r\n\r\n```python\r\nestimator = HuggingFace(\r\n task_name=\"text-classification\",\r\n dataset=\"imdb\",\r\n from_model=\"distilbert-base-cased\",\r\n publish_model=\"my-fine-tuned-model\",\r\n huggingface_token=\"...\",\r\n)\r\n```\r\n\r\n(then there's not even a need to have a free-standing script. My question on whether this is SageMaker-idiomatic still stands)",
"That´s true both of you are right. We must be able to ensure that the correct script version is used for the correct transformers & datasets version within the container image. \r\n\r\nI would not bundle them into the official DLC container since there is always that need to have an `entry_point`. My idea is maybe we still could use `task_name=\"text-classification\"` as \"entry_point\" and in the background, we can clone/get the correct script using the transformers version and the Github tags. \r\n\r\nSo for this version, we could use the script from https://github.com/huggingface/transformers/tree/v4.1.1. \r\n\r\n\r\n",
"Closed by stale bot. If this shouldn't have been closed, let me know."
] | 1,609 | 1,615 | 1,614 | MEMBER | null | Hello Guys,
I am currently working on how we could edit/extend the fine-tuning scripts from `examples/` to work out-of-the-box within SageMaker. For that I adjusted the [`run_glue.py` script](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py).
To test it I created a [custom Hugging Face extension for SageMaker](https://github.com/philschmid/sagemaker-sdk-huggingface) with a SageMaker-compatible Docker container and a Hugging Face estimator.
The container was built with `transformers==4.1.1` and `datasets==1.1.3`, which is also the reason why I only adjusted `run_glue.py` and no other files. I can pass `run_glue.py` dynamically into the SageMaker training job, but if I adjusted any other files I would have to rebuild the container. For all the functions that should move to a different directory I added a comment `# Should be moved to path_to_file/filename.py`.
As an example of how you could use this to create a SageMaker training job with the extension I built, you would create a `HuggingFace()` estimator and then call `.fit()`. The example I used is shown below; you can also find it in this [GitHub repository](https://github.com/philschmid/sagemaker-sdk-huggingface/blob/main/examples/06_transformers_existing_training_scripts/sagemaker-notebook.ipynb).
```python
from huggingface.estimator import HuggingFace
huggingface_estimator = HuggingFace(entry_point='run_glue.py',
source_dir='../../transformers/examples/text-classification',
sagemaker_session=sess,
base_job_name='huggingface-sdk-extension',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
framework_version={'transformers':'4.1.1','datasets':'1.1.3'},
py_version='py3',
hyperparameters = {
'model_name_or_path': 'distilbert-base-cased',
'task_name':'MRPC',
'do_train': True,
'do_eval': True,
'max_seq_length':'128',
'per_device_train_batch_size':32,
'learning_rate':2e-5,
'num_train_epochs': 3.0
})
huggingface_estimator.fit()
```
**_Note:_ SageMaker requirements**
In SageMaker you can define hyperparameters, which are passed into the training script via the `HuggingFace(hyperparameters={})` dictionary. These parameters are then passed into the training script as named arguments, so the hyperparameters from the example above look like this when the training script is executed:
`--do_eval True --do_train True --learning_rate 2e-05 --max_seq_length 128 --model_name_or_path distilbert-base-cased --num_train_epochs 3.0 --output_dir Not defined sagemaker --per_device_train_batch_size 32 --task_name MRPC`.
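Conceptually, the flattening works like the sketch below. This is only an illustration of the behavior, not the actual SageMaker toolkit code; the point is that boolean values arrive as the literal strings `True`/`False`:
```python
# Illustrative sketch: how a hyperparameters dict ends up as CLI-style arguments.
hyperparameters = {"do_train": True, "task_name": "MRPC", "learning_rate": 2e-5}

args = []
for key, value in sorted(hyperparameters.items()):
    # Every value is serialized to a string, so booleans become the
    # tokens "True"/"False" instead of bare flags.
    args.extend([f"--{key}", str(value)])

print(" ".join(args))
# --do_train True --learning_rate 2e-05 --task_name MRPC
```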
### How I proceeded
1. I created a function `is_run_on_sagemaker()` to determine whether the script is running in a SageMaker runtime environment. This function should be moved to the `transformers/src/transformers/file_utils.py` file (a sketch of this helper and the argument clean-up follows after this list).
2. I had to adjust the `sys.argv` because:
   1. `TrainingArguments` expects the parameter `output_dir`, but in a SageMaker runtime the output dir is defined by the environment variable `SM_OUTPUT_DATA_DIR`.
   2. `TrainingArguments` does not expect a `True` value for boolean parameters: if `--do_train` is present it is `True`, otherwise it is `False`. In SageMaker you cannot pass keys only, so I removed all `True`s from `sys.argv` at the beginning. A better solution could be to adjust the `HfArgumentParser` to accept `'True'` for boolean arguments.
3. Therefore I created a `parse_sagemaker_args()` function which:
   - first adds `--output_dir` with the correct value for SageMaker,
   - second parses all existing environment variables to check whether datasets are passed into the training job. When you run a fine-tuning script in SageMaker you can pass data into `.fit()` that lives on S3 and is downloaded before the training starts. I added two options: you can either pass the direct S3 URI of a file (e.g. `s3://my-data-bucket/path/to/my/training/data.csv`) or pass a path (e.g. `s3://my-data-bucket/path/to/data`) and provide the file name via the `train_file` hyperparameter,
   - third removes all `True`s from `sys.argv` for the boolean parameters.
4. I adjusted all file-saving and model-saving sections and added conditions for when the script runs on SageMaker.
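A minimal sketch of these helpers is below. The function names follow the description above, but the bodies are illustrative assumptions on my side, not the exact implementation in this PR:
```python
import os
import sys


def is_run_on_sagemaker() -> bool:
    # SageMaker training containers expose SM_* environment variables,
    # e.g. SM_MODEL_DIR and SM_OUTPUT_DATA_DIR.
    return "SM_OUTPUT_DATA_DIR" in os.environ or "SM_MODEL_DIR" in os.environ


def parse_sagemaker_args(argv):
    # Inject --output_dir from the SageMaker environment if it is missing.
    if "--output_dir" not in argv:
        argv = argv + ["--output_dir", os.environ["SM_OUTPUT_DATA_DIR"]]
    # Strip the literal "True" values so boolean flags parse as plain flags
    # (e.g. `--do_train True` -> `--do_train`) and drop "False" flags.
    # Caveat: a string argument whose value is literally "True" would be
    # mishandled, which is the limitation mentioned in point 2 above.
    cleaned = []
    i = 0
    while i < len(argv):
        token = argv[i]
        nxt = argv[i + 1] if i + 1 < len(argv) else None
        if token.startswith("--") and nxt == "True":
            cleaned.append(token)
            i += 2
        elif token.startswith("--") and nxt == "False":
            i += 2
        else:
            cleaned.append(token)
            i += 1
    return cleaned


if is_run_on_sagemaker():
    sys.argv = parse_sagemaker_args(sys.argv)
```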
#### Testing
I tested it using the jupyter notebook I provided at the top. The log of the training script is attached:
<details>
<summary>details:</summary>
```bash
2020-12-31 08:22:11 Starting - Starting the training job...
2020-12-31 08:22:34 Starting - Launching requested ML instancesProfilerReport-1609402930: InProgress
......
2020-12-31 08:23:35 Starting - Preparing the instances for training......
2020-12-31 08:24:36 Downloading - Downloading input data
2020-12-31 08:24:36 Training - Downloading the training image.....................
2020-12-31 08:28:12 Training - Training image download completed. Training in progress..bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
2020-12-31 08:28:12,243 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training
2020-12-31 08:28:12,266 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.
2020-12-31 08:28:12,498 sagemaker_pytorch_container.training INFO Invoking user training script.
2020-12-31 08:28:12,878 sagemaker-training-toolkit INFO Installing dependencies from requirements.txt:
/opt/conda/bin/python -m pip install -r requirements.txt
Requirement already satisfied: datasets>=1.1.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (1.1.3)
Requirement already satisfied: protobuf in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 3)) (3.14.0)
Requirement already satisfied: multiprocess in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.70.11.1)
Requirement already satisfied: pandas in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (1.1.5)
Requirement already satisfied: tqdm<4.50.0,>=4.27 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (4.49.0)
Requirement already satisfied: dataclasses in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.8)
Requirement already satisfied: requests>=2.19.0 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.25.1)
Requirement already satisfied: xxhash in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.0.0)
Requirement already satisfied: pyarrow>=0.17.1 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (2.0.0)
Requirement already satisfied: numpy>=1.17 in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (1.19.1)
Requirement already satisfied: dill in /opt/conda/lib/python3.6/site-packages (from datasets>=1.1.3->-r requirements.txt (line 1)) (0.3.3)
Collecting sentencepiece!=0.1.92
Downloading sentencepiece-0.1.94-cp36-cp36m-manylinux2014_x86_64.whl (1.1 MB)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (1.25.11)
Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests>=2.19.0->datasets>=1.1.3->-r requirements.txt (line 1)) (2020.12.5)
Requirement already satisfied: six>=1.9 in /opt/conda/lib/python3.6/site-packages (from protobuf->-r requirements.txt (line 3)) (1.15.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /opt/conda/lib/python3.6/site-packages (from pandas->datasets>=1.1.3->-r requirements.txt (line 1)) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /opt/conda/lib/python3.6/site-packages (from pandas->datasets>=1.1.3->-r requirements.txt (line 1)) (2020.4)
Installing collected packages: sentencepiece
Successfully installed sentencepiece-0.1.94
2020-12-31 08:28:15,036 sagemaker-training-toolkit INFO Invoking user script
Training Env:
{
"additional_framework_parameters": {},
"channel_input_dirs": {},
"current_host": "algo-1",
"framework_module": "sagemaker_pytorch_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {
"task_name": "MRPC",
"do_train": true,
"num_train_epochs": 3.0,
"do_eval": true,
"max_seq_length": "128",
"per_device_train_batch_size": 32,
"learning_rate": 2e-05,
"model_name_or_path": "distilbert-base-cased"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "huggingface-sdk-extension-2020-12-31-08-22-10-312",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-eu-central-1-558105141721/huggingface-sdk-extension-2020-12-31-08-22-10-312/source/sourcedir.tar.gz",
"module_name": "run_glue",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "run_glue.py"
}
Environment variables:
SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"do_eval":true,"do_train":true,"learning_rate":2e-05,"max_seq_length":"128","model_name_or_path":"distilbert-base-cased","num_train_epochs":3.0,"per_device_train_batch_size":32,"task_name":"MRPC"}
SM_USER_ENTRY_POINT=run_glue.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=[]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=run_glue
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=8
SM_NUM_GPUS=1
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-eu-central-1-558105141721/huggingface-sdk-extension-2020-12-31-08-22-10-312/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"do_eval":true,"do_train":true,"learning_rate":2e-05,"max_seq_length":"128","model_name_or_path":"distilbert-base-cased","num_train_epochs":3.0,"per_device_train_batch_size":32,"task_name":"MRPC"},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"huggingface-sdk-extension-2020-12-31-08-22-10-312","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-eu-central-1-558105141721/huggingface-sdk-extension-2020-12-31-08-22-10-312/source/sourcedir.tar.gz","module_name":"run_glue","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"run_glue.py"}
SM_USER_ARGS=["--do_eval","True","--do_train","True","--learning_rate","2e-05","--max_seq_length","128","--model_name_or_path","distilbert-base-cased","--num_train_epochs","3.0","--per_device_train_batch_size","32","--task_name","MRPC"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_HP_TASK_NAME=MRPC
SM_HP_DO_TRAIN=true
SM_HP_NUM_TRAIN_EPOCHS=3.0
SM_HP_DO_EVAL=true
SM_HP_MAX_SEQ_LENGTH=128
SM_HP_PER_DEVICE_TRAIN_BATCH_SIZE=32
SM_HP_LEARNING_RATE=2e-05
SM_HP_MODEL_NAME_OR_PATH=distilbert-base-cased
PYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages
Invoking script with the following command:
/opt/conda/bin/python run_glue.py --do_eval True --do_train True --learning_rate 2e-05 --max_seq_length 128 --model_name_or_path distilbert-base-cased --num_train_epochs 3.0 --per_device_train_batch_size 32 --task_name MRPC
['run_glue.py', '--do_eval', '--do_train', '--learning_rate', '2e-05', '--max_seq_length', '128', '--model_name_or_path', 'distilbert-base-cased', '--num_train_epochs', '3.0', '--per_device_train_batch_size', '32', '--task_name', 'MRPC', '--output_dir', '/opt/ml/output/data']
Downloading and preparing dataset glue/mrpc (download: 1.43 MiB, generated: 1.43 MiB, post-processed: Unknown size, total: 2.85 MiB) to /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...
Dataset glue downloaded and prepared to /root/.cache/huggingface/datasets/glue/mrpc/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4. Subsequent calls will reuse this data.
[2020-12-31 08:28:43.990 algo-1:31 INFO json_config.py:90] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.
[2020-12-31 08:28:43.991 algo-1:31 INFO hook.py:193] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.
[2020-12-31 08:28:43.991 algo-1:31 INFO hook.py:238] Saving to /opt/ml/output/tensors
[2020-12-31 08:28:43.991 algo-1:31 INFO state_store.py:67] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist.
[2020-12-31 08:28:44.017 algo-1:31 INFO hook.py:398] Monitoring the collections: losses
[2020-12-31 08:28:44.017 algo-1:31 INFO hook.py:461] Hook is writing from the hook with pid: 31
[2020-12-31 08:28:45.513 algo-1:31 WARNING hook.py:978] var is not Tensor or list or tuple of Tensors, module_name:distilbert.transformer BaseModelOutput
[2020-12-31 08:28:45.514 algo-1:31 WARNING hook.py:978] var is not Tensor or list or tuple of Tensors, module_name:distilbert BaseModelOutput
[2020-12-31 08:28:45.523 algo-1:31 WARNING hook.py:978] var is not Tensor or list or tuple of Tensors, module_name:DistilBertForSequenceClassification SequenceClassifierOutput
{'epoch': 3.0}
12/31/2020 08:28:19 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
12/31/2020 08:28:19 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/opt/ml/output/data', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, model_parallel=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=2e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Dec31_08-28-19_algo-1', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='/opt/ml/output/data', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, fp16_backend='auto', sharded_ddp=False)
#015Downloading: 0%| | 0.00/8.68k [00:00<?, ?B/s]#015Downloading: 28.7kB [00:00, 16.1MB/s]
#015Downloading: 0%| | 0.00/4.97k [00:00<?, ?B/s]#015Downloading: 28.7kB [00:00, 19.9MB/s]
#015Downloading: 0.00B [00:00, ?B/s]#015Downloading: 6.22kB [00:00, 3.90MB/s]
#015Downloading: 0.00B [00:00, ?B/s]#015Downloading: 19.7kB [00:00, 106kB/s]#015Downloading: 54.5kB [00:00, 122kB/s]#015Downloading: 124kB [00:00, 152kB/s] #015Downloading: 280kB [00:00, 201kB/s]#015Downloading: 576kB [00:00, 273kB/s]#015Downloading: 959kB [00:01, 369kB/s]#015Downloading: 1.05MB [00:01, 928kB/s]
#015Downloading: 0.00B [00:00, ?B/s]#015Downloading: 19.4kB [00:00, 103kB/s]#015Downloading: 54.3kB [00:00, 119kB/s]#015Downloading: 124kB [00:00, 150kB/s] #015Downloading: 298kB [00:00, 200kB/s]#015Downloading: 441kB [00:00, 582kB/s]
#0150 examples [00:00, ? examples/s]#0151705 examples [00:00, 17044.33 examples/s]#0153300 examples [00:00, 16698.53 examples/s]#015 #015#0150 examples [00:00, ? examples/s]#015 #015#0150 examples [00:00, ? examples/s]#015 #01512/31/2020 08:28:28 - INFO - filelock - Lock 139800303634584 acquired on /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a.lock
[INFO|file_utils.py:1301] 2020-12-31 08:28:28,367 >> https://huggingface.co/distilbert-base-cased/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmplyt9e_gw
#015Downloading: 0%| | 0.00/411 [00:00<?, ?B/s]#015Downloading: 100%|██████████| 411/411 [00:00<00:00, 496kB/s]
[INFO|file_utils.py:1305] 2020-12-31 08:28:28,649 >> storing https://huggingface.co/distilbert-base-cased/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
[INFO|file_utils.py:1308] 2020-12-31 08:28:28,649 >> creating metadata file for /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
2020-12-31 08:29:30,381 sagemaker-training-toolkit INFO Reporting training SUCCESS
12/31/2020 08:28:28 - INFO - filelock - Lock 139800303634584 released on /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a.lock
[INFO|configuration_utils.py:431] 2020-12-31 08:28:28,650 >> loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
[INFO|configuration_utils.py:467] 2020-12-31 08:28:28,651 >> Model config DistilBertConfig {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"finetuning_task": "mrpc",
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"output_past": true,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"vocab_size": 28996
}
[INFO|configuration_utils.py:431] 2020-12-31 08:28:28,933 >> loading configuration file https://huggingface.co/distilbert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/ebe1ea24d11aa664488b8de5b21e33989008ca78f207d4e30ec6350b693f073f.302bfd1b5e031cc1b17796e0b6e5b242ba2045d31d00f97589e12b458ebff27a
[INFO|configuration_utils.py:467] 2020-12-31 08:28:28,933 >> Model config DistilBertConfig {
"activation": "gelu",
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"output_past": true,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"vocab_size": 28996
}
12/31/2020 08:28:29 - INFO - filelock - Lock 139797608840104 acquired on /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791.lock
[INFO|file_utils.py:1301] 2020-12-31 08:28:29,217 >> https://huggingface.co/bert-base-cased/resolve/main/vocab.txt not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpvm6yksc0
#015Downloading: 0%| | 0.00/213k [00:00<?, ?B/s]#015Downloading: 17%|█▋ | 36.9k/213k [00:00<00:00, 212kB/s]#015Downloading: 94%|█████████▍| 201k/213k [00:00<00:00, 282kB/s] #015Downloading: 100%|██████████| 213k/213k [00:00<00:00, 604kB/s]
[INFO|file_utils.py:1305] 2020-12-31 08:28:29,855 >> storing https://huggingface.co/bert-base-cased/resolve/main/vocab.txt in cache at /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
[INFO|file_utils.py:1308] 2020-12-31 08:28:29,855 >> creating metadata file for /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
12/31/2020 08:28:29 - INFO - filelock - Lock 139797608840104 released on /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791.lock
12/31/2020 08:28:30 - INFO - filelock - Lock 139797608841112 acquired on /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6.lock
[INFO|file_utils.py:1301] 2020-12-31 08:28:30,143 >> https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp5vnay570
#015Downloading: 0%| | 0.00/436k [00:00<?, ?B/s]#015Downloading: 8%|▊ | 36.9k/436k [00:00<00:01, 214kB/s]#015Downloading: 46%|████▌ | 201k/436k [00:00<00:00, 284kB/s] #015Downloading: 100%|██████████| 436k/436k [00:00<00:00, 1.10MB/s]
[INFO|file_utils.py:1305] 2020-12-31 08:28:30,827 >> storing https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json in cache at /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
[INFO|file_utils.py:1308] 2020-12-31 08:28:30,827 >> creating metadata file for /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
12/31/2020 08:28:30 - INFO - filelock - Lock 139797608841112 released on /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6.lock
[INFO|tokenization_utils_base.py:1802] 2020-12-31 08:28:30,827 >> loading file https://huggingface.co/bert-base-cased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791
[INFO|tokenization_utils_base.py:1802] 2020-12-31 08:28:30,827 >> loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6
12/31/2020 08:28:31 - INFO - filelock - Lock 139800303634584 acquired on /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19.lock
[INFO|file_utils.py:1301] 2020-12-31 08:28:31,151 >> https://huggingface.co/distilbert-base-cased/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpi2h8yubw
#015Downloading: 0%| | 0.00/263M [00:00<?, ?B/s]#015Downloading: 2%|▏ | 4.13M/263M [00:00<00:06, 41.3MB/s]#015Downloading: 3%|▎ | 8.25M/263M [00:00<00:06, 41.2MB/s]#015Downloading: 5%|▍ | 12.8M/263M [00:00<00:05, 42.4MB/s]#015Downloading: 7%|▋ | 17.5M/263M [00:00<00:05, 43.8MB/s]#015Downloading: 9%|▊ | 22.4M/263M [00:00<00:05, 45.2MB/s]#015Downloading: 10%|█ | 27.3M/263M [00:00<00:05, 46.2MB/s]#015Downloading: 12%|█▏ | 32.2M/263M [00:00<00:04, 47.2MB/s]#015Downloading: 14%|█▍ | 37.3M/263M [00:00<00:04, 48.1MB/s]#015Downloading: 16%|█▌ | 42.3M/263M [00:00<00:04, 48.7MB/s]#015Downloading: 18%|█▊ | 47.3M/263M [00:01<00:04, 49.1MB/s]#015Downloading: 20%|█▉ | 52.3M/263M [00:01<00:04, 49.4MB/s]#015Downloading: 22%|██▏ | 57.6M/263M [00:01<00:04, 50.4MB/s]#015Downloading: 24%|██▍ | 63.7M/263M [00:01<00:03, 53.3MB/s]#015Downloading: 27%|██▋ | 69.9M/263M [00:01<00:03, 55.6MB/s]#015Downloading: 29%|██▉ | 76.1M/263M [00:01<00:03, 57.3MB/s]#015Downloading: 31%|███▏ | 82.3M/263M [00:01<00:03, 58.6MB/s]#015Downloading: 33%|███▎ | 88.2M/263M [00:01<00:02, 58.6MB/s]#015Downloading: 36%|███▌ | 94.5M/263M [00:01<00:02, 59.8MB/s]#015Downloading: 38%|███▊ | 101M/263M [00:01<00:02, 60.7MB/s] #015Downloading: 41%|████ | 107M/263M [00:02<00:02, 57.8MB/s]#015Downloading: 43%|████▎ | 113M/263M [00:02<00:02, 55.2MB/s]#015Downloading: 45%|████▍ | 118M/263M [00:02<00:02, 52.6MB/s]#015Downloading: 47%|████▋ | 124M/263M [00:02<00:02, 51.7MB/s]#015Downloading: 49%|████▉ | 129M/263M [00:02<00:02, 51.1MB/s]#015Downloading: 51%|█████ | 134M/263M [00:02<00:02, 50.8MB/s]#015Downloading: 53%|█████▎ | 139M/263M [00:02<00:02, 50.7MB/s]#015Downloading: 55%|█████▍ | 144M/263M [00:02<00:02, 49.6MB/s]#015Downloading: 57%|█████▋ | 149M/263M [00:02<00:02, 49.7MB/s]#015Downloading: 59%|█████▊ | 154M/263M [00:02<00:02, 49.9MB/s]#015Downloading: 60%|██████ | 159M/263M [00:03<00:02, 49.9MB/s]#015Downloading: 62%|██████▏ | 164M/263M [00:03<00:01, 49.6MB/s]#015Downloading: 64%|██████▍ | 169M/263M [00:03<00:01, 49.7MB/s]#015Downloading: 66%|██████▌ | 174M/263M [00:03<00:01, 49.8MB/s]#015Downloading: 68%|██████▊ | 179M/263M [00:03<00:01, 49.9MB/s]#015Downloading: 70%|██████▉ | 184M/263M [00:03<00:01, 49.9MB/s]#015Downloading: 72%|███████▏ | 189M/263M [00:03<00:01, 50.0MB/s]#015Downloading: 74%|███████▍ | 194M/263M [00:03<00:01, 50.0MB/s]#015Downloading: 76%|███████▌ | 199M/263M [00:03<00:01, 50.1MB/s]#015Downloading: 78%|███████▊ | 205M/263M [00:03<00:01, 51.3MB/s]#015Downloading: 80%|████████ | 211M/263M [00:04<00:00, 53.9MB/s]#015Downloading: 82%|████████▏ | 217M/263M [00:04<00:00, 56.1MB/s]#015Downloading: 85%|████████▍ | 223M/263M [00:04<00:00, 57.3MB/s]#015Downloading: 87%|████████▋ | 229M/263M [00:04<00:00, 58.6MB/s]#015Downloading: 89%|████████▉ | 235M/263M [00:04<00:00, 59.7MB/s]#015Downloading: 92%|█████████▏| 241M/263M [00:04<00:00, 58.4MB/s]#015Downloading: 94%|█████████▍| 247M/263M [00:04<00:00, 52.6MB/s]#015Downloading: 96%|█████████▌| 253M/263M [00:04<00:00, 51.7MB/s]#015Downloading: 98%|█████████▊| 258M/263M [00:04<00:00, 50.8MB/s]#015Downloading: 100%|█████████▉| 263M/263M [00:05<00:00, 50.9MB/s]#015Downloading: 100%|██████████| 263M/263M [00:05<00:00, 52.2MB/s]
[INFO|file_utils.py:1305] 2020-12-31 08:28:36,253 >> storing https://huggingface.co/distilbert-base-cased/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19
[INFO|file_utils.py:1308] 2020-12-31 08:28:36,253 >> creating metadata file for /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19
12/31/2020 08:28:36 - INFO - filelock - Lock 139800303634584 released on /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19.lock
[INFO|modeling_utils.py:1024] 2020-12-31 08:28:36,253 >> loading weights file https://huggingface.co/distilbert-base-cased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/9c9f39769dba4c5fe379b4bc82973eb01297bd607954621434eb9f1bc85a23a0.06b428c87335c1bb22eae46fdab31c8286efa0aa09e898a7ac42ddf5c3f5dc19
[WARNING|modeling_utils.py:1132] 2020-12-31 08:28:38,515 >> Some weights of the model checkpoint at distilbert-base-cased were not used when initializing DistilBertForSequenceClassification: ['vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_layer_norm.bias', 'vocab_projector.weight', 'vocab_projector.bias']
- This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:1143] 2020-12-31 08:28:38,515 >> Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-cased and are newly initialized: ['pre_classifier.weight', 'pre_classifier.bias', 'classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
#015 0%| | 0/4 [00:00<?, ?ba/s]#015 25%|██▌ | 1/4 [00:00<00:00, 9.17ba/s]#015 75%|███████▌ | 3/4 [00:00<00:00, 10.17ba/s]#015100%|██████████| 4/4 [00:00<00:00, 13.12ba/s]
#015 0%| | 0/1 [00:00<?, ?ba/s]#015100%|██████████| 1/1 [00:00<00:00, 29.95ba/s]
#015 0%| | 0/2 [00:00<?, ?ba/s]#015100%|██████████| 2/2 [00:00<00:00, 14.81ba/s]#015100%|██████████| 2/2 [00:00<00:00, 14.77ba/s]
12/31/2020 08:28:39 - INFO - __main__ - Sample 2619 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 2916, 'input_ids': [101, 1109, 10830, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 3081, 5097, 1104, 4961, 1149, 13260, 9966, 1222, 1140, 119, 102, 20661, 1127, 1678, 1146, 1114, 24987, 1149, 13260, 1147, 1692, 1222, 7277, 2180, 5303, 117, 3455, 170, 3081, 118, 3674, 21100, 2998, 1106, 1103, 2175, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'label': 1, 'sentence1': 'The proceedings were taken up with prosecutors outlining their case against Amrozi , reading 33 pages of documents outlining allegations against him .', 'sentence2': 'Proceedings were taken up with prosecutors outlining their case against Amrozi , reading a 33-page accusation letter to the court .'}.
12/31/2020 08:28:39 - INFO - __main__ - Sample 456 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 509, 'input_ids': [101, 20394, 11252, 1424, 3878, 1684, 1111, 1103, 4116, 118, 5534, 1433, 1132, 170, 6539, 4010, 1111, 9283, 1105, 6646, 1110, 1919, 1344, 3075, 1104, 1397, 3625, 112, 188, 5200, 1728, 1107, 1594, 118, 7820, 20394, 11252, 15449, 119, 102, 9018, 1116, 1107, 20394, 11252, 15449, 112, 188, 4116, 118, 5534, 1433, 1132, 170, 6539, 4010, 1111, 9283, 117, 1105, 6646, 1110, 1919, 1344, 3075, 1104, 3625, 112, 188, 5200, 1728, 1107, 1103, 1594, 118, 187, 15677, 3660, 1805, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'label': 1, 'sentence1': "Chechen officials working for the Moscow-backed government are a frequent target for rebels and tension is running high ahead of next Sunday 's presidential election in war-torn Chechnya .", 'sentence2': "Officials in Chechnya 's Moscow-backed government are a frequent target for rebels , and tension is running high ahead of Sunday 's presidential election in the war-ravaged region ."}.
12/31/2020 08:28:39 - INFO - __main__ - Sample 102 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'idx': 116, 'input_ids': [101, 6433, 111, 11767, 112, 188, 2260, 4482, 7448, 2174, 1116, 5799, 125, 119, 1969, 1827, 1106, 5103, 1495, 119, 1851, 117, 1229, 11896, 1116, 1810, 4426, 2174, 1116, 2204, 127, 119, 126, 1827, 1106, 122, 117, 20278, 119, 1851, 119, 102, 1109, 6433, 111, 11767, 112, 188, 2260, 10146, 1108, 1146, 122, 119, 3453, 1827, 117, 1137, 121, 119, 1407, 3029, 117, 1106, 5311, 1559, 119, 5599, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'label': 0, 'sentence1': "Standard & Poor 's 500 stock index futures declined 4.40 points to 983.50 , while Nasdaq futures fell 6.5 points to 1,206.50 .", 'sentence2': "The Standard & Poor 's 500 Index was up 1.75 points , or 0.18 percent , to 977.68 ."}.
#015Downloading: 0%| | 0.00/1.67k [00:00<?, ?B/s]#015Downloading: 4.39kB [00:00, 3.86MB/s]
[INFO|trainer.py:388] 2020-12-31 08:28:43,678 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence2, idx, sentence1.
[INFO|trainer.py:388] 2020-12-31 08:28:43,678 >> The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence2, idx, sentence1.
[INFO|trainer.py:703] 2020-12-31 08:28:43,680 >> ***** Running training *****
[INFO|trainer.py:704] 2020-12-31 08:28:43,680 >> Num examples = 3668
[INFO|trainer.py:705] 2020-12-31 08:28:43,680 >> Num Epochs = 3
[INFO|trainer.py:706] 2020-12-31 08:28:43,680 >> Instantaneous batch size per device = 32
[INFO|trainer.py:707] 2020-12-31 08:28:43,680 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:708] 2020-12-31 08:28:43,680 >> Gradient Accumulation steps = 1
[INFO|trainer.py:709] 2020-12-31 08:28:43,680 >> Total optimization steps = 345
#015 [... tqdm training progress output for steps 0-345 elided (throughput around 8 it/s) ...]
[INFO|trainer.py:862] 2020-12-31 08:29:28,297 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
100%|██████████| 345/345 [00:44<00:00, 7.73it/s]
[INFO|trainer.py:1226] 2020-12-31 08:29:28,298 >> Saving model checkpoint to /opt/ml/model
[INFO|configuration_utils.py:289] 2020-12-31 08:29:28,300 >> Configuration saved in /opt/ml/model/config.json
[INFO|modeling_utils.py:814] 2020-12-31 08:29:28,950 >> Model weights saved in /opt/ml/model/pytorch_model.bin
12/31/2020 08:29:28 - INFO - __main__ - ***** Train results *****
12/31/2020 08:29:28 - INFO - __main__ - global_step = 345
12/31/2020 08:29:28 - INFO - __main__ - training_loss = 0.4789575106855752
12/31/2020 08:29:28 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:388] 2020-12-31 08:29:28,986 >> The following columns in the evaluation set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: sentence2, idx, sentence1.
[INFO|trainer.py:1412] 2020-12-31 08:29:28,987 >> ***** Running Evaluation *****
[INFO|trainer.py:1413] 2020-12-31 08:29:28,987 >> Num examples = 408
[INFO|trainer.py:1414] 2020-12-31 08:29:28,987 >> Batch size = 8
12/31/2020 08:29:29 - INFO - /opt/conda/lib/python3.6/site-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow
100%|██████████| 51/51 [00:00<00:00, 72.39it/s]
12/31/2020 08:29:29 - INFO - __main__ - ***** Eval results mrpc *****
12/31/2020 08:29:29 - INFO - __main__ - epoch = 3.0
12/31/2020 08:29:29 - INFO - __main__ - eval_accuracy = 0.7892156862745098
12/31/2020 08:29:29 - INFO - __main__ - eval_combined_score = 0.8183667083854819
12/31/2020 08:29:29 - INFO - __main__ - eval_f1 = 0.847517730496454
12/31/2020 08:29:29 - INFO - __main__ - eval_loss = 0.4569968283176422
2020-12-31 08:29:40 Uploading - Uploading generated training model
2020-12-31 08:30:16 Completed - Training job completed
Training seconds: 357
Billable seconds: 357
```
</details>
For local testing you can run this script. It sets all the required SageMaker environment variables before invoking the training script.
```bash
export TASK_NAME=mrpc
export SM_CHANNELS='["test","train"]'
export SM_OUTPUT_DATA_DIR=/opt/ml/output/data
export SM_MODEL_DIR=/opt/ml/model
export SM_CHANNEL_TEST=/opt/ml/input/data/test
export SM_CHANNEL_TRAIN=/opt/ml/input/data/train
python ../../transformers/examples/text-classification/run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train True \
--do_eval True \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
  --num_train_epochs 3
```
I would love to receive suggestions for improvement.
If it looks okay to you, I would move `is_run_on_sagemaker()` to the correct path and we could merge it.
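For reference, a minimal sketch of what such a helper could look like (the exact check in this PR may differ; treating the presence of `SM_*` environment variables as the signal is an assumption on my side):

```python
import os


def is_run_on_sagemaker() -> bool:
    # SageMaker training containers export a family of SM_* environment
    # variables (SM_MODEL_DIR, SM_CHANNELS, SM_OUTPUT_DATA_DIR, ...), so their
    # presence is a cheap heuristic for "this script runs inside a SageMaker job".
    return any(key.startswith("SM_") for key in os.environ)
```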
~~P.S. I also added a fix for the `train_result.metrics` https://discuss.huggingface.co/t/attributeerror-trainoutput-object-has-no-attribute-metrics-when-finetune-custom-dataset/2970~~ mistake from my side
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9367/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9367/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9367",
"html_url": "https://github.com/huggingface/transformers/pull/9367",
"diff_url": "https://github.com/huggingface/transformers/pull/9367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9367.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9366/comments | https://api.github.com/repos/huggingface/transformers/issues/9366/events | https://github.com/huggingface/transformers/issues/9366 | 776,810,175 | MDU6SXNzdWU3NzY4MTAxNzU= | 9,366 | How to implement seq2seq attention mask conviniently? | {
"login": "zhizeng8",
"id": 49787234,
"node_id": "MDQ6VXNlcjQ5Nzg3MjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/49787234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhizeng8",
"html_url": "https://github.com/zhizeng8",
"followers_url": "https://api.github.com/users/zhizeng8/followers",
"following_url": "https://api.github.com/users/zhizeng8/following{/other_user}",
"gists_url": "https://api.github.com/users/zhizeng8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhizeng8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhizeng8/subscriptions",
"organizations_url": "https://api.github.com/users/zhizeng8/orgs",
"repos_url": "https://api.github.com/users/zhizeng8/repos",
"events_url": "https://api.github.com/users/zhizeng8/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhizeng8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The image is also on this link.\r\nhttps://img-blog.csdnimg.cn/20191025102941935.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9qYWNrY3VpLmJsb2cuY3Nkbi5uZXQ=,size_16,color_FFFFFF,t_70",
"Hey @zhizeng8, \r\n\r\nIn the future, it would be nice if such questions are posted in the forum: https://discuss.huggingface.co/ as it is not about a bug. \r\nTo answer your question I'd use something like the following\r\n\r\n```python\r\n import torch\r\n\r\n tgt_len = 5\r\n # make causal mask\r\n mask = torch.full((tgt_len, tgt_len), float(\"-inf\"))\r\n mask_cond = torch.arange(mask.size(-1))\r\n mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)\r\n # attend to encoder part\r\n mask[:, :3] = 0\r\n```\r\n\r\nThis mask can however not just be input as an `attention_mask` to transformer models. Because BERT accepts 3d masks however with the 0-th index being the batch_size the above mask could be extended for all batches and input to BERT:\r\n\r\n```python\r\n3d_attention_mask = mask[None, :, :]\r\n\r\nbert = BertModel.from_pretrained(...)\r\nbert(input_ids, attention_mask=3d_attention_mask)\r\n```",
"Thank you very much!\r\nI think in my case the 3d_attention_mask is different for each instance in a batch, due to the different length of source and target sequence.",
"I have been reading this article recently. BertModel accepts 3d masks of dimensions [batch_size, from_seq_length, to_seq_length]. \r\n```\r\ntext='I love you' \r\nattention_mask = [[[1,0,0],[1,1,0],[1,1,1]]]\r\n```\r\nMaybe can help you.\r\nAlso I have a question.\r\n```\r\ntext='I love you' \r\nattention_mask = [[[1,1,1],[1,1,1],[1,1,1]]]\r\n```\r\n I found tensor of the last word 'you' are not same, I don't know the reason.",
"I know. Last word's tensor in first attention layer is same, but there are 12 attention layer. Hiddenstate may change.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,609 | 1,619 | 1,619 | NONE | null | BERT's attention mask is square, GPT's attention mask is triangular. How to implement seq2seq attention mask with transformers package conviniently? like the one appears in UniLM, a triangle concatenates a rectangle.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9365/comments | https://api.github.com/repos/huggingface/transformers/issues/9365/events | https://github.com/huggingface/transformers/issues/9365 | 776,702,511 | MDU6SXNzdWU3NzY3MDI1MTE= | 9,365 | Multi turn conversation with Blender Bot | {
"login": "mailong25",
"id": 12481660,
"node_id": "MDQ6VXNlcjEyNDgxNjYw",
"avatar_url": "https://avatars.githubusercontent.com/u/12481660?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mailong25",
"html_url": "https://github.com/mailong25",
"followers_url": "https://api.github.com/users/mailong25/followers",
"following_url": "https://api.github.com/users/mailong25/following{/other_user}",
"gists_url": "https://api.github.com/users/mailong25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mailong25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mailong25/subscriptions",
"organizations_url": "https://api.github.com/users/mailong25/orgs",
"repos_url": "https://api.github.com/users/mailong25/repos",
"events_url": "https://api.github.com/users/mailong25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mailong25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @mailong25 for blenderbot the dialogs are separated by the newline `\\n`.\r\n\r\nSo the text should be \r\n`I am from Vietnam\\nI've never been there, but I've always wanted to go. How do you like it?\\npretty good actually , where you are from ?`\r\n\r\nwhich model are you using, 90M or 3B?\r\n\r\nAlso, could you post the `parlai` command that you used?",
"Thanks for a quick response. I use the `facebook/blenderbot-1B-distill` model\r\n\r\nFor parlai, I use the cmd:\r\n\r\n`python parlai/scripts/interactive.py -t blended_skill_talk -mf zoo:blender/blender_1Bdistill/model --include_personas=False`",
"I tried to use the `'\\n'` separator with `blenderbot-1B` and `blender_1Bdistill` and the results are still the same with ` </sep>`, which are different than parlai version.\r\n\r\n\r\nAlso, when I tried to move the model and the input sentence to \"cuda\", the following errors occur:\r\n\r\n```\r\nimport os\r\nos.environ[\"TRANSFORMERS_CACHE\"] = '/mnt/disks/blender/'\r\nfrom transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration\r\nmname = 'facebook/blenderbot-3B'\r\nmodel = BlenderbotForConditionalGeneration.from_pretrained(mname)\r\ntokenizer = BlenderbotTokenizer.from_pretrained(mname)\r\nmodel.to('cuda')\r\n\r\nimport torch\r\nwith torch.no_grad():\r\n UTTERANCE = []\r\n UTTERANCE.append(\"I am from Vietnam\")\r\n UTTERANCE.append(\"I've never been there, but I've always wanted to go. How do you like it?\")\r\n UTTERANCE.append(\"pretty good actually , where you are from ?\")\r\n \r\n UTTERANCE = '\\n'.join(UTTERANCE)\r\n print(UTTERANCE)\r\n inputs = tokenizer([UTTERANCE], return_tensors='pt')\r\n reply_ids = model.generate(**inputs)\r\n print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in reply_ids])\r\n\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-9-dff8c43ffc48> in <module>\r\n 10 print(UTTERANCE)\r\n 11 inputs = tokenizer([UTTERANCE], return_tensors='pt')\r\n---> 12 reply_ids = model.generate(**inputs)\r\n 13 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in reply_ids])\r\n\r\n~/.local/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)\r\n 13 def decorate_context(*args, **kwargs):\r\n 14 with self:\r\n---> 15 return func(*args, **kwargs)\r\n 16 return decorate_context\r\n 17 \r\n\r\n~/.local/lib/python3.7/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, **model_kwargs)\r\n 501 if self.config.is_encoder_decoder:\r\n 502 # add encoder_outputs to model_kwargs\r\n--> 503 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)\r\n 504 \r\n 505 # set input_ids as decoder_input_ids\r\n\r\n~/.local/lib/python3.7/site-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)\r\n 84 argument: value for argument, value in model_kwargs.items() if not argument.startswith(\"decoder_\")\r\n 85 }\r\n---> 86 model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)\r\n 87 return model_kwargs\r\n 88 \r\n\r\n~/.local/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 720 result = self._slow_forward(*input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n 724 _global_forward_hooks.values(),\r\n\r\n~/.local/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)\r\n 750 \r\n 751 if inputs_embeds is None:\r\n--> 752 
inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale\r\n 753 \r\n 754 embed_pos = self.embed_positions(input_shape)\r\n\r\n~/.local/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 720 result = self._slow_forward(*input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n 724 _global_forward_hooks.values(),\r\n\r\n~/.local/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)\r\n 124 return F.embedding(\r\n 125 input, self.weight, self.padding_idx, self.max_norm,\r\n--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n 127 \r\n 128 def extra_repr(self) -> str:\r\n\r\n~/.local/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 1812 # remove once script supports set_grad_enabled\r\n 1813 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n 1815 \r\n 1816 \r\n\r\nRuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select\r\n```",
"The error occurs because the `inputs` is not on GPU, putting `inputs` on GPU should fix the error. ",
"Hi. I tried to use the blenderbot example code from huggingface. I just copy pasted it in colab and ran it. But it is showing me the following error : \n\nTypeError: forward() got an unexpected keyword argument 'token_type_ids'\n\nPlease help me out. What can be done to solve this?",
"According to the Parlai [documentation](https://parl.ai/docs/tutorial_task.html), the Parlai format is to use a `<\\t>` or 4 spaces as a separator of turns in a conversation.\r\nI ran your example using 4 spaces and got the Parlai response\r\n```\r\nfrom transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer\r\n\r\nMODEL_ID = \"facebook/blenderbot-400M-distill\"\r\nmodel = BlenderbotForConditionalGeneration.from_pretrained(MODEL_ID)\r\ntokenizer = BlenderbotTokenizer.from_pretrained(MODEL_ID)\r\n\r\ntext = [\"I am from Vietnam I've never been there, but I've always wanted to go. How do you like it? pretty good actually , where you are from ?\"]\r\n\r\ninputs = tokenizer(text, return_tensors='pt')\r\nres = model.generate(inputs['input_ids'])\r\ntokenizer.batch_decode(res)\r\n\r\n#[\"<s> I'm from the United States. I've heard it's a beautiful place to visit. </s>\"]\r\n```\r\n\r\n",
"> According to the Parlai [documentation](https://parl.ai/docs/tutorial_task.html), the Parlai format is to use a `<\\t>` or 4 spaces as a separator of turns in a conversation.\r\n> I ran your example using 4 spaces and got the Parlai response\r\n> \r\n> ```\r\n> from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer\r\n> \r\n> MODEL_ID = \"facebook/blenderbot-400M-distill\"\r\n> model = BlenderbotForConditionalGeneration.from_pretrained(MODEL_ID)\r\n> tokenizer = BlenderbotTokenizer.from_pretrained(MODEL_ID)\r\n> \r\n> text = [\"I am from Vietnam I've never been there, but I've always wanted to go. How do you like it? pretty good actually , where you are from ?\"]\r\n> \r\n> inputs = tokenizer(text, return_tensors='pt')\r\n> res = model.generate(inputs['input_ids'])\r\n> tokenizer.batch_decode(res)\r\n> \r\n> #[\"<s> I'm from the United States. I've heard it's a beautiful place to visit. </s>\"]\r\n> ```\r\n\r\nI did a whole bunch of testing with different turn separation tokens, and the only one that consistently separated the turns and created outputs that made sense was 4 spaces. \r\n\r\nEverything else would sometimes separate a turn, but sometimes not (tab, new line, `</s> <s>`, two spaces)\r\n\r\nThe documentation there mentioned that tabs are rendered as 4 spaces in the browser, but it still should be tab as a separator? \r\n\r\nAny new insights into why?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Was there a finalized way to construct inputs for Blenderbot? According to this https://github.com/huggingface/transformers/blob/v4.5.1/src/transformers/models/blenderbot/modeling_blenderbot.py#L499 should be using `</s> <s>` not sure if I should be following this or `\\n` as indicated in parlai",
"I have tried four spaces, \\n and \\t. Four spaces gave the best results.\r\n",
"Should the first sentence in the encoder input be prepended with an extra space ' '? Because I note that the first token generated by the decoder has the space prefix (e.g., ' I' or ' yes').",
"I noticed the space prefix in the generation too. But I didn't check if adding extra space to encoder input gives better results.",
"Check out this example. This example is crafted in such a way that it could be a 1 turn or 2 turn depend on how you separate it. I am using the 3B model.\r\n```\r\nUsing \\n as seperator\r\nUTTERANCE = \"I am from tokyo\\nwhere you are from?\"\r\ninputs = tokenizer([UTTERANCE], return_tensors=\"pt\")\r\nreply_ids = model.generate(**inputs)\r\nprint(tokenizer.batch_decode(reply_ids,skip_special_tokens=True))\r\n#[' I was born and raised in Tokyo, the capital of Japan. How about you?']\r\n\r\n\r\nUsing four spaces as seperator\r\nUTTERANCE = \"I am from tokyo where you are from?\"\r\ninputs = tokenizer([UTTERANCE], return_tensors=\"pt\")\r\nreply_ids = model.generate(**inputs)\r\nprint(tokenizer.batch_decode(reply_ids,skip_special_tokens=True))\r\n#[\" I'm from Tokyo. It's the capital of Japan. It's a big city\"]\r\n\r\nUsing <t> as seperator\r\nUTTERANCE = \"I am from tokyo<t>where you are from?\"\r\ninputs = tokenizer([UTTERANCE], return_tensors=\"pt\")\r\nreply_ids = model.generate(**inputs)\r\nprint(tokenizer.batch_decode(reply_ids,skip_special_tokens=True))\r\n#[\" I'm from the United States. I've never been to Tokyo, but I've always wanted to go.\"]\r\n \r\n\r\n# Using </s> <s> as seperator\r\nUTTERANCE = \"I am from tokyo</s> <s>where you are from?\"\r\ninputs = tokenizer([UTTERANCE], return_tensors=\"pt\")\r\nreply_ids = model.generate(**inputs)\r\nprint(tokenizer.batch_decode(reply_ids,skip_special_tokens=True))\r\n#[' I am also from Tokyo, the capital and most populous metropolitan area in Japan.']\r\n\r\n```\r\n\r\nIt looks like using ```\\n``` and four spaces, the model interpret it as 2 turn but 1 turn for using ```\\<t> and \\</s> \\<s>```"
] | 1,609 | 1,689 | 1,620 | NONE | null | # 🚀 Feature request
Hi there. Is there any way to predict a response using a multi-turn dialog context with the BlenderBot model? From your [example](https://huggingface.co/transformers/model_doc/blenderbot.html) I saw that it only uses a single-turn context.
I tried to use a ` </sep>` token to separate the human/bot turns, as in the following example:
```
Human: I am from Vietnam
Bot: I've never been there, but I've always wanted to go. How do you like it?
Human: pretty good actually , where you are from ?
```
Concatenated input:
`I am from Vietnam</sep> I've never been there, but I've always wanted to go. How do you like it?</sep> pretty good actually , where you are from ?`
huggingface's model response: `I am from the United States. I have never been to Vietnam, but I have always wanted to go.`
Facebook_ParlAI's model response: `I'm from the United States. I've heard it's a great place to visit, though.`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9365/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9365/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9364/comments | https://api.github.com/repos/huggingface/transformers/issues/9364/events | https://github.com/huggingface/transformers/issues/9364 | 776,630,873 | MDU6SXNzdWU3NzY2MzA4NzM= | 9,364 | Finetune mbart rouge score difference between training and evaluation part | {
"login": "eymenkagantaspinar",
"id": 58509497,
"node_id": "MDQ6VXNlcjU4NTA5NDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/58509497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eymenkagantaspinar",
"html_url": "https://github.com/eymenkagantaspinar",
"followers_url": "https://api.github.com/users/eymenkagantaspinar/followers",
"following_url": "https://api.github.com/users/eymenkagantaspinar/following{/other_user}",
"gists_url": "https://api.github.com/users/eymenkagantaspinar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eymenkagantaspinar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eymenkagantaspinar/subscriptions",
"organizations_url": "https://api.github.com/users/eymenkagantaspinar/orgs",
"repos_url": "https://api.github.com/users/eymenkagantaspinar/repos",
"events_url": "https://api.github.com/users/eymenkagantaspinar/events{/privacy}",
"received_events_url": "https://api.github.com/users/eymenkagantaspinar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @Eymen3455 \r\n\r\nIn the `run_eval` command I see that you are setting `n_obs` to 100 while the `finetune_trainer` uses all test examples, could you maybe run eval again with all test examples and see if you get close results?",
"First of all, thank you very much @patil-suraj for your reply. I did as you said and used all the test dataset for `n_obs`, but the result remained unchanged. In addition, when we examine the texts produced in `finetune_trainer.py` and `run_eval.py`, the texts produced in `finetune_trainer.py` are acceptable. While the texts produced in `run_eval.py` are different, it is worse and the rouge score is less than half of obtained scores in `finetune_trainer.py`.\r\n\r\nThen when I saw this issue (https://github.com/huggingface/transformers/issues/9236), I used Xsum dataset as dataset and rouge score increased in `run_eval.py`. Just changing the dataset caused to get logical text in `run_eval.py`, but still I could not understand the difference between the rouge score and generated texts in `finetune_trainer.py` and `run_eval.py`.\r\n\r\nI do not understand what has affected the dataset so much. When I use the mbart model, why can't I get the same success in `finetune_train.py` and `run_eval.py`, what could be the reason for this? By the way, when I try the `sshleifer/student_cnn_12_6` model instead of the mbart model, I can achieve exactly the same success in `finetune_trainer.py` and `run_eval.py`. I would appreciate if you could help.\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I ran into the same problem and positioned it to the max_length parameters in Seq2SeqTrainer.evaluate() . Whether to set max_length parameters will result in different results.",
"I have encountered the same issue:\r\n\r\nval_rouge2 obtained during training is different from scores I got with \"run_eval.py\" script\r\n\r\ndo you have any suggestions ? @patil-suraj "
] | 1,609 | 1,626 | 1,614 | NONE | null | ### Environment info
- transformers from source
- google colab
### Information
When I use the `sshleifer/student_cnn_12_6` model with `finetune_trainer.py` and then run the fine-tuned model in `run_eval.py`, I get high and closely matching ROUGE scores. However, when I give `facebook/mbart-large-cc25` as model and tokenizer to `finetune_trainer.py` (using the same `LID` for both eos and bos), the generated text and ROUGE scores produced by the `finetune_trainer.py` evaluation and prediction section are not good, and when I run the fine-tuned model with `run_eval.py` the ROUGE scores are very close to 0. What could be the reason for the low ROUGE scores, and in particular for the ROUGE score difference between training and evaluation, when using the mBART model?
`Finetune_trainer.py` arguments
>!python /content/transformers/examples/seq2seq/finetune_trainer.py
--model_name_or_path facebook/mbart-large-cc25 \
--tokenizer_name facebook/mbart-large-cc25 \
--data_dir /content/transformers/cnn_dm_tr \
--output_dir finetuned_model --overwrite_output_dir \
--learning_rate=3e-5 \
--warmup_steps 500 --sortish_sampler \
--fp16 \
--n_val 500 \
--freeze_encoder --freeze_embeds \
--src_lang tr_TR --tgt_lang tr_TR \
--gradient_accumulation_steps=1 \
--per_device_train_batch_size=4 --per_device_eval_batch_size=4 \
--num_train_epochs=2 \
--save_steps 3000 --eval_steps 3000 \
--logging_first_step \
--max_target_length 56 --val_max_target_length 142 --test_max_target_length 142 \
--do_train --do_eval --do_predict \
--evaluation_strategy steps \
--predict_with_generate --sortish_sampler \
"$@"
`Finetune_trainer.py` results
>test_loss = 7.9716
test_rouge1 = 5.6445
test_rouge2 = 1.6458
test_rougeL = 4.8763
test_rougeLsum = 5.3712
>val_loss = 7.9894
val_rouge1 = 5.0368
val_rouge2 = 1.6249
val_rougeL = 4.1304
val_rougeLsum = 4.6041
`Run_eval.py` arguments
>!python /content/transformers/examples/seq2seq/run_eval.py
/content/finetuned_model \
/content/transformers/cnn_dm_tr/test.source \
dbart_cnn_12_6_test_gens.txt \
--reference_path /content/transformers/cnn_dm_tr/test.target \
--score_path dbart_cnn_12_6_test_rouge.json \
--n_obs 100 \
--task summarization --bs 2 --fp16
`Run_eval.py` results
>{'rouge1': 0.1091, 'rouge2': 0.0, 'rougeL': 0.1091, 'rougeLsum': 0.1091, 'n_obs': 50, 'runtime': 601, 'seconds_per_sample': 12.02}
### Who can help
@patil-suraj @sshleifer
### Expected behavior
Be able to get high and close rouge scores for `run_eval.py` and `finetune_trainer.py` when I use `mbart-large-cc25` as model and tokenizer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9364/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9363/comments | https://api.github.com/repos/huggingface/transformers/issues/9363/events | https://github.com/huggingface/transformers/pull/9363 | 776,608,062 | MDExOlB1bGxSZXF1ZXN0NTQ3MDY0MjE3 | 9,363 | Make sure to use return dict for the encoder call inside RagTokenForGeneration | {
"login": "dblakely",
"id": 20539855,
"node_id": "MDQ6VXNlcjIwNTM5ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/20539855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dblakely",
"html_url": "https://github.com/dblakely",
"followers_url": "https://api.github.com/users/dblakely/followers",
"following_url": "https://api.github.com/users/dblakely/following{/other_user}",
"gists_url": "https://api.github.com/users/dblakely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dblakely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dblakely/subscriptions",
"organizations_url": "https://api.github.com/users/dblakely/orgs",
"repos_url": "https://api.github.com/users/dblakely/repos",
"events_url": "https://api.github.com/users/dblakely/events{/privacy}",
"received_events_url": "https://api.github.com/users/dblakely/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | ## What does this PR do?
At some point, `return_dict` was set to be `False` by default inside BART. However, this created a type error inside `RagTokenForGeneration`, which was still written with the expectation that `return_dict=True` by default. This PR simply adds `return_dict=True` to the call to the BART encoder inside the `RagTokenForGeneration` code.
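To make the failure mode concrete, here is a small standalone illustration (not the code touched by this PR) of how the two return types differ on a BART encoder; attribute access such as `.last_hidden_state` only works on the `ModelOutput` returned when `return_dict=True`:

```python
from transformers import BartModel, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
encoder = BartModel.from_pretrained("facebook/bart-base").get_encoder()

inputs = tokenizer("why return_dict matters", return_tensors="pt")

as_tuple = encoder(**inputs, return_dict=False)   # plain tuple of tensors
as_output = encoder(**inputs, return_dict=True)   # BaseModelOutput

print(type(as_tuple), type(as_output))
print(as_output.last_hidden_state.shape)  # attribute access fails with AttributeError on the tuple variant
```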
## Tests
I did not create new tests because this change is very minor. I did run all the existing tests and they pass.
## Who can review?
Anyone can review, but I tagged these two because this involves RAG and BART:
@patrickvonplaten @lhoestq
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9363",
"html_url": "https://github.com/huggingface/transformers/pull/9363",
"diff_url": "https://github.com/huggingface/transformers/pull/9363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9363.patch",
"merged_at": 1609587555000
} |
https://api.github.com/repos/huggingface/transformers/issues/9362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9362/comments | https://api.github.com/repos/huggingface/transformers/issues/9362/events | https://github.com/huggingface/transformers/issues/9362 | 776,524,792 | MDU6SXNzdWU3NzY1MjQ3OTI= | 9,362 | Jupyter Notebook Kernel crashes when tokenizing large dataset | {
"login": "lthiet",
"id": 26815719,
"node_id": "MDQ6VXNlcjI2ODE1NzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/26815719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lthiet",
"html_url": "https://github.com/lthiet",
"followers_url": "https://api.github.com/users/lthiet/followers",
"following_url": "https://api.github.com/users/lthiet/following{/other_user}",
"gists_url": "https://api.github.com/users/lthiet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lthiet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lthiet/subscriptions",
"organizations_url": "https://api.github.com/users/lthiet/orgs",
"repos_url": "https://api.github.com/users/lthiet/repos",
"events_url": "https://api.github.com/users/lthiet/events{/privacy}",
"received_events_url": "https://api.github.com/users/lthiet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"i still see it's a problem.. running tokenizer",
"We can't really help you debug your script, you should probably ask for help on[ the forum ](https://discuss.huggingface.co/)"
] | 1,609 | 1,692 | 1,614 | NONE | null | ## Environment info
I am using 2 setups, my personal laptop and a cluster.
My laptop has this environment :
`transformers` version: 4.1.1
- Platform: Darwin-20.2.0-x86_64-i386-64bit
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
The cluster has this :
- `transformers` version: 4.1.1
- Platform: Linux-3.10.0-1160.6.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.1
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): DistilBERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
I followed [this example](https://huggingface.co/transformers/custom_datasets.html?highlight=custom%20datasets), but I modified the dataset part to include the one that I am using, which is described below.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
http://help.sentiment140.com/for-students
## To reproduce
Steps to reproduce the behavior:
1. Use this script
```python
from transformers import DistilBertTokenizerFast
from sklearn.model_selection import train_test_split
import pandas as pd
# Run these first :
# $ wget http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip
# $ unzip trainingandtestdata.zip -d ./data
# $ rm trainingandtestdata.zip
def get_data(path):
# Read the dataset
df = pd.read_csv(path, encoding='ISO-8859-1', header=None, nrows=None)
# Keep only the label and the text, replace 4 with 1
# Note: there are actually no neutral labels in the train dataset
df = df[[0, 5]].replace(2, 1).replace(4, 1)
# Rename
df = df.rename(columns={0: "label", 5: "text"})
return df
dftrain = get_data('data/training.1600000.processed.noemoticon.csv')
dftest = get_data('data/testdata.manual.2009.06.14.csv')
X_train = dftrain['text'].to_list()
y_train = dftrain['label'].to_list()
X_test = dftest['text'].to_list()
y_test = dftest['label'].to_list()
# Comment this to use full dataset
# _,X_train, _, y_train = train_test_split(X_train, y_train, test_size=0.05, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.25, random_state=1)
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(X_train, truncation=True, padding=True)
# To check the memory used
with open('output.txt', 'w') as f:
    print(str(train_encodings), file=f)
```
2. python tokenize_test.py
## Description
In total, I tried this script in 4 different settings:
1. Personal laptop as a python script
2. Personal laptop in a Jupyter Notebook
3. Cluster as a python script
4. Cluster in a Jupyter Notebook
In cases 2 and 4, the kernel died. I assume it is because of a memory error, since case 3 was killed as a process because I exceeded the memory usage. However, case 1 worked flawlessly. The training dataset is about 90 MB in total and weighs 1.6 GB after tokenization. My personal laptop has 16 GB of RAM and I reserve 4 GB of RAM on the cluster. The memory that ought to be used is clearly below the limit, yet I still get memory issues. Maybe there is a memory leak in a specific version somewhere?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9362/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9361/comments | https://api.github.com/repos/huggingface/transformers/issues/9361/events | https://github.com/huggingface/transformers/issues/9361 | 776,508,406 | MDU6SXNzdWU3NzY1MDg0MDY= | 9,361 | DeBERTa in TF (TFAutoModel): unrecognized configuration class | {
"login": "ck37",
"id": 50770,
"node_id": "MDQ6VXNlcjUwNzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/50770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ck37",
"html_url": "https://github.com/ck37",
"followers_url": "https://api.github.com/users/ck37/followers",
"following_url": "https://api.github.com/users/ck37/following{/other_user}",
"gists_url": "https://api.github.com/users/ck37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ck37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ck37/subscriptions",
"organizations_url": "https://api.github.com/users/ck37/orgs",
"repos_url": "https://api.github.com/users/ck37/repos",
"events_url": "https://api.github.com/users/ck37/events{/privacy}",
"received_events_url": "https://api.github.com/users/ck37/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @ck37 \r\n\r\n`DeBERTa` is currently PyTorch-only so it can't be loaded with `TFAutoModel`. The table on the doc's [homepage](https://huggingface.co/transformers/) shows whether the models have support in PyTorch, TensorFlow, and/or Flax.",
"Gotcha, thanks for the fast response. Do you think the TF side will be implemented at some point? It seems like there will be more interest in DeBERTa with it taking the lead in [SuperGLUE](https://super.gluebenchmark.com/leaderboard).",
"Yeah, it's pretty exciting! @patrickvonplaten might be able to give you eta of `TFDeberta`",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-3.10.0-957.21.3.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using: DeBERTa
The problem arises when using:
* [ x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Use TFAutoModel to import deberta-large
```python
from transformers import TFAutoModel
model = TFAutoModel.from_pretrained("microsoft/deberta-large")
```
Error:
```python
---------------------------------------------------------------------------
ValueError
<ipython-input-2-416d7de4fc12> in <module>
----> 1 model = TFAutoModel.from_pretrained("microsoft/deberta-large")
~/miniconda3/envs/hate2/lib/python3.6/site-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
583 "Unrecognized configuration class {} for this kind of TFAutoModel: {}.\n"
584 "Model type should be one of {}.".format(
--> 585 config.__class__, cls.__name__, ", ".join(c.__name__ for c in TF_MODEL_MAPPING.keys())
586 )
587 )
ValueError: Unrecognized configuration class <class 'transformers.models.deberta.configuration_deberta.DebertaConfig'> for this kind of TFAutoModel: TFAutoModel.
Model type should be one of LxmertConfig, MT5Config, T5Config, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig, ElectraConfig, FunnelConfig, DPRConfig, MPNetConfig.
```
## Expected behavior
I should be able to import deberta-large and deberta-base using TFAutoModel, or the documentation should be updated to clarify that they are pytorch only.
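For completeness, and as an assumption on my part that the PyTorch auto-mapping already covers DeBERTa, the non-TF equivalent should load the same checkpoint:

```python
from transformers import AutoModel

# Loads the PyTorch implementation; there is no TFDebertaModel yet, which is
# what the TFAutoModel error above is really saying.
model = AutoModel.from_pretrained("microsoft/deberta-large")
```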
Thanks as always for the amazing software, and please let me know if I should provide any other details or otherwise help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9361/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9360/comments | https://api.github.com/repos/huggingface/transformers/issues/9360/events | https://github.com/huggingface/transformers/issues/9360 | 776,494,110 | MDU6SXNzdWU3NzY0OTQxMTA= | 9,360 | Loading a set of tokenized files for training | {
"login": "nlpravi",
"id": 1936777,
"node_id": "MDQ6VXNlcjE5MzY3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1936777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nlpravi",
"html_url": "https://github.com/nlpravi",
"followers_url": "https://api.github.com/users/nlpravi/followers",
"following_url": "https://api.github.com/users/nlpravi/following{/other_user}",
"gists_url": "https://api.github.com/users/nlpravi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nlpravi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nlpravi/subscriptions",
"organizations_url": "https://api.github.com/users/nlpravi/orgs",
"repos_url": "https://api.github.com/users/nlpravi/repos",
"events_url": "https://api.github.com/users/nlpravi/events{/privacy}",
"received_events_url": "https://api.github.com/users/nlpravi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"In this case, you could write a custom `dataset` that will read your pickle files and return the examples from `__getitem__` method.\r\n\r\nIf you are looking for an efficient way of pre-tokenizing the dataset, saving/caching it for future use, and loading it for training then I would recommend you to take a look at [datasets](https://github.com/huggingface/datasets) library. It takes care of caching your pre-processed data and loading it efficiently (lazy loading, so memory won't blow up)",
"Thanks Suraj ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | I have a directory of files that are already tokenized using a pretrained tokenizer. Each file is a pickle file containing a list of objects where each object corresponds to a text sequence containing input_ids and attention_masks. The directory has thousands of files. I'm looking for an efficient way to load the data for training using Trainer. Do I have to write my own Dataloader or do I create a custom dataset using Datasets?
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9359/comments | https://api.github.com/repos/huggingface/transformers/issues/9359/events | https://github.com/huggingface/transformers/issues/9359 | 776,434,266 | MDU6SXNzdWU3NzY0MzQyNjY= | 9,359 | Training loss not getting logged | {
"login": "kunalpagarey",
"id": 38290549,
"node_id": "MDQ6VXNlcjM4MjkwNTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/38290549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kunalpagarey",
"html_url": "https://github.com/kunalpagarey",
"followers_url": "https://api.github.com/users/kunalpagarey/followers",
"following_url": "https://api.github.com/users/kunalpagarey/following{/other_user}",
"gists_url": "https://api.github.com/users/kunalpagarey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kunalpagarey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kunalpagarey/subscriptions",
"organizations_url": "https://api.github.com/users/kunalpagarey/orgs",
"repos_url": "https://api.github.com/users/kunalpagarey/repos",
"events_url": "https://api.github.com/users/kunalpagarey/events{/privacy}",
"received_events_url": "https://api.github.com/users/kunalpagarey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@sgugger is the best suited to answer you",
"This option is not implemented in Trainer.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | While training GPT2 using run_clm.py I wanted to track the training loss as well but could not find a way to do that with evaluation strategy = epoch. So I tried to look deeper into the code and found that may be adding `control.should_log = True` after line referred below will start logging training loss after every epoch.
https://github.com/huggingface/transformers/blob/ae333d04b29a25be1a70eaccd6260c294c243c5b/src/transformers/trainer_callback.py#L422
Please correct me if I am wrong and suggest how should I track training loss per epoch?
Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9359/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9358/comments | https://api.github.com/repos/huggingface/transformers/issues/9358/events | https://github.com/huggingface/transformers/issues/9358 | 776,411,771 | MDU6SXNzdWU3NzY0MTE3NzE= | 9,358 | error while finetuning for Regression task. | {
"login": "SAIVENKATARAJU",
"id": 46083296,
"node_id": "MDQ6VXNlcjQ2MDgzMjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/46083296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SAIVENKATARAJU",
"html_url": "https://github.com/SAIVENKATARAJU",
"followers_url": "https://api.github.com/users/SAIVENKATARAJU/followers",
"following_url": "https://api.github.com/users/SAIVENKATARAJU/following{/other_user}",
"gists_url": "https://api.github.com/users/SAIVENKATARAJU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SAIVENKATARAJU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SAIVENKATARAJU/subscriptions",
"organizations_url": "https://api.github.com/users/SAIVENKATARAJU/orgs",
"repos_url": "https://api.github.com/users/SAIVENKATARAJU/repos",
"events_url": "https://api.github.com/users/SAIVENKATARAJU/events{/privacy}",
"received_events_url": "https://api.github.com/users/SAIVENKATARAJU/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"Was this solved somehow?\r\n",
"Ping @Rocketknight1 ",
"Hi, the problem here is that our models have more than one output, and therefore don't work that well inside `Sequential`. You can do this with the [Keras functional API](https://keras.io/guides/functional_api/), or by [overriding `train_step`](https://keras.io/guides/customizing_what_happens_in_fit/) or just writing eager TF code.\r\n\r\nHowever, you might not need to do any of that, as our `SequenceClassification` models actually already support regression! If you set `num_labels=1`, we assume you want to do regression instead. So then the above code would just become:\r\n\r\n```\r\nfrom transformers import TFBertForSequenceClassification\r\nmodel = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=1)\r\noptimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-4)\r\n\r\nmodel.compile(loss='mse',\r\n optimizer=optimizer,\r\n metrics=['mae', 'mse'])\r\nmodel.fit(train_seq,train_labels,epochs=10)\r\n```\r\n\r\nYou could also try replacing the optimizer with `tf.keras.optimizers.Adam(learning_rate=2e-5)`, as we find Adam usually works a bit better than RMSprop in practice on Transformer models.",
"Hi @Rocketknight1 \r\nI was preforming a classification task. The code is\r\n`model = TFAutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=TOTAL_LABELS)\r\nfor layer in model.layers:\r\n layer.trainable= True\r\n\r\nfor layer in model.layers[:int(len(model.layers)*0.9) ]:\r\n layer.trainable= False`\r\n\r\nSaved the model as \r\n`tf.keras.models.save_model(model, PATH, overwrite=True, include_optimizer=True, save_format=\"tf\")`\r\n\r\nAnd then got an error while loading\r\n`model= tf.keras.models.load_model(PATH)`\r\n\r\nAlso can you please provide a link to the docs to set configs to mute the multiple outputs and get only logits as output from a BERT in tensorflow so that I can build a functional API and build layers on top of BERT. Thank you!",
"> Hi, the problem here is that our models have more than one output, and therefore don't work that well inside `Sequential`. You can do this with the [Keras functional API](https://keras.io/guides/functional_api/), or by [overriding `train_step`](https://keras.io/guides/customizing_what_happens_in_fit/) or just writing eager TF code.\r\n> \r\n> However, you might not need to do any of that, as our `SequenceClassification` models actually already support regression! If you set `num_labels=1`, we assume you want to do regression instead. So then the above code would just become:\r\n> \r\n> ```\r\n> from transformers import TFBertForSequenceClassification\r\n> model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=1)\r\n> optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-4)\r\n> \r\n> model.compile(loss='mse',\r\n> optimizer=optimizer,\r\n> metrics=['mae', 'mse'])\r\n> model.fit(train_seq,train_labels,epochs=10)\r\n> ```\r\n> \r\n> You could also try replacing the optimizer with `tf.keras.optimizers.Adam(learning_rate=2e-5)`, as we find Adam usually works a bit better than RMSprop in practice on Transformer models.\r\n\r\nI'm trying something very similar to this and getting:\r\n\r\n> ValueError: Failed to find data adapter that can handle input: <class 'transformers.tokenization_utils_base.BatchEncoding'>, (<class 'list'> containing values of types {\"<class 'float'>\"}\r\n\r\nHere is the code:\r\n\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import TFAutoModelForSequenceClassification, AutoTokenizer, pipeline\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nmodel_name = \"bert-base-cased\"\r\nmodel = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)\r\nmodel.compile(optimizer=\"adam\", loss=\"mse\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nmax_length = 64\r\nX_train, X_test, y_train, y_test = train_test_split(df[\"Clean\"].tolist(), df[\"Y\"].tolist(), test_size=0.2)\r\ntrain_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length)\r\nvalid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length)\r\n\r\nmodel.fit(train_encodings, y_train, epochs=3)\r\n```\r\n\r\nAny suggestions?",
"Hi @jhogg11 - the error happens because our tokenizers output `BatchEncoding` objects, not dicts, and Keras doesn't know what to do with them! It's also good practice to convert your labels to an array rather than passing them as a list. Try the following right before `model.fit()`:\r\n\r\n```\r\nX_train = dict(X_train)\r\ny_train = np.array(y_train)\r\n```",
"> Hi @jhogg11 - the error happens because our tokenizers output `BatchEncoding` objects, not dicts, and Keras doesn't know what to do with them! It's also good practice to convert your labels to an array rather than passing them as a list. Try the following right before `model.fit()`:\r\n> \r\n> ```\r\n> X_train = dict(X_train)\r\n> y_train = np.array(y_train)\r\n> ```\r\n\r\nTrying `X_train = dict(X_train)` gives me this error (since X_train is a list):\r\n\r\n> ValueError: dictionary update sequence element #0 has length 117; 2 is required\r\n\r\nI thought you might have meant `dict(train_encodings)` so I tried that, but it gives a similar error as the previous example.",
"Ah, I'm sorry, you're right! I meant to type `train_encodings`. And now you mention it, I realize the problem is actually twofold. Try replacing this:\r\n\r\n```\r\nmax_length = 64\r\nX_train, X_test, y_train, y_test = train_test_split(df[\"Clean\"].tolist(), df[\"Y\"].tolist(), test_size=0.2)\r\ntrain_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length)\r\nvalid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length)\r\n```\r\n\r\nwith this:\r\n\r\n```\r\nmax_length = 64\r\nX_train, X_test, y_train, y_test = train_test_split(df[\"Clean\"].tolist(), df[\"Y\"].tolist(), test_size=0.2)\r\ntrain_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length, return_tensors=\"np\")\r\nvalid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length, return_tensors=\"np\")\r\n\r\ntrain_encodings = dict(train_encodings)\r\nvalid_encodings = dict(valid_encodings)\r\n\r\n```\r\n\r\nThe cause of the problem is two things - firstly the `BatchEncoding` output by the tokenizer needs to be converted to a `dict`, and secondly the individual arrays output by the tokenizer need to be converted to an array format (either NumPy or TF) that Keras can understand. The `return_tensors` argument to the tokenizer will take care of that part.",
"Still not working. Here's the full code as of now:\r\n\r\nEDIT: updated this to be fully reproducible by pulling some random online text data into a dataframe.\r\n\r\n```\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\nimport tensorflow as tf\r\nfrom transformers import TFAutoModelForSequenceClassification, AutoTokenizer, pipeline\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nimport requests\r\n\r\ndata = requests.get(\"https://example-files.online-convert.com/document/txt/example.txt\")\r\ndata = [d for d in data.text.split(\"\\n\") if d != \"\"]\r\n\r\ndf = pd.DataFrame(data, columns=[\"Clean\"])\r\ndf[\"Y\"] = np.random.normal(0,1, df.shape[0])\r\n\r\n\r\nmodel_name = \"bert-base-cased\"\r\nmodel = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)\r\n# loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)\r\nmodel.compile(optimizer=\"adam\", loss=\"mse\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nmax_length = 64\r\nX_train, X_test, y_train, y_test = train_test_split(df[\"Clean\"].tolist(), df[\"Y\"].tolist(), test_size=0.2)\r\ntrain_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length, return_tensors=\"np\")\r\nvalid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length, return_tensors=\"np\")\r\n\r\ntrain_encodings = dict(train_encodings)\r\nvalid_encodings = dict(valid_encodings)\r\n\r\nmodel.fit(\r\n train_encodings,\r\n y_train,\r\n epochs=3,\r\n)\r\n```\r\n\r\n\r\nThe error message is:\r\n\r\n```\r\nValueError: Failed to find data adapter that can handle input: (<class 'dict'> containing {\"<class 'str'>\"} keys and {\"<class 'numpy.ndarray'>\"} values), (<class 'list'> containing values of types {\"<class 'float'>\"})\r\n```\r\nI had also looked at: https://huggingface.co/docs/transformers/v4.27.2/en/quicktour#train-with-tensorflow and https://huggingface.co/docs/transformers/v4.27.2/en/training#prepare-a-dataset, but all of the examples that I could find involve preloaded datasets.\r\n\r\nIs there a way to efficiently go from a dataframe or list to a `Dataset` object?",
"@jhogg11 Thanks for sharing a fully reproducible example. I believe the issue is still arising as `y_train` being passed to the model is a list. Running the following should work: \r\n\r\n```py\r\nmodel.fit(train_encodings, tf.convert_to_tensor(y_train), epochs=3)\r\n```",
"@amyeroberts\r\n\r\nGetting basically the same error:\r\n```\r\nValueError: Failed to find data adapter that can handle input: <class 'tensorflow.python.framework.ops.EagerTensor'>, (<class 'list'> containing values of types {\"<class 'float'>\"})\r\n````\r\nDoes the code work for you? I recently had to re-install Miniconda and I'm also on an M1 Mac, which can create difficulties, so I'm wondering if it's something on my end. However, I did test a basic TF model (using random numbers) just to make sure that everything is working and it trained without issue.",
"@jhogg11 Yes, it works for me. When I run: \r\n```py\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\nimport tensorflow as tf\r\nfrom transformers import TFAutoModelForSequenceClassification, AutoTokenizer, pipeline\r\nfrom sklearn.model_selection import train_test_split\r\nimport requests\r\n\r\ndata = requests.get(\"https://example-files.online-convert.com/document/txt/example.txt\")\r\ndata = [d for d in data.text.split(\"\\n\") if d != \"\"]\r\n\r\ndf = pd.DataFrame(data, columns=[\"Clean\"])\r\ndf[\"Y\"] = np.random.normal(0,1, df.shape[0])\r\n\r\nmodel_name = \"bert-base-cased\"\r\nmodel = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)\r\n# loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)\r\nmodel.compile(optimizer=\"adam\", loss=\"mse\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nmax_length = 64\r\nX_train, X_test, y_train, y_test = train_test_split(df[\"Clean\"].tolist(), df[\"Y\"].tolist(), test_size=0.2)\r\ntrain_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_length, return_tensors=\"np\")\r\nvalid_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_length, return_tensors=\"np\")\r\n\r\ntrain_encodings = dict(train_encodings)\r\nvalid_encodings = dict(valid_encodings)\r\n\r\nmodel.fit(train_encodings, tf.convert_to_tensor(y_train), epochs=3)\r\n```\r\n\r\nThis was running on an M1 with \r\n```\r\ntransformers 4.28.0.dev0\r\ntensorflow-macos 2.10.0\r\ntensorflow-metal 0.6.0\r\n````\r\n\r\nWhich versions of transformers and tensorflow are you using? ",
"@amyeroberts\r\n\r\nI just restarted the kernel and ran with your exact code and it worked! I think I might have hastily wrapped `train_encodings` in `tf.convert_to_tensor` rather than `y_train`.\r\n\r\nI really appreciate the help."
] | 1,609 | 1,679 | 1,614 | NONE | null | Hi, I was trying to perform a fine-tuning regression task. Below is my network.
```
import tensorflow as tf
from transformers import TFBertForSequenceClassification
# model initialization
base_model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
base_model.bert.trainable=False
model=tf.keras.Sequential(base_model)
model.add(tf.keras.Input(shape=[720895,7],name='Input_1'))
model.add(tf.keras.layers.Dense(1,activation='linear'))
optimizer = tf.keras.optimizers.RMSprop(learning_rate=1e-4)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
model.fit(train_seq,train_labels,epochs=10)
```
The error is:
`TypeError: Failed to convert 'TFSequenceClassifierOutput(loss=None, logits=TensorShape([None, 2]), hidden_states=None, attentions=None)' to a shape: ''logits'' could not be converted to a dimension. A shape should either be single dimension (e.g. 10), or an iterable of dimensions (e.g. [1, 10, None]).`
Can you please help me with this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9358/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9357/comments | https://api.github.com/repos/huggingface/transformers/issues/9357/events | https://github.com/huggingface/transformers/issues/9357 | 776,375,240 | MDU6SXNzdWU3NzYzNzUyNDA= | 9,357 | Blenderbot-3B config seems to be a little wrong | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been stale for 1 month.",
"Closing this, blenderbot 90M is very different in Arch as other variants, so it will receive less love (it's not that powerful compared to the others anyway).\r\n\r\nAlso a lot of work was done here : https://github.com/huggingface/transformers/pull/10002"
] | 1,609 | 1,615 | 1,615 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?):-
- Tensorflow version (GPU?): -
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
It seems the current config of `Blenderbot-3B` is a bit broken (`Blenderbot-90M` and the distilled versions seem fine).
```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained('facebook/blenderbot-90M')
tokenizer.decode(tokenizer.encode("Hey there"))
# 'hey there' so working fine

tokenizer = AutoTokenizer.from_pretrained('facebook/blenderbot-3B')
tokenizer.decode(tokenizer.encode("Hey there"))
# '<unk> y <unk> e' obvious error as the token 'ĠHey' exists in the vocab. Error is possibly linked to the '@@' string terminator config
# ----
# Other example that's probably linked but that originally triggered the issue so we need to make sure it's fixed too
nlp = pipeline('text-generation', model='facebook/blenderbot-3B')
nlp("Hey there")
# {"generated_text": "'ĠHi, Ġhow Ġare Ġyou Ġtoday? ĠI Ġjust Ġgot Ġback Ġfrom Ġa Ġwalk, Ġit Ġwas Ġnice."}
```
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
@patrickvonplaten @patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## Expected behavior
The 3B tokenizer should encode and decode text correctly, and the pipeline should not output `Ġ` artifacts throughout the generated text.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9357/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9356/comments | https://api.github.com/repos/huggingface/transformers/issues/9356/events | https://github.com/huggingface/transformers/pull/9356 | 776,299,030 | MDExOlB1bGxSZXF1ZXN0NTQ2ODE5ODU2 | 9,356 | [examples/language-modeling] Add dataset download instructions | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Isn’t this one simply on the [HuggingFace datasets hub](https://huggingface.co/datasets) by the way?\r\n\r\nOp wo 30 dec. 2020 om 07:50 schreef Stas Bekman <[email protected]>\r\n\r\n> I had to hunt for instructions to get the dataset used in this set of\r\n> examples, so this PR proposes to add them to README.md.\r\n>\r\n> @patrickvonplaten <https://github.com/patrickvonplaten>\r\n> ------------------------------\r\n> You can view, comment on, or merge this pull request online at:\r\n>\r\n> https://github.com/huggingface/transformers/pull/9356\r\n> Commit Summary\r\n>\r\n> - [examples/language-modeling] Add dataset download instructions\r\n>\r\n> File Changes\r\n>\r\n> - *M* examples/language-modeling/README.md\r\n> <https://github.com/huggingface/transformers/pull/9356/files#diff-28c51ae2110e09a5e495a1748a8ecc3c2e3cb2f7a244c000c67a9d8c4c37adf6>\r\n> (9)\r\n>\r\n> Patch Links:\r\n>\r\n> - https://github.com/huggingface/transformers/pull/9356.patch\r\n> - https://github.com/huggingface/transformers/pull/9356.diff\r\n>\r\n> —\r\n> You are receiving this because you are subscribed to this thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/9356>, or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHJOJFTITANOLIIFNY3SXLESTANCNFSM4VOADSUA>\r\n> .\r\n>\r\n",
"Sure, let's have the equivalent instructions to retrieve that from HF `datasets`. It doesn't really matter where it comes from as long as it doesn't require the user to go and search for it. \r\n\r\nFWIW, I went to https://huggingface.co/datasets and:\r\n1. couldn't find it. That is I did find `wikitext`, but how do I know that it's the same as `wikitext-2-raw-v1` that the script expects - it seems to be very specific.\r\n2. it gives me no instructions on how to download it in the format the script expects it in.\r\n\r\np.s. it looks like Email replies do not support Markdown.\r\n",
"There is no need to download the data manually with the new scripts, it is done automatically by the datasets library. So this should not be added in my opinion.",
"oh, ok, I guess I didn't pay attention to the command line being changed and assumed that I needed to get the dataset first.\r\n\r\nI stand corrected. Thank you for your feedback."
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | I had to hunt for instructions to get the dataset used in this set of examples, so this PR proposes to add them to README.md.
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9356/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9356",
"html_url": "https://github.com/huggingface/transformers/pull/9356",
"diff_url": "https://github.com/huggingface/transformers/pull/9356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9356.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9355/comments | https://api.github.com/repos/huggingface/transformers/issues/9355/events | https://github.com/huggingface/transformers/pull/9355 | 776,271,952 | MDExOlB1bGxSZXF1ZXN0NTQ2Nzk3NTEx | 9,355 | Fix typos in README and bugs in RAG example code for end-to-end evaluation and finetuning | {
"login": "yoshitomo-matsubara",
"id": 11156001,
"node_id": "MDQ6VXNlcjExMTU2MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshitomo-matsubara",
"html_url": "https://github.com/yoshitomo-matsubara",
"followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers",
"following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}",
"gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions",
"organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs",
"repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos",
"events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @patrickvonplaten,\r\n\r\nThank you for reviewing this PR!\r\nAs commented above, the argument `num_retrieval_workers` in `add_ray_specific_args` is duplicate ([first defined in `add_retriever_specific_args`](https://github.com/huggingface/transformers/blob/8217d4e37fce48490a68af7e8ce902af16318132/examples/research_projects/rag/finetune_rag.py#L490-L508)) and causes an error.",
"Great work @yoshitomo-matsubara ",
"Thank you for reviewing PR @patrickvonplaten @lhoestq !"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This PR fixes bugs in RAG example code for [end-to-end evaluation](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag#end-to-end-evaluation) and [finetuning](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag#finetuning).
## 1. Follow the file paths of reorganized examples
Also, the file paths for the example code in the README are updated (`examples/rag/` -> `examples/research_projects/rag/`).
## 2. End-to-end evaluation
```
python examples/research_projects/rag/eval_rag.py \
--model_name_or_path facebook/rag-sequence-nq \
--model_type rag_sequence \
--evaluation_set path/to/dev.source \
--gold_data_path path/to/dev.gold_data \ # parsed `biencoder-nq-dev.json` following `qa` format
--predictions_path path/to/e2e_preds.txt \
--eval_mode e2e \
--gold_data_mode qa \
--n_docs 5 \ # You can experiment with retrieving different number of documents at evaluation time
--print_predictions \
--recalculate
```
With the above command, I encountered a few errors:
1. an unexpected keyword argument 'clean_up_tokenization'
```
Some weights of RagSequenceForGeneration were not initialized from the model checkpoint at facebook/rag-sequence-nq and are newly initialized: ['rag.generator.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
initializing retrieval
Loading index from https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/
loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/a481b3aaed56325cb8901610e03e76f93b47f4284a1392d85e2ba5ce5d40d174.a382b038f1ea97c4fbad3098cd4a881a7cd4c5f73902c093e0c560511655cc0b
loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index_meta.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/bb9560964463bc761c682818cbdb4e1662e91d25a9407afb102970f00445678c.f8cbe3240b82ffaad54506b5c13c63d26ff873d5cfabbc30eef9ad668264bab4
7it [00:00, 54.03it/s]
Traceback (most recent call last):
File "examples/research_projects/rag/eval_rag.py", line 314, in <module>
main(args)
File "examples/research_projects/rag/eval_rag.py", line 300, in main
answers = evaluate_batch_fn(args, model, questions)
File "examples/research_projects/rag/eval_rag.py", line 134, in evaluate_batch_e2e
print_docs=args.print_docs,
File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/workspace/transformers/src/transformers/models/rag/modeling_rag.py", line 923, in generate
**model_kwargs,
File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/workspace/transformers/src/transformers/generation_utils.py", line 503, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/home/ubuntu/workspace/transformers/src/transformers/generation_utils.py", line 86, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'clean_up_tokenization'
```
2. another unexpected keyword argument 'print_docs'
```
Some weights of RagSequenceForGeneration were not initialized from the model checkpoint at facebook/rag-sequence-nq and are newly initialized: ['rag.generator.lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
initializing retrieval
Loading index from https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/
loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/a481b3aaed56325cb8901610e03e76f93b47f4284a1392d85e2ba5ce5d40d174.a382b038f1ea97c4fbad3098cd4a881a7cd4c5f73902c093e0c560511655cc0b
loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index_meta.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/bb9560964463bc761c682818cbdb4e1662e91d25a9407afb102970f00445678c.f8cbe3240b82ffaad54506b5c13c63d26ff873d5cfabbc30eef9ad668264bab4
7it [00:00, 45.43it/s]
Traceback (most recent call last):
File "examples/research_projects/rag/eval_rag.py", line 314, in <module>
main(args)
File "examples/research_projects/rag/eval_rag.py", line 300, in main
answers = evaluate_batch_fn(args, model, questions)
File "examples/research_projects/rag/eval_rag.py", line 134, in evaluate_batch_e2e
print_docs=args.print_docs,
File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/workspace/transformers/src/transformers/models/rag/modeling_rag.py", line 923, in generate
**model_kwargs,
File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/workspace/transformers/src/transformers/generation_utils.py", line 503, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/home/ubuntu/workspace/transformers/src/transformers/generation_utils.py", line 86, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/ubuntu/.local/share/virtualenvs/transformers-zPEj0XTF/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'print_docs'
```
## 3. Finetuning
```
python examples/research_projects/rag/finetune_rag.py \
--data_dir $DATA_DIR \
--output_dir $OUTPUT_DIR \
--model_name_or_path $MODEL_NAME_OR_PATH \
--model_type rag_sequence \
--fp16 \
--gpus 8
```
With the above command, I found two easy bugs to be fixed:
1. [missing `return parser`](https://github.com/huggingface/transformers/blob/8217d4e37fce48490a68af7e8ce902af16318132/examples/research_projects/rag/finetune_rag.py#L498) returns None to `parser` and crashes [here](https://github.com/huggingface/transformers/blob/8217d4e37fce48490a68af7e8ce902af16318132/examples/research_projects/rag/finetune_rag.py#L528-L531)
2. [duplicated argument with `num_retrieval_workers`](https://github.com/huggingface/transformers/blob/8217d4e37fce48490a68af7e8ce902af16318132/examples/research_projects/rag/finetune_rag.py#L490-L508) is also a blocker when using `finetune_rag.py`
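For reference, a minimal sketch of what the first fix amounts to: the affected `add_*_specific_args` helper just needs to hand the parser back to its caller (the function name below is a placeholder and the real argument definitions are elided):

```python
def add_xxx_specific_args(parser):  # placeholder name for the affected helper
    # ... argparse arguments are registered on `parser` here ...
    return parser  # previously missing, so the caller received None and crashed
```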
## Environments
- Ubuntu 18.04 LTS
- Python 3.7.7
- transformers (I tried both 4.1.1 from pip and from repo https://github.com/huggingface/transformers/commit/912f6881d2b69f180522172a5283702bd8c41d9c)
- torch: 1.7.1
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
@patrickvonplaten @lhoestq
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9355/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9355",
"html_url": "https://github.com/huggingface/transformers/pull/9355",
"diff_url": "https://github.com/huggingface/transformers/pull/9355.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9355.patch",
"merged_at": 1609686030000
} |
https://api.github.com/repos/huggingface/transformers/issues/9354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9354/comments | https://api.github.com/repos/huggingface/transformers/issues/9354/events | https://github.com/huggingface/transformers/pull/9354 | 776,154,845 | MDExOlB1bGxSZXF1ZXN0NTQ2NjkwOTYz | 9,354 | [test_model_parallelization] multiple fixes | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
}
] | closed | false | null | [] | [
"Thank you for fixing!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | There are multiple issues with the current common `test_model_parallelization` test.
The main issue is that it uses `nvidia-smi` to take memory snapshots. This leads to 2 potential problems:
1. these tests must not be run with pytest distributed, as they rely on all the GPUs being unused - and with `-n 2` or higher it's likely to break, since `nvidia-smi` would be indiscriminately reporting memory used by other pytest workers.
I fixed this problem first by creating a new `@require_no_pytest_distributed` decorator at https://gist.github.com/stas00/5d58c606dbdcb82e019d6b0674f8b42a - but once the 2nd problem was fixed it no longer was needed so I removed it. I don't think we currently have any tests that must be run without `pytest-xdist`, but if any come in the future we can merge that skip decorator too.
2. this implementation can easily return incorrect info if the CUDA device order doesn't match the nvidia-smi device order (which is my case, and this test fails for me) - so one has to use `CUDA_VISIBLE_DEVICES` to match the CUDA device order to nvidia-smi's for this test to pass.
Switching to `torch.cuda.memory_allocated` fixes both problems as it measures memory usage for the current process only and in the correct order - i.e. `to(0)` always matches `memory_allocated(0)` device-wise. (the weird multi-line implementation has to do with https://github.com/pytorch/pytorch/issues/49952)
BTW, I first thought of using `pynvml`, but it would have had the same issue. `nvidia-smi` is just another front-end to `nvml`.
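To make the switch concrete, here is a minimal sketch of the kind of per-process snapshot helper this relies on (the exact implementation may differ; returning integer MBs is an assumption):

```python
import torch

def get_current_gpu_memory_use():
    """Per-device GPU memory currently allocated by this process, in MB."""
    # torch.cuda.memory_allocated only sees the current process, and device 0
    # here always corresponds to `.to(0)`, unlike parsing nvidia-smi output.
    return [
        int(torch.cuda.memory_allocated(device) / 2 ** 20)
        for device in range(torch.cuda.device_count())
    ]
```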
Other fixes:
* removes hardcoded gpt2 config
* adds `gc.collect`. One can't rely on exact memory measurements w/o manual `gc.collect` - since it gets triggered automatically at certain times as explained in its docs, which is often too late for what's being measured. Most of the time when you `del foo` it doesn't get reclaimed by `gc` right away. So the correct sequence when exact memory measurements are desired is:
```
del some_variable
gc.collect()
torch.cuda.empty_cache()
# now can measure memory
```
* last sub-test adjusted to measure against the memory snapshot before that sub-test and not at the beginning of the whole test.
`get_current_gpu_memory_use` might go into testing or benchmarking utils and perhaps need to change its name to match that it returns MBs, but it's good enough for now.
@alexorona, please let me know if it's of interest to you for the tweaks I've been proposing - please let me know if you'd like me to tag you on these.
@patrickvonplaten, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9354/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9354",
"html_url": "https://github.com/huggingface/transformers/pull/9354",
"diff_url": "https://github.com/huggingface/transformers/pull/9354.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9354.patch",
"merged_at": 1609790952000
} |
https://api.github.com/repos/huggingface/transformers/issues/9353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9353/comments | https://api.github.com/repos/huggingface/transformers/issues/9353/events | https://github.com/huggingface/transformers/pull/9353 | 776,141,112 | MDExOlB1bGxSZXF1ZXN0NTQ2NjgwMTM3 | 9,353 | Fixes crash when `compute_metrics` is not passed to `Trainer` in run_mlm example | {
"login": "galtay",
"id": 663051,
"node_id": "MDQ6VXNlcjY2MzA1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/663051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galtay",
"html_url": "https://github.com/galtay",
"followers_url": "https://api.github.com/users/galtay/followers",
"following_url": "https://api.github.com/users/galtay/following{/other_user}",
"gists_url": "https://api.github.com/users/galtay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/galtay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galtay/subscriptions",
"organizations_url": "https://api.github.com/users/galtay/orgs",
"repos_url": "https://api.github.com/users/galtay/repos",
"events_url": "https://api.github.com/users/galtay/events{/privacy}",
"received_events_url": "https://api.github.com/users/galtay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In master `train` always returns `metrics`.\r\n\r\nIs it possible that you are running the script from master but loading `transformers` that is pre-installed and it is not master? This metrics was added just recently.\r\n\r\nDo you still get the error if you do:\r\n```\r\ngit clone https://github.com/huggingface/transformers/\r\ncd transformers\r\nPYTHONPATH=src examples/language-modeling/run_mlm.py ...\r\n```\r\nThis ensures that you're using the master version in the script.\r\n\r\nOr alternatively if you tend to use the master a lot, install it with `pip install -e .[dev]` which allows you to `git pull` and not needing to reinstall anything.\r\n\r\nTo verify that there is no problem in master I have just run:\r\n\r\n```\r\npython run_mlm.py --model_name_or_path roberta-base --dataset_name wikitext \\\r\n--dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-mlm\r\n```\r\nand got:\r\n```\r\nINFO|trainer.py:1248] 2020-12-29 22:49:16,276 >> Saving model checkpoint to /tmp/test-mlm\r\n[INFO|configuration_utils.py:289] 2020-12-29 22:49:16,277 >> Configuration saved in /tmp/test-mlm/config.json\r\n[INFO|modeling_utils.py:814] 2020-12-29 22:49:16,818 >> Model weights saved in /tmp/test-mlm/pytorch_model.bin\r\n12/29/2020 22:49:16 - INFO - __main__ - ***** Train results *****\r\n12/29/2020 22:49:16 - INFO - __main__ - epoch = 3.0\r\n12/29/2020 22:49:16 - INFO - __main__ - train_runtime = 383.9452\r\n12/29/2020 22:49:16 - INFO - __main__ - train_samples_per_second = 4.688\r\n```\r\n\r\nSo all seems to be in norm.\r\n",
"Thanks for taking a look @stas00 and for the examples ! You are correct, I had transformers 4.1.1. Looks fine when run on master. I'll close this PR "
] | 1,609 | 1,609 | 1,609 | NONE | null | # What does this PR do?
If I'm understanding the example run_mlm code correctly, the `metrics` attribute will not be present on the `train_result` object if `compute_metrics` is not passed to `Trainer`. This edit prevents the script from attempting to write the metrics to file if they don't exist.
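For context, a minimal sketch of the kind of guard being proposed (illustrative only, not the exact diff; `trainer` and `training_args` are the objects already defined in `run_mlm.py`):

```python
import os

train_result = trainer.train()
metrics = getattr(train_result, "metrics", None)  # may be absent depending on the installed version
if metrics is not None and trainer.is_world_process_zero():
    output_train_file = os.path.join(training_args.output_dir, "train_results.txt")
    with open(output_train_file, "w") as writer:
        for key, value in sorted(metrics.items()):
            writer.write(f"{key} = {value}\n")
```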
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
@stas00
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9353",
"html_url": "https://github.com/huggingface/transformers/pull/9353",
"diff_url": "https://github.com/huggingface/transformers/pull/9353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9353.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9352/comments | https://api.github.com/repos/huggingface/transformers/issues/9352/events | https://github.com/huggingface/transformers/pull/9352 | 776,123,334 | MDExOlB1bGxSZXF1ZXN0NTQ2NjY1OTQy | 9,352 | [trainer] parametrize default output_dir | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR:
* fixes trainer to have the logger agree with the actual default `output_dir`, by setting it in one place and passing it as an argument to both places. The current logger falsely informs the user that `output_dir` is the current path, while using `tmp_trainer` as the path.
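Roughly, the intended shape of the change inside `Trainer.__init__` (an approximate sketch, not the exact diff; `logger` and `TrainingArguments` are already in scope there):

```python
if args is None:
    output_dir = "tmp_trainer"
    logger.info(f"No `TrainingArguments` passed, using `output_dir={output_dir}`.")
    args = TrainingArguments(output_dir=output_dir)
```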
@patrickvonplaten, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9352",
"html_url": "https://github.com/huggingface/transformers/pull/9352",
"diff_url": "https://github.com/huggingface/transformers/pull/9352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9352.patch",
"merged_at": 1609773273000
} |
https://api.github.com/repos/huggingface/transformers/issues/9351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9351/comments | https://api.github.com/repos/huggingface/transformers/issues/9351/events | https://github.com/huggingface/transformers/issues/9351 | 776,113,709 | MDU6SXNzdWU3NzYxMTM3MDk= | 9,351 | XLNet evaluation on SQuAD | {
"login": "slvcsl",
"id": 25265140,
"node_id": "MDQ6VXNlcjI1MjY1MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slvcsl",
"html_url": "https://github.com/slvcsl",
"followers_url": "https://api.github.com/users/slvcsl/followers",
"following_url": "https://api.github.com/users/slvcsl/following{/other_user}",
"gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions",
"organizations_url": "https://api.github.com/users/slvcsl/orgs",
"repos_url": "https://api.github.com/users/slvcsl/repos",
"events_url": "https://api.github.com/users/slvcsl/events{/privacy}",
"received_events_url": "https://api.github.com/users/slvcsl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | null | [] | [
"Pinging @sgugger here. Think he has more knowledge about the training script than I do.",
"This is linked to [this issue](https://github.com/huggingface/tokenizers/issues/552) in the tokenizers repo. Until this is solved, the script `run_qa` does not work properly with XLNet (the offset mappings computed are incorrect). You can use `run_qa_beam_search` with the XLNet model while waiting for the issue to be solved.",
"Hi @sgugger, thanks for your answer. However, I'm trying to do a (fair) comparison between models, so using beam search is not an option. I might install another package version that works well with XLNet on SQuAD (I've seen, for example, that v. 3.10 also has some problems in evaluation). Do you know if any previous version is ok, at the moment?",
"You can always use the [legacy script](https://github.com/huggingface/transformers/blob/master/examples/legacy/question-answering/run_squad.py) if you can't wait for the fix.",
"Thank you very much, I was unaware of legacy scripts. \r\n\r\nDo I need a particular transformers version to run them? When I run run_squad.py at the moment I get (errors in bolds)\r\n\r\n01/05/2021 15:51:31 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False\r\n[INFO|configuration_utils.py:431] 2021-01-05 15:51:31,306 >> loading configuration file https://huggingface.co/xlnet-base-cased/resolve/main/config.json from cache at /home/scasola/.cache/huggingface/transformers/06bdb0f5882dbb833618c81c3b4c996a0c79422fa2c95ffea3827f92fc2dba6b.da982e2e596ec73828dbae86525a1870e513bd63aae5a2dc773ccc840ac5c346\r\n[INFO|configuration_utils.py:467] 2021-01-05 15:51:31,307 >> Model config XLNetConfig {\r\n \"architectures\": [\r\n \"XLNetLMHeadModel\"\r\n ],\r\n \"attn_type\": \"bi\",\r\n \"bi_data\": false,\r\n \"bos_token_id\": 1,\r\n \"clamp_len\": -1,\r\n \"d_head\": 64,\r\n \"d_inner\": 3072,\r\n \"d_model\": 768,\r\n \"dropout\": 0.1,\r\n \"end_n_top\": 5,\r\n \"eos_token_id\": 2,\r\n \"ff_activation\": \"gelu\",\r\n \"initializer_range\": 0.02,\r\n \"layer_norm_eps\": 1e-12,\r\n \"mem_len\": null,\r\n \"model_type\": \"xlnet\",\r\n \"n_head\": 12,\r\n \"n_layer\": 12,\r\n \"pad_token_id\": 5,\r\n \"reuse_len\": null,\r\n \"same_length\": false,\r\n \"start_n_top\": 5,\r\n \"summary_activation\": \"tanh\",\r\n \"summary_last_dropout\": 0.1,\r\n \"summary_type\": \"last\",\r\n \"summary_use_proj\": true,\r\n \"task_specific_params\": {\r\n \"text-generation\": {\r\n \"do_sample\": true,\r\n \"max_length\": 250\r\n }\r\n },\r\n \"untie_r\": true,\r\n \"use_mems_eval\": true,\r\n \"use_mems_train\": false,\r\n \"vocab_size\": 32000\r\n}\r\n\r\n[INFO|configuration_utils.py:431] 2021-01-05 15:51:31,607 >> loading configuration file https://huggingface.co/xlnet-base-cased/resolve/main/config.json from cache at /home/scasola/.cache/huggingface/transformers/06bdb0f5882dbb833618c81c3b4c996a0c79422fa2c95ffea3827f92fc2dba6b.da982e2e596ec73828dbae86525a1870e513bd63aae5a2dc773ccc840ac5c346\r\n[INFO|configuration_utils.py:467] 2021-01-05 15:51:31,608 >> Model config XLNetConfig {\r\n \"architectures\": [\r\n \"XLNetLMHeadModel\"\r\n ],\r\n \"attn_type\": \"bi\",\r\n \"bi_data\": false,\r\n \"bos_token_id\": 1,\r\n \"clamp_len\": -1,\r\n \"d_head\": 64,\r\n \"d_inner\": 3072,\r\n \"d_model\": 768,\r\n \"dropout\": 0.1,\r\n \"end_n_top\": 5,\r\n \"eos_token_id\": 2,\r\n \"ff_activation\": \"gelu\",\r\n \"initializer_range\": 0.02,\r\n \"layer_norm_eps\": 1e-12,\r\n \"mem_len\": null,\r\n \"model_type\": \"xlnet\",\r\n \"n_head\": 12,\r\n \"n_layer\": 12,\r\n \"pad_token_id\": 5,\r\n \"reuse_len\": null,\r\n \"same_length\": false,\r\n \"start_n_top\": 5,\r\n \"summary_activation\": \"tanh\",\r\n \"summary_last_dropout\": 0.1,\r\n \"summary_type\": \"last\",\r\n \"summary_use_proj\": true,\r\n \"task_specific_params\": {\r\n \"text-generation\": {\r\n \"do_sample\": true,\r\n \"max_length\": 250\r\n }\r\n },\r\n \"untie_r\": true,\r\n \"use_mems_eval\": true,\r\n \"use_mems_train\": false,\r\n \"vocab_size\": 32000\r\n}\r\n\r\n[INFO|tokenization_utils_base.py:1802] 2021-01-05 15:51:32,221 >> loading file https://huggingface.co/xlnet-base-cased/resolve/main/spiece.model from cache at /home/scasola/.cache/huggingface/transformers/df73bc9f8d13bf2ea4dab95624895e45a550a0f0a825e41fc25440bf367ee3c8.d93497120e3a865e2970f26abdf7bf375896f97fde8b874b70909592a6c785c9\r\n[INFO|tokenization_utils_base.py:1802] 2021-01-05 15:51:32,222 >> loading file 
https://huggingface.co/xlnet-base-cased/resolve/main/tokenizer.json from cache at /home/scasola/.cache/huggingface/transformers/46f47734f3dcaef7e236b9a3e887f27814e18836a8db7e6a49148000058a1a54.2a683f915238b4f560dab0c724066cf0a7de9a851e96b0fb3a1e7f0881552f53\r\n[INFO|modeling_utils.py:1024] 2021-01-05 15:51:32,564 >> loading weights file https://huggingface.co/xlnet-base-cased/resolve/main/pytorch_model.bin from cache at /home/scasola/.cache/huggingface/transformers/9461853998373b0b2f8ef8011a13b62a2c5f540b2c535ef3ea46ed8a062b16a9.3e214f11a50e9e03eb47535b58522fc3cc11ac67c120a9450f6276de151af987\r\n[WARNING|modeling_utils.py:1132] 2021-01-05 15:51:35,070 >> Some weights of the model checkpoint at xlnet-base-cased were not used when initializing XLNetForQuestionAnsweringSimple: ['lm_loss.weight', 'lm_loss.bias']\r\n...\r\n01/05/2021 15:51:37 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='../../../../../squad_data', device=device(type='cuda'), do_eval=True, do_lower_case=False, \r\ndo_train=True, doc_stride=128, eval_all_checkpoints=True, evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=4, lang_id=0, learning_rate=0.001, local_rank=-1, logging_steps=500, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='xlnet-base-cased', model_type='xlnet', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=10.0, \r\noutput_dir='../../../../squad_results/XLNet/1e-3/1', overwrite_cache=True, overwrite_output_dir=False, per_gpu_eval_batch_size=8, per_gpu_train_batch_size=8, predict_file=None, save_steps=4132, seed=1, server_ip='', server_port='', threads=1, tokenizer_name='', train_file=None, verbose_logging=False, version_2_with_negative=True, warmup_steps=4132, weight_decay=0.0)\r\n01/05/2021 15:51:37 - INFO - __main__ - Creating features from dataset file at ../../../../../squad_data\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 442/442 [00:39<00:00, 11.33it/s]convert squad examples to features: 0%| | 0/130319 [00:00<?, ?it/s]multiprocessing.pool.RemoteTraceback: \r\n\"\"\"\r\n**Traceback (most recent call last):**\r\n File \"/home/scasola/anaconda3/lib/python3.7/multiprocessing/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/scasola/anaconda3/lib/python3.7/multiprocessing/pool.py\", line 44, in mapstar\r\n return list(map(*args))\r\n File \"/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/data/processors/squad.py\", line 189, in squad_convert_example_to_features\r\n return_token_type_ids=True,\r\n File \"/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/tokenization_utils_base.py\", line 2462, in encode_plus\r\n **kwargs,\r\n File \"/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/**tokenization_utils_fast.py**\", line 465, in _encode_plus\r\n **kwargs,\r\n File \"/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/**tokenization_utils_fast.py**\", line 378, in _batch_encode_plus\r\n is_pretokenized=is_split_into_words,\r\nTypeError: TextInputSequence must be str\r\n\"\"\"\r\n\r\n**The above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call 
last):**\r\n File \"run_squad.py\", line 833, in <module>\r\n main()\r\n File \"run_squad.py\", line 772, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)\r\n File \"run_squad.py\", line 461, in load_and_cache_examples\r\n threads=args.threads,\r\n File \"/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/transformers/data/processors/squad.py\", line 382, in squad_convert_examples_to_features\r\n disable=not tqdm_enabled,\r\n File \"/home/scasola/survey/squad/mypython/lib/python3.7/site-packages/tqdm/std.py\", line 1133, in __iter__\r\n for obj in iterable:\r\n File \"/home/scasola/anaconda3/lib/python3.7/multiprocessing/pool.py\", line 325, in <genexpr>\r\n return (item for chunk in result for item in chunk)\r\n File \"/home/scasola/anaconda3/lib/python3.7/multiprocessing/pool.py\", line 748, in next\r\n raise value\r\nTypeError: TextInputSequence must be str\r\n\r\nThis might be related to the tokenizer, as in #7735 . \r\nHowever, the used tokenizer should not be fast (see code snippet) even if it seems from the traceback that the fast tokenizer is actually called. Any workaround?\r\n` tokenizer = AutoTokenizer.from_pretrained(\r\n args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,\r\n do_lower_case=args.do_lower_case,\r\n cache_dir=args.cache_dir if args.cache_dir else None,\r\n use_fast=False, # SquadDataset is not compatible with Fast tokenizers which have a smarter overflow handeling\r\n )`",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"am having the same issue and a fix would be really nice...",
"Thank you for opening an issue - Unfortunately, we're limited on bandwidth and fixing QA for XLNet is quite low on our priority list. If you would like to go ahead and fix this issue, we would love to review a PR, but we won't find the time to get to it right away."
] | 1,609 | 1,633 | 1,614 | NONE | null | ## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
XLNet @LysandreJik
## Information
Model I am using (Bert, XLNet ...): XLNet
The problem arises when using:
* [x] the official example scripts: **run_qa.py**
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: **squad v2**
* [ ] my own task or dataset: (give details below)
## To reproduce
I installed the transformers package from source, as required.
However, when I try to evaluate XLNet on the SQuAD dataset, I run into a problem.
In particular, I run the official script as:
```
python run_qa.py \
--model_name_or_path xlnet-base-cased \
--dataset_name squad_v2 \
--do_eval \
--version_2_with_negative \
--learning_rate 1e-4 \
--per_device_eval_batch_size=1 \
--seed 1 \
--output_dir ../../../../squad_results
```
This is the whole output for reference, most of which is probably not relevant (error in bold):
12/29/2020 22:41:21 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 2distributed training: False, 16-bits training: False
12/29/2020 22:41:21 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=../../../../squad_results, overwrite_output_dir=False, do_train=False, do_eval=True, do_predict=False, model_parallel=False, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=1, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=1e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Dec29_22-41-21_HLTNLP-GPU-B, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=1, fp16=False, fp16_opt_level=O1, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=../../../../squad_results, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, fp16_backend=auto, sharded_ddp=False, label_smoothing_factor=0.0, adafactor=False)
Reusing dataset squad_v2 (/home/scasola/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/0e44b51f4035c15e218d53dc9eea5fe7123341982e524818b8500e4094fffb7b)
loading configuration file https://huggingface.co/xlnet-base-cased/resolve/main/config.json from cache at /home/scasola/.cache/huggingface/transformers/06bdb0f5882dbb833618c81c3b4c996a0c79422fa2c95ffea3827f92fc2dba6b.da982e2e596ec73828dbae86525a1870e513bd63aae5a2dc773ccc840ac5c346
Model config XLNetConfig {
"architectures": [
"XLNetLMHeadModel"
],
"attn_type": "bi",
"bi_data": false,
"bos_token_id": 1,
"clamp_len": -1,
"d_head": 64,
"d_inner": 3072,
"d_model": 768,
"dropout": 0.1,
"end_n_top": 5,
"eos_token_id": 2,
"ff_activation": "gelu",
"initializer_range": 0.02,
"layer_norm_eps": 1e-12,
"mem_len": null,
"model_type": "xlnet",
"n_head": 12,
"n_layer": 12,
"pad_token_id": 5,
"reuse_len": null,
"same_length": false,
"start_n_top": 5,
"summary_activation": "tanh",
"summary_last_dropout": 0.1,
"summary_type": "last",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 250
}
},
"untie_r": true,
"use_mems_eval": true,
"use_mems_train": false,
"vocab_size": 32000
}
loading configuration file https://huggingface.co/xlnet-base-cased/resolve/main/config.json from cache at /home/scasola/.cache/huggingface/transformers/06bdb0f5882dbb833618c81c3b4c996a0c79422fa2c95ffea3827f92fc2dba6b.da982e2e596ec73828dbae86525a1870e513bd63aae5a2dc773ccc840ac5c346
Model config XLNetConfig {
"architectures": [
"XLNetLMHeadModel"
],
"attn_type": "bi",
"bi_data": false,
"bos_token_id": 1,
"clamp_len": -1,
"d_head": 64,
"d_inner": 3072,
"d_model": 768,
"dropout": 0.1,
"end_n_top": 5,
"eos_token_id": 2,
"ff_activation": "gelu",
"initializer_range": 0.02,
"layer_norm_eps": 1e-12,
"mem_len": null,
"model_type": "xlnet",
"n_head": 12,
"n_layer": 12,
"pad_token_id": 5,
"reuse_len": null,
"same_length": false,
"start_n_top": 5,
"summary_activation": "tanh",
"summary_last_dropout": 0.1,
"summary_type": "last",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 250
}
},
"untie_r": true,
"use_mems_eval": true,
"use_mems_train": false,
"vocab_size": 32000
}
loading file https://huggingface.co/xlnet-base-cased/resolve/main/spiece.model from cache at /home/scasola/.cache/huggingface/transformers/df73bc9f8d13bf2ea4dab95624895e45a550a0f0a825e41fc25440bf367ee3c8.d93497120e3a865e2970f26abdf7bf375896f97fde8b874b70909592a6c785c9
loading file https://huggingface.co/xlnet-base-cased/resolve/main/tokenizer.json from cache at /home/scasola/.cache/huggingface/transformers/46f47734f3dcaef7e236b9a3e887f27814e18836a8db7e6a49148000058a1a54.2a683f915238b4f560dab0c724066cf0a7de9a851e96b0fb3a1e7f0881552f53
loading weights file https://huggingface.co/xlnet-base-cased/resolve/main/pytorch_model.bin from cache at /home/scasola/.cache/huggingface/transformers/9461853998373b0b2f8ef8011a13b62a2c5f540b2c535ef3ea46ed8a062b16a9.3e214f11a50e9e03eb47535b58522fc3cc11ac67c120a9450f6276de151af987
Some weights of the model checkpoint at xlnet-base-cased were not used when initializing XLNetForQuestionAnsweringSimple: ['lm_loss.weight', 'lm_loss.bias']
- This IS expected if you are initializing XLNetForQuestionAnsweringSimple from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing XLNetForQuestionAnsweringSimple from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of XLNetForQuestionAnsweringSimple were not initialized from the model checkpoint at xlnet-base-cased and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading cached processed dataset at /home/scasola/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/0e44b51f4035c15e218d53dc9eea5fe7123341982e524818b8500e4094fffb7b/cache-c46fe459ef8061d5.arrow
The following columns in the evaluation set don't have a corresponding argument in `XLNetForQuestionAnsweringSimple.forward` and have been ignored: example_id, offset_mapping.
12/29/2020 22:41:30 - INFO - __main__ - *** Evaluate ***
The following columns in the evaluation set don't have a corresponding argument in `XLNetForQuestionAnsweringSimple.forward` and have been ignored: example_id, offset_mapping.
***** Running Evaluation *****
Num examples = 12231
Batch size = 2
█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6116/6116 [38:14<00:00, 3.32it/s]12/29/2020 23:19:57 - INFO - utils_qa - Post-processing 11873 example predictions split into 12231 features.
0%| | 0/11873 [00:00<?, ?it/s]**Traceback (most recent call last): | 0/11873 [00:00<?, ?it/s] File "run_qa.py", line 480, in <module>
main()
File "run_qa.py", line 461, in main
results = trainer.evaluate()
File "/home/scasola/survey/squad/xlnet/transformers/examples/question-answering/trainer_qa.py", line 62, in evaluate
eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions)
File "run_qa.py", line 407, in post_processing_function
is_world_process_zero=trainer.is_world_process_zero(),
File "/home/scasola/survey/squad/xlnet/transformers/examples/question-answering/utils_qa.py", line 195, in postprocess_qa_predictions
while predictions[i]["text"] == "":
IndexError: list index out of range**
## Expected behavior
Evaluation of the model saved in the output dir | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9351/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9350/comments | https://api.github.com/repos/huggingface/transformers/issues/9350/events | https://github.com/huggingface/transformers/pull/9350 | 776,101,060 | MDExOlB1bGxSZXF1ZXN0NTQ2NjQ3Nzg1 | 9,350 | [apex.normalizations.FusedLayerNorm] torch.cuda.is_available() is redundant as apex handles that internally | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for digging into this @stas00 "
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR is a follow up to https://github.com/huggingface/transformers/issues/9338
According to https://github.com/huggingface/transformers/issues/9338#issuecomment-752242098 we can just remove the `torch.cuda.is_available()` check before importing `apex.normalizations.FusedLayerNorm` and the multiprocess problem will go away.
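As a quick illustration (not the exact diff of this PR), the import guard can then look roughly like the minimal sketch below; the `LayerNorm` alias is just an illustrative name:
```python
# Minimal sketch, assuming apex handles the CUDA check internally:
# no torch.cuda.is_available() guard is needed around the import anymore.
try:
    from apex.normalization import FusedLayerNorm as LayerNorm
except ImportError:
    # Fall back to the stock PyTorch implementation when apex is not installed.
    from torch.nn import LayerNorm
```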
Fixes #9338
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9350/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9350",
"html_url": "https://github.com/huggingface/transformers/pull/9350",
"diff_url": "https://github.com/huggingface/transformers/pull/9350.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9350.patch",
"merged_at": 1609319392000
} |
https://api.github.com/repos/huggingface/transformers/issues/9349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9349/comments | https://api.github.com/repos/huggingface/transformers/issues/9349/events | https://github.com/huggingface/transformers/pull/9349 | 776,060,307 | MDExOlB1bGxSZXF1ZXN0NTQ2NjE0MDY4 | 9,349 | [prophetnet] wrong import | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | ```
python -c "from apex.normalization import FusedProphetNetLayerNorm"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: cannot import name 'FusedProphetNetLayerNorm' from 'apex.normalization' (/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/apex/normalization/__init__.py)
```
It looks like this code has never been tested, so it silently fails inside try/except.
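For illustration, a minimal sketch of the failure mode (the surrounding code in the actual file may differ): a non-existent name inside a `try/except ImportError` never surfaces as an error, the exception is swallowed and the fallback branch is always taken, masking the typo.
```python
try:
    # This name does not exist in apex, so the ImportError below is swallowed
    # and the fallback is silently used every time.
    from apex.normalization import FusedProphetNetLayerNorm as LayerNorm
except ImportError:
    from torch.nn import LayerNorm
```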
Discovered this by accident in https://github.com/huggingface/transformers/issues/9338#issuecomment-752217708
@patrickvonplaten, @LysandreJik
Note: prophetnet is missing from .github/PULL_REQUEST_TEMPLATE.md and .github/ISSUE_TEMPLATE/bug-report.md | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9349/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9349",
"html_url": "https://github.com/huggingface/transformers/pull/9349",
"diff_url": "https://github.com/huggingface/transformers/pull/9349.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9349.patch",
"merged_at": 1609277527000
} |
https://api.github.com/repos/huggingface/transformers/issues/9348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9348/comments | https://api.github.com/repos/huggingface/transformers/issues/9348/events | https://github.com/huggingface/transformers/pull/9348 | 776,056,672 | MDExOlB1bGxSZXF1ZXN0NTQ2NjExMDQ2 | 9,348 | Fix TF Longformer | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have already ran the slow tests as well and they all pass!"
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR aims to fix the TF Longformer version in order to make it graph compliant. As discussed offline with @patrickvonplaten, `all_global_attentions` is now added to the output when `output_attentions=True`. The global attentions are filled with zeros in case `is_global_attn` is False (see line 897 in `TFLongformerSelfAttention`).
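A minimal sketch of the idea (illustrative shapes and names, not the actual `TFLongformerSelfAttention` code): the layer always emits a global-attention tensor of a fixed shape, zero-filled when no token is global, so the output signature stays stable in graph mode.
```python
import tensorflow as tf

batch_size, num_heads, max_num_global_tokens, seq_len = 2, 12, 1, 16  # toy sizes
is_global_attn = False  # e.g. no tokens flagged as global in this batch

if is_global_attn:
    global_attn_probs = compute_global_attn_probs()  # hypothetical helper
else:
    # Same rank and shape as the "real" tensor, just filled with zeros.
    global_attn_probs = tf.zeros((batch_size, num_heads, max_num_global_tokens, seq_len))

print(global_attn_probs.shape)  # shape is identical in both branches
```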
# Fix issue
#9333
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9348/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9348",
"html_url": "https://github.com/huggingface/transformers/pull/9348",
"diff_url": "https://github.com/huggingface/transformers/pull/9348.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9348.patch",
"merged_at": 1609836595000
} |
https://api.github.com/repos/huggingface/transformers/issues/9347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9347/comments | https://api.github.com/repos/huggingface/transformers/issues/9347/events | https://github.com/huggingface/transformers/pull/9347 | 776,045,660 | MDExOlB1bGxSZXF1ZXN0NTQ2NjAyMjE3 | 9,347 | [trainer] --model_parallel hasn't been implemented for most models | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2627272588,
"node_id": "MDU6TGFiZWwyNjI3MjcyNTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel",
"name": "Model Parallel",
"color": "8B66A5",
"default": false,
"description": "Model Parallelilsm Implementations"
}
] | closed | false | null | [] | [
"@alexorona proposed to have the `model_parallel` method in `PreTrainedModel`, https://github.com/huggingface/transformers/pull/9323#issuecomment-752352280 which then would break this code as it'd be then present in all models.\r\n\r\nI see this PR as a quick band-aid since we released the new cl arg w/o checking that it always works. And then we will surely improve it as we generalize MP and not leave it this way. This is definitely not how it'll remain in the long run.",
"So should we merge this one as a hot-fix?\r\n\r\n-------------\r\n\r\nAn absolute yes to `PreTrainedModel.parallelizable` accessor - default `False`, then a `True` override for each specific model head that implements it - better than checking arch which doesn't guarantee that it'll have all heads parallelizable. \r\n\r\nAnd also what do you think about tests? Currently we hardcore a list of parallelizable models:\r\n\r\nhttps://github.com/huggingface/transformers/blob/086718ac6e20ca2e2cfa3aa0f6da9dc7ee34f6c6/tests/test_modeling_t5.py#L491\r\n\r\nshould it remain this way or should we automatically derive those from the model by iterating over `all_model_classes`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/086718ac6e20ca2e2cfa3aa0f6da9dc7ee34f6c6/tests/test_modeling_t5.py#L489\r\n\r\nand automatically deriving which are parallelizable. Less code to write in the future.\r\n",
"I'd rather merge as a hotfix the proper check and then worry about the tests in a follow up PR (I think we should have a combination of a flag (like for pruning) and checking the models having the attributes there).",
"It no longer will be hot, but yes, I will code that ;) thank you for the feedback, @sgugger \r\n\r\n> I think we should have a combination of a flag (like for pruning) and checking the models having the attributes there).\r\n\r\nI'm not sure what you mean here. An example would be helpful to understand what you propose.",
"The class `ModelTesterMixin` has a few attributes that control what common tests to apply. I just realized while reading it that it already has the `test_model_parallel` flag so this part is done already. All that is left is just to infer the models to test from the presence of the right attribute :-)",
"OK, I added `model.is_parallelizable` property - let me know if this looks good, or whether you prefer not using a property. \r\n\r\nif you prefer w/o `is_` or not have it a property please let me know.",
"> I'm fine with this design but it differs from what we were talking about, so we should check the others are fine with it too before merging.\r\n\r\nYes, of course.\r\n\r\nthat's why it is no longer a hotfix, but it seems to be fine - only one user has filed an issue about using a non-working `--model_parallel` so far.",
"So since the only change I proposed is from `parallelizable` to `is_parallelizable`, do you still think we ought to re-validate with @LysandreJik?",
"Yes, let's wait for him to review this tomorrow morning (he's on European time for the next month or so)."
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | Apparently we unleashed `--model_parallel` in trainer w/o checking if the model supports MP (most don't). This PR:
* [x] checks whether the model supports MP and asserts otherwise
* [x] fixes the cl arg help to note that the flag will only work if the model supports MP
As we are gradually starting to build MP-support a cleaner solution will be made in the future, but for now this is good enough to prevent misleading false expectations as reported in https://github.com/huggingface/transformers/issues/9336
(Also for the future, I'm not sure whether it'd be better to check `model.config.architectures`, which would be more precise than checking `model_type` since it's the `architectures` that may or may not support MP within the same `model_type` - but that's a different discussion).
Fixes: #9336
@patrickvonplaten, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9347/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9347",
"html_url": "https://github.com/huggingface/transformers/pull/9347",
"diff_url": "https://github.com/huggingface/transformers/pull/9347.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9347.patch",
"merged_at": 1609837290000
} |
https://api.github.com/repos/huggingface/transformers/issues/9346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9346/comments | https://api.github.com/repos/huggingface/transformers/issues/9346/events | https://github.com/huggingface/transformers/pull/9346 | 776,027,621 | MDExOlB1bGxSZXF1ZXN0NTQ2NTg4MTI3 | 9,346 | [Seq2Seq Templates] Add forgotten imports to templates | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
I accidentally forgot to add this import in #9342.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9346/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9346",
"html_url": "https://github.com/huggingface/transformers/pull/9346",
"diff_url": "https://github.com/huggingface/transformers/pull/9346.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9346.patch",
"merged_at": 1609266906000
} |
https://api.github.com/repos/huggingface/transformers/issues/9345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9345/comments | https://api.github.com/repos/huggingface/transformers/issues/9345/events | https://github.com/huggingface/transformers/issues/9345 | 776,012,959 | MDU6SXNzdWU3NzYwMTI5NTk= | 9,345 | Training of BART slow on TPU - aten ops investigation | {
"login": "phtephanx",
"id": 24647404,
"node_id": "MDQ6VXNlcjI0NjQ3NDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/24647404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phtephanx",
"html_url": "https://github.com/phtephanx",
"followers_url": "https://api.github.com/users/phtephanx/followers",
"following_url": "https://api.github.com/users/phtephanx/following{/other_user}",
"gists_url": "https://api.github.com/users/phtephanx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phtephanx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phtephanx/subscriptions",
"organizations_url": "https://api.github.com/users/phtephanx/orgs",
"repos_url": "https://api.github.com/users/phtephanx/repos",
"events_url": "https://api.github.com/users/phtephanx/events{/privacy}",
"received_events_url": "https://api.github.com/users/phtephanx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @phtephanx \r\n\r\nThank you for troubleshooting the problem. I haven't really dived into this problem yet.\r\n\r\nMaybe @LysandreJik (our TPU expert ) will be able to help here once he's back from vacation ;)\r\n\r\nAlso cc @patrickvonplaten ",
"Hey @phtephanx,\r\n\r\nThanks for opening this issue! I'm actually very interested in solving this problem as well...did you try executing Bart on TPU with just a simple training loop to see if the slow-down persists? \r\n\r\nMaybe we can work together in a colab to solve this problem. Feel free to dive a bit more into the problem if you feel like it (I'd suggest on a google colab with TPU) and I'm happy to guide you along the way. Else I hope to find some time in mid-January to tackle the problem :-) ",
"@patrickvonplaten Sorry for the late reply. \r\n\r\n> Thanks for opening this issue! I'm actually very interested in solving this problem as well...did you try executing Bart on TPU with just a simple training loop to see if the slow-down persists?\r\nMaybe we can work together in a colab to solve this problem. Feel free to dive a bit more into the problem if you feel like it (I'd suggest on a google colab with TPU) and I'm happy to guide you along the way. Else I hope to find some time in mid-January to tackle the problem :-)\r\n\r\nWould be great if we could cooperate on this because I'm stuck with this for a while! ;-)\r\n\r\nI created two minimalistic training notebooks:\r\n* **(I)** [BART on TPU w/o Trainer](https://colab.research.google.com/drive/10crQewhWImt9vHD1UJo-HnzzSgbwwFyA?usp=sharing)\r\n* **(II)** [BART on TPU w/ Trainer](https://colab.research.google.com/drive/1C_8EmDmnisYPLfIkwL7tu-_iVWoPuu1I?usp=sharingv)\r\n\r\n**Settings:**\r\n* `BART-base`\r\n* `batch_size=64`\r\n* `gradient_accumulation_steps=1`\r\n\r\n**Observations:**\r\n* **(I)** and **(II)** run almost equally fast. Their graph seems to stabilize after approx. 4 steps and subsequently runs at constant throughput of 0.43 [it/s] (~ 2.34 [s/it])\r\n* **(I)** and **(II)** exhibit (almost) the same count of `aten::isnan` and `aten::_local_scalar_dense`. Thus, no significant number of ops (only 1) w/o XLA lowering is introduced by the `Trainer`\r\n\r\n**Conclusions:**\r\n* The throughput of both is really decent which indicates IMO that everything is probably ok with the TPU adaptation of `BART` and `Trainer` even if these two ops w/o XLA lowering occur\r\n* (I couldn't, however, reproduce this throughput on a private GCE VM, so far at all. If you're still interested, I'll take the exact same script and report!)\r\n\r\nBTW: I also tried out the training loop of (I) **without** wrapping it into a function and calling `xmp.spawn` on it. The throughput is very low and the graph never stabilizes which was suggested by a `CompileTime` of more than 20 [min].",
"**Reproducing (II) on GCE VM:**\r\nI conducted a run for **(II)** on **1** core of TPUv3. Apart from the `CompileTime` being considerably larger, which is expected because Colab somehow works instantaneously, the execution time is similar ([metrics_report.txt](https://github.com/huggingface/transformers/files/5762134/tpu_bart_w_trainer_1_core.txt)). It was actually a bit faster: 6 [s] on GCE VM vs. 12 [s] on Colab. Furthermore, I observed that the `CompileTime` is even noticeably larger when using **8** cores.\r\n\r\n(It might be that during my actual targeted training of `BART-large` on 8 cores, the stabilization phase of the graph simply takes much longer than for `BART-base` and I never arrived at a stable graph).",
"Hmm, so it works as expected on a google colab, but not on a private machine? The behavior you described for (I) and (II) seems reasonable to me. It's normal that compilation time is quite high for PyTorch/XLA IMO",
"I did some runs for `BART-large` on GCE TPU v3-8 with different settings using `Trainer`:\r\n\r\n| batch-size | lengths | num-cores | grad-acc | initial-speed [s/it] | final-speed [s/it] | final speed at step |\r\n|------------|---------|-----------|----------|----------------------|--------------------|---------------------|\r\n| 1 | 128 | 1 | 1 | 120 | 1 | ~7 |\r\n| 32 | 128 | 1 | 1 | 318 | 2.3 | ~6 |\r\n| 32 | 128 | 8 | 1 | 300 | 3 | ~12 |\r\n| 32 | 128 | 8 | 4 | 440 | 14.2 | ~20 |\r\n\r\nExtrapolating these numbers to the hparams used by the authors (batch size of 8000) results in the \"slow\" throughput due to which I opened the issue. I think, everything is fine - one just needs a bigger device like TPU v3-128. Out of scope for me. \r\n\r\n@patrickvonplaten Feel free to close unless there's something else we can discuss / tune.",
"Hey @phtephanx,\r\n\r\nThanks a lot for posting this, it's very useful! Yeah, I think for now I don't see a big issue either",
"> Hey @phtephanx,\r\n> \r\n> Thanks a lot for posting this, it's very useful! Yeah, I think for now I don't see a big issue either\r\n\r\nYou're welcome ;)",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | Referencing: https://github.com/huggingface/transformers/issues/8339
Versions:
* `transformers==4.0.1`
* `pytorch==1.7.0`
* `pytorch_xla==1.7.0`
### problem
I've been trying to find out why training of `BartForConditionalGeneration` with `Trainer` is so **slow on TPU**. By slow I mean >30 min/batch (8000 samples) on 8 cores of TPUv3. I can very likely rule out slowdowns caused by the host machine.
Following the [xla troubleshooting guide](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md), I made sure that the **input tensors have fixed shape**. Furthermore, I created a `metrics` report which reveals that there occur many context switches between the XLA device and CPU due to:
* aten::_local_scalar_dense
* aten::isnan
I tried to localize the culprits via **debugging** on 1 TPU core and printing the metrics report at each breakpoint.
`aten::isnan` is obviously caused by `torch.isnan` in [L362 of Bart's EncoderLayer](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L362). I don't know of a fix, but I'd just turn the condition off for my training.
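For reference, the metrics report mentioned above can be produced with `torch_xla`'s debug module (a minimal sketch following the troubleshooting guide):
```python
import torch_xla.debug.metrics as met

# ... run a few training/evaluation steps on the XLA device first ...
print(met.metrics_report())  # inspect CompileTime, aten::isnan, aten::_local_scalar_dense
```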
### questions
1. According to this [issue](https://github.com/pytorch/xla/issues/909), the counter for `aten::_local_scalar_dense` increases "every time the Python code switches from tensor context to Python scalar context". I couldn't really pinpoint, however, which line causes this. I'm aware that printing the metrics report at each breakpoint does so, but there are others. Any idea?
2. Just to be on the safe side: `pytorch_xla`'s `ParallelLoader` wrapper for the `DataLoader` shouldn't have anything to do with TPU-related slowdowns or context switches in e.g. the `Dataset` instance, because the loading still takes place on the host machine and the tensors are put into TPU queues _afterwards_, as far as I read the code, correct? (See the wiring sketch after this list.)
3. Is training of `BART` (and probably similar models) running fast on TPU for anyone?
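Regarding question 2, a minimal sketch of the usual `ParallelLoader` wiring for reference (toy data, `torch_xla` 1.7-style API; assumptions, not the exact examples code):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

device = xm.xla_device()
dataset = TensorDataset(torch.randn(64, 8), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=16, drop_last=True)  # loading happens on the host

# per_device_loader feeds batches from host-side queues to the XLA device.
for features, labels in pl.ParallelLoader(loader, [device]).per_device_loader(device):
    pass
```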
AFAIU @patil-suraj knows more. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9345/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9345/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9344/comments | https://api.github.com/repos/huggingface/transformers/issues/9344/events | https://github.com/huggingface/transformers/issues/9344 | 775,980,263 | MDU6SXNzdWU3NzU5ODAyNjM= | 9,344 | MBart prepare_seq2seq_batch | {
"login": "Chiyu-Song",
"id": 66252554,
"node_id": "MDQ6VXNlcjY2MjUyNTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/66252554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Chiyu-Song",
"html_url": "https://github.com/Chiyu-Song",
"followers_url": "https://api.github.com/users/Chiyu-Song/followers",
"following_url": "https://api.github.com/users/Chiyu-Song/following{/other_user}",
"gists_url": "https://api.github.com/users/Chiyu-Song/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Chiyu-Song/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Chiyu-Song/subscriptions",
"organizations_url": "https://api.github.com/users/Chiyu-Song/orgs",
"repos_url": "https://api.github.com/users/Chiyu-Song/repos",
"events_url": "https://api.github.com/users/Chiyu-Song/events{/privacy}",
"received_events_url": "https://api.github.com/users/Chiyu-Song/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @Chiyu-Song \r\n\r\nNot sure what exactly is the issue here. What do you mean by \r\n> encoded target sequence from \"prepare_seq2seq_batch\" is inconsistent with its description\r\n\r\nAlso would be nice if the code snippet is formatted, bit hard to read :)",
"Thank you for your reply @patrickvonplaten, let me rephrase the issue a bit.\r\n\r\n\r\n\r\nThe screenshot above is from the [MBart official documentation](https://huggingface.co/transformers/model_doc/mbart.html), introducing how to use \"prepare_seq2seq_batch()\" to encode input sequences for fine-tuning. However, after running the example code on the screenshot, I got something unexpected:\r\n\r\n1. \"prepare_seq2seq_batch()\" returns a dict with three keys \"input_ids\", \"attention_mask\" and \"labels\". The value of \"labels\" is actually the encoder_input_ids, so I think this key name is a bit confusing.\r\n\r\n2. The returned \"labels\"(encoder_input_ids) has a format \"X [eos, tgt_lang_code]\", but according to the description on the screenshot, it supposes to be \"[tgt_lang_code] X [eos]\".\r\n\r\n3. On the last line of the code snippet, \"labels\" doesn't seem to be the appropriate param for the model input, I believe \"encoder_input_ids\" should be used instead for fine-tuning.\r\n\r\nOpinions are my own, plz feel free to correct any of my misunderstandings.",
"1. `labels` is the correct name. `labels` are the tokens/output we expect the model (specifically the `decoder` to generate). The `MbartForConditionalGeneration` model prepares the `decoder_input_ids` (which are fed as input to the decoder) using the `labels`. It's a convention to use the name `labels` for the output of models.\r\n\r\nhere `input_ids` is the input to the `encoder`. We don't use the name `encoder_input_ids`\r\n\r\n2. As said above the model prepares the `decoder_input_ids` from `labels`. It does so by shifting the `labels` to the right and wrapping around the last token.\r\n\r\nso if the `labels` are \r\n`X [eos, tgt_lang_code]`\r\n\r\nthen `decoder_input_ids` are prepared as follows\r\n`[tgt_lang_code] X [eos]`\r\n\r\ni.e shift to the right and wrap around the last token. which is the target format expected by the model.\r\n\r\n3. `labels` is not the name for model input, it's the output name used by all library models. `input_ids` is the model input.\r\n\r\n \r\nAnd yes, you are right. The doc is a little confusing. In the doc target text actually refers to the decoder input which is prepared using `labels`. Feel free to raise a PR to fix the doc :) ",
"Hi Suraj,\r\n\r\nFirst of all, for the sake of clarity, \r\nI'd use the name \"tokenizer.labels\" to represent the prepared decoder_input_ids returned by \"prepare_seq2seq_batch()\".\r\nAnd use \"model.labels\" to represent the input param for \"BartForConditionalGeneration\" model. According to the documentation, this param is used for computing the MLM loss and should has nothing to with the decoder_input_ids.\r\n\r\nI double checked the source code of Bart model, in \"BartForConditionalGeneration\" it has a line of code like this:\r\n `decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)` \r\nwhich means it gets decoder_input_ids by shifting model.labels. \r\nIt indeed fixes the thrid point I mentioned in the previous comment, but also breaks the designed functionality of model.labels.\r\nTo me, it seems like using a bug to cover another, and I really believe someone confused tokenizer.labels with model.labels during implementation.\r\n\r\n-Chiyu",
"> I'd use the name \"tokenizer.labels\r\n\r\nIt's already called `labels`. `prepare_seq2seq_batch` returns `input_ids`, `attention_mask`, `labels` and `decoder_attentoion_mask`\r\n\r\n> but also breaks the designed functionality of model.labels\r\n\r\nAFAIk it doesn't break any functionality. Could you show an example where it breaks ? \r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | - `transformers` version: 4.1.1
- mBART: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): mBART
The problem arises when using:
* [x] the official example scripts: (give details below)
`example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
batch = tokenizer.prepare_seq2seq_batch(example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian, return_tensors="pt")
model(input_ids=batch['input_ids'], labels=batch['labels']) # forward pass`
The encoded target sequence from "prepare_seq2seq_batch" is inconsistent with its description "[tgt_lang_code] X [eos]".
Moreover, "labels" doesn't seem to be the appropriate param for the model input.
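For reference, a minimal sketch of the right-shift discussed in the comments above (illustrative, not necessarily the exact library implementation): the last non-padding token of `labels` (the target language code for MBart) is wrapped around to position 0 to form `decoder_input_ids`.
```python
import torch

def shift_tokens_right(labels: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    decoder_input_ids = labels.clone()
    last_token_index = (labels.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    decoder_input_ids[:, 0] = labels.gather(1, last_token_index).squeeze()
    decoder_input_ids[:, 1:] = labels[:, :-1]
    return decoder_input_ids

# labels:            X [eos] [tgt_lang_code]
# decoder_input_ids: [tgt_lang_code] X [eos]
```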
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9344/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9343/comments | https://api.github.com/repos/huggingface/transformers/issues/9343/events | https://github.com/huggingface/transformers/pull/9343 | 775,963,195 | MDExOlB1bGxSZXF1ZXN0NTQ2NTQwNTI1 | 9,343 | [PyTorch Bart] Split Bart into different models | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One important comment I forgot to add to my review: I don't think we should adapt the `research_project` to the new structure as it has been pinned to an earlier version of transformers (3.5.1). So apart from the duplicate file deleted, the other changes should be reverted IMO."
] | 1,609 | 1,609 | 1,609 | MEMBER | null | # What does this PR do?
This PR splits all Bart-like models into their own respective classes for PyTorch models only. This is more in line with the general philosophy of the library to have self-contained model files.
As discussed with @jplu, the TF models will be separated in a future PR as there are still some issues and improvements (TF serving) blocking the separation - see https://github.com/huggingface/transformers/issues/9313.
In short, after this PR all those "model-specific" config parameters are removed from all Bart-like configs:
- `extra_pos_embeddings`
- `normalize_embedding`
- `add_final_layer_norm`
- `normalize_before`
- `do_blenderbot_90_layernorm`
- `static_position_embeddings`
- `add_bias_logits`
- `force_bos_token_to_be_generated` (this one has to be kept for Bart though)
and each "bart" model (Pegasus, Bart, MBart, Marian, Blenderbot, BlenderbotSmall) will get its own `modeling_....py` file.
At the moment the models have the following configurations:
| | `extra_pos_embeddings` | `normalize_before` | `add_final_layer_norm` | `do_blenderbot_90_layernorm` | `normalize_embedding` | `static_position_embeddings` | `add_bias_logits` | `force_bos_token_to_be_generated` |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| `bart` | 2 | ❌ | ❌ | ❌ | ✔️ | ❌ | ❌ | ✔️ |
| `mbart` | 2 | ✔️ | ✔️ | ❌ | ✔️ | ❌ | ❌ | ❌ |
| `marian` | ❌ | ❌ | ❌ | ❌ | ❌ | ✔️ | ❌ | ❌ |
| `pegasus` | ❌ | ✔️ | ✔️ | ❌ | ❌ | ✔️ | ❌ | ❌ |
| `blenderbot90M (BlenderbotSmall)` | 0 | ❌ | ❌ | ✔️ | ✔️ | ❌ | ❌ | ❌ |
| `blenderbot3B + rest (Blenderbot)` | 0 | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ❌ | ❌ |
We can see that `add_bias_logits` is actually never used, so I think the best option is to just delete the functionality. Also, one can see that no two models have the exact same usage of the above params, so we'll make 6 different modeling_....py files.
## Resulting Improvements:
- The model files are much more readable and should be much easier to navigate for the user. No difficult config parameters anymore where the user doesn't know what to set anyway, such as `normalize_before`.
- All irrelevant Bart-like features for other models are removed. Those features are a) never mentioned in the paper, b) don't make any sense since the model wasn't trained with those features, so that the usage of those features leads to non-sense outputs. *E.g.* Marian was never supposed to be a "mask-filling" model, yet it has "mask-filling" functionality, when doing:
```python
marian = MarianMTModel.from_pretrained(...)
marian(input_ids) # no decoder_input_ids for mask filling like tasks such as in Bart
# => output makes 0 sense
```
The big gain here is that users are better guided on how to use the model and wonder less about whether the model is used correctly & whether there is a bug in the model.
- Docstrings are improved with more model-specific examples and fewer comparisons to Bart. *E.g.* Pegasus, Marian, and Blenderbot never really mention BART in their papers and have no direct relation to BART IMO => these models should not be compared to BART in the docs -> it's confusing for the user
- Some small improvements: memory usage is slightly improved for beam search, and gradient checkpointing is added.
- All previous tests are copied + some additional tests are added for each model
## Possible drawback
- The drawback, as expected, is code duplication. This is remedied to some extent by using the # Copied from ... safety features (a toy example is shown right after this list)
- Some breaking changes as explained further below
- Models might now diverge easier in the future which could make it harder to have the same API for training. This is however also prevented by some function signature tests that are already implemented.
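A toy example of the `# Copied from` convention referenced in the list above (module paths are illustrative); the repository's consistency-check utility keeps such blocks in sync with their source:
```python
import torch.nn as nn

# Copied from transformers.models.bart.modeling_bart.BartEncoderLayer with Bart->Marian
class MarianEncoderLayer(nn.Module):
    ...  # body kept in sync with the Bart original by the check_copies utility
```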
## Breaking changes
🚨🚨 **Important: We cannot keep 100% backward compatibility here or the PR won't make much sense** 🚨🚨
- Since all models were packed into a single model file a lot of different model design are at the moment possible. E.g.
Pegasus was only ever used with Sinusoidal position embeddings (as mentioned in the paper) but since it's merged into `modeling_bart.py`, one could theoretically use Pegasus with Learned position embeddings. This is not done in any config on the model hub however and will not be possible anymore after the PR. Also, Marian's model design has never normalized the word embeddings, but it could be possible with the current design. But again no config in the model hub does that, so this will also not be possible anymore after the PR. **In short: All model designs that were never foreseen in the original model and that are never used on the model hub at the moment won't be allowed anymore after the PR**.
If we would not make this change, it would mean that we would have to keep all those `normalize_before` configs, which in return would mean that the modeling code of all Bart-like models would be the same again.
- Blenderbot needs to be divided into two models IMO. Blenderbot 90M not only has a very different architecture (see table above), but also uses a different tokenizer. I created a new `BlenderbotSmallModel` class. Thus I need to update one Blenderbot config online, changing it's class. This means that from this PR onward the following is not supported anymore:
```python
from transformers import BlenderbotForConditionalGeneration
model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-90M")
# => this is a wrong model. It should be
model = BlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot-90M")
```
That's a big breaking change, but I don't see another way. If we keep the small blenderbot in the "normal" blenderbot, we have to keep the config params `normalize_before` which I really don't want to do.... I think the best option here is to add a warning (or even an error) by overwriting `from_pretrained(...)` in `BlenderbotForConditionalGeneration` so that
```python
model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-90M")
```
will throw an error or give a warning. There are no fine-tuned blenderbot models on the hub and this is the only config. I think it's the right approach to separate the model here.
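A minimal sketch of what such a guard could look like (illustrative only, not necessarily the code that will be merged):
```python
import warnings
from transformers import BlenderbotForConditionalGeneration

class PatchedBlenderbotForConditionalGeneration(BlenderbotForConditionalGeneration):
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
        if pretrained_model_name_or_path == "facebook/blenderbot-90M":
            warnings.warn(
                "facebook/blenderbot-90M uses the BlenderbotSmall architecture; "
                "load it with BlenderbotSmallForConditionalGeneration instead.",
                FutureWarning,
            )
        return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
```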
- Barthez has essentially an `mbart` architecture, but has `bart` defined as its `model_type` in the configs. Here I'd also like to change the configs online to make sure the correct model is loaded when using `AutoModelForSeq2SeqLM`. I should also contact the author here.
- Bart allowed to automatically create `decoder_input_ids` by shifting the `input_ids` to the right. Thus, in Bart one can do the following:
```python
bart = BartForConditionalGeneration(...)
bart(input_ids) # not that no decoder_input_ids are passed here
```
This is a very special case and should only be used for Bart-like denoising pre-training or mask-filling. The only models that were trained in this fashion and thus can do mask-filling are Bart and MBart. All other models cannot do mask-filling so that `decoder_input_ids` should never be created from shifting `input_ids`. => this feature is removed therefore from Pegasus, Marian, Blenderbot, and BlenderbotSmall
Those are all breaking changes. Blenderbot is the big one, the other one should be fine. To be sure, I wrote some scripts that verify that no model on the model hub that contains one of the keywords `bart`, `mbart`, `pegasus`, `blenderbot`, `opus-mt`, `barthez` has incorrect/unexpected parameter settings after the PR.
## TODO:
- [x] Create Bart model file & pass all tests
- [x] Create MBart model file & pass all tests
- [x] Greate Pegasus model file & pass all tests
- [x] Create Marian model file & pass all tests
- [x] Create Blenderbot model file & pass all tests
- [x] Create BlenderbotSmall model file & pass all tests
- [x] Clean PR (delete all helper files)
- [x] Clean docs
- [x] Add #Copied From statements
- [x] Do a very detailed review of my own PR to make sure no hidden bugs were introduced.
- [x] Correct configs of barthez online to be of type `mbart` instead of `bart`.
- [x] Correct config of https://huggingface.co/facebook/blenderbot-90M online.
## Future TODO:
- [ ] Communicate about this PR on the forum
- [ ] Add Copied From statements to seq2seq bart model templates
- [ ] Add Copied From statements to LED
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9343/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9343/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9343",
"html_url": "https://github.com/huggingface/transformers/pull/9343",
"diff_url": "https://github.com/huggingface/transformers/pull/9343.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9343.patch",
"merged_at": 1609880406000
} |
https://api.github.com/repos/huggingface/transformers/issues/9342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9342/comments | https://api.github.com/repos/huggingface/transformers/issues/9342/events | https://github.com/huggingface/transformers/pull/9342 | 775,958,486 | MDExOlB1bGxSZXF1ZXN0NTQ2NTM3MDY1 | 9,342 | [Seq2Seq Templates] Add embedding scale to templates | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The `config.embed_scale` parameter is used too heavily in Bart-like models to be deleted in future, leaner Bart versions.
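For context, a minimal sketch of how such an embedding scale is typically applied in Bart-like models (illustrative only; the class and attribute names below are assumptions, not the template code itself):

```
# Illustrative sketch of how an embed_scale factor is typically applied;
# names are assumptions and not taken from the templates themselves.
import math
import torch.nn as nn

class ScaledEmbedding(nn.Module):
    def __init__(self, vocab_size: int, d_model: int, scale_embedding: bool = True):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, d_model)
        # Often sqrt(d_model) when scaling is enabled, 1.0 otherwise.
        self.embed_scale = math.sqrt(d_model) if scale_embedding else 1.0

    def forward(self, input_ids):
        return self.embed_tokens(input_ids) * self.embed_scale
```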
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9342/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9342",
"html_url": "https://github.com/huggingface/transformers/pull/9342",
"diff_url": "https://github.com/huggingface/transformers/pull/9342.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9342.patch",
"merged_at": 1609256924000
} |
https://api.github.com/repos/huggingface/transformers/issues/9341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9341/comments | https://api.github.com/repos/huggingface/transformers/issues/9341/events | https://github.com/huggingface/transformers/pull/9341 | 775,954,409 | MDExOlB1bGxSZXF1ZXN0NTQ2NTM0MDA0 | 9,341 | [PyTorch Bart] Split Bart | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9341/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9341",
"html_url": "https://github.com/huggingface/transformers/pull/9341",
"diff_url": "https://github.com/huggingface/transformers/pull/9341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9341.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9340/comments | https://api.github.com/repos/huggingface/transformers/issues/9340/events | https://github.com/huggingface/transformers/issues/9340 | 775,912,595 | MDU6SXNzdWU3NzU5MTI1OTU= | 9,340 | Possible bug in `train_batch_size` | {
"login": "EmilyAlsentzer",
"id": 7334040,
"node_id": "MDQ6VXNlcjczMzQwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7334040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EmilyAlsentzer",
"html_url": "https://github.com/EmilyAlsentzer",
"followers_url": "https://api.github.com/users/EmilyAlsentzer/followers",
"following_url": "https://api.github.com/users/EmilyAlsentzer/following{/other_user}",
"gists_url": "https://api.github.com/users/EmilyAlsentzer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EmilyAlsentzer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EmilyAlsentzer/subscriptions",
"organizations_url": "https://api.github.com/users/EmilyAlsentzer/orgs",
"repos_url": "https://api.github.com/users/EmilyAlsentzer/repos",
"events_url": "https://api.github.com/users/EmilyAlsentzer/events{/privacy}",
"received_events_url": "https://api.github.com/users/EmilyAlsentzer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"Think @sgugger can best answer here when he's back from holiday :-) ",
"You misunderstand the flag `model_parallel`, it's not there to enable the use of several GPUs as this is done automatically by the `Trainer` (you have to set `CUDA_VISIBLE_DEVICES` to just one GPU if you don't want the Trainer to use them all). That flag is there to split the model layers on the various GPUs available (only available for a few models).",
"Got it, I didn't realize that the Trainer automatically uses multiple GPUs if visible. Thanks! "
] | 1,609 | 1,609 | 1,609 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.4.0-62-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...):
BERT
The problem arises when using:
* [X ] my own modified scripts: (give details below)
I'm running a model on a toy dataset with only 2 examples and a batch size of 2. In trainer, `num_examples` is 2, but `total_train_batch_size` is 12 even though I do not have the `model_parallel` flag set to `True` (Note I do have 6 GPUs available on the machine). This doesn't seem to impact my code because `train_dataset_is_sized=True`, but it seems strange.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details below)
toy classification jsonl dataset with 2 examples
## To reproduce
I think that [this line](https://github.com/huggingface/transformers/blob/64103fb6beac8cc865945d3956266fd80b44f18f/src/transformers/training_args.py#L454) has an unnecessary `not`. Should this be `if self.model_parallel` instead of `if not self.model_parallel`? Thanks!
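For reference, a paraphrased sketch of the batch-size logic under discussion (this is not the exact code from `training_args.py`): the effective training batch size is the per-device batch size multiplied by the number of visible GPUs, unless model parallelism keeps the model on a single replica.

```
# Paraphrased sketch of the effective batch-size computation being discussed;
# not the exact code from training_args.py.
import torch

def effective_train_batch_size(per_device_train_batch_size: int, model_parallel: bool) -> int:
    if model_parallel:
        # With model parallelism the layers are split across GPUs,
        # so there is only one data replica.
        return per_device_train_batch_size
    # Otherwise the Trainer replicates the model on every visible GPU (DataParallel),
    # which multiplies the effective batch size.
    n_gpu = max(1, torch.cuda.device_count())
    return per_device_train_batch_size * n_gpu
```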
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9340/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9339/comments | https://api.github.com/repos/huggingface/transformers/issues/9339/events | https://github.com/huggingface/transformers/issues/9339 | 775,887,376 | MDU6SXNzdWU3NzU4ODczNzY= | 9,339 | Arrow file is too large when saving vector data | {
"login": "weiwangorg",
"id": 22360336,
"node_id": "MDQ6VXNlcjIyMzYwMzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/22360336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiwangorg",
"html_url": "https://github.com/weiwangorg",
"followers_url": "https://api.github.com/users/weiwangorg/followers",
"following_url": "https://api.github.com/users/weiwangorg/following{/other_user}",
"gists_url": "https://api.github.com/users/weiwangorg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiwangorg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiwangorg/subscriptions",
"organizations_url": "https://api.github.com/users/weiwangorg/orgs",
"repos_url": "https://api.github.com/users/weiwangorg/repos",
"events_url": "https://api.github.com/users/weiwangorg/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiwangorg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think it's expected because with `bert-base` each token will be embedded as a 768-dimensional vector. So if an example has n tokens then the size of embedding will be `n*768` and these are all 32-bits floating-point numbers.",
"Yes. I use datasets and I think this is a question about datasets, how to save vector data in a compressed format to reduce the size of the file. So I close this issue."
] | 1,609 | 1,610 | 1,610 | NONE | null | I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9339/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9338/comments | https://api.github.com/repos/huggingface/transformers/issues/9338/events | https://github.com/huggingface/transformers/issues/9338 | 775,839,513 | MDU6SXNzdWU3NzU4Mzk1MTM= | 9,338 | Multiprocessing CUDA issues when importing transformers | {
"login": "maxjeblick",
"id": 24281881,
"node_id": "MDQ6VXNlcjI0MjgxODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/24281881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxjeblick",
"html_url": "https://github.com/maxjeblick",
"followers_url": "https://api.github.com/users/maxjeblick/followers",
"following_url": "https://api.github.com/users/maxjeblick/following{/other_user}",
"gists_url": "https://api.github.com/users/maxjeblick/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxjeblick/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxjeblick/subscriptions",
"organizations_url": "https://api.github.com/users/maxjeblick/orgs",
"repos_url": "https://api.github.com/users/maxjeblick/repos",
"events_url": "https://api.github.com/users/maxjeblick/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxjeblick/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @maxjeblick,\r\n\r\nWe do need to be able to call `torch.cuda` in our model code. We cannot \"forbid\" calls to `torch.cuda` in order to allow quite specific use cases where `.cuda()` is mixed with multiprocessing. IMO, using multiprocessing in combination with `.cuda()` is quite an edge case. Are you trying to run your model on multiple GPUs? \r\n\r\nI'd suggest to fork the repo and delete this line if you really need this feature.",
"Hey @patrickvonplaten thanks for the fast response! \r\n\r\nI agree that using multiprocessing with `.cuda` calls is not that common, one usecase would be multi-gpu inference without DDP. The `torch.cuda.is_available()` call in `modeling_fsmt.py` is currently the only place which causes the `RuntimeError`; all other `.cuda` calls are either fine (e.g. `from torch.cuda.amp import autocast` in `trainer.py`) or not executed during the `import transformers` statement.",
"It affects bart as well. \r\n\r\n\r\nI see ProphetNetLayerNorm solved it by using a runtime wrapper.\r\n\r\nhttps://github.com/huggingface/transformers/blob/912f6881d2b69f180522172a5283702bd8c41d9c/src/transformers/models/prophetnet/modeling_prophetnet.py#L513-L521\r\n\r\nSo it's modeling_bart and modeling_fsmt are the only 2 that have this check at import time\r\n\r\nI wonder if we can just remove this check altogether. Won't `from apex.normalization import FusedLayerNorm` fail w/o cuda? And then we actually don't need that check. We need to verify that.",
"heh, it looks like `FusedProphetNetLayerNorm` importing is silently failing, since it doesn't exist in `apex` ;) looks like untested code. Fix proposed at https://github.com/huggingface/transformers/pull/9349\r\n",
"Hmm, it works just fine without cuda:\r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=\"\" python -c \"from apex.normalization import FusedLayerNorm; print(FusedLayerNorm(10))\"\r\nFusedLayerNorm(torch.Size([10]), eps=1e-05, elementwise_affine=True)\r\n```\r\n\r\nDo you know why did we need that check in first place?\r\n\r\nThe doc page https://nvidia.github.io/apex/layernorm.html doesn't say anything about needing cuda.",
"@t-vi figured it out - `apex.normalization.FusedLayerNorm` falls back on to non-cuda gracefully: https://github.com/NVIDIA/apex/blob/master/apex/normalization/fused_layer_norm.py#L154 so the `if torch.cuda.is_available()` check is not needed in first place.",
"> needed\r\n\r\nNice, so we could actually remove this import statement then from all `FusedLayerNorm` classes no? ",
"yes, working on this. - just 3 classes\r\n\r\nDone: https://github.com/huggingface/transformers/pull/9350",
"> so we could actually remove this import statement then from all FusedLayerNorm classes no?\r\n\r\nbut, wait, what import statement are you referring to? \r\n\r\nand I guess `modeling_fsmt` is waiting for the refactoring, right? So since Bart I see has this import-time check removed already, so it'd follow suit anyway.\r\n\r\n",
"maxjeblick, while we are sorting out the nuances you can just remove that check so that you could move forward. No matter the outcome that particular call that was getting in your way won't be there once the dust settles.\r\n\r\n",
"Related: https://github.com/huggingface/transformers/issues/9227\r\n\r\nI found that the changes proposed above weren't enough for cuda multiprocessing.",
"@jethrokuan - I didn't get a chance to try to reproduce/investigate this deeper so my commentary is as good as the discussions I read about it - I assume you tried the proposed by pytorch developers to switch to `torch.multiprocessing.set_start_method('spawn')` and it either didn't help or it works but you can't use it? https://github.com/pytorch/pytorch/issues/40403 \r\n\r\nFWIW, we started discussing postponing the loading of 3rd party modules here https://github.com/huggingface/transformers/issues/8733 and @sgugger came up with Optuna-like solution here https://github.com/sgugger/lazy_init - perhaps it can be applied to everything",
"The `spawn` method isn't supported by the web microframework we use, so that's not really an option.\r\n\r\nSo the option I went with for my use-case was deferring the loading of `transformers` itself. My cursory investigation was that `import transformers` already initializes cuda, which wasn't the case some versions of transformers ago (3.1, I believe was fine).",
"Thank you for clarifying that `spawn` is not an option, @jethrokuan.\r\n\r\nPerhaps `transformers` needs an option to defer its loading for such cases.\r\n\r\nI think @sgugger may have some insights when he is back next week as he invested his time into looking into deferral in general.",
"Fixed by https://github.com/huggingface/transformers/commit/ae333d04b29a25be1a70eaccd6260c294c243c5b thanks a lot!"
] | 1,609 | 1,610 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-4.15.0-128-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Partly
### Who can help
@stas00
## Information
When using multiprocessing, importing the transformers package causes
`RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method`
The problem arises when using:
* `import transformers` is used in the main process.
This is due to the [following line](https://github.com/huggingface/transformers/blob/master/src/transformers/models/fsmt/modeling_fsmt.py#L268) in `modeling_fsmt.py`; removing the `torch.cuda.is_available()` call resolves the issue.
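For illustration, a hedged sketch of the kind of fix discussed in the comments above: defer the layer-norm selection to construction time instead of import time, relying on apex's own graceful CPU fallback rather than an import-time `torch.cuda.is_available()` check. Function names below are made up for the example.

```
# Hedged sketch: perform the apex check lazily at layer construction time rather than
# at import time, so importing the package does not initialize CUDA in a parent
# process that later forks. apex's FusedLayerNorm falls back gracefully without CUDA.
import torch.nn as nn

def build_layer_norm(hidden_size: int, eps: float = 1e-5) -> nn.Module:
    try:
        from apex.normalization import FusedLayerNorm
        return FusedLayerNorm(hidden_size, eps=eps)
    except ImportError:
        return nn.LayerNorm(hidden_size, eps=eps)
```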
## To reproduce
```
import multiprocessing
import transformers # NOQA
# You can also call torch.cuda instead of import transformers to get the same error
import torch.nn as nn
class Net(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 10)
def forward(self, x):
return self.linear(x)
def to_cuda(i):
net = Net().cuda()
print(f'Called {i} process')
try:
cpus = multiprocessing.cpu_count()
except NotImplementedError:
cpus = 2 # arbitrary default
pool = multiprocessing.Pool(processes=cpus)
pool.map(to_cuda, range(10))
```
## Expected behavior
The code snippet above runs without issues.
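A commonly suggested workaround (see the comments above; it is not always applicable, for example when the surrounding framework requires the `fork` start method) is to use the `spawn` start method. A minimal sketch of that variant:

```
# Sketch of the standard PyTorch workaround: use the "spawn" start method so child
# processes do not inherit an already-initialized CUDA context. Not always applicable
# (e.g. when the surrounding framework requires "fork").
import torch.multiprocessing as mp

def to_cuda(i):
    import torch.nn as nn
    net = nn.Linear(10, 10).cuda()
    print(f"Called {i} process")

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    with mp.Pool(processes=2) as pool:
        pool.map(to_cuda, range(10))
```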
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9338/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9337/comments | https://api.github.com/repos/huggingface/transformers/issues/9337/events | https://github.com/huggingface/transformers/pull/9337 | 775,814,190 | MDExOlB1bGxSZXF1ZXN0NTQ2NDIxODEx | 9,337 | [WIP][Research projects] Add folder for Music AI / Music Transformers | {
"login": "asigalov61",
"id": 56325539,
"node_id": "MDQ6VXNlcjU2MzI1NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/56325539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asigalov61",
"html_url": "https://github.com/asigalov61",
"followers_url": "https://api.github.com/users/asigalov61/followers",
"following_url": "https://api.github.com/users/asigalov61/following{/other_user}",
"gists_url": "https://api.github.com/users/asigalov61/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asigalov61/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asigalov61/subscriptions",
"organizations_url": "https://api.github.com/users/asigalov61/orgs",
"repos_url": "https://api.github.com/users/asigalov61/repos",
"events_url": "https://api.github.com/users/asigalov61/events{/privacy}",
"received_events_url": "https://api.github.com/users/asigalov61/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @asigalov61, \r\n\r\nThe folder should be under `examples/research_projects` => so at `examples/research_projects/music_transformers`. There you are very free to add the files you want. If you want people to use your code, we suggest to add nice & readable code with a well-thought API, and a nice README.md that explains how to use your code and that also shows a nice use case.",
"Got it!\r\n\r\nThank you for the advice 🙂\r\n\r\nDo I have to use Hugginface transformers to post in that dir? Or I can use my implementations also?\r\n\r\nI also mostly work in Google Colabs, so is it ok to post colabs?\r\n\r\nIs the API a requirement to post? I do not always convert colabs to Python so I need to know this to figure out what to give priority to.\r\n\r\nThank you.\r\n________________________________\r\nFrom: Patrick von Platen <[email protected]>\r\nSent: Tuesday, December 29, 2020 4:44 AM\r\nTo: huggingface/transformers <[email protected]>\r\nCc: Alex <[email protected]>; Mention <[email protected]>\r\nSubject: Re: [huggingface/transformers] [WIP][Research projects] Add folder for Music AI / Music Transformers (#9337)\r\n\r\n\r\nHey @asigalov61<https://github.com/asigalov61>,\r\n\r\nThe folder should be under examples/research_projects => so at examples/research_projects/music_transformers. There you are very free to add the files you want. If you want people to use your code, we suggest to add nice & readable code with a well-thought API, and a nice README.md that explains how to use your code and that also shows a nice use case.\r\n\r\n—\r\nYou are receiving this because you were mentioned.\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/pull/9337#issuecomment-752062712>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ANNXLI6A3S2M2GFCU2LLHXTSXHFMLANCNFSM4VNBN4YQ>.\r\n",
"> Got it! Thank you for the advice Do I have to use Hugginface transformers to post in that dir? Or I can use my implementations also? I also mostly work in Google Colabs, so is it ok to post colabs? Is the API a requirement to post? I do not always convert colabs to Python so I need to know this to figure out what to give priority to. Thank you.\r\n> […](#)\r\n> ________________________________ From: Patrick von Platen <[email protected]> Sent: Tuesday, December 29, 2020 4:44 AM To: huggingface/transformers <[email protected]> Cc: Alex <[email protected]>; Mention <[email protected]> Subject: Re: [huggingface/transformers] [WIP][Research projects] Add folder for Music AI / Music Transformers (#9337) Hey @asigalov61<https://github.com/asigalov61>, The folder should be under examples/research_projects => so at examples/research_projects/music_transformers. There you are very free to add the files you want. If you want people to use your code, we suggest to add nice & readable code with a well-thought API, and a nice README.md that explains how to use your code and that also shows a nice use case. — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub<[#9337 (comment)](https://github.com/huggingface/transformers/pull/9337#issuecomment-752062712)>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ANNXLI6A3S2M2GFCU2LLHXTSXHFMLANCNFSM4VNBN4YQ>.\r\n\r\nIt should be based on hugging face transformers. If it's just a notebook - it might make more sense to just add it here: https://github.com/huggingface/transformers/tree/master/notebooks#community-notebooks",
"I was unable to translate my implementation to Huggingface transformers code because docs and examples were unclear so if someone can help me do that, I would really appreciate it.\r\n\r\nThank you so much.",
"Just in case here is the direct link to Google Colab because I am not sure how ot PR it (lol)...\r\n\r\nhttps://github.com/asigalov61/transformers/blob/master/examples/music_transformers/Music_Reformer_TPU_Edition.ipynb",
"Hi! Could you let us know which docs/instructions were unclear? What were you trying to do, how can we help out? Thanks.",
"Certainly.\n\nFirst of all, I could not find any clear and exact info on how to work with custom text datasets. You mostly provide examples for datasets from your library but not much else. For example, I could not find info on how to do basic things like loading a custom txt file or how to easily tokenize it to be compatible with huggingface implementations.\n\nAnother example would be the lack of complete notebooks/code. Like in the Reformer notebook by Peter, this one:\nnotebooks/PyTorch_Reformer.ipynb at master · patrickvonplaten/notebooks (github.com)<https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb>\nthere is not a single word about SentencePiece which was used to create Crime and Punishment tokenizer model.\n\nAlso, I could not find a single example for GPT2 models + text, also in a basic and easy to use/try format.\n\nYour docs/examples are probably ok if one invests enough time and effort (or if one has pre-existing knowledge and experience with your implementations) but w/o doing so, the examples/docs are not sufficient for beginners/newcomers due to a rather steep learning curve and effort/time required. Friendly IMHO please as I do appreciate your work and your efforts regardless.\n\nTo be very specific and relevant to the subject at hand, I need to know how to process and tokenize a simple line-by-line txt file that would work with Peter's Reformer's example? And also, I wanted to ask if you guys support Google Colab TPUs in any way as Reformer would train slower on GPUs?\n\nI hope this makes sense.\n\nThank you for your help/time and understanding.\n\nAlex\n\n\n\n\n\n\n________________________________\nFrom: Lysandre Debut <[email protected]>\nSent: Friday, January 22, 2021 7:02 AM\nTo: huggingface/transformers <[email protected]>\nCc: Alex <[email protected]>; Mention <[email protected]>\nSubject: Re: [huggingface/transformers] [WIP][Research projects] Add folder for Music AI / Music Transformers (#9337)\n\n\nHi! Could you let us know which docs/instructions were unclear? What were you trying to do, how can we help out? Thanks.\n\n—\nYou are receiving this because you were mentioned.\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/pull/9337#issuecomment-765464464>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ANNXLI3YECRJSE7MSZGRUA3S3GHO3ANCNFSM4VNBN4YQ>.\n",
"Thank you for your feedback, there are indeed some aspects of the documentation which are lacking on which we are actively working.\r\n\r\nRegarding what you mention, maybe these links can help you:\r\n\r\n> how to work with custom text datasets.\r\n\r\nWe actually have an entire page dedicated to that aspect: [custom datasets](https://huggingface.co/transformers/custom_datasets.html?highlight=custom%20text)\r\n\r\n> Another example would be the lack of complete notebooks/code. Like in the Reformer notebook [...] there is not a single word about SentencePiece which was used to create Crime and Punishment tokenizer model.\r\n\r\nIndeed! If I may point you to other notebooks, the first of our official notebooks (hosted on this repository, see the `notebooks` folder at the root) is on training a tokenizer: [01-training-tokenizers](https://github.com/huggingface/transformers/blob/master/notebooks/01-training-tokenizers.ipynb). It is also visible from our [notebook documentation page](https://huggingface.co/transformers/notebooks.html).\r\n\r\nTraining a tokenizer isn't done by the Transformers library in itself, but by the [Tokenizers library](https://github.com/huggingface/tokenizers). I invite you to check their [documentation](https://huggingface.co/docs/tokenizers/python/latest/quicktour.html) which contains a lot of information regarding training tokenizers.\r\n\r\n> Also, I could not find a single example for GPT2 models + text, also in a basic and easy to use/try format.\r\n\r\nIf I may point you to the following parts of the documentation:\r\n- The [GPT-2 reference](https://huggingface.co/transformers/model_doc/gpt2.html) contains several snippets on how to to use GPT-2 models with text\r\n- The [generation utils](https://huggingface.co/transformers/internal/generation_utils.html#utilities-for-generation) showcase how to leverage GPT-2 to generate text. \r\n- Checking the documentation regarding the [generate](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#generation) method would probably be of interest as it has an identical API on all models - even if the code examples do not showcase GPT-2 directly.\r\n- Several [GPT-2-based notebooks](https://huggingface.co/transformers/notebooks.html) are available on the notebooks page in the documentation\r\n- Finally, we focus on having an identical API between all models. Checking a guide on how to generate text has very good chances of working for any other models. The quickstart on generating text [available here](https://huggingface.co/transformers/task_summary.html#text-generation) could be of use to you, as you simply need to replace the identifier of the model checkpoint by the GPT-2 checkpoint you're interested in.\r\n\r\n> To be very specific and relevant to the subject at hand, I need to know how to process and tokenize a simple line-by-line txt file that would work with Peter's Reformer's example? And also, I wanted to ask if you guys support Google Colab TPUs in any way as Reformer would train slower on GPUs?\r\n\r\nSee below for some pointers on how you can achieve this:\r\n- If you need to train a tokenizer, I invite you to check out the first notebook I mention: [01-training-tokenizers](https://github.com/huggingface/transformers/blob/master/notebooks/01-training-tokenizers.ipynb)\r\n- Once you have your tokenizer you can train your model by either loading your dataset in `datasets` as it is shown in Patrick's notebook (and is simpler! 
in that case you may be interested in [loading a dataset from a local file](https://huggingface.co/docs/datasets/loading_datasets.html#loading-from-local-files)), or you can load a text file as it is shown in the [custom datasets](https://huggingface.co/transformers/custom_datasets.html?highlight=custom%20text) documentation.\r\n\r\nRegarding TPU training, @patrickvonplaten can chime in about the Reformer especially.\r\n\r\n\r\nThanks once again for your feedback.",
"@LysandreJik Thank you very much for the most detailed and helpful info. Much appreciated and I will definitely check it all out in a little bit as you have suggested quite a lot. This will be very useful to me and I am really looking forward to contributing to the huggingface community however I can :)\r\n\r\nMost sincerely,\r\n\r\nAlex",
"Great! We're looking forward to your contributions :) Let us know if we can help down the road.",
"@LysandreJik \r\n\r\nThank you for the welcome and offer to help. Much appreciated.\r\n\r\nYou can indeed help as I have run into problems pretty quickly...\r\n\r\nSo I have spent a few hours trying to make Peter's Reformer colab work with my dataset but to no avail, unfortunately...\r\n\r\nHere is the colab: https://colab.research.google.com/drive/1R8jkADMi0vRDwaNTEz_XGQkZUsg_tm_p?usp=sharing\r\n\r\nNo matter what I do or try, I get errors on training execution... I think I have loaded the dataset correctly but I most certainly can be mistaken...I know that Peter's colab works with default CP setup but I can't make it work just yet....\r\n\r\nI saved the output/errors in the colab so that you (or anyone else can take a look) + I am attaching my dataset for you to check out...\r\n\r\nNow, I know that my dataset may not be perfect/compatible with Peter's implementation due to encoding and because it is a music dataset...so I am aware that it may not be that all straightforward in this particular case...\r\n\r\n@LysandreJik if you can help/suggest something here, I will really appreciate it as I really want to make it work for you guys...\r\n\r\n[Efficient-Virtuoso-Music-TXT-Dataset.zip](https://github.com/huggingface/transformers/files/5882917/Efficient-Virtuoso-Music-TXT-Dataset.zip)\r\n\r\n\r\nP.S. Two questions: \r\n\r\n1) Can you enable Discussions on your repo...its a new GitHub feature and I think it would be a much better place for this kind of discussion/help/support questions??? Or if there is a place already that you prefer, we can move there with this...\r\n\r\n2) Any news on the Performer implementations??? It is the latest and the greatest from Google and I already tried it with other's people implementations because it may be more suitable for music than Reformer + its brand new (like 6 month old)...\r\n\r\nThank you for your time and responses.\r\n\r\nAlex.\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | A new dir specifically for Music AI/Music Transformers.
Created as suggested by Patrick von Platen.
I am still figuring out PRs so please correct this PR if I've done something wrong.
Thank you for your help/guidance and for the welcome to the Huggingface community :)
Looking forward to contributing what I can :)
GPT2: @LysandreJik, @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9337/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9337",
"html_url": "https://github.com/huggingface/transformers/pull/9337",
"diff_url": "https://github.com/huggingface/transformers/pull/9337.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9337.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9336/comments | https://api.github.com/repos/huggingface/transformers/issues/9336/events | https://github.com/huggingface/transformers/issues/9336 | 775,799,871 | MDU6SXNzdWU3NzU3OTk4NzE= | 9,336 | "RuntimeError: Input, output and indices must be on the current device" when trying to finetune MBart | {
"login": "mespla",
"id": 6533192,
"node_id": "MDQ6VXNlcjY1MzMxOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6533192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mespla",
"html_url": "https://github.com/mespla",
"followers_url": "https://api.github.com/users/mespla/followers",
"following_url": "https://api.github.com/users/mespla/following{/other_user}",
"gists_url": "https://api.github.com/users/mespla/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mespla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mespla/subscriptions",
"organizations_url": "https://api.github.com/users/mespla/orgs",
"repos_url": "https://api.github.com/users/mespla/repos",
"events_url": "https://api.github.com/users/mespla/events{/privacy}",
"received_events_url": "https://api.github.com/users/mespla/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @mespla,\r\n\r\nThanks for your issue! I'm afraid at the moment, we're really unsure whether we want to keep supporting all the bash scripts in `examples/seq2seq`. In a couple of weeks, we plan on having a single concise training script for seq2seq models.\r\n\r\ncc @sgugger \r\n\r\nAlso tagging @stas00, @patil-suraj in case you know a quick fix to this problem or have encountered this before as well.",
"> When I run it on a single GPU, I get a memory error, as one GPU has not enough memory to load the MBart model. When I try to distribute the model on two GPUs, I get a RuntimeError:\r\nRuntimeError: Input, output and indices must be on the current device\r\n\r\nAre you implying you've changed modeling_bart.py to support Model Parallelism? Surely that would explain that error. You probably switched the layers to different devices but not the inputs/indices. \r\n\r\nI'm currently in the process of studying t5 MP we already have and about to do the same for Bart - i.e. add MP to Bart and its sub-classes (so MBART is included).\r\n\r\nIf you mean something else by \" I try to distribute the model on two GPUs\" please clarify what you mean. \r\n\r\nIf you're just trying to use 2 GPUs to solve the problem of not being able to load even one batch onto a single GPU, then just using 2 gpus won't do any good. In fact what you did (your command line) takes even more memory, since it activates DataParallel which is less memory efficient than DistributedDataParallel. See README.md in that folder for how to run DDP.\r\n\r\nBut fear not, have a look at these 2 possible solutions for you not being able to fit the model onto a single GPU:\r\nhttps://github.com/huggingface/transformers/issues/9311#issuecomment-751378696\r\nand another one will join soon once DeepSpeed has been integrated.\r\n\r\n",
"oh, wait a sec, I have only now noticed you used `--model_parallel`. This flag currently would work only for t5 and gpt2 - as the only 2 models that have been ported to support MP.\r\n\r\nSo trainer should assert if this flag is used and arch isn't supporting MP. \r\n\r\nThis PR https://github.com/huggingface/transformers/pull/9347 adds this assert.\r\n\r\nAnd hopefully Bart will support MP soon as well. Until then try my suggestions in the comment above."
] | 1,609 | 1,609 | 1,609 | NONE | null | ### Environment info
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Tried `transformers` versions 4.1.1 (installed with pip) and 4.2.2 (installed from master branch of the repository)
- Python version: 3.7
- PyTorch version: 1.7
- Tensorflow version: 2.4
- Number of available GPU: 2 (GeForce RTX 2080 Ti, with ~11GB of memory each)
### Information
Model I am using (Bert, XLNet ...): MBart -> facebook/mbart-large-cc25
The problem arises when using: the official example scripts: (details below)
The tasks I am working on is: my own task or dataset: (details below)
I am fine-tuning MBart on my own dataset, using the `examples/seq2seq/finetune.sh` script. When I run it on a single GPU, I get a memory error, as a single GPU does not have enough memory to load the MBart model. When I try to distribute the model on two GPUs, I get a RuntimeError:
`RuntimeError: Input, output and indices must be on the current device`
### To reproduce
I am running the script in the following way:
`CUDA_VISIBLE_DEVICES=0,1 transformers/examples/seq2seq/finetune.sh --model_name_or_path "facebook/mbart-large-cc25" --output_dir output --data_dir data --overwrite_output_dir --model_parallel --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --freeze_encoder --freeze_embeds --tgt_lang "en"`
I have also tried:
`CUDA_VISIBLE_DEVICES=0,1 transformers/examples/seq2seq/finetune.sh --model_name_or_path "facebook/mbart-large-cc25" --output_dir output --data_dir data --overwrite_output_dir --model_parallel --tgt_lang "en"`
I also tried limiting the length of source and target sentences by trying several values for `--max_target_length` and `--max_source_length'`. In addition, I tried using more GPUs (up to 4).
If I run `wc -l` on my `data` directory, I get:
```
3004 data/test.source
3004 data/test.target
686623 data/train.source
686623 data/train.target
2999 data/val.source
2999 data/val.target
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9335/comments | https://api.github.com/repos/huggingface/transformers/issues/9335/events | https://github.com/huggingface/transformers/issues/9335 | 775,799,207 | MDU6SXNzdWU3NzU3OTkyMDc= | 9,335 | Data Loading as a Service | {
"login": "mingruimingrui",
"id": 18568364,
"node_id": "MDQ6VXNlcjE4NTY4MzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18568364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingruimingrui",
"html_url": "https://github.com/mingruimingrui",
"followers_url": "https://api.github.com/users/mingruimingrui/followers",
"following_url": "https://api.github.com/users/mingruimingrui/following{/other_user}",
"gists_url": "https://api.github.com/users/mingruimingrui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingruimingrui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingruimingrui/subscriptions",
"organizations_url": "https://api.github.com/users/mingruimingrui/orgs",
"repos_url": "https://api.github.com/users/mingruimingrui/repos",
"events_url": "https://api.github.com/users/mingruimingrui/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingruimingrui/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@lhoestq - this might be interesting for you! Any good tips from your side?",
"Interesting ! Cool features could be reading from s3, gcp etc.\r\nAlso maybe memory mapping can help speed up things a bit.\r\n\r\nStreaming datasets this way is something we'd like to add in the `datasets` library at one point since we're seeing more and more common crawl scale datasets.",
"> Interesting ! Cool features could be reading from s3, gcp etc.\r\n> Also maybe memory mapping can help speed up things a bit.\r\n> \r\n> Streaming datasets this way is something we'd like to add in the `datasets` library at one point since we're seeing more and more common crawl scale datasets.\r\n\r\nThis... is something I'd enjoy working on, even for free.\r\nBut if you already have plans to do it, please don't hesitate to start (*´∇`)",
"@mingruimingrui this fits actually well in a pretty cool larger community project we have. Wanna send me your email by DM/email/LinkedIn and I invite you on our slack to chat a bit more about it? I’ll probably make the project open in early January when it’s more solidly defined but I can give you early access.",
"@thomwolf I would like that very much (✿◕‿◕)",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | CONTRIBUTOR | null | # 🚀 Not a Feature request, what am I here for then?
Well, mainly I'd just like to get some feedback from fellow software engineers. Some of the frustrations I've experienced might not have been big issues at all or there could have been easy ways to get around them which I've failed to notice. Getting roasted can be a good way for us to identify obvious flaws in our thought processes that aren't so obvious from our own often tunneled point of view.
## Motivation
Data loading in PyTorch requires the user to define the Dataset, collator, as well as a sampling strategy.
I found it rather hard to stick to the framework when I have to deal with
- Extremely large datasets that do not fit in system memory
- The previous point + training with multiple processes and nodes
For Language Modeling and Machine Translation,
- We often have to deal with large text/CSV files that are multiple times the size of system memory.
- Also running training on a single GPU is often slow to the point of frustration.
- Then we have to duplicate the extremely large dataset across each machine.
- To format the data we have into a PyTorch Dataset, we often resort to a hack where we wrap an IO stream into a PyTorch Dataset
- Or preprocess the entire dataset beforehand to speed up data loading.
- Sometimes we would keep an index table to "lookup" the position of a data entry in a file, which can be slow without an SSD for random access (see the sketch below).
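As a concrete illustration of the index-table approach mentioned in the last bullet above, here is a minimal sketch assuming a plain UTF-8, one-example-per-line text file:

```
# Minimal sketch of a line-offset index for random access into a large text file
# that does not fit in memory; assumes one example per line, UTF-8 encoded.
from typing import List

def build_offset_index(path: str) -> List[int]:
    offsets = []
    with open(path, "rb") as f:
        offset = 0
        for line in f:
            offsets.append(offset)
            offset += len(line)
    return offsets

def read_example(path: str, offsets: List[int], idx: int) -> str:
    with open(path, "rb") as f:
        f.seek(offsets[idx])
        return f.readline().decode("utf-8").rstrip("\n")
```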
Instead of trying to write code that fits into the format expected by PyTorch, I simply threw everything out the window and made data loading a standalone service instead...
## Your contribution
Well, I've uploaded my work on https://github.com/mingruimingrui/data-loader-as-a-service-demo.
If you have taken the time to read up to this point, I would like to give you my gratitude, as it has made me quite happy (^o^)
I would also like to ask you to leave some comments for me, which can be any of the following.
1. Can you relate to the problems I've faced?
2. Which part of the Data Loading as a Service do you like?
3. Which part of the Data Loading as a Service do you not like or have problems agreeing with?
4. Any other comments would also be appreciated.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9335/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/9335/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9334/comments | https://api.github.com/repos/huggingface/transformers/issues/9334/events | https://github.com/huggingface/transformers/pull/9334 | 775,448,381 | MDExOlB1bGxSZXF1ZXN0NTQ2MTMwNjY5 | 9,334 | [Seq2Seq Templates] Correct some TF-serving errors and add gradient checkpointing to PT by default. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,609 | 1,609 | MEMBER | null | # What does this PR do?
This PR improves the Seq2Seq model templates.
Notably:
- a too model-specific test is removed from PyTorch
- gradient checkpointing is added to PyTorch
- some tf-serving incompatible statements are removed
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9334/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9334/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9334",
"html_url": "https://github.com/huggingface/transformers/pull/9334",
"diff_url": "https://github.com/huggingface/transformers/pull/9334.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9334.patch",
"merged_at": 1609174264000
} |
https://api.github.com/repos/huggingface/transformers/issues/9333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9333/comments | https://api.github.com/repos/huggingface/transformers/issues/9333/events | https://github.com/huggingface/transformers/issues/9333 | 775,419,307 | MDU6SXNzdWU3NzU0MTkzMDc= | 9,333 | TF Longformer has some graph compilation/execution issue | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can we define `is_global_attn` in the config?",
"Or can we assume that `is_global_attn == output_attentions` ? The main issue here is that we cannot build a returned value that depends of a tensor that is created during the execution (same issue we had with `output_attentions` and `output_hidden_states` before we decide to take the config values in graph mode)",
"I think Longformer has an inherent design problem with TF serving. The variable `is_global_attn` is decided by the user at execution time and depends on the **values** (not just the shape) of `global_attention_mask`. `is_global_attn` is not a boolean to indicate whether the user wants to output the `attentions`, but whether the model will make use of `global_attention`.\r\n\r\nIf TF serving only works when `is_global_attn` has to be known before execution time, then I guess the best option is to add a `config.use_global_attn_tf` that would default to `False`. Could we then add an assert statement that `is_global_attn == config.use_global_attn_tf` with a nice error message saying the in TF serving `config.use_global_attn_tf` has to be set according to the use case? \r\n\r\nFor some more information on the logic, see https://huggingface.co/transformers/model_doc/longformer.html#longformer-self-attention\r\n",
"Regarding the 1. case: `input_ids`, `position_ids` and `input_embeds` can only be `None` if they have been `None` before entering the function. I don't fully understand your proposed solution, but I think this would be easy to discuss in a PR.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | CONTRIBUTOR | null | TF Longformer has the following issues that keep it from being 100% graph compilation/execution compliant. I succeeded in fixing most of them, but two still remain:
1. The first issue starts at line [1762](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_tf_longformer.py#L1762). The test that checks whether the inputs need to be padded prevents the graph from being compiled, because `input_ids`, `position_ids` and `input_embeds` can be `None` at the end of the main branch.
As a solution, I propose moving the padding process (lines 1769 to 1786) outside the `if`: when `padding_len == 0`, the calls to `tf.pad(...)` and `tf.concat(...)` have no effect on the inputs.
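For illustration, a minimal sketch (a simplified stand-in, not the actual library code) showing that zero-length padding is a no-op, so the `if` is not needed:
```python
import tensorflow as tf

def pad_to_window_size(input_ids, attention_mask, padding_len, pad_token_id=0):
    # With padding_len == 0 these calls leave the tensors untouched, so no conditional is needed.
    input_ids = tf.pad(input_ids, [[0, 0], [0, padding_len]], constant_values=pad_token_id)
    attention_mask = tf.pad(attention_mask, [[0, 0], [0, padding_len]], constant_values=0)
    return input_ids, attention_mask

ids = tf.constant([[5, 6, 7]])
mask = tf.constant([[1, 1, 1]])
padded_ids, padded_mask = pad_to_window_size(ids, mask, padding_len=0)
assert bool(tf.reduce_all(padded_ids == ids))  # zero-length padding leaves the input unchanged
```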
2. The second issue is at line [1527](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_tf_longformer.py#L1527). Here `all_global_attentions` can be either a tuple or `None` within the same execution, because `is_global_attn` is not defined globally but only during execution.
I don't know how to solve this one.
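One possible direction (later suggested in the comments on this issue) is to fix the choice before tracing via a config flag and assert it at run time; the `use_global_attn_tf` flag sketched below is hypothetical and does not exist in the library:
```python
import tensorflow as tf

def check_global_attn(global_attention_mask, use_global_attn_tf):
    # Decide global attention from the mask values, then verify it matches the traced choice.
    is_global_attn = tf.math.reduce_any(tf.math.greater(global_attention_mask, 0))
    tf.debugging.assert_equal(
        is_global_attn,
        tf.constant(use_global_attn_tf),
        message="config.use_global_attn_tf must match the global_attention_mask passed at run time.",
    )
    return is_global_attn

mask = tf.constant([[0, 0, 1, 0]])
print(check_global_attn(mask, use_global_attn_tf=True))  # tf.Tensor(True, shape=(), dtype=bool)
```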
As a first test you can run:
```
from transformers import TFLongformerModel
model = TFLongformerModel.from_pretrained("lysandre/tiny-longformer-random", output_attentions=True, output_hidden_states=True)
model.save("path")
```
Ping @patrickvonplaten the Longformer expert :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9333/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9332/comments | https://api.github.com/repos/huggingface/transformers/issues/9332/events | https://github.com/huggingface/transformers/issues/9332 | 775,394,791 | MDU6SXNzdWU3NzUzOTQ3OTE= | 9,332 | block sparse bert | {
"login": "pingpiang2019",
"id": 55866494,
"node_id": "MDQ6VXNlcjU1ODY2NDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/55866494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pingpiang2019",
"html_url": "https://github.com/pingpiang2019",
"followers_url": "https://api.github.com/users/pingpiang2019/followers",
"following_url": "https://api.github.com/users/pingpiang2019/following{/other_user}",
"gists_url": "https://api.github.com/users/pingpiang2019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pingpiang2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pingpiang2019/subscriptions",
"organizations_url": "https://api.github.com/users/pingpiang2019/orgs",
"repos_url": "https://api.github.com/users/pingpiang2019/repos",
"events_url": "https://api.github.com/users/pingpiang2019/events{/privacy}",
"received_events_url": "https://api.github.com/users/pingpiang2019/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I can produce your error. @madlag running the following code:\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\nqa_pipeline = pipeline(\r\n \"question-answering\",\r\n model=\"madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1\",\r\n tokenizer=\"madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1\"\r\n)\r\n\r\npredictions = qa_pipeline({\r\n 'context': \"Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.\",\r\n 'question': \"Who is Frederic Chopin?\",\r\n})\r\n\r\nprint(predictions)\r\n```\r\n\r\nresults in the above error. Any ideas on how to fix it?",
"It looks like there is a bug with the \"ampere optimized\" models I uploaded, thank you for your feedback, I will check what is happening.\r\nRight now I would advise you to use the non ampere ones (like madlag/bert-base-uncased-squad1.1-block-sparse-0.13-v1 ), the ampere version is not really good at this time.\r\nI am working on this, there should be some new, better and faster models soon, non-ampere optimized ones, then ampere optimized a bit latter.\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,609 | 1,614 | 1,614 | NONE | null | I got the following error while running the example usage from https://huggingface.co/madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1
Do I need a specific torch or transformers setup? Thanks in advance!
Some weights of BertModel were not initialized from the model checkpoint at madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1 and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "test.py", line 6, in <module>
tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.09-ampere-v1"
File "/workspace/transformers/src/transformers/pipelines.py", line 3231, in pipeline
framework = framework or get_framework(model)
File "/workspace/transformers/src/transformers/pipelines.py", line 107, in get_framework
model = AutoModel.from_pretrained(model, revision=revision)
File "/workspace/transformers/src/transformers/models/auto/modeling_auto.py", line 698, in from_pretrained
pretrained_model_name_or_path, *model_args, config=config, **kwargs
File "/workspace/transformers/src/transformers/modeling_utils.py", line 1156, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)
RuntimeError: Error(s) in loading state_dict for BertModel:
size mismatch for bert.encoder.layer.0.attention.self.query.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([256, 768]).
size mismatch for bert.encoder.layer.0.attention.self.query.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for bert.encoder.layer.0.attention.self.key.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([256, 768]).
size mismatch for bert.encoder.layer.0.attention.self.key.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for bert.encoder.layer.0.attention.self.value.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([256, 768]). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9331/comments | https://api.github.com/repos/huggingface/transformers/issues/9331/events | https://github.com/huggingface/transformers/pull/9331 | 775,390,487 | MDExOlB1bGxSZXF1ZXN0NTQ2MDg1NzUx | 9,331 | [WIP] Temp work on pipelines. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,609 | 1,614 | 1,614 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9331/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9331",
"html_url": "https://github.com/huggingface/transformers/pull/9331",
"diff_url": "https://github.com/huggingface/transformers/pull/9331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9331.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9330/comments | https://api.github.com/repos/huggingface/transformers/issues/9330/events | https://github.com/huggingface/transformers/issues/9330 | 775,374,598 | MDU6SXNzdWU3NzUzNzQ1OTg= | 9,330 | Fail to reload tokenizer from save_pretrained method | {
"login": "jc-hou",
"id": 30210529,
"node_id": "MDQ6VXNlcjMwMjEwNTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jc-hou",
"html_url": "https://github.com/jc-hou",
"followers_url": "https://api.github.com/users/jc-hou/followers",
"following_url": "https://api.github.com/users/jc-hou/following{/other_user}",
"gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions",
"organizations_url": "https://api.github.com/users/jc-hou/orgs",
"repos_url": "https://api.github.com/users/jc-hou/repos",
"events_url": "https://api.github.com/users/jc-hou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jc-hou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @jc-hou,\r\n\r\nThe `Auto*` classes require the `config.json` (which is saved when you save the model) file to find the correct model/tokenizer class for loading the model/tokenizer. To directly load the tokenizer without the model use the specific tokenizer class, in this case, `BertTokenizer`.",
"Hi, thanks. I understand. "
] | 1,609 | 1,609 | 1,609 | NONE | null | Hi,
To reproduce:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
tokenizer.save_pretrained(".")
tokenizer = AutoTokenizer.from_pretrained(".")
```
with error msg:
```
file ./config.json not found
Traceback (most recent call last):
File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/configuration_utils.py", line 389, in get_config_dict
local_files_only=local_files_only,
File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/file_utils.py", line 1015, in cached_path
raise EnvironmentError("file {} not found".format(url_or_filename))
OSError: file ./config.json not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/models/auto/tokenization_auto.py", line 337, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/models/auto/configuration_auto.py", line 341, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/data/stars/user/jhou/Test/pytorch_test/huggingface/jc-hou_fork/transformers/src/transformers/configuration_utils.py", line 401, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for '.'. Make sure that:
- '.' is a correct model identifier listed on 'https://huggingface.co/models'
- or '.' is the correct path to a directory containing a config.json file
```
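For reference, a minimal sketch of the workaround suggested in the comments — load with the concrete tokenizer class, which does not require a `config.json`:
```python
from transformers import AutoTokenizer, BertTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained(".")
# BertTokenizer only needs the saved tokenizer files, not the model's config.json:
tokenizer = BertTokenizer.from_pretrained(".")
```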
Thanks.
transformers:4.1.0
tokenizers: @mfuntowicz
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9330/timeline | completed | null | null |