url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/6421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6421/comments | https://api.github.com/repos/huggingface/transformers/issues/6421/events | https://github.com/huggingface/transformers/issues/6421 | 677,093,083 | MDU6SXNzdWU2NzcwOTMwODM= | 6,421 | test_run_glue_with_pabee failing | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer Thanks for the issue. I noticed that this test fails sometimes. Do you have any idea how long this problem has been going on? Was it broken from the beginning, or did it start recently?",
"Started breaking within the last few days. It is breaking fairly consistently at this point.",
"It's just flaky, I guess; for example, it's not broken on master right now.",
"But it's not really a flaky \"type of error\". It's hitting IndexError on an embedding lookup.\r\n```\r\nIndexError: index out of range in self\r\n```\r\n",
"It also fails with the assert: `self.assertGreaterEqual(value, 0.75)` in my case (three times out of four right now)."
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | examples/bert-loses-patience/test_run_glue_with_pabee.py::PabeeTests::test_run_glue
https://app.circleci.com/pipelines/github/huggingface/transformers/10373/workflows/0c9f2e61-2732-4857-84f0-71b59ddf10a9/jobs/71646
@JetRunner | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6421/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6420/comments | https://api.github.com/repos/huggingface/transformers/issues/6420/events | https://github.com/huggingface/transformers/issues/6420 | 677,077,872 | MDU6SXNzdWU2NzcwNzc4NzI= | 6,420 | Experiment: ROUGE impact of using pegasus length-penalty implementation | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I tried this a bit and did not get any improvements. Meanwhile, our beam search is getting similar scores to pegasus in #6844 , so I am less motivated to push further.\r\nBranch with maybe correct beam search implem: https://github.com/sshleifer/transformers_fork/tree/peg-beam",
"> \r\n> \r\n> I tried this a bit and did not get any improvements. Meanwhile, our beam search is getting similar scores to pegasus in #6844 , so I am less motivated to push further.\r\n> Branch with maybe correct beam search implem: https://github.com/sshleifer/transformers_fork/tree/peg-beam\r\n\r\nIf it is better how can we install this? Can we do this with pip? Thanks. "
] | 1,597 | 1,632 | 1,602 | CONTRIBUTOR | null | code is under `length_normalization` in the pegasus [tf repo](https://github.com/google-research/pegasus) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6420/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6419/comments | https://api.github.com/repos/huggingface/transformers/issues/6419/events | https://github.com/huggingface/transformers/issues/6419 | 677,077,536 | MDU6SXNzdWU2NzcwNzc1MzY= | 6,419 | Add pegasus model cards | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6419/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/6418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6418/comments | https://api.github.com/repos/huggingface/transformers/issues/6418/events | https://github.com/huggingface/transformers/issues/6418 | 677,037,754 | MDU6SXNzdWU2NzcwMzc3NTQ= | 6,418 | All learning rates are 0 warning | {
"login": "sajastu",
"id": 10419055,
"node_id": "MDQ6VXNlcjEwNDE5MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajastu",
"html_url": "https://github.com/sajastu",
"followers_url": "https://api.github.com/users/sajastu/followers",
"following_url": "https://api.github.com/users/sajastu/following{/other_user}",
"gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajastu/subscriptions",
"organizations_url": "https://api.github.com/users/sajastu/orgs",
"repos_url": "https://api.github.com/users/sajastu/repos",
"events_url": "https://api.github.com/users/sajastu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajastu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #5338 , ignore it.\r\nSorry for the confusion.",
"It makes sense what you answered at #5338, thanks for the clarification. I'm closing this issue!"
] | 1,597 | 1,597 | 1,597 | NONE | null | - `transformers` version: 3.0.2
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.4 (GPU)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sshleifer
## Information
Model I am using (Bert, XLNet ...): BART
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Running the example script in https://github.com/huggingface/transformers/tree/master/examples/seq2seq (finetune_bart_tiny.sh), I'm getting this warning at the beginning of training. However, training continues after that.
Warning:
```
finetune.py:245: UserWarning: All learning rates are 0
warnings.warn("All learning rates are 0")
Epoch 1: 0%|
/home/sajad/anaconda3/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
warnings.warn("To get the last learning rate computed by the scheduler, "
```
## Expected behavior
While the training seemingly goes well, I'm wondering whether this warning could cause problems and deteriorate the model's final performance. As an add-on, I've also incorporated gradient checkpointing into some computational blocks of BART (modifying the `modelling_bart.py` script a bit). But even without incorporating this module, I'm still getting this warning message. Any thoughts on how to solve it?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6418/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6417/comments | https://api.github.com/repos/huggingface/transformers/issues/6417/events | https://github.com/huggingface/transformers/issues/6417 | 677,033,681 | MDU6SXNzdWU2NzcwMzM2ODE= | 6,417 | how to fine tune t5 model for summarization task using tensorflow2? | {
"login": "banunitte",
"id": 6847024,
"node_id": "MDQ6VXNlcjY4NDcwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6847024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/banunitte",
"html_url": "https://github.com/banunitte",
"followers_url": "https://api.github.com/users/banunitte/followers",
"following_url": "https://api.github.com/users/banunitte/following{/other_user}",
"gists_url": "https://api.github.com/users/banunitte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/banunitte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/banunitte/subscriptions",
"organizations_url": "https://api.github.com/users/banunitte/orgs",
"repos_url": "https://api.github.com/users/banunitte/repos",
"events_url": "https://api.github.com/users/banunitte/events{/privacy}",
"received_events_url": "https://api.github.com/users/banunitte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"please see: https://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641."
] | 1,597 | 1,597 | 1,597 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6417/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6416/comments | https://api.github.com/repos/huggingface/transformers/issues/6416/events | https://github.com/huggingface/transformers/issues/6416 | 677,014,260 | MDU6SXNzdWU2NzcwMTQyNjA= | 6,416 | Docs: Separate documentation for mbart | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer I think adding new class should make thing more clear, should I go ahead with it ? Will also need to modify tests a little bit I guess",
"yes, thanks!\r\n\r\nkey tests not to break\r\n```bash\r\nRUN_SLOW=1 pytest tests/test_modeling_mbart.py\r\n```",
"You can also make `tokenization_mbart.py`"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | Currently all mbart documentation is stuffed into docs/
https://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/docs/source/model_doc/bart.rst#L42
mbart should have its own `model_doc/mbart.rst` and an entry in `pretrained_models.rst`.
Optionally you can also create a new `src/transformers/modeling_mbart.py` with roughly these contents:
```python
from .modeling_bart import BartForConditionalGeneration
from .configuration_bart import MbartConfig
class MBartForConditionalGeneration(BartForConditionalGeneration):
    config_class = MbartConfig
    # this model fully inherits its implementation from bart
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6416/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6415/comments | https://api.github.com/repos/huggingface/transformers/issues/6415/events | https://github.com/huggingface/transformers/pull/6415 | 677,001,336 | MDExOlB1bGxSZXF1ZXN0NDY2MjAzMDA2 | 6,415 | [EncoderDecoder] Add Cross Attention for GPT2 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=h1) Report\n> Merging [#6415](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc820476a5c72060f810f825298befd5ec85da4d&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `96.61%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6415 +/- ##\n==========================================\n- Coverage 79.98% 79.98% -0.01% \n==========================================\n Files 153 153 \n Lines 28005 28039 +34 \n==========================================\n+ Hits 22401 22427 +26 \n- Misses 5604 5612 +8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.66% <87.50%> (+0.64%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.68% <97.87%> (+0.71%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) 
| `86.71% <0.00%> (+7.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `87.73% <0.00%> (+63.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=footer). Last update [bc82047...56094e2](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | MEMBER | null | This PR implements **Bert2GPT2** by adding cross-attention layers to GPT2.
Note that currently it is not possible to speed up decoder generation with the encoder-decoder framework (by using GPT2's past tensors) since it has to be implemented for all models that are compatible with the encoder/decoder framework (Bert, Roberta) before it can be used within the framework.
All GPT2 `RUN_SLOW` tests are verified to pass.
**Future PRs TODO**:
- [ ] Verify that Bert2GPT2 works by training on CNN Daily Mail summarization
- [ ] Add smart caching to Bert and add it to the encoder-decoder framework
- [ ] Update encoder-decoder docs
- [ ] Add a notebook explaining how to use encoder-decoder models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6415/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6415/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6415",
"html_url": "https://github.com/huggingface/transformers/pull/6415",
"diff_url": "https://github.com/huggingface/transformers/pull/6415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6415.patch",
"merged_at": 1597391010000
} |
https://api.github.com/repos/huggingface/transformers/issues/6414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6414/comments | https://api.github.com/repos/huggingface/transformers/issues/6414/events | https://github.com/huggingface/transformers/issues/6414 | 676,926,214 | MDU6SXNzdWU2NzY5MjYyMTQ= | 6,414 | TypeError: forward() got an unexpected keyword argument 'labels' | {
"login": "vgoklani",
"id": 180487,
"node_id": "MDQ6VXNlcjE4MDQ4Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/180487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vgoklani",
"html_url": "https://github.com/vgoklani",
"followers_url": "https://api.github.com/users/vgoklani/followers",
"following_url": "https://api.github.com/users/vgoklani/following{/other_user}",
"gists_url": "https://api.github.com/users/vgoklani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vgoklani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vgoklani/subscriptions",
"organizations_url": "https://api.github.com/users/vgoklani/orgs",
"repos_url": "https://api.github.com/users/vgoklani/repos",
"events_url": "https://api.github.com/users/vgoklani/events{/privacy}",
"received_events_url": "https://api.github.com/users/vgoklani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please copy-paste the entire stack trace, just the error message is not enough to know what's going on :-)",
"Hi @sgugger thanks for the reply! \r\n\r\nPlease see below for the full stack-trace\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-2-2942c2ba4004> in <module>\r\n 42 )\r\n 43\r\n---> 44 trainer.train()\r\n 45 trainer.save_model(output_directory)\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path)\r\n 497 continue\r\n 498\r\n--> 499 tr_loss += self._training_step(model, inputs, optimizer)\r\n 500\r\n 501 if (step + 1) % self.args.gradient_accumulation_steps == 0 or (\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer)\r\n 620 inputs[\"mems\"] = self._past\r\n 621\r\n--> 622 outputs = model(**inputs)\r\n 623 loss = outputs[0] # model outputs are always tuple in transformers (see doc)\r\n 624\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 720 result = self._slow_forward(*input, **kwargs)\r\n 721 else:\r\n--> 722 result = self.forward(*input, **kwargs)\r\n 723 for hook in itertools.chain(\r\n 724 _global_forward_hooks.values(),\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)\r\n 153 return self.module(*inputs[0], **kwargs[0])\r\n 154 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])\r\n--> 155 outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n 156 return self.gather(outputs, self.output_device)\r\n 157\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs)\r\n 163\r\n 164 def parallel_apply(self, replicas, inputs, kwargs):\r\n--> 165 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n 166\r\n 167 def gather(self, outputs, 
output_device):\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices)\r\n 83 output = results[i]\r\n 84 if isinstance(output, ExceptionWrapper):\r\n---> 85 output.reraise()\r\n 86 outputs.append(output)\r\n 87 return outputs\r\n\r\n/opt/conda/lib/python3.7/site-packages/torch/_utils.py in reraise(self)\r\n 393 # (https://bugs.python.org/issue2651), so we work around it.\r\n 394 msg = KeyErrorMessage(msg)\r\n--> 395 raise self.exc_type(msg)\r\n\r\nTypeError: Caught TypeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'labels'\r\n\r\n```",
"Oh, I misread. `RobertaModel` is not something you can use directly with `Trainer` as it doesn't have any objective (it's the base model without head). You should pick a model with head relevant to your task.",
"haha, i feel stupid now :) Thanks!",
"\r\n\r\n\r\n> Oh, I misread. `RobertaModel` is not something you can use directly with `Trainer` as it doesn't have any objective (it's the base model without head). You should pick a model with head relevant to your task.\r\n\r\n@sgugger can we finetune models with specific task only (like RobertaForMasekdLm etc) ? is there a way we can pre-train RobertaModel on our data then go for specific tasks?",
"I have similar a question as @sanjay23singh. I want to train a no-head RobertaModel on my corpus, then fine tuned using RobertaForSentenceClassification? (as below)\r\n\r\n```\r\nmodel = RobertaModel(config=config)\r\ntraining_args = ..\r\ntrainer =...\r\ntrainer.train()\r\ntrainer.save_model('myRoberta')\r\n\r\n#fine tune\r\nsentimodel = RobertaForSequenceClassification.from_pretrained(\"./myRoberta\")\r\n```\r\n\r\nMy ultimate goal is training my own corpus with masked only, then do a classification. ",
"> Oh, I misread. `RobertaModel` is not something you can use directly with `Trainer` as it doesn't have any objective (it's the base model without head). You should pick a model with head relevant to your task.\r\n\r\nSorry, could you explain this in more details here. Thank you.",
"I'm absolutely confused by this too, it looks to me like there's a big missing piece in the doc.",
"its easy,\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./My_train_BERT\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=5,\r\n per_gpu_train_batch_size=64,\r\n save_steps=10_000,\r\n save_total_limit=2,\r\n prediction_loss_only=True,\r\n **label_smoothing_factor=0.1** ##add it is ok\r\n)\r\n"
] | 1,597 | 1,652 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-53-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: Distributed
Hey there,
I've run into this issue and I'm not sure how to fix it:
TypeError: forward() got an unexpected keyword argument 'labels'
I'm running transformers v3.0.2 installed via pip
Please see my code below. There is nothing fancy going on, I'm just trying to train RobertaMLM for a few more epochs on a different dataset.
```python
import os
import argparse
import datetime
from torch.utils.tensorboard import SummaryWriter
from transformers import RobertaModel, RobertaConfig, RobertaTokenizerFast, LineByLineTextDataset, DataCollatorForLanguageModeling, Trainer, TrainingArguments
from configs import model_directory, tensorboard_directory
from logger import get_logger
log = get_logger(__name__)
args = argparse.Namespace(
seed=42,
model_id="Roberta2",
pretrained_model_name_or_path="roberta-base",
vocab_file="/data/nlp/roberta_vocabulary/roberta-base-vocab.json",
merges_file="/data/nlp/roberta_vocabulary/roberta-base-merges.txt",
filename="/data/nlp/trc2.txt",
block_size=2**7,
epochs=25,
)
output_directory = os.path.join(model_directory, args.model_id)
os.makedirs(output_directory, exist_ok=True)
os.environ["TOKENIZERS_PARALLELISM"] = "false"
def build_model():
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_model_name_or_path=args.pretrained_model_name_or_path, lowercase=True, add_prefix_space=True, max_len=512)
config = RobertaConfig.from_pretrained(args.pretrained_model_name_or_path)
config.output_hidden_states = False
model = RobertaModel.from_pretrained(pretrained_model_name_or_path=args.pretrained_model_name_or_path, config=config, cache_dir=output_directory)
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path=args.filename,
block_size=args.block_size,
)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args = TrainingArguments(
seed=args.seed,
output_dir=output_directory,
overwrite_output_dir=True,
num_train_epochs=args.epochs,
per_device_train_batch_size=128,
save_steps=10_000,
# save_total_limit=2,
fp16=True,
fp16_opt_level="O1"
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
prediction_loss_only=True,
)
trainer.train()
trainer.save_model(output_directory)
```
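For reference, here is a dependency-free sketch of the failure mode (plain-Python stand-ins, not the real `transformers` classes): `Trainer` calls `model(**inputs)` with a `"labels"` key produced by the data collator, so a head-less model whose `forward` takes no `labels` parameter raises exactly this `TypeError`, while a model with an LM head accepts it and returns a loss:

```python
class HeadlessModel:
    """Stand-in for a head-less base model: forward() takes no `labels`."""
    def __call__(self, input_ids):
        return (input_ids,)

class ModelWithLMHead:
    """Stand-in for a model with an LM head: accepts `labels`, returns a loss."""
    def __call__(self, input_ids, labels=None):
        loss = 0.0 if labels is not None else None
        return (loss, input_ids)

# Trainer builds the batch from the data collator, including "labels",
# then calls model(**inputs):
inputs = {"input_ids": [[0, 1, 2]], "labels": [[0, 1, 2]]}

err_type = None
try:
    HeadlessModel()(**inputs)       # mirrors Trainer's model(**inputs)
except TypeError as err:
    err_type = type(err).__name__   # "... unexpected keyword argument 'labels'"

loss, logits = ModelWithLMHead()(**inputs)
```

So a model with a head matching the objective — for example `RobertaForMaskedLM` for MLM pretraining — should be used in place of the head-less `RobertaModel`.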
tag: @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6414/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6413/comments | https://api.github.com/repos/huggingface/transformers/issues/6413/events | https://github.com/huggingface/transformers/pull/6413 | 676,898,466 | MDExOlB1bGxSZXF1ZXN0NDY2MTIwODQy | 6,413 | Create README.md | {
"login": "abedkhooli",
"id": 11407254,
"node_id": "MDQ6VXNlcjExNDA3MjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11407254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abedkhooli",
"html_url": "https://github.com/abedkhooli",
"followers_url": "https://api.github.com/users/abedkhooli/followers",
"following_url": "https://api.github.com/users/abedkhooli/following{/other_user}",
"gists_url": "https://api.github.com/users/abedkhooli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abedkhooli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abedkhooli/subscriptions",
"organizations_url": "https://api.github.com/users/abedkhooli/orgs",
"repos_url": "https://api.github.com/users/abedkhooli/repos",
"events_url": "https://api.github.com/users/abedkhooli/events{/privacy}",
"received_events_url": "https://api.github.com/users/abedkhooli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Model card for https://huggingface.co/akhooli/gpt2-small-arabic | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6413",
"html_url": "https://github.com/huggingface/transformers/pull/6413",
"diff_url": "https://github.com/huggingface/transformers/pull/6413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6413.patch",
"merged_at": 1597156532000
} |
https://api.github.com/repos/huggingface/transformers/issues/6412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6412/comments | https://api.github.com/repos/huggingface/transformers/issues/6412/events | https://github.com/huggingface/transformers/pull/6412 | 676,865,371 | MDExOlB1bGxSZXF1ZXN0NDY2MDk1MTI0 | 6,412 | Create model card T5-base fine-tuned on event2Mind for Intent Prediction | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | More funny examples: https://twitter.com/mrm8488/status/1292952742395367424 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6412/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6412",
"html_url": "https://github.com/huggingface/transformers/pull/6412",
"diff_url": "https://github.com/huggingface/transformers/pull/6412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6412.patch",
"merged_at": 1597185327000
} |
https://api.github.com/repos/huggingface/transformers/issues/6411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6411/comments | https://api.github.com/repos/huggingface/transformers/issues/6411/events | https://github.com/huggingface/transformers/pull/6411 | 676,844,722 | MDExOlB1bGxSZXF1ZXN0NDY2MDc5MTI0 | 6,411 | [EncoderDecoder] Add encoder-decoder for roberta/ vanilla longformer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=h1) Report\n> Merging [#6411](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/404782912ad1324592c2d5bb2e88d1ee99a040b6&el=desc) will **decrease** coverage by `1.93%`.\n> The diff coverage is `92.85%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6411 +/- ##\n==========================================\n- Coverage 79.77% 77.84% -1.94% \n==========================================\n Files 150 150 \n Lines 27789 27826 +37 \n==========================================\n- Hits 22170 21660 -510 \n- Misses 5619 6166 +547 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.25% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `63.95% <ø> (-14.54%)` | :arrow_down: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.02% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <50.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.22% <50.00%> (-0.36%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.98% <97.36%> (+0.20%)` | :arrow_up: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `28.94% <0.00%> (-67.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=footer). Last update [4047829...ed8414a](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> LGTM. Have people had good ROUGE with the compose two pretrained glue models and finetune for summarization approach?\r\n\r\nHmm, I think it's very new so not sure if many people have tried out the framework yet. @patil-suraj - do you know if people work a lot with EncoderDecoder by chance? ",
"> do you know if people work a lot with EncoderDecoder by chance?\r\n\r\nSeems like it, seen quite a few issues and questions (on forum as well) regarding EncoderDecoder, but no one has reported any good results yet",
"Looks great. Thanks, @patrickvonplaten. \r\n\r\n> LGTM. Have people had good ROUGE with the compose two pretrained glue models and finetune for summarization approach?\r\n\r\n@sshleifer, was thinking about the same thing. My guess is that numbers won't be great because cross-attention is randomly initialized? \r\n",
"> Looks great. Thanks, @patrickvonplaten.\r\n> \r\n> > LGTM. Have people had good ROUGE with the compose two pretrained glue models and finetune for summarization approach?\r\n> \r\n> @sshleifer, was thinking about the same thing. My guess is that numbers won't be great because cross-attention is randomly initialized?\r\n\r\nBtw, this paper does some great analysis on reusing checkpoints for Seq2Seq models: https://arxiv.org/pdf/1907.12461.pdf"
] | 1,597 | 1,597 | 1,597 | MEMBER | null | This PR adds Roberta to the Encoder Decoder framework. Thus, it automatically makes it possible to use both `Roberta2Roberta` models and `Longformer2Roberta` model:
```python
from transformers import EncoderDecoderModel
import torch

model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")
input_ids = torch.tensor([10 * [0]])
model(input_ids=input_ids, decoder_input_ids=input_ids)
```
and
```python
from transformers import EncoderDecoderModel
import torch

model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
input_ids = torch.tensor([10 * [0]])
model(input_ids=input_ids, decoder_input_ids=input_ids)
```
Also pinging @ibeltagy and @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6411/reactions",
"total_count": 3,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6411/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6411",
"html_url": "https://github.com/huggingface/transformers/pull/6411",
"diff_url": "https://github.com/huggingface/transformers/pull/6411.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6411.patch",
"merged_at": 1597249410000
} |
https://api.github.com/repos/huggingface/transformers/issues/6410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6410/comments | https://api.github.com/repos/huggingface/transformers/issues/6410/events | https://github.com/huggingface/transformers/issues/6410 | 676,712,630 | MDU6SXNzdWU2NzY3MTI2MzA= | 6,410 | Cannot unzip the XNLI-MT 1.0 zip file. | {
"login": "minstar",
"id": 24719775,
"node_id": "MDQ6VXNlcjI0NzE5Nzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/24719775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minstar",
"html_url": "https://github.com/minstar",
"followers_url": "https://api.github.com/users/minstar/followers",
"following_url": "https://api.github.com/users/minstar/following{/other_user}",
"gists_url": "https://api.github.com/users/minstar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minstar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minstar/subscriptions",
"organizations_url": "https://api.github.com/users/minstar/orgs",
"repos_url": "https://api.github.com/users/minstar/repos",
"events_url": "https://api.github.com/users/minstar/events{/privacy}",
"received_events_url": "https://api.github.com/users/minstar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am able to unzip the archive form this link: https://dl.fbaipublicfiles.com/XNLI/XNLI-MT-1.0.zip"
] | 1,597 | 1,623 | 1,602 | NONE | null | # ❓ Questions & Help
Has anyone succeeded in unzipping the XNLI-MT 1.0 zip file?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6410/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6409/comments | https://api.github.com/repos/huggingface/transformers/issues/6409/events | https://github.com/huggingface/transformers/issues/6409 | 676,646,545 | MDU6SXNzdWU2NzY2NDY1NDU= | 6,409 | TF2 TPU slow? | {
"login": "volker42maru",
"id": 51976664,
"node_id": "MDQ6VXNlcjUxOTc2NjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/51976664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/volker42maru",
"html_url": "https://github.com/volker42maru",
"followers_url": "https://api.github.com/users/volker42maru/followers",
"following_url": "https://api.github.com/users/volker42maru/following{/other_user}",
"gists_url": "https://api.github.com/users/volker42maru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/volker42maru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/volker42maru/subscriptions",
"organizations_url": "https://api.github.com/users/volker42maru/orgs",
"repos_url": "https://api.github.com/users/volker42maru/repos",
"events_url": "https://api.github.com/users/volker42maru/events{/privacy}",
"received_events_url": "https://api.github.com/users/volker42maru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nTraining on TPU is slow indeed because the training loop is not optimized for TPU. For two reasons:\r\n1) we don't want to have a version that is specifically optimized for each device which will create to much confusion for maintenance.\r\n2) having a training loop optimized for TPU will limit the possibility of logging, the logging will be at every epoch instead of every X steps. Which is a not wanted behavior.\r\n\r\nNevertheless if you have a solution that respect these two points, I will be happy to review your PR :)",
"You are right, this will introduce some behavior changes because logging/saving is not possible while the TPU is processing several batches.\r\n\r\nI actually played around with this a bit and introduced a variable `steps_per_loop` that I set to `200` when using TPU. I then used an iterator for the dataset to only do batch feeding/optimization during this period. However, this only improved the training speed marginally, so I don't think it's worth a PR.\r\n\r\nWhat gives me a bigger speedup (around 80%) is actually using 'keras compile/fit', where we can set `experimental_steps_per_execution` to different values depending on the device, for the training loop. I would be curious though if we could substitute part of the trainer or even the whole training by a \"keras style\" implementation. However, this would change the current architecture quite significantly and I am not sure it will retain the generic properties of the TFTrainer class.\r\n\r\nMaybe it becomes clearer if I just show you how I implemented an option to use keras for the training loop in the TFTrainer (pretty hacky of course): https://gist.github.com/volker42maru/20641970599c27dc9503161f52aa67c9#file-tf_trainer_train-py-L85-L113\r\n\r\n\r\nI will close this for now, because it's probably more of a long term prospect."
] | 1,597 | 1,598 | 1,598 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: ubuntu 18.04
- Python version: 3.6
- Tensorflow version (GPU?): TF 2.3.0
### Who can help
@jplu
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] task: MLM
## To reproduce
I am using the TFTrainer from `transformers` for MLM pretraining. However, it seems that even for TPU training each batch is fed separately to the TPU, while it's usually more common to feed a bunch of batches to the TPU for efficiency (see https://github.com/tensorflow/models/blob/master/official/nlp/bert/model_training_utils.py#L226).
I am not sure that's the only problem, but MLM pretraining BERT is around 3x slower on TPU with the TFTrainer compared to the official implementation (https://github.com/google-research/bert).
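The multi-step pattern I mean (from the linked `model_training_utils.py`) looks roughly like this (pseudocode; `strategy`, `train_step`, and `steps_per_loop` are illustrative names, not actual `TFTrainer` attributes):

```
# one tf.function call drives `steps_per_loop` batches on the TPU,
# instead of returning to Python after every single batch
@tf.function
def train_loop(iterator, steps_per_loop):
    for _ in tf.range(steps_per_loop):
        strategy.run(train_step, args=(next(iterator),))
```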
For better TPU utilization, we probably need something like what is done here:
https://github.com/tensorflow/models/blob/master/official/nlp/bert/model_training_utils.py#L345-L361 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6409/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6408/comments | https://api.github.com/repos/huggingface/transformers/issues/6408/events | https://github.com/huggingface/transformers/issues/6408 | 676,642,891 | MDU6SXNzdWU2NzY2NDI4OTE= | 6,408 | i have used t5_base for abstractive summarization but it is not giving good results,Could you please give me solution for this | {
"login": "gopal354",
"id": 42486206,
"node_id": "MDQ6VXNlcjQyNDg2MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/42486206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gopal354",
"html_url": "https://github.com/gopal354",
"followers_url": "https://api.github.com/users/gopal354/followers",
"following_url": "https://api.github.com/users/gopal354/following{/other_user}",
"gists_url": "https://api.github.com/users/gopal354/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gopal354/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gopal354/subscriptions",
"organizations_url": "https://api.github.com/users/gopal354/orgs",
"repos_url": "https://api.github.com/users/gopal354/repos",
"events_url": "https://api.github.com/users/gopal354/events{/privacy}",
"received_events_url": "https://api.github.com/users/gopal354/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @gopal354 , this depends on lot of factors. What is the domain of your dataset ? There are many other summrization models available on the model hub trained on different datasets. You can try them as well. Or if you have a dataset, then you can further fine-tune these models on your domain. ",
"Would be nice if the detailed question is written in the description box rather than title and use the relevant issue topic (this should be Questions & Help and not Benchmarking transformers). This will help the team and contributors to act faster on the issue :) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,602 | 1,602 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6408/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6407/comments | https://api.github.com/repos/huggingface/transformers/issues/6407/events | https://github.com/huggingface/transformers/issues/6407 | 676,614,358 | MDU6SXNzdWU2NzY2MTQzNTg= | 6,407 | Slow Decoding Speed when using BertForLMModel | {
"login": "JamesHujy",
"id": 48405323,
"node_id": "MDQ6VXNlcjQ4NDA1MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/48405323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesHujy",
"html_url": "https://github.com/JamesHujy",
"followers_url": "https://api.github.com/users/JamesHujy/followers",
"following_url": "https://api.github.com/users/JamesHujy/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesHujy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesHujy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesHujy/subscriptions",
"organizations_url": "https://api.github.com/users/JamesHujy/orgs",
"repos_url": "https://api.github.com/users/JamesHujy/repos",
"events_url": "https://api.github.com/users/JamesHujy/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesHujy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yeah, exactly - we should / could add the layer cache for `BertLMHeadModel`. It's not trivial to do it, but feel free to give it a try. "
] | 1,597 | 1,597 | 1,597 | NONE | null | I set BertLMHeadModel as the decoder in my Seq2Seq model. It seems to work well in training, but when decoding, it decodes very slowly. I think the layer_past caching used in GPT2 and XLNet is not used in BertLMHeadModel, so many attention values are computed repetitively?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6407/timeline | completed | null | null |
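For context on issue 6407 above, the layer cache idea (reusing per-step key/value state instead of re-running attention over the whole prefix at every decoding step) can be sketched in plain Python. This is a toy illustration only — `ToyCachedDecoder`, `step_uncached`, and `step_cached` are invented names, not the actual `BertLMHeadModel` or GPT-2 API.

```python
# Toy sketch of decoder key/value caching. Without a cache, each generation
# step re-processes the entire prefix (quadratic total work); with a cache
# (the "layer_past" idea), each step only processes the new token.

class ToyCachedDecoder:
    def __init__(self):
        self.ops = 0  # counts per-token "attention computations" performed

    def step_uncached(self, prefix):
        # No cache: every step re-processes the whole prefix so far.
        self.ops += len(prefix)

    def step_cached(self, cache, token):
        # Cache: append the new token's state; only it is processed.
        cache.append(token)
        self.ops += 1

def generate(n_tokens, cached):
    dec = ToyCachedDecoder()
    cache, prefix = [], []
    for t in range(n_tokens):
        if cached:
            dec.step_cached(cache, t)
        else:
            prefix.append(t)
            dec.step_uncached(prefix)
    return dec.ops

# Generating 10 tokens: 1+2+...+10 = 55 ops uncached, 10 ops cached.
print(generate(10, cached=False))  # → 55
print(generate(10, cached=True))   # → 10
```

The gap grows quadratically with sequence length, which matches the "decodes very slowly" symptom reported in the issue.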
https://api.github.com/repos/huggingface/transformers/issues/6406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6406/comments | https://api.github.com/repos/huggingface/transformers/issues/6406/events | https://github.com/huggingface/transformers/issues/6406 | 676,582,655 | MDU6SXNzdWU2NzY1ODI2NTU= | 6,406 | RuntimeError: Error while creating shape using tf-xlm-roberta-large | {
"login": "msafi04",
"id": 27760169,
"node_id": "MDQ6VXNlcjI3NzYwMTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/27760169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msafi04",
"html_url": "https://github.com/msafi04",
"followers_url": "https://api.github.com/users/msafi04/followers",
"following_url": "https://api.github.com/users/msafi04/following{/other_user}",
"gists_url": "https://api.github.com/users/msafi04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msafi04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msafi04/subscriptions",
"organizations_url": "https://api.github.com/users/msafi04/orgs",
"repos_url": "https://api.github.com/users/msafi04/repos",
"events_url": "https://api.github.com/users/msafi04/events{/privacy}",
"received_events_url": "https://api.github.com/users/msafi04/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Facing the same issue on on GCP + TPU, same TF and TPU versions. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,607 | 1,607 | NONE | null | I get the following runtime error after the 2nd fold.
This is my model:

```python
maxlen = 50

with strategy.scope():
    # bert_encoder = TFBertModel.from_pretrained(model_name)
    base_model = TFAutoModel.from_pretrained(model_name)
    input_word_ids = tf.keras.Input(shape=(maxlen,), dtype=tf.int32, name="input_word_ids")
    input_mask = tf.keras.Input(shape=(maxlen,), dtype=tf.int32, name="input_mask")
    input_type_ids = tf.keras.Input(shape=(maxlen,), dtype=tf.int32, name="input_type_ids")
    embedding = base_model([input_word_ids, input_mask, input_type_ids])[0]
    output = tf.keras.layers.Dense(3, activation="softmax")(embedding[:, 0, :])
    model = tf.keras.Model(inputs=[input_word_ids, input_mask, input_type_ids], outputs=output)
    model.compile(tf.keras.optimizers.Adam(lr=1e-5), loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```
And the traceback below:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-27-a45ce90453f2> in <module>
22
23 K.clear_session()
---> 24 model = build_model(maxlen, model_name)
25 checkpoint = tf.keras.callbacks.ModelCheckpoint(
26 'XLMRoberta_fold-%i.h5'%fold, monitor = 'val_loss', verbose = 1, save_best_only = True,
<ipython-input-23-9faa2e5f1d9b> in build_model(maxlen, model_name)
2 with strategy.scope():
3 #bert_encoder = TFBertModel.from_pretrained(model_name)
----> 4 base_model = TFAutoModel.from_pretrained(model_name)
5 input_word_ids = tf.keras.Input(shape = (maxlen, ), dtype = tf.int32, name = "input_word_ids")
6 input_mask = tf.keras.Input(shape = (maxlen, ), dtype = tf.int32, name = "input_mask")
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
421 for config_class, model_class in TF_MODEL_MAPPING.items():
422 if isinstance(config, config_class):
--> 423 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
424 raise ValueError(
425 "Unrecognized configuration class {} for this kind of TFAutoModel: {}.\n"
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
482 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
483
--> 484 model(model.dummy_inputs, training=False) # build the network with dummy inputs
485
486 assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_roberta.py in call(self, inputs, **kwargs)
229 heads.
230 """
--> 231 outputs = self.roberta(inputs, **kwargs)
232 return outputs
233
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_bert.py in call(self, inputs, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, training)
604 # head_mask = tf.constant([0] * self.num_hidden_layers)
605
--> 606 embedding_output = self.embeddings([input_ids, position_ids, token_type_ids, inputs_embeds], training=training)
607 encoder_outputs = self.encoder(
608 [embedding_output, extended_attention_mask, head_mask, output_attentions, output_hidden_states],
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
962 # Eager execution on data tensors.
963 with backend.name_scope(self._name_scope()):
--> 964 self._maybe_build(inputs)
965 cast_inputs = self._maybe_cast_inputs(inputs)
966 with base_layer_utils.autocast_context_manager(
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _maybe_build(self, inputs)
2414 # operations.
2415 with tf_utils.maybe_init_scope(self):
-> 2416 self.build(input_shapes) # pylint:disable=not-callable
2417 # We must set also ensure that the layer is marked as built, and the build
2418 # shape is stored since user defined build functions may not be calling
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_bert.py in build(self, input_shape)
144 "weight",
145 shape=[self.vocab_size, self.hidden_size],
--> 146 initializer=get_initializer(self.initializer_range),
147 )
148 super().build(input_shape)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint, partitioner, use_resource, synchronization, aggregation, **kwargs)
575 synchronization=synchronization,
576 aggregation=aggregation,
--> 577 caching_device=caching_device)
578 if regularizer is not None:
579 # TODO(fchollet): in the future, this should be handled at the
/opt/conda/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _add_variable_with_custom_getter(self, name, shape, dtype, initializer, getter, overwrite, **kwargs_for_getter)
741 dtype=dtype,
742 initializer=initializer,
--> 743 **kwargs_for_getter)
744
745 # If we set an initializer and the variable processed it, tracking will not
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py in make_variable(name, shape, dtype, initializer, trainable, caching_device, validate_shape, constraint, use_resource, collections, synchronization, aggregation, partitioner)
139 synchronization=synchronization,
140 aggregation=aggregation,
--> 141 shape=variable_shape if variable_shape else None)
142
143
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
257 def __call__(cls, *args, **kwargs):
258 if cls is VariableV1:
--> 259 return cls._variable_v1_call(*args, **kwargs)
260 elif cls is Variable:
261 return cls._variable_v2_call(*args, **kwargs)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in _variable_v1_call(cls, initial_value, trainable, collections, validate_shape, caching_device, name, variable_def, dtype, expected_shape, import_scope, constraint, use_resource, synchronization, aggregation, shape)
218 synchronization=synchronization,
219 aggregation=aggregation,
--> 220 shape=shape)
221
222 def _variable_v2_call(cls,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in getter(**kwargs)
64
65 def getter(**kwargs):
---> 66 return captured_getter(captured_previous, **kwargs)
67
68 return getter
/opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py in creator_with_resource_vars(next_creator, **kwargs)
1765 kwargs["initial_value"] = kwargs["initial_value"].wrapped_value
1766
-> 1767 return self._create_variable(next_creator, **kwargs)
1768
1769 def distributed_getter(getter, *args, **kwargs):
/opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/tpu_strategy.py in _create_variable(self, next_creator, **kwargs)
670 tpu_values.TPUMirroredVariable,
671 tpu_values.TPUSyncOnReadVariable,
--> 672 **kwargs)
673
674 def _reduce_to(self, reduce_op, value, destinations, experimental_hints):
/opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/values.py in create_mirrored_variable(strategy, real_mirrored_creator, mirrored_cls, sync_on_read_cls, **kwargs)
692 # here.
693 with tape.stop_recording():
--> 694 value_list = real_mirrored_creator(**kwargs)
695 var_cls = sync_on_read_cls if is_sync_on_read else mirrored_cls
696 result = var_cls(strategy, value_list, aggregation)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/tpu_strategy.py in _real_mirrored_creator(**kwargs)
660
661 with context.device_policy(context.DEVICE_PLACEMENT_SILENT):
--> 662 v = next_creator(**kwargs)
663
664 assert not isinstance(v, tpu_values.TPUMirroredVariable)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in <lambda>(**kwargs)
196 shape=None):
197 """Call on Variable class. Useful to force the signature."""
--> 198 previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
199 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access
200 previous_getter = _make_getter(getter, previous_getter)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py in default_variable_creator(next_creator, **kwargs)
2596 synchronization=synchronization,
2597 aggregation=aggregation,
-> 2598 shape=shape)
2599 else:
2600 return variables.RefVariable(
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs)
261 return cls._variable_v2_call(*args, **kwargs)
262 else:
--> 263 return super(VariableMetaclass, cls).__call__(*args, **kwargs)
264
265
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape)
1432 aggregation=aggregation,
1433 shape=shape,
-> 1434 distribute_strategy=distribute_strategy)
1435
1436 def _init_from_args(self,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape)
1568 name="initial_value", dtype=dtype)
1569 if shape is not None:
-> 1570 if not initial_value.shape.is_compatible_with(shape):
1571 raise ValueError(
1572 "The initial value's shape (%s) is not compatible with "
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in shape(self)
1063 # `_tensor_shape` is declared and defined in the definition of
1064 # `EagerTensor`, in C.
-> 1065 self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple())
1066 except core._NotOkStatusException as e:
1067 six.raise_from(core._status_to_exception(e.code, e.message), None)
RuntimeError: Error while creating shape | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6406/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6405/comments | https://api.github.com/repos/huggingface/transformers/issues/6405/events | https://github.com/huggingface/transformers/pull/6405 | 676,556,494 | MDExOlB1bGxSZXF1ZXN0NDY1ODQ1MTk4 | 6,405 | [s2s] wmt download script use less ram | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=h1) Report\n> Merging [#6405](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ecd92ee4fcd515f542c73593a4b6fa0b2c81fc&el=desc) will **decrease** coverage by `2.10%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6405 +/- ##\n==========================================\n- Coverage 80.10% 78.00% -2.11% \n==========================================\n Files 149 149 \n Lines 27680 27680 \n==========================================\n- Hits 22173 21591 -582 \n- Misses 5507 6089 +582 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `28.94% <0.00%> (-67.11%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `63.95% <0.00%> (-14.54%)` | :arrow_down: |\n| 
[src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.20% <0.00%> (-1.37%)` | :arrow_down: |\n| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=footer). Last update [b9ecd92...de72b0a](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"further testing showed that chunking doesn't make much of a difference - writing one record at a time is almost as fast as writing in chunks of 10K records - I think it's the reading that's the bottleneck here, which we can't optimize. So I removed the chunking from this PR.\r\n\r\n`wmt19-ru-en` with 37M records converted in ~40mins on my machine using this PR."
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | a few enhancement to https://github.com/huggingface/transformers/pull/6403
the main change:
- rewrite not to load 100GB into RAM - wmt19 is huge!
and then a few small things:
- replaced the default dataset with wmt16 as it's much smaller than wmt19 to experiment with (also it seems that at least wmt19-ru-en is missing a test dataset, while wmt16-ru-en has it)
- added lang defaults so it's easy to start experimenting with
- moved tqdm to where detailed progress can be seen
- added some extra notes
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6405/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6405",
"html_url": "https://github.com/huggingface/transformers/pull/6405",
"diff_url": "https://github.com/huggingface/transformers/pull/6405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6405.patch",
"merged_at": 1597161857000
} |
https://api.github.com/repos/huggingface/transformers/issues/6404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6404/comments | https://api.github.com/repos/huggingface/transformers/issues/6404/events | https://github.com/huggingface/transformers/pull/6404 | 676,531,422 | MDExOlB1bGxSZXF1ZXN0NDY1ODI1MTM3 | 6,404 | [lightning_base] fix s2s logging, only make train_loader once | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=h1) Report\n> Merging [#6404](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/72add6c98f2c0607f088fa0c78d40f11e2efa4c4&el=desc) will **decrease** coverage by `0.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6404 +/- ##\n==========================================\n- Coverage 80.38% 80.26% -0.12% \n==========================================\n Files 156 156 \n Lines 28058 28058 \n==========================================\n- Hits 22554 22521 -33 \n- Misses 5504 5537 +33 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=footer). Last update [72add6c...e43061d](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | setup is called many times (incl twice by trainer.test), creating a dataloader each time. Will only creating a train_loader on the first call cause bad side effects that I don't understand @nateraw @williamFalcon ?
I read docs [https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html], so I think I'm fine, but not sure.
cc @stas00
Also:
- add a fast test for run_ner
- [pl] centralize the `data_dir` argument into `add_generic_args` (rule of three)
Checks:
- verified xsum distillation trains well, has good LR logs. (warmup+linear decay are honored.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6404/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6404",
"html_url": "https://github.com/huggingface/transformers/pull/6404",
"diff_url": "https://github.com/huggingface/transformers/pull/6404.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6404.patch",
"merged_at": 1597632582000
} |
https://api.github.com/repos/huggingface/transformers/issues/6403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6403/comments | https://api.github.com/repos/huggingface/transformers/issues/6403/events | https://github.com/huggingface/transformers/pull/6403 | 676,521,301 | MDExOlB1bGxSZXF1ZXN0NDY1ODE3MTU4 | 6,403 | [s2s] Script to save wmt data to disk | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=h1) Report\n> Merging [#6403](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/00bb0b25ed66a4878f2e0ffdd1ca65b7684db57e&el=desc) will **decrease** coverage by `0.63%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6403 +/- ##\n==========================================\n- Coverage 80.24% 79.60% -0.64% \n==========================================\n Files 149 149 \n Lines 27680 27680 \n==========================================\n- Hits 22211 22035 -176 \n- Misses 5469 5645 +176 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.95% <0.00%> (-25.22%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.89% <0.00%> (-0.69%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> 
(+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=footer). Last update [00bb0b2...19f2c61](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6403",
"html_url": "https://github.com/huggingface/transformers/pull/6403",
"diff_url": "https://github.com/huggingface/transformers/pull/6403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6403.patch",
"merged_at": 1597114180000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6402/comments | https://api.github.com/repos/huggingface/transformers/issues/6402/events | https://github.com/huggingface/transformers/pull/6402 | 676,441,829 | MDExOlB1bGxSZXF1ZXN0NDY1NzUyNzE3 | 6,402 | remove lr_scheduler redundancy | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sshleifer has a better version in works, closing."
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | This PR solves https://github.com/huggingface/transformers/issues/6374
by removing a hardcoded `lr_scheduler` and switching to the new method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6402/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6402",
"html_url": "https://github.com/huggingface/transformers/pull/6402",
"diff_url": "https://github.com/huggingface/transformers/pull/6402.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6402.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6401/comments | https://api.github.com/repos/huggingface/transformers/issues/6401/events | https://github.com/huggingface/transformers/issues/6401 | 676,425,598 | MDU6SXNzdWU2NzY0MjU1OTg= | 6,401 | [TF Longformer] Add Multiple Choice, Seq Classification Model | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi !\r\nI'd like to help and work on this if that's ok.",
"Awesome, feel free to open an issue :-) ",
"Hello !\r\n\r\nI'm a bit lost here. I've looked at `modeling_tf_roberta.py` and `modeling_longformer.py` to create the class `TFLongformerForSequenceClassification`. I'm not sure if I am going in the right direction here and same goes for the tests.\r\nI used `python -m pytest -n auto --dist=loadfile -s -v ./tests/test_modeling_tf_roberta.py` to get an idea on what should I do for testing but it seems the test for `TFRobertaForSequenceClassification` is skipped and my test on the class I created (which is basically just a copy/paste of the roberta's test) is skipped too.\r\n\r\nHere is a link to what I've done so far: https://github.com/Groskilled/transformers/commit/461ee6279433f94868332b1abbfe7875e19f243a\r\n\r\nAm I on the right track ? And what am I missing on the tests ?\r\n\r\nSorry to ask such simple questions, it's my first time participating in an open source project.",
"No worries ;-). This looks alright! Could you open a PR so that we can see your changes directly on the PR? You can checkout this doc to understand how to do PRs: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md. Would be great if you can ping me on the PR and then we look together!",
"HI @Groskilled and @patrickvonplaten, I have been playing a bit around this issue, as I have some familiarity with Keras/TF2 but no previous experience with transformers, and I was figuring out a way to start familiarising with them. As I am interested in classifying long documents Longformer is of interest to me.\r\nI have a draft of my current changes [here](https://github.com/huggingface/transformers/compare/master...Zigur:tf-lonformer-good-first-release). The test suite seems to pass (using Python 3.7.5, they did not on Python 3.8.2 on my Mac machine), but I would need extensive feedback as I have mostly lifted code from `test_modeling_tf_roberta.py` and the testing counterpart.\r\nIf it is of interest, I can open a pull request with all the details, or @Groskilled you can feel free to cherry-pick part of it if it's useful for your own pull request (as you were working on this earlier on, apologies for the intromission)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Issue is still open! If stuck, feel free to take a look at the unfinished PR."
] | 1,597 | 1,605 | 1,605 | MEMBER | null | # 🚀 Feature request
`modeling_longformer.py` has the classes `LongformerForSequenceClassification`, `LongformerForMultipleChoice` and `LongformerForTokenClassification` which are not present in `modeling_tf_longformer.py` at the moment.
Those classes should be equally added to `modeling_tf_longformer.py`.
## Motivation
The pretrained weights for TFLongformer are available so that these classes could be used for finetuning.
## Your contribution
This issue is a good first issue because it is not too complicated to add these models. One should take a look at `modeling_tf_roberta.py` to see how these models are implemented for `TFRoberta` and implement them analogous for `TFLongformer`. Please make sure that the docstring is correct and that test are added for each class (again Roberta can serve as an example here, check out `test_modeling_tf_roberta.py`).
I am happy to guide interested community contributors through the PR and help them get it merged.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6401/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6401/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6400/comments | https://api.github.com/repos/huggingface/transformers/issues/6400/events | https://github.com/huggingface/transformers/issues/6400 | 676,406,977 | MDU6SXNzdWU2NzY0MDY5Nzc= | 6,400 | ZeroDivisionError with Reformer | {
"login": "eliasjacob",
"id": 34211393,
"node_id": "MDQ6VXNlcjM0MjExMzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/34211393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliasjacob",
"html_url": "https://github.com/eliasjacob",
"followers_url": "https://api.github.com/users/eliasjacob/followers",
"following_url": "https://api.github.com/users/eliasjacob/following{/other_user}",
"gists_url": "https://api.github.com/users/eliasjacob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliasjacob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliasjacob/subscriptions",
"organizations_url": "https://api.github.com/users/eliasjacob/orgs",
"repos_url": "https://api.github.com/users/eliasjacob/repos",
"events_url": "https://api.github.com/users/eliasjacob/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliasjacob/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @eliasjacob, the problem is probably that `self.args.gradient_accumulation_steps` is set to a value greater than `len(train_dataloader)`",
"You were right (although I am unsure why the same notebook yielded different results in Colab). Thank you very much!"
] | 1,597 | 1,598 | 1,597 | NONE | null | ## Environment info
- `transformers` version: 3.0.0
- Platform: Linux-5.4.0-42-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): **Reformer**
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below) (well, actually, not my own, but @patrickvonplaten 's)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Execute @patrickvonplaten 's notebook available at https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb
2. I've tried to run it on Google Colab and it works fine. The problem appears when I try to run it on my machine.
3. I've tried it with two different clean virtual environments (python 3.6 and 3.7), but they've both failed.
4. I haven't changed the dataset or any model config/training args.
5. After calling trainer.train() I get the following error:
```
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-13-02431faf649a> in <module>
8
9 # train
---> 10 trainer.train()
/data/venv36/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path)
394 t_total = self.args.max_steps
395 num_train_epochs = (
--> 396 self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1
397 )
398 else:
ZeroDivisionError: integer division or modulo by zero
```
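The error comes down to an integer division hitting zero. A self-contained sketch of the failing arithmetic (the values below are illustrative, mimicking a train dataloader with fewer batches than `gradient_accumulation_steps`):

```python
# Illustrative reproduction of the failing line in trainer.py:
# num_train_epochs = max_steps // (len(train_dataloader) // gradient_accumulation_steps) + 1
num_batches = 4                  # hypothetical: tiny train dataloader
gradient_accumulation_steps = 8  # hypothetical: larger than num_batches

steps_per_epoch = num_batches // gradient_accumulation_steps
print(steps_per_epoch)  # -> 0

try:
    _ = 1000 // steps_per_epoch
except ZeroDivisionError as e:
    print(e)  # -> integer division or modulo by zero
```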
## Expected behavior
The model should begin to train
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6400/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6399/comments | https://api.github.com/repos/huggingface/transformers/issues/6399/events | https://github.com/huggingface/transformers/issues/6399 | 676,388,234 | MDU6SXNzdWU2NzYzODgyMzQ= | 6,399 | DPR retriever module | {
"login": "mchari",
"id": 30506151,
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchari",
"html_url": "https://github.com/mchari",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"repos_url": "https://api.github.com/users/mchari/repos",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Pinging @lhoestq!",
"Hi @mchari \r\n\r\nThe retriever is now part of the `nlp` library.\r\nYou can install it with\r\n\r\n```\r\npip install nlp\r\n```\r\n\r\nand load the retriever:\r\n```python\r\nfrom nlp import load_dataset\r\n\r\nwiki = load_dataset(\"wiki_dpr\", with_embeddings=False, with_index=True, split=\"train\")\r\n```\r\n\r\nThe retriever is basically a dense index over wikipedia passages.\r\nTo query it using the DPR question encoder you can do:\r\n```python\r\nfrom transformers import DPRQuestionEncoderTokenizer, DPRQuestionEncoder\r\n\r\nquestion_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base')\r\nquestion_encoder = DPRQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base')\r\n\r\nquestion = \"What is love ?\"\r\nquestion_emb = question_encoder(**question_tokenizer(question, return_tensors=\"pt\"))[0].detach().numpy()\r\npassages_scores, passages = wiki.get_nearest_examples(\"embeddings\", question_emb, k=20) # get k nearest neighbors\r\n```\r\n\r\nShall we make a blog post or something to show how to use it with `transformers` @thomwolf ?\r\n\r\nEDIT: `nlp` is now renamed to `datasets`",
"Hi @lhoestq ,\r\nWhich metric does the `FaissIndex` use to compute vector similarity? (i.e. how `passages_scores` values are computed?)\r\n\r\nIt uses the _inner product_ (as described in DPR paper) or something else?\r\nThank you",
"It uses inner product.\r\nYou can see the code that creates the index here https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/datasets/wiki_dpr/wiki_dpr.py#L171",
"Thanks for the retriever functionality ! Not sure how it works if I want to use it on my own documents.\r\n",
"@lhoestq , any guidance for fine tuning the retriever module on another set of documents ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,608 | 1,608 | NONE | null | I see https://github.com/huggingface/transformers/pull/5279 that describes the DPR flow.
Just checking to see when the retriever module will be available.
Many thanks for making DPR available ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6399/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6398/comments | https://api.github.com/repos/huggingface/transformers/issues/6398/events | https://github.com/huggingface/transformers/pull/6398 | 676,373,801 | MDExOlB1bGxSZXF1ZXN0NDY1Njk1ODc5 | 6,398 | Data collator with padding | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=h1) Report\n> Merging [#6398](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3425936643b157bda181af169b371dcf0a3ad3eb&el=desc) will **increase** coverage by `0.18%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6398 +/- ##\n==========================================\n+ Coverage 79.55% 79.73% +0.18% \n==========================================\n Files 148 148 \n Lines 27206 27226 +20 \n==========================================\n+ Hits 21644 21710 +66 \n+ Misses 5562 5516 -46 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.05% <50.00%> (-7.54%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.13% <0.00%> (-5.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: 
|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.18% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+1.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.22% <0.00%> (+9.72%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=footer). Last update [3425936...4bed573](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"It's hard to see how to combine it with another data collator since a data collator's function is to create batch, and you can't create batches if your tensors are not padded to the same size.",
"Things I tried to fix here were actually addressed by @thomwolf in #6423, so waiting for this PR to be merged before merging this one.",
"Rebase made the PR unreadable. Opening a new clean one."
] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | Add a data collator to dynamically pad samples during batching. This is necessary for the training set since padding can't be applied beforehand if we use shuffling (unless with pad to a fixed `max_length`).
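A rough sketch of dynamic padding at batch time (illustrative only, not the actual `transformers` implementation — the `pad_token_id` default and the returned keys are assumptions):

```python
# Illustrative dynamic-padding collator: pad every batch to its longest member
# instead of padding the whole dataset to a fixed max_length up front.
def pad_batch(features, pad_token_id=0):
    max_len = max(len(f["input_ids"]) for f in features)
    batch = {"input_ids": [], "attention_mask": []}
    for f in features:
        ids = f["input_ids"]
        n_pad = max_len - len(ids)
        batch["input_ids"].append(ids + [pad_token_id] * n_pad)
        batch["attention_mask"].append([1] * len(ids) + [0] * n_pad)
    return batch

batch = pad_batch([{"input_ids": [5, 6, 7]}, {"input_ids": [8]}])
print(batch["input_ids"])       # -> [[5, 6, 7], [8, 0, 0]]
print(batch["attention_mask"])  # -> [[1, 1, 1], [1, 0, 0]]
```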
This should make it more straightforward to plug nlp into the Trainer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6398/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6398",
"html_url": "https://github.com/huggingface/transformers/pull/6398",
"diff_url": "https://github.com/huggingface/transformers/pull/6398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6398.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6397/comments | https://api.github.com/repos/huggingface/transformers/issues/6397/events | https://github.com/huggingface/transformers/pull/6397 | 676,359,315 | MDExOlB1bGxSZXF1ZXN0NDY1Njg0MDM3 | 6,397 | Create README.md | {
"login": "abedkhooli",
"id": 11407254,
"node_id": "MDQ6VXNlcjExNDA3MjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11407254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abedkhooli",
"html_url": "https://github.com/abedkhooli",
"followers_url": "https://api.github.com/users/abedkhooli/followers",
"following_url": "https://api.github.com/users/abedkhooli/following{/other_user}",
"gists_url": "https://api.github.com/users/abedkhooli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abedkhooli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abedkhooli/subscriptions",
"organizations_url": "https://api.github.com/users/abedkhooli/orgs",
"repos_url": "https://api.github.com/users/abedkhooli/repos",
"events_url": "https://api.github.com/users/abedkhooli/events{/privacy}",
"received_events_url": "https://api.github.com/users/abedkhooli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=h1) Report\n> Merging [#6397](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7ea9b2db3732904014b9121fb8a5c896ae00d4cf&el=desc) will **increase** coverage by `0.96%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6397 +/- ##\n==========================================\n+ Coverage 77.31% 78.27% +0.96% \n==========================================\n Files 146 146 \n Lines 26597 26597 \n==========================================\n+ Hits 20563 20820 +257 \n+ Misses 6034 5777 -257 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6397/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.40% <0.00%> (-34.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6397/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6397/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.75% <0.00%> (+73.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=footer). 
Last update [7ea9b2d...141f941](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"[Thanks for sharing](https://huggingface.co/akhooli/gpt2-small-arabic-poetry)\r\n\r\nIf you'd like, you could submit some simple inputs for Arabic to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts – let me know if you need any help"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | For GPT-2 Arabic Poetry - https://huggingface.co/akhooli/gpt2-small-arabic-poetry | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6397/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6397",
"html_url": "https://github.com/huggingface/transformers/pull/6397",
"diff_url": "https://github.com/huggingface/transformers/pull/6397.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6397.patch",
"merged_at": 1597151003000
} |
https://api.github.com/repos/huggingface/transformers/issues/6396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6396/comments | https://api.github.com/repos/huggingface/transformers/issues/6396/events | https://github.com/huggingface/transformers/pull/6396 | 676,310,269 | MDExOlB1bGxSZXF1ZXN0NDY1NjQ0MzM1 | 6,396 | switch Hindi-BERT to S3 README | {
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=h1) Report\n> Merging [#6396](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e0fe3cf5c1059c04535de8f04f4efed7251adbe&el=desc) will **increase** coverage by `0.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6396 +/- ##\n==========================================\n+ Coverage 79.40% 79.51% +0.11% \n==========================================\n Files 148 148 \n Lines 27200 27200 \n==========================================\n+ Hits 21598 21628 +30 \n+ Misses 5602 5572 -30 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+7.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=footer). Last update [06bc347...106d0f3](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@mapmeld we have a better version control/preview system coming in the future. In the meantime, merging this"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | The Markdown parser currently cuts off the CoLab URL (the last char is an underscore) on https://huggingface.co/monsoon-nlp/hindi-bert
There are some other necessary updates, and I'd rather update this model card by pushing to S3 in the future | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6396/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6396",
"html_url": "https://github.com/huggingface/transformers/pull/6396",
"diff_url": "https://github.com/huggingface/transformers/pull/6396.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6396.patch",
"merged_at": 1597156463000
} |
https://api.github.com/repos/huggingface/transformers/issues/6395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6395/comments | https://api.github.com/repos/huggingface/transformers/issues/6395/events | https://github.com/huggingface/transformers/issues/6395 | 676,273,276 | MDU6SXNzdWU2NzYyNzMyNzY= | 6,395 | Bug in the question answering pipeline | {
"login": "elronbandel",
"id": 23455264,
"node_id": "MDQ6VXNlcjIzNDU1MjY0",
"avatar_url": "https://avatars.githubusercontent.com/u/23455264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elronbandel",
"html_url": "https://github.com/elronbandel",
"followers_url": "https://api.github.com/users/elronbandel/followers",
"following_url": "https://api.github.com/users/elronbandel/following{/other_user}",
"gists_url": "https://api.github.com/users/elronbandel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elronbandel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elronbandel/subscriptions",
"organizations_url": "https://api.github.com/users/elronbandel/orgs",
"repos_url": "https://api.github.com/users/elronbandel/repos",
"events_url": "https://api.github.com/users/elronbandel/events{/privacy}",
"received_events_url": "https://api.github.com/users/elronbandel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This bug was patched on `master`. Can you install from source and let me know if this fixes your issue? \r\n\r\n`pip install git+https://github.com/huggingface/transformers`",
"This fixed it! Thank you!"
] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
The bug appears starting with transformers 3.0.1; it does not occur in earlier versions.
Model I am using: distilbert-base-cased-distilled-squad
The problem arises when using:
* [ ] my own modified scripts:
```
from transformers import pipeline
model = "distilbert-base-cased-distilled-squad"
qa_pipeline = pipeline(
"question-answering",
model=model,
tokenizer=model,
)
instance = {
"question": "what is your product?",
"context": " is an amazing new platform that help businesses of students from BarIlan University that are enthusiastic about conversational AI. The difference between our Sprybot platform and other chat bots is that constructing chat bot is a long and hard process and with Sprybot you can do it quickly and eaily. You can construct chatbot using our platform just by feeding textual description of you business that contain any details important for costumers. The time it takes to create a bot using our platform is the time takes you to describe your business. In order to create Sprybot we used natural language processing and state of the art deep learning artificial intelligence. At the moment you cant buy our product because its still under construction. Sprybot can answer questions about your business but it can not talk about anything else other than the information was fed to it."
}
qa_pipeline(instance)
```
Note: small changes in the context text can make the bug stop appearing
## To reproduce
Steps to reproduce the behavior:
1. [fully reproduced on google colab](https://colab.research.google.com/drive/1YqamXA6qq8xxWXhq6VqEA9clHsEVW7sh?usp=sharing)
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-5-a5f26c48556d> in <module>()
4 }
5
----> 6 qa_pipeline(instance)
1 frames
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
KeyError: 0
```
## Expected behavior
Get the QA pipeline output with no errors.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6395/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6394/comments | https://api.github.com/repos/huggingface/transformers/issues/6394/events | https://github.com/huggingface/transformers/issues/6394 | 676,256,140 | MDU6SXNzdWU2NzYyNTYxNDA= | 6,394 | Error while loading albert for token classification | {
"login": "nirajkale",
"id": 40765055,
"node_id": "MDQ6VXNlcjQwNzY1MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/40765055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nirajkale",
"html_url": "https://github.com/nirajkale",
"followers_url": "https://api.github.com/users/nirajkale/followers",
"following_url": "https://api.github.com/users/nirajkale/following{/other_user}",
"gists_url": "https://api.github.com/users/nirajkale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nirajkale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nirajkale/subscriptions",
"organizations_url": "https://api.github.com/users/nirajkale/orgs",
"repos_url": "https://api.github.com/users/nirajkale/repos",
"events_url": "https://api.github.com/users/nirajkale/events{/privacy}",
"received_events_url": "https://api.github.com/users/nirajkale/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Quick update:\r\nAbove code works just fine in Ubuntu environment with below specs:\r\n- `transformers` version: 3.0.2\r\n- Platform: Linux-5.3.0-1034-azure-x86_64-with-debian-buster-sid\r\n- Python version: 3.6.10\r\n- PyTorch version (GPU?): 1.5.1 (True)\r\n- Tensorflow version (GPU?): 2.0.0 (True)\r\n- Using GPU in script?: yes (Tesla P40)\r\n- Using distributed or parallel set-up in script?: No\r\n\r\nI think this issue only occurs in Windows.\r\n",
"Hi! This may be related to a network error, can you download other models on your Windows machine?",
"set REQUESTS_CA_BUNDLE env var to ca-certificates.crt\r\n\r\nIn my case I am using Ubuntu, so running the following command on terminal solves the issue:\r\n\r\n` export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,607 | 1,607 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@jplu
## Information
Model I am using: albert-base-v2 or albert-base-v1
The tasks I am working on is:
Token classification using albert-base-v2 or albert-base-v1
## To reproduce
```
>>> from transformers import AlbertTokenizer, TFAlbertForTokenClassification
>>> import tensorflow as tf
>>> tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2', cache_dir = 'cache')
>>> model = TFAlbertForTokenClassification.from_pretrained('albert-base-v2', cache_dir = 'cache')
```
When I run above script I get error:
```
Traceback (most recent call last):
File "C:\Users\703235761\AppData\Local\Continuum\anaconda3\envs\slot\lib\site-packages\transformers\modeling_tf_utils.py", line 581, in from_pretrained
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "model_utils.py", line 7, in <module>
model = TFAlbertForTokenClassification.from_pretrained('albert-base-v1', cache_dir = 'cache')
File "C:\Users\703235761\AppData\Local\Continuum\anaconda3\envs\slot\lib\site-packages\transformers\modeling_tf_utils.py", line 588, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load weights for 'albert-base-v1'. Make sure that:
- 'albert-base-v1' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'albert-base-v1' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```
## Expected behavior
I think this model was supposed to work with TFAlbertModel as well.
Thanks in advance! :-)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6394/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6393/comments | https://api.github.com/repos/huggingface/transformers/issues/6393/events | https://github.com/huggingface/transformers/pull/6393 | 676,253,460 | MDExOlB1bGxSZXF1ZXN0NDY1NTk3NTY0 | 6,393 | Add missing docker arg for TPU CI. | {
"login": "zcain117",
"id": 14796584,
"node_id": "MDQ6VXNlcjE0Nzk2NTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/14796584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zcain117",
"html_url": "https://github.com/zcain117",
"followers_url": "https://api.github.com/users/zcain117/followers",
"following_url": "https://api.github.com/users/zcain117/following{/other_user}",
"gists_url": "https://api.github.com/users/zcain117/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zcain117/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zcain117/subscriptions",
"organizations_url": "https://api.github.com/users/zcain117/orgs",
"repos_url": "https://api.github.com/users/zcain117/repos",
"events_url": "https://api.github.com/users/zcain117/events{/privacy}",
"received_events_url": "https://api.github.com/users/zcain117/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=h1) Report\n> Merging [#6393](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e0fe3cf5c1059c04535de8f04f4efed7251adbe&el=desc) will **decrease** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6393 +/- ##\n==========================================\n- Coverage 79.40% 79.16% -0.24% \n==========================================\n Files 148 148 \n Lines 27200 27200 \n==========================================\n- Hits 21598 21533 -65 \n- Misses 5602 5667 +65 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-11.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-5.17%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+7.26%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=footer). 
Last update [06bc347...3d1b87b](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Fixes `"docker build" requires exactly 1 argument.` for the path where `$CIRCLE_PR_NUMBER` is unset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6393/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6393",
"html_url": "https://github.com/huggingface/transformers/pull/6393",
"diff_url": "https://github.com/huggingface/transformers/pull/6393.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6393.patch",
"merged_at": 1597128530000
} |
https://api.github.com/repos/huggingface/transformers/issues/6392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6392/comments | https://api.github.com/repos/huggingface/transformers/issues/6392/events | https://github.com/huggingface/transformers/issues/6392 | 676,226,578 | MDU6SXNzdWU2NzYyMjY1Nzg= | 6,392 | seq2seq examples require pytest | {
"login": "dmlap",
"id": 56667,
"node_id": "MDQ6VXNlcjU2NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/56667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmlap",
"html_url": "https://github.com/dmlap",
"followers_url": "https://api.github.com/users/dmlap/followers",
"following_url": "https://api.github.com/users/dmlap/following{/other_user}",
"gists_url": "https://api.github.com/users/dmlap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmlap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmlap/subscriptions",
"organizations_url": "https://api.github.com/users/dmlap/orgs",
"repos_url": "https://api.github.com/users/dmlap/repos",
"events_url": "https://api.github.com/users/dmlap/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmlap/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"yes, great catch. Will update!"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.29
- Python version: 3.8.2
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: n/a
- Using distributed or parallel set-up in script?: n/a
### Who can help
examples/seq2seq: @sshleifer
documentation: @sgugger
## To reproduce
Steps to reproduce the behavior:
1. Create a new virtual environment and set it up to run the examples tests. Do _not_ install `pytest` and `pytest-xdist`.
2. Run the tests with `unittest` as [described in the docs](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#tests)
## Expected behavior
The examples tests pass. Actual behavior:
```sh
======================================================================
ERROR: seq2seq.test_bash_script (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: seq2seq.test_bash_script
Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/home/dmlap/projects/transformers/examples/seq2seq/test_bash_script.py", line 8, in <module>
import pytest
ModuleNotFoundError: No module named 'pytest'
======================================================================
ERROR: seq2seq.test_seq2seq_examples (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: seq2seq.test_seq2seq_examples
Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/home/dmlap/projects/transformers/examples/seq2seq/test_seq2seq_examples.py", line 10, in <module>
import pytest
ModuleNotFoundError: No module named 'pytest'
----------------------------------------------------------------------
Ran 16 tests in 179.454s
FAILED (errors=2)
```
Perhaps the documentation should be updated to require `pytest`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6392/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6391/comments | https://api.github.com/repos/huggingface/transformers/issues/6391/events | https://github.com/huggingface/transformers/pull/6391 | 676,213,782 | MDExOlB1bGxSZXF1ZXN0NDY1NTY0Mzc4 | 6,391 | Fix links for open in colab | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | This commit was supposed to be in #6389 but I didn't push hard enough. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6391/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6391",
"html_url": "https://github.com/huggingface/transformers/pull/6391",
"diff_url": "https://github.com/huggingface/transformers/pull/6391.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6391.patch",
"merged_at": 1597072578000
} |
https://api.github.com/repos/huggingface/transformers/issues/6390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6390/comments | https://api.github.com/repos/huggingface/transformers/issues/6390/events | https://github.com/huggingface/transformers/pull/6390 | 676,208,531 | MDExOlB1bGxSZXF1ZXN0NDY1NTU5OTcx | 6,390 | Warn if debug requested without TPU fixes (#6308) | {
"login": "dmlap",
"id": 56667,
"node_id": "MDQ6VXNlcjU2NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/56667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmlap",
"html_url": "https://github.com/dmlap",
"followers_url": "https://api.github.com/users/dmlap/followers",
"following_url": "https://api.github.com/users/dmlap/following{/other_user}",
"gists_url": "https://api.github.com/users/dmlap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmlap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmlap/subscriptions",
"organizations_url": "https://api.github.com/users/dmlap/orgs",
"repos_url": "https://api.github.com/users/dmlap/repos",
"events_url": "https://api.github.com/users/dmlap/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmlap/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The CircleCI failures look like a pre-existing line length violation in `trainer.py` and a checksum mismatch downloading transformers itself for `run_tests_torch`. I don't believe either are related to my change – I was able to run the examples test suite locally and everything passed. I'd be happy to fix the line length issue, if that helps. I think it would take me awhile to figure out what's going on with the checksum mismatch.",
"Hi! There was an issue with the style in your PR, I pushed the fix. Will merge once all the tests are green!",
"Thanks for your contribution :)",
"No problem! Thanks for the style-fixup, @LysandreJik. "
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | Check whether a PyTorch compatible TPU is available before attempting to print TPU metrics after training has completed. This way, users who apply `--debug` without reading the documentation aren't suprised by a stacktrace. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6390/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6390",
"html_url": "https://github.com/huggingface/transformers/pull/6390",
"diff_url": "https://github.com/huggingface/transformers/pull/6390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6390.patch",
"merged_at": 1597138287000
} |
https://api.github.com/repos/huggingface/transformers/issues/6389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6389/comments | https://api.github.com/repos/huggingface/transformers/issues/6389/events | https://github.com/huggingface/transformers/pull/6389 | 676,203,162 | MDExOlB1bGxSZXF1ZXN0NDY1NTU1NTcw | 6,389 | Colab button | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | COLLABORATOR | null | This PR adds a "open in colab" button on the tutorials of our documentation. For each of those tutorials, three notebooks are available: mixed version (with cells PyTorch and TensorFlow), PyTorch-only and TensorFlow only, so hovering on the button makes a dropdown appear with the three different links.
Those notebooks are generated automatically from the docs rst files and the script in the [notebooks repo](https://github.com/huggingface/notebooks/blob/master/utils/convert_doc_to_notebooks.py). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6389/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6389",
"html_url": "https://github.com/huggingface/transformers/pull/6389",
"diff_url": "https://github.com/huggingface/transformers/pull/6389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6389.patch",
"merged_at": 1597072350000
} |
https://api.github.com/repos/huggingface/transformers/issues/6388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6388/comments | https://api.github.com/repos/huggingface/transformers/issues/6388/events | https://github.com/huggingface/transformers/pull/6388 | 676,175,955 | MDExOlB1bGxSZXF1ZXN0NDY1NTMzMDE4 | 6,388 | [T5 3B Covid 19] Adapt T5 TF conversion script to handle covid-19 3b t5 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,651 | 1,602 | MEMBER | null | This PR shows which changes were necessary to convert the 3B (and 11B) T5 model from this issue: https://github.com/huggingface/transformers/tree/adapt_t5_for_covid_19_3b to PyTorch.
It might be that the official T5 library has changed, in which case this code could be useful again.
For now, this PR stays a draft, but it can be cleaned up and merged if more T5 conversion issues arise.
Pinging @sshleifer @thomwolf for notification. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6388/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6388",
"html_url": "https://github.com/huggingface/transformers/pull/6388",
"diff_url": "https://github.com/huggingface/transformers/pull/6388.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6388.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6387/comments | https://api.github.com/repos/huggingface/transformers/issues/6387/events | https://github.com/huggingface/transformers/pull/6387 | 676,162,328 | MDExOlB1bGxSZXF1ZXN0NDY1NTIxNzky | 6,387 | Fix docs and bad word tokens generation_utils.py | {
"login": "ZhuBaohe",
"id": 35796307,
"node_id": "MDQ6VXNlcjM1Nzk2MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/35796307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhuBaohe",
"html_url": "https://github.com/ZhuBaohe",
"followers_url": "https://api.github.com/users/ZhuBaohe/followers",
"following_url": "https://api.github.com/users/ZhuBaohe/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhuBaohe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhuBaohe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhuBaohe/subscriptions",
"organizations_url": "https://api.github.com/users/ZhuBaohe/orgs",
"repos_url": "https://api.github.com/users/ZhuBaohe/repos",
"events_url": "https://api.github.com/users/ZhuBaohe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhuBaohe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=h1) Report\n> Merging [#6387](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/155288f04ba9a5d0a0e4d5be4f6d4e808ad8cfff&el=desc) will **increase** coverage by `0.12%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6387 +/- ##\n==========================================\n+ Coverage 79.94% 80.07% +0.12% \n==========================================\n Files 153 153 \n Lines 27902 27902 \n==========================================\n+ Hits 22307 22343 +36 \n+ Misses 5595 5559 -36 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <100.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `97.41% <0.00%> (+32.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=footer). Last update [155288f...f9ff044](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer \r\n1. As suggested, I fixed docsrtings.\r\n\r\n2. `RUN_SLOW=1 pytest tests/test_modeling_bart.py` outputs `collected 47 items: 45 passed, 2 skipped, 15 warnings`\r\n `RUN_SLOW=1 pytest tests/test_modeling_marian.py` outputs `collected 15 items: 15 passed, 111 warnings`\r\n `RUN_SLOW=1 pytest tests/test_modeling_t5.py` outputs `collected 35 items: 33 passed, 2 skipped, 198 warnings`\r\n `RUN_SLOW=1 pytest tests/test_modeling_mbart.py` outputs `collected 6 items: 1 failed, 3 passed, 2 skipped, 105 warnings`\r\n\r\n For the failed test, the detailed output is as follows:\r\n\r\n```\r\n ___________________________________ MBartEnroIntegrationTest.test_enro_generate ___________________________________ \r\n\r\nself = <tests.test_modeling_mbart.MBartEnroIntegrationTest testMethod=test_enro_generate>\r\n\r\n @slow\r\n def test_enro_generate(self):\r\n batch: BatchEncoding = self.tokenizer.prepare_seq2seq_batch(self.src_text).to(torch_device)\r\n translated_tokens = self.model.generate(**batch)\r\n decoded = self.tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)\r\n self.assertEqual(self.tgt_text[0], decoded[0])\r\n> self.assertEqual(self.tgt_text[1], decoded[1])\r\nE AssertionError: 'Secr[223 chars]înrăutăţească violenţa şi mizeria pentru milioane de oameni.' != 'Secr[223 chars]înrăutăţească violenţele şi mizeria pentru milioane de oameni.'\r\nE Diff is 1089 characters long. Set self.maxDiff to None to see it.\r\n\r\ntests\\test_modeling_mbart.py:89: AssertionError\r\n```\r\nEven if I restore the modified code, the test still fails. So the failed test has nothing to do with the code I modified.",
"Does the failure also happen on your machine on master? Otherwise it does seem like your code causes the failure. That translation is created by the generate function.",
"@sshleifer\r\n\r\nThe failure also happen on master branch and the master brach has been updated to the latest.\r\nMy machine is Win10 64 bit and test environment is Python 3.8.3, pytest-6.0.1, py-1.9.0, pluggy-0.13.1,pytorch-1.6.0.\r\n\r\nOn the other hand , I debugged the failed test code, as shown below:\r\n```python\r\nfrom transformers import (\r\n AutoModelForSeq2SeqLM,\r\n BartConfig,\r\n BartForConditionalGeneration,\r\n BatchEncoding,\r\n AutoTokenizer,\r\n)\r\n\r\nsrc_text = [\r\n \" UN Chief Says There Is No Military Solution in Syria\",\r\n \"\"\" Secretary-General Ban Ki-moon says his response to Russia's steppedupmilitary support for Syria is that \"there is no military solution\"to thenearly five-year conflict and more weapons will only worsen theviolenceand misery for millions of people.\"\"\",\r\n]\r\ntgt_text = [\r\n \"Şeful ONU declară că nu există o soluţie militară în Siria\",\r\n 'Secretarul General Ban Ki-moon declară că răspunsul său laintensificareasprijinului militar al Rusiei pentru Siria este că \"nuexistă o soluţiemilitară\" la conflictul de aproape cinci ani şi că noiarme nu vor facedecât să înrăutăţească violenţa şi mizeria pentrumilioane de oameni.',\r\n]\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/mbart-large-en-ro\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/mbart-large-en-ro\")\r\nbatch: BatchEncoding = tokenizer.prepare_seq2seq_batch(src_text)\r\ntranslated_tokens = model.generate(**batch)\r\ndecoded = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)\r\nassert(tgt_text[0] == decoded[0])\r\nassert(tgt_text[1] == decoded[1]) \r\n```\r\nBy debugging, I find that `bad_words_ids` is `None` in `generate` function. So the code I modified will not run during this test and will not affect the results of `generate` function.",
"Great @ZhuBaohe, thanks for running the tests and fixing the bad word tokens."
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | This PR fixes two issues:
1.
The code at
https://github.com/huggingface/transformers/blob/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb/src/transformers/generation_utils.py#L224-L228
throws the exception
`AssertionError: Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1`
2.
The following code
```python
import torch
from transformers.generation_utils import calc_banned_bad_words_ids
prev_input_ids = torch.tensor([[1, 2, 3, 4, 5]])
bad_words_ids = [[4, 5, 9]]
banned_tokens = calc_banned_bad_words_ids(prev_input_ids, bad_words_ids)
print(banned_tokens)
```
outputs `[[]]`, but we expect it to output `[[9]]`.
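
For illustration, here is a minimal stdlib-only sketch (not the exact code from this PR) of the intended prefix-matching behavior: a bad word's final token should be banned whenever the preceding tokens of that bad-word sequence match the tail of what has been generated so far. The function name and list-based input are assumptions for this example; the real helper takes a tensor.

```python
def calc_banned_bad_words_ids_fixed(prev_input_ids, bad_words_ids):
    # prev_input_ids: list of token-id lists, one per sequence in the batch
    # (the real function receives a torch tensor instead).
    banned_tokens = []
    for prev_ids in prev_input_ids:
        banned = []
        for bad_word in bad_words_ids:
            prefix = bad_word[:-1]
            # An empty prefix means a single-token bad word: always banned.
            # Otherwise, ban the last token only if the prefix matches the
            # most recently generated tokens.
            if not prefix or prev_ids[-len(prefix):] == prefix:
                banned.append(bad_word[-1])
        banned_tokens.append(banned)
    return banned_tokens

print(calc_banned_bad_words_ids_fixed([[1, 2, 3, 4, 5]], [[4, 5, 9]]))  # -> [[9]]
```

With `prev_input_ids = [[1, 2, 3, 4, 5]]` and `bad_words_ids = [[4, 5, 9]]`, the tail `[4, 5]` matches the prefix of the bad word, so token `9` is banned, giving `[[9]]` as expected above.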
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6387/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6387",
"html_url": "https://github.com/huggingface/transformers/pull/6387",
"diff_url": "https://github.com/huggingface/transformers/pull/6387.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6387.patch",
"merged_at": 1597317137000
} |
https://api.github.com/repos/huggingface/transformers/issues/6386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6386/comments | https://api.github.com/repos/huggingface/transformers/issues/6386/events | https://github.com/huggingface/transformers/pull/6386 | 676,154,716 | MDExOlB1bGxSZXF1ZXN0NDY1NTE1NDU5 | 6,386 | Create README.md | {
"login": "cedspam",
"id": 7693193,
"node_id": "MDQ6VXNlcjc2OTMxOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7693193?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cedspam",
"html_url": "https://github.com/cedspam",
"followers_url": "https://api.github.com/users/cedspam/followers",
"following_url": "https://api.github.com/users/cedspam/following{/other_user}",
"gists_url": "https://api.github.com/users/cedspam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cedspam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cedspam/subscriptions",
"organizations_url": "https://api.github.com/users/cedspam/orgs",
"repos_url": "https://api.github.com/users/cedspam/repos",
"events_url": "https://api.github.com/users/cedspam/events{/privacy}",
"received_events_url": "https://api.github.com/users/cedspam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"model card",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=h1) Report\n> Merging [#6386](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **decrease** coverage by `0.69%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6386 +/- ##\n==========================================\n- Coverage 79.05% 78.36% -0.70% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21501 21312 -189 \n- Misses 5695 5884 +189 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| 
[src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=footer). Last update [6028ed9...08b609e](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"merging this, but can you add a few words about which separator tokens you used + maybe a few lines of sample code showing to interact with the model"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6386/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6386",
"html_url": "https://github.com/huggingface/transformers/pull/6386",
"diff_url": "https://github.com/huggingface/transformers/pull/6386.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6386.patch",
"merged_at": 1597179412000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6385/comments | https://api.github.com/repos/huggingface/transformers/issues/6385/events | https://github.com/huggingface/transformers/pull/6385 | 676,141,763 | MDExOlB1bGxSZXF1ZXN0NDY1NTA0NTg1 | 6,385 | [POC] Notebooks cron | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,651 | 1,597 | MEMBER | null | Setting up a cron job to create the notebooks on the documentation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6385/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6385",
"html_url": "https://github.com/huggingface/transformers/pull/6385",
"diff_url": "https://github.com/huggingface/transformers/pull/6385.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6385.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6384/comments | https://api.github.com/repos/huggingface/transformers/issues/6384/events | https://github.com/huggingface/transformers/issues/6384 | 676,103,206 | MDU6SXNzdWU2NzYxMDMyMDY= | 6,384 | AttributeError: type object "BartTokenizer" has no attribute 'name' | {
"login": "Siddhant021295",
"id": 22122136,
"node_id": "MDQ6VXNlcjIyMTIyMTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/22122136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Siddhant021295",
"html_url": "https://github.com/Siddhant021295",
"followers_url": "https://api.github.com/users/Siddhant021295/followers",
"following_url": "https://api.github.com/users/Siddhant021295/following{/other_user}",
"gists_url": "https://api.github.com/users/Siddhant021295/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Siddhant021295/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Siddhant021295/subscriptions",
"organizations_url": "https://api.github.com/users/Siddhant021295/orgs",
"repos_url": "https://api.github.com/users/Siddhant021295/repos",
"events_url": "https://api.github.com/users/Siddhant021295/events{/privacy}",
"received_events_url": "https://api.github.com/users/Siddhant021295/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think the notebook you are linking is trying to access the `name` attribute of `BartTokenizer` which does not exist indeed. It looks like the failure should be reported to the author of that notebook, it's not a bug in transformers.",
"Pinging @ohmeow ",
"Yah, I'm here :)\n\nOn Mon, Aug 10, 2020, 7:36 AM Suraj Patil <[email protected]> wrote:\n\n> Pinging @ohmeow <https://github.com/ohmeow>\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/6384#issuecomment-671393810>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAADNMC6B4X2AQBMA7CQIZDSAAAVFANCNFSM4PZ5H4RQ>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,603 | 1,603 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: don't know
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Using the script provided on Hugging face library
: https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6384/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6383/comments | https://api.github.com/repos/huggingface/transformers/issues/6383/events | https://github.com/huggingface/transformers/issues/6383 | 676,095,038 | MDU6SXNzdWU2NzYwOTUwMzg= | 6,383 | hi | {
"login": "xinghao302001",
"id": 49721621,
"node_id": "MDQ6VXNlcjQ5NzIxNjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/49721621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinghao302001",
"html_url": "https://github.com/xinghao302001",
"followers_url": "https://api.github.com/users/xinghao302001/followers",
"following_url": "https://api.github.com/users/xinghao302001/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghao302001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinghao302001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghao302001/subscriptions",
"organizations_url": "https://api.github.com/users/xinghao302001/orgs",
"repos_url": "https://api.github.com/users/xinghao302001/repos",
"events_url": "https://api.github.com/users/xinghao302001/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinghao302001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6382/comments | https://api.github.com/repos/huggingface/transformers/issues/6382/events | https://github.com/huggingface/transformers/pull/6382 | 676,079,692 | MDExOlB1bGxSZXF1ZXN0NDY1NDU0NDQx | 6,382 | Ci GitHub caching | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,597 | 1,597 | 1,597 | MEMBER | null | Same as https://github.com/huggingface/transformers/pull/6287 but for Github Actions | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6382/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6382",
"html_url": "https://github.com/huggingface/transformers/pull/6382",
"diff_url": "https://github.com/huggingface/transformers/pull/6382.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6382.patch",
"merged_at": 1597070371000
} |
https://api.github.com/repos/huggingface/transformers/issues/6381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6381/comments | https://api.github.com/repos/huggingface/transformers/issues/6381/events | https://github.com/huggingface/transformers/pull/6381 | 676,076,941 | MDExOlB1bGxSZXF1ZXN0NDY1NDUyMTA1 | 6,381 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=h1) Report\n> Merging [#6381](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **decrease** coverage by `0.68%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6381 +/- ##\n==========================================\n- Coverage 79.05% 78.37% -0.69% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21501 21316 -185 \n- Misses 5695 5880 +185 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=footer). Last update [6028ed9...6b1cce3](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6381/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6381",
"html_url": "https://github.com/huggingface/transformers/pull/6381",
"diff_url": "https://github.com/huggingface/transformers/pull/6381.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6381.patch",
"merged_at": 1597185251000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6380/comments | https://api.github.com/repos/huggingface/transformers/issues/6380/events | https://github.com/huggingface/transformers/pull/6380 | 676,054,824 | MDExOlB1bGxSZXF1ZXN0NDY1NDMzNDMx | 6,380 | Add metadata to be indexed properly | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=h1) Report\n> Merging [#6380](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **decrease** coverage by `0.67%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6380 +/- ##\n==========================================\n- Coverage 79.05% 78.38% -0.68% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21501 21317 -184 \n- Misses 5695 5879 +184 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% 
<0.00%> (+17.47%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=footer). Last update [6028ed9...33142a4](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6380/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6380",
"html_url": "https://github.com/huggingface/transformers/pull/6380",
"diff_url": "https://github.com/huggingface/transformers/pull/6380.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6380.patch",
"merged_at": 1597185149000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6379/comments | https://api.github.com/repos/huggingface/transformers/issues/6379/events | https://github.com/huggingface/transformers/pull/6379 | 676,054,111 | MDExOlB1bGxSZXF1ZXN0NDY1NDMyODI2 | 6,379 | Change metadata to be indexed correctly | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=h1) Report\n> Merging [#6379](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **increase** coverage by `0.47%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6379 +/- ##\n==========================================\n+ Coverage 79.05% 79.53% +0.47% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n+ Hits 21501 21631 +130 \n+ Misses 5695 5565 -130 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.31% <0.00%> (-26.18%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> 
(+0.32%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=footer). Last update [6028ed9...5e6947b](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6379/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6379",
"html_url": "https://github.com/huggingface/transformers/pull/6379",
"diff_url": "https://github.com/huggingface/transformers/pull/6379.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6379.patch",
"merged_at": 1597185139000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6378/comments | https://api.github.com/repos/huggingface/transformers/issues/6378/events | https://github.com/huggingface/transformers/pull/6378 | 676,052,640 | MDExOlB1bGxSZXF1ZXN0NDY1NDMxNjA0 | 6,378 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=h1) Report\n> Merging [#6378](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **decrease** coverage by `0.68%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6378 +/- ##\n==========================================\n- Coverage 79.05% 78.37% -0.69% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21501 21316 -185 \n- Misses 5695 5880 +185 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=footer). Last update [6028ed9...65e6341](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6378/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6378",
"html_url": "https://github.com/huggingface/transformers/pull/6378",
"diff_url": "https://github.com/huggingface/transformers/pull/6378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6378.patch",
"merged_at": 1597185158000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6377/comments | https://api.github.com/repos/huggingface/transformers/issues/6377/events | https://github.com/huggingface/transformers/pull/6377 | 676,031,900 | MDExOlB1bGxSZXF1ZXN0NDY1NDE0MjQ2 | 6,377 | [EncoderDecoderModel] add a `add_cross_attention` boolean to config | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=h1) Report\n> Merging [#6377](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1429b920d44d610eaa0a6f48de43853da52e9c03&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `90.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6377 +/- ##\n==========================================\n- Coverage 78.38% 78.36% -0.03% \n==========================================\n Files 148 148 \n Lines 27196 27202 +6 \n==========================================\n- Hits 21317 21316 -1 \n- Misses 5879 5886 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.02% <66.66%> (-1.19%)` | :arrow_down: |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.57% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.45% <100.00%> (+0.05%)` | :arrow_up: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.18% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=footer). Last update [1429b92...e2fcc0d](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"`All EncoderDecoderModel models have to be updated with add_cross_attention=True.`\r\n\r\nHow do I _exactly_ do this? I got hit by `AttributeError: 'GPT2Config' object has no attribute 'add_cross_attention'` after updating to newest release.",
"Hey @xxbidiao, \r\n\r\nYou have to set `gpt2.config.add_cross_attention = True` and then save this config. Or you can directly add the parameter `add_cross_attention=True` to the gpt2 config json file"
] | 1,597 | 1,600 | 1,597 | MEMBER | null | The `EncoderDecoderModel` uses models from `AUTO_MODEL_FOR_CAUSAL_LM` as their decoder models. The problem is that these models can be used in two ways:
1) As a stand-alone decoder model (e.g. GPT2) **without** cross-attention layers
2) As part of an `EncoderDecoderModel` **with** cross-attention layers.
Currently it is decided via the parameter `config.is_decoder` whether cross-attention layers should be added. The problem is that `config.is_decoder` is `True` for both 1) and 2), which is correct since both 1) and 2) should use a causal mask, but means that for 1) cross-attention layers are added without ever being used.
This PR solves this problem by introducing a new config param called `add_cross_attention` which is only relevant for models in `AUTO_MODEL_FOR_CAUSAL_LM`.
I also played around with the idea of not having the flag in the config, but just passing it along the `init` function, such as:
```python
super().__init__(config, add_cross_attention=False)
```
in
and then calling setting this param to `True` for all encoder-decoder models. I decided to put the param in the config instead because:
a) The init signature does not have to change and
b) EncoderDecoderModels make extensive use of `AutoModelForCausalLM.from_pretrained(...)` which would have meant that all models that are part of `MODEL_FOR_CAUSAL_LM_MAPPING` have to have this signature.
Taking all this into account I think the first solution (putting `add_cross_attenion` into the config) is the better way to go here.
# IMPORTANT: This PR introduces a breaking change. All `EncoderDecoderModel` models have to be updated with `add_cross_attention=True`.
=> All "bert2bert" models were updated: https://huggingface.co/models?search=bert2bert
## TODO:
After this, I think the framework is flexible enough to handle all other models and I can extend `EncoderDecoderModel` to GPT2, Roberta, Longformer and maybe Reformer as well.
EncoderDecoder is not yet officially released, I think, so this slightly backwards-compatibility-breaking change is OK. I will update all Bert2Bert models on the model hub with `add_cross_attention=True` and add a bigger message in this PR when merged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6377/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6377",
"html_url": "https://github.com/huggingface/transformers/pull/6377",
"diff_url": "https://github.com/huggingface/transformers/pull/6377.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6377.patch",
"merged_at": 1597081608000
} |
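The breaking change described in this PR body boils down to flipping a single boolean in the decoder's saved config. A minimal sketch in plain Python (the config dict below is a toy stand-in for a real `config.json` on disk, not an actual GPT-2 config; the key name `add_cross_attention` is the one the PR introduces):

```python
import json

# A toy decoder config, standing in for a GPT-2 config.json on disk.
config = {"model_type": "gpt2", "is_decoder": True}

# Enable cross-attention layers so the model can serve as the decoder
# half of an EncoderDecoderModel (the update applied to bert2bert models).
config["add_cross_attention"] = True

serialized = json.dumps(config, indent=2)
print(serialized)
```

A model loaded from a config saved this way would build cross-attention layers; leaving the key out (or `False`) keeps the stand-alone causal-LM behavior.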
https://api.github.com/repos/huggingface/transformers/issues/6376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6376/comments | https://api.github.com/repos/huggingface/transformers/issues/6376/events | https://github.com/huggingface/transformers/pull/6376 | 675,928,317 | MDExOlB1bGxSZXF1ZXN0NDY1MzI4NDc0 | 6,376 | Introduce dataset and data collator for Bert pretrain NSP | {
"login": "choidongyeon",
"id": 54914459,
"node_id": "MDQ6VXNlcjU0OTE0NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/54914459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/choidongyeon",
"html_url": "https://github.com/choidongyeon",
"followers_url": "https://api.github.com/users/choidongyeon/followers",
"following_url": "https://api.github.com/users/choidongyeon/following{/other_user}",
"gists_url": "https://api.github.com/users/choidongyeon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/choidongyeon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/choidongyeon/subscriptions",
"organizations_url": "https://api.github.com/users/choidongyeon/orgs",
"repos_url": "https://api.github.com/users/choidongyeon/repos",
"events_url": "https://api.github.com/users/choidongyeon/events{/privacy}",
"received_events_url": "https://api.github.com/users/choidongyeon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Superseded by #6644, thanks a lot for your contribution!"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | Follow up from discussion in https://github.com/huggingface/transformers/issues/6330
This PR introduces changes to allow both the MLM and NSP objectives to be run using ```BertForPretraining```. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6376/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6376",
"html_url": "https://github.com/huggingface/transformers/pull/6376",
"diff_url": "https://github.com/huggingface/transformers/pull/6376.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6376.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6375/comments | https://api.github.com/repos/huggingface/transformers/issues/6375/events | https://github.com/huggingface/transformers/issues/6375 | 675,901,486 | MDU6SXNzdWU2NzU5MDE0ODY= | 6,375 | CUDA Out of Memory | {
"login": "mc2259",
"id": 57819870,
"node_id": "MDQ6VXNlcjU3ODE5ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/57819870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mc2259",
"html_url": "https://github.com/mc2259",
"followers_url": "https://api.github.com/users/mc2259/followers",
"following_url": "https://api.github.com/users/mc2259/following{/other_user}",
"gists_url": "https://api.github.com/users/mc2259/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mc2259/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mc2259/subscriptions",
"organizations_url": "https://api.github.com/users/mc2259/orgs",
"repos_url": "https://api.github.com/users/mc2259/repos",
"events_url": "https://api.github.com/users/mc2259/events{/privacy}",
"received_events_url": "https://api.github.com/users/mc2259/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes it seems like the GPU that was allocated to you does not provide enough GPU memory for the model"
] | 1,597 | 1,597 | 1,597 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
https://stackoverflow.com/questions/63335442/how-do-i-deal-with-cuda-out-of-memory-while-finetuning-bart
-->
## Details
<!-- Description of your issue -->
I was trying to finetune BART on google collab using the xsum dataset and the finetuning script and I got this:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.73 GiB total capacity; 13.67 GiB already allocated; 15.88 MiB free; 13.72 GiB reserved in total by PyTorch)
Does this mean I have to use a smaller model?
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**:
https://stackoverflow.com/questions/63335442/how-do-i-deal-with-cuda-out-of-memory-while-finetuning-bart | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6375/timeline | completed | null | null |
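A standard way around this kind of OOM, short of switching to a smaller model, is to shrink the per-step batch and accumulate gradients over several micro-batches before each optimizer step. The toy example below is a sketch only (plain Python, a made-up one-parameter least-squares model, no GPU or transformers involved); it shows that accumulating averaged micro-batch gradients reproduces the full-batch update when the micro-batches are equally sized:

```python
def grad(w, batch):
    """Gradient of the mean of 0.5 * (w*x - y)**2 over a batch of (x, y)."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 9.0)]
w0, lr = 0.0, 0.1

# Full-batch step (what we'd do with enough memory).
w_full = w0 - lr * grad(w0, data)

# Two micro-batches of 2 with gradient accumulation, then one step.
micro = [data[:2], data[2:]]
acc = sum(grad(w0, mb) for mb in micro) / len(micro)
w_accum = w0 - lr * acc

print(w_full, w_accum)  # identical for equally sized micro-batches
```

In practice this is what flags like a smaller `--train_batch_size` combined with a gradient-accumulation option achieve in the finetuning scripts, trading memory for more forward/backward passes per update.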
https://api.github.com/repos/huggingface/transformers/issues/6374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6374/comments | https://api.github.com/repos/huggingface/transformers/issues/6374/events | https://github.com/huggingface/transformers/issues/6374 | 675,833,943 | MDU6SXNzdWU2NzU4MzM5NDM= | 6,374 | [s2s] remove lr_scheduler redundancy | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Proposed solution here: https://github.com/huggingface/transformers/pull/6402\r\n"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | in `get_train_dataloader` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6374/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6373/comments | https://api.github.com/repos/huggingface/transformers/issues/6373/events | https://github.com/huggingface/transformers/issues/6373 | 675,833,438 | MDU6SXNzdWU2NzU4MzM0Mzg= | 6,373 | Pegasus finetuning diary | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"freeze_embeds didn't matter (before mask fix)\r\nmask fix explanation: \r\nwe need `decoder_start_token_id=pad_token_id` to avoid the first word issue, but `decoder_padding_mask` should NOT tell the model to ignore that decoder_start_token_id, else you get nan.\r\n\r\nThis fix makes 1 example per batch not have eos in decoder_input_ids (t5,bart=same problem).\r\nBut maybe that can explain batch_size=1 truncation.\r\n\r\nOriginal repo uses adafactor.",
"pegasus finetuning Running on fork branch has rouge2 23 with full beam search after 1.5 epochs\r\nhttps://app.wandb.ai/sshleifer/transformers_fork-examples_seq2seq/runs/3cz2fe87?workspace=user-sshleifer\r\n\r\nXSUM Metrics from today:\r\nModels train on hack-pegasus-batches branch.\r\n```\r\nfinetune: {'rouge1': 45.6515, 'rouge2': 22.9858, 'rougeL': 37.7569, 'n_obs': 11333, 'runtime': 4175.217807531357, 'seconds_per_sample': 0.3684}\r\ndpx8 {'rouge1': 45.9739, 'rouge2': 23.1417, 'rougeL': 38.1625, 'n_obs': 11333, 'runtime': 2207.9071719646454, 'seconds_per_sample': 0.1948}\r\ndpx4 {'rouge1': 43.0961, 'rouge2': 20.1954, 'rougeL': 35.5679, 'n_obs': 11333, 'runtime': 1813.8934507369995, 'seconds_per_sample': 0.1601}\t\r\n```\r\n\r\n(10% chance 1st two rows are flipped)",
"Adafactor saves a lot of memory. All of those runs use adafactor."
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | Best score so far .2065 Rouge, much worse than paper. Generations appear to start lower case/be missing words at the beginning.
Clues:
- adding `<pad>` as prefix (like for generation) makes loss nan for at least 1000 steps (I killed it).
- Without prefix, loss is nan for 5 steps, then improves.
- distillation with teacher produces huge hidden state MSE losses. This is probably unrelated and caused by the same large activations that break fp16.
Suspects:
- different causal mask than tf?
- tf doesn't shift labels or add a decoder prefix token. We shift labels but don't add a prefix token. there is a suraj issue where this appears to be suboptimal for t5 (which also has no bos token).
- bug in label smoothing
Best run:


| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6373/timeline | completed | null | null |
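The "shift labels / decoder prefix" suspect in the diary above is easiest to see in isolation. Below is a minimal plain-Python sketch of one common shifting scheme, with pad reused as `decoder_start_token_id` as in these Pegasus runs (the token ids are invented; this is an illustration, not the exact transformers implementation):

```python
PAD = 0  # doubles as decoder_start_token_id in this setup

def shift_tokens_right(labels, decoder_start_token_id):
    """Build decoder_input_ids by prepending the start token and
    dropping the last label, so position i predicts labels[i]."""
    return [decoder_start_token_id] + labels[:-1]

labels = [42, 17, 99, 2]                      # 2 standing in for eos
decoder_input_ids = shift_tokens_right(labels, PAD)
print(decoder_input_ids)                      # [0, 42, 17, 99]
```

This makes the mask-fix comment above concrete: the decoder padding mask must not hide that leading position just because it carries the pad id, otherwise the model gets no valid start token and the loss can go nan.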
https://api.github.com/repos/huggingface/transformers/issues/6372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6372/comments | https://api.github.com/repos/huggingface/transformers/issues/6372/events | https://github.com/huggingface/transformers/pull/6372 | 675,758,989 | MDExOlB1bGxSZXF1ZXN0NDY1MTkzNDIy | 6,372 | Update modeling_tf_utils.py | {
"login": "ameasure",
"id": 571959,
"node_id": "MDQ6VXNlcjU3MTk1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/571959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ameasure",
"html_url": "https://github.com/ameasure",
"followers_url": "https://api.github.com/users/ameasure/followers",
"following_url": "https://api.github.com/users/ameasure/following{/other_user}",
"gists_url": "https://api.github.com/users/ameasure/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ameasure/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ameasure/subscriptions",
"organizations_url": "https://api.github.com/users/ameasure/orgs",
"repos_url": "https://api.github.com/users/ameasure/repos",
"events_url": "https://api.github.com/users/ameasure/events{/privacy}",
"received_events_url": "https://api.github.com/users/ameasure/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=h1) Report\n> Merging [#6372](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7&el=desc) will **decrease** coverage by `0.96%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6372 +/- ##\n==========================================\n- Coverage 79.34% 78.37% -0.97% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21579 21316 -263 \n- Misses 5617 5880 +263 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> 
(+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=footer). Last update [6e8a385...1dc65e3](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,597 | 1,597 | CONTRIBUTOR | null | fix typo: ckeckpoint->checkpoint | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6372/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6372",
"html_url": "https://github.com/huggingface/transformers/pull/6372",
"diff_url": "https://github.com/huggingface/transformers/pull/6372.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6372.patch",
"merged_at": 1597042512000
} |
https://api.github.com/repos/huggingface/transformers/issues/6371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6371/comments | https://api.github.com/repos/huggingface/transformers/issues/6371/events | https://github.com/huggingface/transformers/pull/6371 | 675,748,366 | MDExOlB1bGxSZXF1ZXN0NDY1MTg1ODU1 | 6,371 | the test now works again | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=h1) Report\n> Merging [#6371](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7&el=desc) will **increase** coverage by `0.37%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6371 +/- ##\n==========================================\n+ Coverage 79.34% 79.71% +0.37% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n+ Hits 21579 21680 +101 \n+ Misses 5617 5516 -101 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-11.37%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.51%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=footer). 
Last update [6e8a385...4d9d35c](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | `test_finetune_lr_shedulers` can now run after https://github.com/huggingface/transformers/pull/6358 was merged | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6371/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6371",
"html_url": "https://github.com/huggingface/transformers/pull/6371",
"diff_url": "https://github.com/huggingface/transformers/pull/6371.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6371.patch",
"merged_at": 1597042552000
} |
https://api.github.com/repos/huggingface/transformers/issues/6370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6370/comments | https://api.github.com/repos/huggingface/transformers/issues/6370/events | https://github.com/huggingface/transformers/issues/6370 | 675,741,824 | MDU6SXNzdWU2NzU3NDE4MjQ= | 6,370 | FastTokenizer not returning batch_size for offset_mapping for short texts | {
"login": "xbelda",
"id": 39069279,
"node_id": "MDQ6VXNlcjM5MDY5Mjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/39069279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xbelda",
"html_url": "https://github.com/xbelda",
"followers_url": "https://api.github.com/users/xbelda/followers",
"following_url": "https://api.github.com/users/xbelda/following{/other_user}",
"gists_url": "https://api.github.com/users/xbelda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xbelda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xbelda/subscriptions",
"organizations_url": "https://api.github.com/users/xbelda/orgs",
"repos_url": "https://api.github.com/users/xbelda/repos",
"events_url": "https://api.github.com/users/xbelda/events{/privacy}",
"received_events_url": "https://api.github.com/users/xbelda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"After inspecting the code, it looks that the cause can be found inside `tokenization_utils_base.py`.\r\n\r\nThen, in the method `convert_to_tensors` form `BatchEncoding` there are the following lines:\r\n```python3\r\n# Do the tensor conversion in batch\r\nfor key, value in self.items():\r\n try:\r\n if prepend_batch_axis:\r\n value = [value]\r\n\r\n tensor = as_tensor(value)\r\n\r\n # at-least2d\r\n if tensor.ndim > 2:\r\n tensor = tensor.squeeze(0)\r\n elif tensor.ndim < 2:\r\n tensor = tensor[None, :]\r\n```\r\n\r\nIn this case, right before the squeeze, the `offset_mapping` tensor is of shape [1, 512, 2], which becomes [512, 2] after being squeezed.\r\n\r\nThis would explain why it doesn't fail with longer sequences, since squeezing a tensor of shape [n, 512, 2] (n>1) leaves the tensor unaltered.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,603 | 1,603 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.10
- Python version: 3.8.2
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
# Who can help
tokenizers: @mfuntowicz
## Information
When working with `padding` and `truncation` on short texts (smaller than `max_len`),
the FastTokenizer will return the batch_size dimension if `return_tensors=None`.
However, when `return_tensors="pt"` or `return_tensors="np"` are enabled (I haven't tested it on Tensorflow), they **won't return the batch dimension**.
## To reproduce
Loading fast tokenizer:
```python3
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
```
Behavior on "short" texts without `return_tensors`:
```python3
out = tokenizer("test text",
padding='max_length',
truncation=True,
return_overflowing_tokens=True,
return_offsets_mapping=True,
)
# Convert to tensor outside the tokenizer
print(torch.tensor(out["offset_mapping"]).shape)
>>> torch.Size([1, 512, 2])
```
Behavior with `return_tensors`:
```python3
out = tokenizer("test text",
padding='max_length',
truncation=True,
return_overflowing_tokens=True,
return_offsets_mapping=True,
return_tensors="pt" # Similarly with "np"
)
print(out["offset_mapping"].shape)
>>> torch.Size([512, 2])
```
## Expected behavior
```python3
out = tokenizer("test text",
padding='max_length',
truncation=True,
return_overflowing_tokens=True,
return_offsets_mapping=True,
return_tensors="pt"
)
print(out["offset_mapping"].shape)
>>> torch.Size([1, 512, 2])
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6370/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6369/comments | https://api.github.com/repos/huggingface/transformers/issues/6369/events | https://github.com/huggingface/transformers/issues/6369 | 675,735,209 | MDU6SXNzdWU2NzU3MzUyMDk= | 6,369 | trainer/lightning_base: Arbitrary config updates through command line | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"see `argparse.remainder` https://stackoverflow.com/questions/22850332/getting-the-remaining-arguments-in-argparse/46250042#46250042\r\n\r\nh/t @stas00 \r\n\r\n",
"You could probably extract all the known args via `argparse`'s:\r\n```\r\nargs, unknown = parser.parse_known_args()\r\n```\r\nand then use another tool to parse `unknown` (which is just `argv` minus known args) e.g. could even use `fire` or write a core function to do that.\r\n\r\nsurely a cheat, but it would make your query work, while having `arparse` as the main solution still.",
"You could also take a look at https://github.com/huggingface/transformers/blob/155288f04ba9a5d0a0e4d5be4f6d4e808ad8cfff/src/transformers/hf_argparser.py#L128-L146",
"If I read the code correctly, it doesn't do anything with `remaining_args`. It either just returns them as is (`argv` list) or throws an error.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Fixed by @stas00 for both trainers, thanks!"
] | 1,596 | 1,602 | 1,602 | CONTRIBUTOR | null | this issue [https://github.com/huggingface/transformers/issues/6367], and a recent one to add dropout to the command line, as well as the usage of task_specific_params during finetuning, are all one-off solutions to address a larger problem. During finetuning/training, it is very difficult to arbitrarily set config attributes. For `examples/lightning_base.py`, you need to save a whole new config to json and put it in a directory, which is a fairly annoying method for changing hyperparameters, so we add lots of them, like `--dropout --attention_dropout --encoder_layerdrop --decoder_layerdrop` through `argparse.add_argument`.
It would be a better user experience if I could just pass any kwarg without editing the code.
This seems possible with the `fire` package. But I would prefer an `argparse` solution, as there is another issue open to delete the `fire` dependency. I also asked a similar question on
[stackoverflow](https://stackoverflow.com/questions/63329044/python-argparse-allow-unregistered-arguments) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6369/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6369/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6368/comments | https://api.github.com/repos/huggingface/transformers/issues/6368/events | https://github.com/huggingface/transformers/issues/6368 | 675,735,191 | MDU6SXNzdWU2NzU3MzUxOTE= | 6,368 | Can't load a saved tokenizer with AutoTokenizer.from_pretrained without saving Config as well | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@eladsegal I think that the `AutoTokenizer` requires the config file to determine what model to use. In https://huggingface.co/transformers/model_doc/auto.html it states that: \r\n\r\n> The from_pretrained() method takes care of returning the correct tokenizer class instance based on the model_type property of the config object, or when it’s missing, falling back to using pattern matching on the pretrained_model_name_or_path string.\r\n\r\nSo I think that if your model path variable includes the name of the model that it was using, it should be able to load the right tokenizer. If it doesn't it expects to have a config file.",
"@TarasPriadka When providing a path, a config file is required even if the model name is in the path (https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/tokenization_auto.py#L205). \r\nThe model name in the path is used only when an existing config file is missing the model_type property (https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/configuration_auto.py#L203-L212).\r\n",
"I bumped on that as well.\r\n\r\n~~I believe the issue is purely due to mismatch in filename convention AutoTokenizer throws an exception of './config.json' missing, while the file saved is called 'tokenizer_config.json'~~\r\n\r\nMaybe it is a different case - looks like when you want to instantiate BertTokenizer it just needs tokenizer_config.json but when you want to instantiate AutoTokenizer it requires config.json - the config of whole model.\r\n\r\nSo the simplest steps to reproduce are just:\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\nAutoTokenizer.from_pretrained(\"bert-base-cased\").save_pretrained(\".\")\r\nAutoTokenizer.from_pretrained(\".\") # throws exception\r\n```\r\n\r\nlooking at the source code - a workaround is to call\r\n```\r\nAutoTokenizer.from_pretrained(tokenizer_path, config=AutoConfig.from_pretrained(model_path))\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Here is another workaround by using directly the corresponding tokenizer class such as BertTokenizer.from_pretrained instead of AutoTokenizer.\r\nhttps://stackoverflow.com/questions/62472238/autotokenizer-from-pretrained-fails-to-load-locally-saved-pretrained-tokenizer/62664374#62664374",
"I had the same issue and I realized some wired things going on. I'm running IDLEX and Jupyter Note book, both on Windows 10. I installed my python system on \"D:\\WPy64-3740\". IDLEX can successfully loads pretrained model but Jupyter Notebook can not. But for some reason, it does load pretrained model when I load .py file with import directive.\r\n\r\n# Issue\r\n\r\nUsually, I directly launch IDEX.exe from the path above. In that case, it doesn't cause any problem. For example, some code like:\r\n\r\n```python\r\n# On IDLEX.exe\r\n>>> tokenizer = AutoTokenizer.from_pretrained('prajjwal1/bert-tiny')\r\n```\r\nworks fine. But when I use Jupyter Notebook, usually launch from the same directory, causes an Error. This is the part of the error message\r\n\r\n```python\r\n# On Jupyter Notebook.exe\r\ntokenizer = AutoTokenizer.from_pretrained('prajjwal1/bert-tiny')\r\n\r\n# Output of the cell\r\nCould not locate the tokenizer configuration file, will try to use the model config instead.\r\nloading configuration file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/config.json from cache at D:\\WPy64-3740\\settings/.cache\\huggingface\\transformers\\3cf34679007e9fe5d0acd644dcc1f4b26bec5cbc9612364f6da7262aed4ef7a4.a5a11219cf90aae61ff30e1658ccf2cb4aa84d6b6e947336556f887c9828dc6d\r\nModel config BertConfig {\r\n \"_name_or_path\": \"prajjwal1/bert-tiny\",\r\n ...\r\n \"transformers_version\": \"4.20.1\",\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nloading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/vocab.txt from cache at D:\\WPy64-3740\\settings/.cache\\huggingface\\transformers\\585ac1c3dedc6b808dd35e8770afafe10905d3e723a02617af749d39db780e09.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99\r\nloading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/tokenizer.json from cache at None\r\nloading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/added_tokens.json from 
cache at None\r\nloading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/special_tokens_map.json from cache at None\r\nloading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/tokenizer_config.json from cache at None\r\nloading configuration file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/config.json from cache at D:\\WPy64-3740\\settings/.cache\\huggingface\\transformers\\3cf34679007e9fe5d0acd644dcc1f4b26bec5cbc9612364f6da7262aed4ef7a4.a5a11219cf90aae61ff30e1658ccf2cb4aa84d6b6e947336556f887c9828dc6d\r\nModel config BertConfig {\r\n \"_name_or_path\": \"prajjwal1/bert-tiny\"\r\n ...\r\n \"type_vocab_size\": 2,\r\n \"use_cache\": true,\r\n \"vocab_size\": 30522\r\n}\r\n\r\nloading configuration file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/config.json from cache at D:\\WPy64-3740\\settings/.cache\\huggingface\\transformers\\3cf34679007e9fe5d0acd644dcc1f4b26bec5cbc9612364f6da7262aed4ef7a4.a5a11219cf90aae61ff30e1658ccf2cb4aa84d6b6e947336556f887c9828dc6d\r\nModel config BertConfig {\r\n \"_name_or_path\": \"prajjwal1/bert-tiny\",\r\n \"attention_probs_dropout_prob\": 0.1,\r\n ...\r\n```\r\n\r\nI thought maybe Notebook cause an error because it working on some different directory. So I checked the current working directory on both environment. Here is the code, I used for it.\r\n\r\n```python\r\nimport os\r\nos.getcwd()\r\n```\r\n\r\nAs the result, I confirmed both program working on the same directory (or folder, whatever). I also confirmed Python version on shell/Notebook and it was the same, too. By the way, the location of python.exe is \"D:\\WPy64-3740\\python-3.7.4.amd64\". Both IDEX and Notebook uses same python.exe....I suppose.\r\n\r\n# Wired behaviour\r\n\r\nThe funny thing about the issue is when I load .py file from Jupyter Notebook, it can load pretrained model. 
For example,\r\n\r\n```python\r\n# On Jupyter Notebook\r\n\r\n# load some module loads pretrained model.\r\n# This code will import symbol, \"tokenizer\", an instance of AutoTokenizer initialized with from_pretrained method.\r\nfrom transformer_fine_tune_2 import *\r\n\r\n\r\ntokenizer\r\n\r\n# Output of the console\r\nreTrainedTokenizerFast(name_or_path='prajjwal1/bert-tiny', vocab_size=30522, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'})\r\n```\r\n\r\nworks fine. I even trained my model in this way. So I suspect this could be a bug, somehow path related, or maybe one of those \"Windows things\" or something else.\r\n\r\nHope this information helps.\r\n\r\n\r\n\r\n"
] | 1,596 | 1,659 | 1,604 | CONTRIBUTOR | null | ### Environment info
- `transformers` version: master (https://github.com/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7)
### Who can help
tokenizers: @mfuntowicz
### Information
A tokenizer saved with .save_pretrained can be loaded with the class it was saved with, but not with AutoTokenizer:
```
from transformers import BertTokenizer, AutoTokenizer
BertTokenizer.from_pretrained("bert-base-cased").save_pretrained(".")
BertTokenizer.from_pretrained(".") # works
AutoTokenizer.from_pretrained(".") # throws exception
```
The error is:
```
Traceback (most recent call last):
File "/home/transformers/src/transformers/configuration_utils.py", line 333, in get_config_dict
local_files_only=local_files_only,
File "/home/transformers/src/transformers/file_utils.py", line 684, in cached_path
raise EnvironmentError("file {} not found".format(url_or_filename))
OSError: file ./config.json not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/transformers/src/transformers/tokenization_auto.py", line 205, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/transformers/src/transformers/configuration_auto.py", line 203, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/transformers/src/transformers/configuration_utils.py", line 346, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for '.'. Make sure that:
- '.' is a correct model identifier listed on 'https://huggingface.co/models'
- or '.' is the correct path to a directory containing a config.json file
```
If a configuration is saved as well, then loading with AutoTokenizer does work:
```
from transformers import BertTokenizer, BertConfig, AutoTokenizer
BertConfig.from_pretrained("bert-base-cased").save_pretrained(".")
BertTokenizer.from_pretrained("bert-base-cased").save_pretrained(".")
AutoTokenizer.from_pretrained(".") # works
```
### Expected behavior
I'd expect that loading a tokenizer with AutoTokenizer would require the same files as a dedicated tokenizer class (e.g. BertTokenizer) requires.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6368/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6368/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6367/comments | https://api.github.com/repos/huggingface/transformers/issues/6367/events | https://github.com/huggingface/transformers/issues/6367 | 675,730,195 | MDU6SXNzdWU2NzU3MzAxOTU= | 6,367 | [s2s] pass max_length to config through command line | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,596 | 1,600 | 1,600 | CONTRIBUTOR | null | Problem:
In summarization, ideal beam search params vary between finetuning datasets. If you are finetuning pegasus-large on xsum, you want config.max_length=56; if you are finetuning pegasus-large on cnn-dailymail, you want config.max_length=128.
### Solutions
- the command line arg should be called `max_generate_length`
- This could also be addressed through adding `task_specific_params` for every dataset. Then you could pass `--task summarize_xsum` to finetune.py and things would work. Kinda lame though. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6367/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6366/comments | https://api.github.com/repos/huggingface/transformers/issues/6366/events | https://github.com/huggingface/transformers/pull/6366 | 675,717,472 | MDExOlB1bGxSZXF1ZXN0NDY1MTY0MjU1 | 6,366 | [WIP] Lm loss feed forward chunking | {
"login": "Pradhy729",
"id": 49659913,
"node_id": "MDQ6VXNlcjQ5NjU5OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/49659913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pradhy729",
"html_url": "https://github.com/Pradhy729",
"followers_url": "https://api.github.com/users/Pradhy729/followers",
"following_url": "https://api.github.com/users/Pradhy729/following{/other_user}",
"gists_url": "https://api.github.com/users/Pradhy729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pradhy729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pradhy729/subscriptions",
"organizations_url": "https://api.github.com/users/Pradhy729/orgs",
"repos_url": "https://api.github.com/users/Pradhy729/repos",
"events_url": "https://api.github.com/users/Pradhy729/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pradhy729/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=h1) Report\n> Merging [#6366](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7&el=desc) will **decrease** coverage by `0.96%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6366 +/- ##\n==========================================\n- Coverage 79.34% 78.38% -0.97% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21579 21317 -262 \n- Misses 5617 5879 +262 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | 
`84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=footer). Last update [6e8a385...a258372](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@patrickvonplaten quick question here: Did you mean to chunk the projection onto vocab_size as in:\r\nhttps://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/modeling_bert.py#L527-L530\r\nor the transformation that happens before that:\r\nhttps://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/modeling_bert.py#L506-L510 \r\n",
"This one is a bit harder actually. I meant the both the calculation in https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/modeling_bert.py#L795 and https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/modeling_bert.py#L1015 should be put into one chunk function. Here a lot of memory can be saved because currently go from the `last_hidden_state` tensor of size `[batch_size, seq_len, hidden_size]` to a `[batch_size, seq_len, vocab_size]` logit tensor and then reduce it to `[1]` loss scalar. Note that `vocab_size` is much larger than `hidden_size` and often is the bottleneck of a model. We don't need to compute `[batch_size, seq_len, vocab_size]` though if we apply chunked \"loss\" calculation from `last_hidden_states` to `loss` directly. Here we could greatly reduce memory consumption. But definitely leave this PR for later, we need to more carefully think about possible design changes as the two lines in question (linked above) are in different spots in the code. When we have the other `feed_forward_chunking` implemented we can take a look at this again :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,603 | 1,603 | CONTRIBUTOR | null | The final word embedding layer in the LM loss calculation presents a bottleneck, since it operates over the entire time dimension - it can be chunked similarly to the feed forward layers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6366/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6366",
"html_url": "https://github.com/huggingface/transformers/pull/6366",
"diff_url": "https://github.com/huggingface/transformers/pull/6366.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6366.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6365/comments | https://api.github.com/repos/huggingface/transformers/issues/6365/events | https://github.com/huggingface/transformers/pull/6365 | 675,716,980 | MDExOlB1bGxSZXF1ZXN0NDY1MTYzODc0 | 6,365 | Feed forward chunking others | {
"login": "Pradhy729",
"id": 49659913,
"node_id": "MDQ6VXNlcjQ5NjU5OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/49659913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pradhy729",
"html_url": "https://github.com/Pradhy729",
"followers_url": "https://api.github.com/users/Pradhy729/followers",
"following_url": "https://api.github.com/users/Pradhy729/following{/other_user}",
"gists_url": "https://api.github.com/users/Pradhy729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pradhy729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pradhy729/subscriptions",
"organizations_url": "https://api.github.com/users/Pradhy729/orgs",
"repos_url": "https://api.github.com/users/Pradhy729/repos",
"events_url": "https://api.github.com/users/Pradhy729/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pradhy729/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=h1) Report\n> Merging [#6365](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fb7330b30ebfbb3f07b87203f0405ee09905eeda&el=desc) will **increase** coverage by `2.04%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6365 +/- ##\n==========================================\n+ Coverage 78.42% 80.47% +2.04% \n==========================================\n Files 156 156 \n Lines 28129 28152 +23 \n==========================================\n+ Hits 22061 22655 +594 \n+ Misses 6068 5497 -571 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <ø> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <ø> (+0.19%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.62% <100.00%> (+1.38%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `83.50% <100.00%> (+0.13%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <100.00%> (+1.65%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.02% <100.00%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `96.09% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.31% <100.00%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `83.42% <100.00%> (+0.11%)` | :arrow_up: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=footer). Last update [fb7330b...80c6b27](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"https://github.com/huggingface/transformers/pull/6024 is merged :-) Great work @Pradhy729! It would be a good idea to rebase this PR to current master so that you can easily leverage the tests that were added in https://github.com/huggingface/transformers/pull/6024 just by setting the flag `test_chunking=True` for all models you want to add here.",
"Yes - definitely will do. Was just waiting for the merge. Thanks for adding the tests.",
"@patrickvonplaten Feed forward chunking has been added for the following:\r\n1. Albert\r\n2. Distillbert\r\n3. Longformer\r\n4. XLNet\r\n5. XLM\r\n\r\nAlso, changed model signature to have callable as first positional argument.",
"Hi @patrickvonplaten --> Can you review and approve if this looks good?\r\n",
"Hey @Pradhy729 - this looks great! \r\n1) Can you add the docstrings for `chunk_size_feed_forward` as explained in the comment above and delete the corresponding config param in Reformer and the Reformer docstring (You can just cut & paste the Reformer docstring here)\r\n2) Can you please remove the `test_chunking=True` statements in the model specific test files -> I think it's only in test_modeling_bert.py actually.\r\n3) It would be awesome if you try to rebase the branch to master (`git fetch upstream master`, `git rebase upstream/master`).\r\nIf you have too many merge conflicts - then I'll do it :-) ",
"@patrickvonplaten \r\nDone. Please review and let me know if there's anything else.",
"LGTM! @Pradhy729 - great work!",
"Merging! Good job @Pradhy729 "
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | Adding feed forward chunking to other models. Based on #6024 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6365/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6365",
"html_url": "https://github.com/huggingface/transformers/pull/6365",
"diff_url": "https://github.com/huggingface/transformers/pull/6365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6365.patch",
"merged_at": 1597840271000
} |
https://api.github.com/repos/huggingface/transformers/issues/6364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6364/comments | https://api.github.com/repos/huggingface/transformers/issues/6364/events | https://github.com/huggingface/transformers/pull/6364 | 675,697,625 | MDExOlB1bGxSZXF1ZXN0NDY1MTUwNTQ1 | 6,364 | correct pl link in readme | {
"login": "rohitgr7",
"id": 30778939,
"node_id": "MDQ6VXNlcjMwNzc4OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/30778939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohitgr7",
"html_url": "https://github.com/rohitgr7",
"followers_url": "https://api.github.com/users/rohitgr7/followers",
"following_url": "https://api.github.com/users/rohitgr7/following{/other_user}",
"gists_url": "https://api.github.com/users/rohitgr7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohitgr7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohitgr7/subscriptions",
"organizations_url": "https://api.github.com/users/rohitgr7/orgs",
"repos_url": "https://api.github.com/users/rohitgr7/repos",
"events_url": "https://api.github.com/users/rohitgr7/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohitgr7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=h1) Report\n> Merging [#6364](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7&el=desc) will **decrease** coverage by `0.98%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6364 +/- ##\n==========================================\n- Coverage 79.34% 78.36% -0.99% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21579 21312 -267 \n- Misses 5617 5884 +267 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> 
(+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=footer). Last update [6e8a385...86f91f3](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6364/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6364",
"html_url": "https://github.com/huggingface/transformers/pull/6364",
"diff_url": "https://github.com/huggingface/transformers/pull/6364.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6364.patch",
"merged_at": 1597043327000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6363/comments | https://api.github.com/repos/huggingface/transformers/issues/6363/events | https://github.com/huggingface/transformers/pull/6363 | 675,646,461 | MDExOlB1bGxSZXF1ZXN0NDY1MTE0MDkz | 6,363 | [s2s] add BartTranslationDistiller for distilling mBART | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6363?src=pr&el=h1) Report\n> Merging [#6363](https://codecov.io/gh/huggingface/transformers/pull/6363?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/be1520d3a3c09d729649c49fa3163bd938b6a238&el=desc) will **decrease** coverage by `1.55%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6363?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6363 +/- ##\n==========================================\n- Coverage 79.93% 78.37% -1.56% \n==========================================\n Files 153 148 -5 \n Lines 27888 27196 -692 \n==========================================\n- Hits 22293 21316 -977 \n- Misses 5595 5880 +285 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6363?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `41.66% <0.00%> (-40.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.17% <0.00%> (-7.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `81.74% <0.00%> (-6.61%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (-1.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <0.00%> (-1.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `95.48% <0.00%> (-1.10%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <0.00%> (-0.49%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.66% <0.00%> (-0.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (-0.18%)` | :arrow_down: |\n| ... and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6363/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6363?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6363?src=pr&el=footer). Last update [be1520d...0718179](https://codecov.io/gh/huggingface/transformers/pull/6363?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | The new class `BartTranslationDistiller` uses the same distillation method as `SummarizationDistiller`, but computes BLEU scores instead of ROUGE scores. It also accepts `--src_lang` and `--tgt_lang` arguments from the command line.
There is one strong checkpoint already posted at `sshleifer/distillmbart-12-6/`. I will post more in the coming days. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6363",
"html_url": "https://github.com/huggingface/transformers/pull/6363",
"diff_url": "https://github.com/huggingface/transformers/pull/6363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6363.patch",
"merged_at": 1597246864000
} |
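Since the PR body above swaps ROUGE for BLEU as the evaluation metric, here is a minimal sketch of the clipped (modified) n-gram precision that BLEU is built from. This is illustrative only; it is not the BLEU implementation the `examples/seq2seq` scripts actually call, and it omits the brevity penalty and the geometric mean over n-gram orders.

```python
from collections import Counter

def clipped_ngram_precision(candidate, reference, n=1):
    """Modified n-gram precision: candidate n-gram counts are clipped to
    their counts in the reference before dividing by the candidate total."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0
```

Full BLEU combines these precisions for n = 1..4 with a brevity penalty; libraries such as sacrebleu additionally handle tokenization and corpus-level aggregation.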
https://api.github.com/repos/huggingface/transformers/issues/6362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6362/comments | https://api.github.com/repos/huggingface/transformers/issues/6362/events | https://github.com/huggingface/transformers/issues/6362 | 675,635,648 | MDU6SXNzdWU2NzU2MzU2NDg= | 6,362 | [TFTrainer] Error "iterating over `tf.Tensor` is not allowed" | {
"login": "EibrielInv",
"id": 172656,
"node_id": "MDQ6VXNlcjE3MjY1Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/172656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EibrielInv",
"html_url": "https://github.com/EibrielInv",
"followers_url": "https://api.github.com/users/EibrielInv/followers",
"following_url": "https://api.github.com/users/EibrielInv/following{/other_user}",
"gists_url": "https://api.github.com/users/EibrielInv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EibrielInv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EibrielInv/subscriptions",
"organizations_url": "https://api.github.com/users/EibrielInv/orgs",
"repos_url": "https://api.github.com/users/EibrielInv/repos",
"events_url": "https://api.github.com/users/EibrielInv/events{/privacy}",
"received_events_url": "https://api.github.com/users/EibrielInv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The following bug on Tensorflow could be related: https://github.com/tensorflow/tensorflow/issues/42119",
"Was just a Dataset setup issue. The correct setup for the Dataset can be seen here https://github.com/huggingface/transformers/issues/6551"
] | 1,596 | 1,598 | 1,598 | NONE | null | ## Environment info
- `transformers` version: 3.0.2 (from pip)
- Platform: Linux-4.15.0-91-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.6
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.0 (True) (Same error on TF2.2 and TF2.1)
- Using GPU in script?: Yes - GeForce GTX 1080 Ti
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer: @sgugger tensorflow: @jplu
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install Tensorflow 2.3.0, Transformers 3.0.2
1. Run the following code:
```python3
from transformers import TFGPT2LMHeadModel, TFTrainer, TFTrainingArguments
import tensorflow as tf
tfds_train_dataset = tf.data.Dataset.from_tensor_slices(
tf.random.uniform([4000, 1024], minval=1, maxval=10, dtype=tf.int32))
model = TFGPT2LMHeadModel.from_pretrained("gpt2")
training_args = TFTrainingArguments(
output_dir='./results',
num_train_epochs=3,
per_device_train_batch_size=16,
per_device_eval_batch_size=64,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
)
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=tfds_train_dataset,
)
trainer.train()
```
2. Results in the following output + error:
```
2020-08-09 01:41:28.331697: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-08-09 01:41:30.461375: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-08-09 01:41:30.466239: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-08-09 01:41:30.466271: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-08-09 01:41:30.468575: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-08-09 01:41:30.470629: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-08-09 01:41:30.471013: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-08-09 01:41:30.473522: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-08-09 01:41:30.474947: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-08-09 01:41:30.481193: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-08-09 01:41:30.482710: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-08-09 01:41:30.483080: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-08-09 01:41:30.512602: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3210790000 Hz
2020-08-09 01:41:30.514335: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4c678f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-09 01:41:30.514408: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-08-09 01:41:30.648534: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4c92000 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-08-09 01:41:30.648597: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2020-08-09 01:41:30.650365: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-08-09 01:41:30.650446: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-08-09 01:41:30.650523: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-08-09 01:41:30.650586: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-08-09 01:41:30.650646: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-08-09 01:41:30.650708: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-08-09 01:41:30.650767: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-08-09 01:41:30.650829: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-08-09 01:41:30.653179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-08-09 01:41:30.653232: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-08-09 01:41:31.392168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-08-09 01:41:31.392212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-08-09 01:41:31.392225: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-08-09 01:41:31.393566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7389 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-08-09 01:41:34.003855: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-08-09 01:41:34.145974: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
All model checkpoint weights were used when initializing TFGPT2LMHeadModel.
All the weights of TFGPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.
Traceback (most recent call last):
File "gpt2-training_bug.py", line 26, in <module>
trainer.train()
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/transformers/trainer_tf.py", line 412, in train
for step, training_loss in enumerate(self._training_steps(train_ds, optimizer)):
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/transformers/trainer_tf.py", line 459, in _training_steps
for i, loss in enumerate(self._accumulate_next_gradients(ds)):
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/transformers/trainer_tf.py", line 492, in _accumulate_next_gradients
yield _accumulate_next()
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 823, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 697, in _initialize
*args, **kwds))
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3075, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
raise e.ag_error_metadata.to_exception(e)
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: in user code:
/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/transformers/trainer_tf.py:486 _accumulate_next *
per_replica_features, per_replica_labels = next(iterator)
/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:503 __iter__
self._disallow_iteration()
/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:496 _disallow_iteration
self._disallow_when_autograph_enabled("iterating over `tf.Tensor`")
/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:474 _disallow_when_autograph_enabled
" indicate you are trying to use an unsupported feature.".format(task))
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```
## Expected behavior
Start Training
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6362/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6361/comments | https://api.github.com/repos/huggingface/transformers/issues/6361/events | https://github.com/huggingface/transformers/pull/6361 | 675,632,004 | MDExOlB1bGxSZXF1ZXN0NDY1MTAzOTIx | 6,361 | lr_schedulers: add get_polynomial_decay_schedule_with_warmup | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6361?src=pr&el=h1) Report\n> Merging [#6361](https://codecov.io/gh/huggingface/transformers/pull/6361?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bd0eab351a338175053998ddfc059f1cb6424ab4&el=desc) will **increase** coverage by `0.48%`.\n> The diff coverage is `81.05%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6361?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6361 +/- ##\n==========================================\n+ Coverage 79.29% 79.77% +0.48% \n==========================================\n Files 146 148 +2 \n Lines 26684 27214 +530 \n==========================================\n+ Hits 21158 21710 +552 \n+ Misses 5526 5504 -22 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6361?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.33% <0.00%> (-0.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.75% <ø> (ø)` | |\n| 
[src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.17% <0.00%> (-1.64%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `100.00% <ø> (+14.28%)` | :arrow_up: |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.37% <14.81%> (-0.08%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.02% <32.25%> (-1.04%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <39.13%> (+0.06%)` | :arrow_up: |\n| ... and [45 more](https://codecov.io/gh/huggingface/transformers/pull/6361/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6361?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6361?src=pr&el=footer). Last update [bd0eab3...6e0b1dc](https://codecov.io/gh/huggingface/transformers/pull/6361?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"All done, just need to decide whether to use the default of 1.0 for `power` as in fairseq, or 2.0 (or another value) as it actually does something polynomial.",
"I run fairseq as recommended [here](https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md#finetune-on-en-ro) and no, it is using the default power=1.0 at runtime.\r\n\r\nI double checked their code, it doesn't get overriden anywhere:\r\n```\r\nfairseq/optim/lr_scheduler/polynomial_decay_schedule.py: self.power = args.power\r\nfairseq/optim/lr_scheduler/polynomial_decay_schedule.py: parser.add_argument('--power', default=1.0, type=float)\r\nfairseq/optim/lr_scheduler/polynomial_decay_schedule.py: print(\"POWER:\", self.power)\r\nfairseq/optim/lr_scheduler/polynomial_decay_schedule.py: lr = lr_range * pct_remaining ** (self.power) + self.end_learning_rate\r\n```\r\n\r\nI will open an issue there and report back. https://github.com/pytorch/fairseq/issues/2466\r\n",
"👍 "
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | This PR adds a new scheduler plus a test; the code is based on an amalgamation of a few different implementations.
I'm not sure it's 100% correct - needs more experimenting - but feedback is welcome.
For reference, here are 3 different implementations of this scheduler:
1. https://github.com/pyprob/pyprob/blob/master/pyprob/nn/inference_network.py#L357
2. https://github.com/cmpark0126/pytorch-polynomial-lr-decay/blob/master/torch_poly_lr_decay/torch_poly_lr_decay.py#L5
3. https://github.com/pytorch/fairseq/blob/master/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py - this one has an extra feature `--force-anneal`
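For illustration only (not the code in this PR), the fairseq formula quoted in the comments below — linear warmup, then `lr_range * pct_remaining ** power + end_lr` — can be sketched in plain Python; the argument names here are illustrative, not necessarily those of the final transformers API:

```python
def polynomial_decay_lr(step, lr_init, num_warmup_steps, num_training_steps,
                        lr_end=1e-7, power=1.0):
    """Learning rate at `step`: linear warmup to lr_init, then polynomial decay to lr_end."""
    if step < num_warmup_steps:
        # linear warmup from 0 to lr_init
        return lr_init * step / max(1, num_warmup_steps)
    if step >= num_training_steps:
        return lr_end
    lr_range = lr_init - lr_end
    decay_steps = num_training_steps - num_warmup_steps
    pct_remaining = 1.0 - (step - num_warmup_steps) / decay_steps
    return lr_range * pct_remaining ** power + lr_end
```

With `power=1.0` (the fairseq default) this reduces to linear decay; `power=2.0` decays quadratically.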
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6361/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6361",
"html_url": "https://github.com/huggingface/transformers/pull/6361",
"diff_url": "https://github.com/huggingface/transformers/pull/6361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6361.patch",
"merged_at": 1597183002000
} |
https://api.github.com/repos/huggingface/transformers/issues/6360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6360/comments | https://api.github.com/repos/huggingface/transformers/issues/6360/events | https://github.com/huggingface/transformers/issues/6360 | 675,624,965 | MDU6SXNzdWU2NzU2MjQ5NjU= | 6,360 | Bug in squad example with XLNet | {
"login": "zhiqihuang",
"id": 11265691,
"node_id": "MDQ6VXNlcjExMjY1Njkx",
"avatar_url": "https://avatars.githubusercontent.com/u/11265691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhiqihuang",
"html_url": "https://github.com/zhiqihuang",
"followers_url": "https://api.github.com/users/zhiqihuang/followers",
"following_url": "https://api.github.com/users/zhiqihuang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhiqihuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhiqihuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhiqihuang/subscriptions",
"organizations_url": "https://api.github.com/users/zhiqihuang/orgs",
"repos_url": "https://api.github.com/users/zhiqihuang/repos",
"events_url": "https://api.github.com/users/zhiqihuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhiqihuang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"see also #3535",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,604 | 1,604 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-957.21.2.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
examples/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
XLNet
The problem arises when using:
the official example scripts: (give details below)
The tasks I am working on is:
an official GLUE/SQuAD task: (give the name)
## To reproduce
Steps to reproduce the behavior:
1. Run `run_squad.py` with `xlnet` as the model type
2. I think that because `AutoModelForQuestionAnswering` maps xlnet to `XLNetForQuestionAnsweringSimple`, the inputs are wrong: `XLNetForQuestionAnsweringSimple` does not accept `cls_index`, so passing it throws an error
3. https://github.com/huggingface/transformers/blob/d9149f00d1a4650bafa7e1cd73e10398193c852c/examples/question-answering/run_squad.py#L194
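The failure mode in step 2 can be sketched without transformers installed — the function names below are hypothetical stand-ins, only the signatures matter (the "Simple" head lacks `cls_index`):

```python
# Hypothetical stand-ins for the two QA heads (not the real transformers classes).
def xlnet_qa_forward(input_ids, attention_mask=None, start_positions=None,
                     end_positions=None, cls_index=None, p_mask=None):
    return "ok"

def xlnet_qa_simple_forward(input_ids, attention_mask=None,
                            start_positions=None, end_positions=None):
    return "ok"

inputs = {"input_ids": [0, 1, 2], "cls_index": 0}  # run_squad.py adds cls_index for xlnet
print(xlnet_qa_forward(**inputs))            # the full head accepts it
try:
    xlnet_qa_simple_forward(**inputs)        # the Simple head does not
except TypeError as err:
    print("raises as in the report:", err)
```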
## Expected behavior
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6359/comments | https://api.github.com/repos/huggingface/transformers/issues/6359/events | https://github.com/huggingface/transformers/pull/6359 | 675,621,686 | MDExOlB1bGxSZXF1ZXN0NDY1MDk2OTgw | 6,359 | Mult rouge by 100: standard units | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have no knowledge of ROUGE and why this would be necessary, so probably not the best person to review :-)"
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6359/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6359",
"html_url": "https://github.com/huggingface/transformers/pull/6359",
"diff_url": "https://github.com/huggingface/transformers/pull/6359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6359.patch",
"merged_at": 1597335355000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6358/comments | https://api.github.com/repos/huggingface/transformers/issues/6358/events | https://github.com/huggingface/transformers/pull/6358 | 675,606,681 | MDExOlB1bGxSZXF1ZXN0NDY1MDg3MDYw | 6,358 | [s2s] fix --gpus clarg collision | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6358?src=pr&el=h1) Report\n> Merging [#6358](https://codecov.io/gh/huggingface/transformers/pull/6358?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1aec991643a6fec0e7d504626fc68347fe93b658&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6358?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6358 +/- ##\n==========================================\n+ Coverage 78.20% 78.38% +0.17% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n+ Hits 21269 21317 +48 \n+ Misses 5927 5879 -48 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6358?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6358/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6358/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6358/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+1.11%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6358/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6358/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+9.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6358/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6358/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6358?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6358?src=pr&el=footer). Last update [1aec991...b466e25](https://codecov.io/gh/huggingface/transformers/pull/6358?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is the issue I opened about it: https://github.com/huggingface/transformers/issues/6310\r\n\r\nit's more than just `--gpus`",
"which other ones besides `--gpus`?",
"Anything else that is defined both in `lightening_base.add_generic_args` and PL's `pl.Trainer.add_argparse_args(parser)`, if both get called.\r\n\r\nWith your PR nothing collides at the moment. \r\n\r\nIf we go into the direction of each module (and the base) defining its own args, most likely `finetune.py`needs to do the same and not use `pl.Trainer.add_argparse_args(parser)`.\r\n\r\nOn the other hand, copying the same common args to every module is less than optimal. If transformers support `--gpus`, it shouldn't be too difficult to make all examples support it - or fail it's passed and it can't support it. then these common args can go into `lightening_base` and not be redefined by each module.\r\n\r\nAdditionally, we can make any of these args optional like it was done recently with https://github.com/huggingface/transformers/pull/6149, so if the arg is not there, it will not fail if the example doesn't support it.",
"I don't understand exactly what you're proposing I don't think. This is just meant to fix a bug.\r\nI agree that the current setup where only finetune.py uses `Trainer.from_argparse_args` is suboptimal, but I don't really want to mess with it since it's working and our test coverage isn't good enough to know if we've broken things.",
"I'm trying to communicate that currently adding new args is difficult because they are scattered in various places. It's not easy to tell when to put them in `lightning_base`, and when inside an example class and the issue https://github.com/huggingface/transformers/issues/6310 points to further collision with `pl.Trainer.add_argparse_args(parser)` use in `finetune.py`.\r\n\r\nThis PR duplicated a cl arg `--gpus` that ideally should be registered only once in `lightning_base`, and not repeated in every example, IMHO. You had to do it because `finetune.py` does things differently than the rest of examples and so it can't use `lightening_base` normally. And it's not over since other examples will want `--gpus` too.\r\n\r\nReplacing `pl.Trainer.add_argparse_args(parser)` in `finetune.py` with the approach all other examples use will quickly uncover any missing cl args that it needs to register, and a quick grep will show them all:\r\n```\r\nperl -lne 'm|hparams\\.(\\w+)| && print $1' finetune.py | sort | uniq\r\naccumulate_grad_batches\r\ndata_dir\r\neval_batch_size\r\nfreeze_embeds\r\nfreeze_encoder\r\ngit_sha\r\ngpus\r\nlabel_smoothing\r\nmax_epochs\r\nmax_source_length\r\nmax_target_length\r\nn_test\r\nn_train\r\nnum_workers\r\nn_val\r\noutput_dir\r\npkl\r\nsortish_sampler\r\nsrc_lang\r\ntest_checkpoint\r\ntest_max_target_length\r\ntgt_lang\r\ntrain_batch_size\r\nval_max_target_length\r\nwarmup_steps\r\n```\r\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | ### Problem
`finetune.py` adds all the default PL (PyTorch Lightning) args with this line
```
parser = pl.Trainer.add_argparse_args(parser)
```
and all the generic args from `add_generic_args`.
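The collision reproduces with plain `argparse` (a minimal sketch; `--gpus` stands in for any flag registered by both helpers):

```python
import argparse

parser = argparse.ArgumentParser()
# what add_generic_args used to register:
parser.add_argument("--gpus", type=int, default=0)
try:
    # pl.Trainer.add_argparse_args(parser) effectively registers it again:
    parser.add_argument("--gpus", type=int, default=0)
except argparse.ArgumentError as err:
    print("collision:", err)
```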
### Solution
This moves the overlapping arg from lightning_base.py to the 2 pl examples that need it.
CC @stas00 @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6358/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6358",
"html_url": "https://github.com/huggingface/transformers/pull/6358",
"diff_url": "https://github.com/huggingface/transformers/pull/6358.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6358.patch",
"merged_at": 1596937898000
} |
https://api.github.com/repos/huggingface/transformers/issues/6357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6357/comments | https://api.github.com/repos/huggingface/transformers/issues/6357/events | https://github.com/huggingface/transformers/pull/6357 | 675,602,098 | MDExOlB1bGxSZXF1ZXN0NDY1MDgzOTQ2 | 6,357 | Create Model Card File | {
"login": "pranavpsv",
"id": 30323565,
"node_id": "MDQ6VXNlcjMwMzIzNTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/30323565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranavpsv",
"html_url": "https://github.com/pranavpsv",
"followers_url": "https://api.github.com/users/pranavpsv/followers",
"following_url": "https://api.github.com/users/pranavpsv/following{/other_user}",
"gists_url": "https://api.github.com/users/pranavpsv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranavpsv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranavpsv/subscriptions",
"organizations_url": "https://api.github.com/users/pranavpsv/orgs",
"repos_url": "https://api.github.com/users/pranavpsv/repos",
"events_url": "https://api.github.com/users/pranavpsv/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranavpsv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6357?src=pr&el=h1) Report\n> Merging [#6357](https://codecov.io/gh/huggingface/transformers/pull/6357?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1aec991643a6fec0e7d504626fc68347fe93b658&el=desc) will **increase** coverage by `1.38%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6357?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6357 +/- ##\n==========================================\n+ Coverage 78.20% 79.59% +1.38% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n+ Hits 21269 21646 +377 \n+ Misses 5927 5550 -377 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6357?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6357/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6357/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6357/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6357/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+1.11%)` | :arrow_up: |\n| 
[src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6357/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6357/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+9.27%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6357/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6357/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `94.63% <0.00%> (+70.08%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6357?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6357?src=pr&el=footer). Last update [1aec991...712b725](https://codecov.io/gh/huggingface/transformers/pull/6357?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6357/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6357",
"html_url": "https://github.com/huggingface/transformers/pull/6357",
"diff_url": "https://github.com/huggingface/transformers/pull/6357.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6357.patch",
"merged_at": 1597156575000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6356/comments | https://api.github.com/repos/huggingface/transformers/issues/6356/events | https://github.com/huggingface/transformers/pull/6356 | 675,598,624 | MDExOlB1bGxSZXF1ZXN0NDY1MDgxNjMz | 6,356 | Create Model Card | {
"login": "pranavpsv",
"id": 30323565,
"node_id": "MDQ6VXNlcjMwMzIzNTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/30323565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranavpsv",
"html_url": "https://github.com/pranavpsv",
"followers_url": "https://api.github.com/users/pranavpsv/followers",
"following_url": "https://api.github.com/users/pranavpsv/following{/other_user}",
"gists_url": "https://api.github.com/users/pranavpsv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranavpsv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranavpsv/subscriptions",
"organizations_url": "https://api.github.com/users/pranavpsv/orgs",
"repos_url": "https://api.github.com/users/pranavpsv/repos",
"events_url": "https://api.github.com/users/pranavpsv/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranavpsv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6356/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6356",
"html_url": "https://github.com/huggingface/transformers/pull/6356",
"diff_url": "https://github.com/huggingface/transformers/pull/6356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6356.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6355/comments | https://api.github.com/repos/huggingface/transformers/issues/6355/events | https://github.com/huggingface/transformers/pull/6355 | 675,597,763 | MDExOlB1bGxSZXF1ZXN0NDY1MDgxMDA1 | 6,355 | Create Model Card File | {
"login": "pranavpsv",
"id": 30323565,
"node_id": "MDQ6VXNlcjMwMzIzNTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/30323565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranavpsv",
"html_url": "https://github.com/pranavpsv",
"followers_url": "https://api.github.com/users/pranavpsv/followers",
"following_url": "https://api.github.com/users/pranavpsv/following{/other_user}",
"gists_url": "https://api.github.com/users/pranavpsv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranavpsv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranavpsv/subscriptions",
"organizations_url": "https://api.github.com/users/pranavpsv/orgs",
"repos_url": "https://api.github.com/users/pranavpsv/repos",
"events_url": "https://api.github.com/users/pranavpsv/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranavpsv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6355/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6355",
"html_url": "https://github.com/huggingface/transformers/pull/6355",
"diff_url": "https://github.com/huggingface/transformers/pull/6355.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6355.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6354/comments | https://api.github.com/repos/huggingface/transformers/issues/6354/events | https://github.com/huggingface/transformers/issues/6354 | 675,577,174 | MDU6SXNzdWU2NzU1NzcxNzQ= | 6,354 | GPU memory consumption increases while training | {
"login": "sangnguyen7",
"id": 4648887,
"node_id": "MDQ6VXNlcjQ2NDg4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4648887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sangnguyen7",
"html_url": "https://github.com/sangnguyen7",
"followers_url": "https://api.github.com/users/sangnguyen7/followers",
"following_url": "https://api.github.com/users/sangnguyen7/following{/other_user}",
"gists_url": "https://api.github.com/users/sangnguyen7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sangnguyen7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sangnguyen7/subscriptions",
"organizations_url": "https://api.github.com/users/sangnguyen7/orgs",
"repos_url": "https://api.github.com/users/sangnguyen7/repos",
"events_url": "https://api.github.com/users/sangnguyen7/events{/privacy}",
"received_events_url": "https://api.github.com/users/sangnguyen7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @sangnguyen7, \r\n\r\nIs there a reason to rerun these steps:\r\n\r\n```python\r\nmodel.train()\r\nmodel(**encode)\r\n!nvidia-smi\r\n```\r\nover and over again. I think what might happen here is that pytorch is saving more and more activations of the forward pass in each node of the model and thus will run out of memory eventually. Not sure why you would have do re-run the above steps again and again though.",
"Hey @patrickvonplaten, thanks for your response and sorry for late reply. \r\n\r\nYour point might be the case. However, if that is case then it should not be affected by the batch size right? Because if I understood correctly , activation functions are only saved on parameters/weights of the model and they are fixed on each model.\r\n```\r\nNot sure why you would have do re-run the above steps again and again though.\r\n```\r\nThe reason why I'm doing this because I want to mimic the training step on the Trainer class to debug which causing the run out of memory... not sure that I missing anything... ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,604 | 1,604 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@LysandreJik @sgugger
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): XLM Multi-lingual
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Please see the steps below to reproduce
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Importing Python Libraries and preparing the environment
```python
!pip install git+https://github.com/huggingface/transformers
from transformers import (
AutoTokenizer,
AutoConfig,
AutoModelForSequenceClassification
)
from torch import cuda
device = 'cuda' if cuda.is_available() else 'cpu'
```
2. Loading a pretrained model "xlm-mlm-tlm-xnli15-1024"
```python
MODEL_NAME_OR_PATH = 'xlm-mlm-tlm-xnli15-1024'
CACHE_DIR='cache'
config = AutoConfig.from_pretrained(
MODEL_NAME_OR_PATH,
num_labels=7,
cache_dir=CACHE_DIR,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
cache_dir=CACHE_DIR,
)
model = AutoModelForSequenceClassification.from_pretrained(
MODEL_NAME_OR_PATH,
from_tf=bool(".ckpt" in MODEL_NAME_OR_PATH),
config=config,
cache_dir=CACHE_DIR
)
```
3. Check GPU usage
```
!nvidia-smi
```
4. Moving the model to CUDA
```
model.to(device)
```
then check GPU usage again
```
!nvidia-smi
```
5. Creating test inputs
```python
texts = [
"aloe vera , wassernabelkrautextrakt , ackerschachtelhalm extrakt , geranium extract , dandelion extract , natriummethyl two sulfolaurate dinatrium two sulfolaurate , sodium cocoyl isethionat , cocamidopropylbetain , cocamidopropylhydroxysultain , kokosglucoside , natrium chlorid , glyceryl oleat , natriumenzoat , guar hydroxypropyltrimonium chloride , tetrasodium glutamat diacetat , decyl glucoside , sodium levulinate , hydroxamsäure , sodium pca , caprylyl glycol , zitronensäure , als koscher zertifizierte pflanzliches glycerin , eukalyptusöl , pfefferminzöl , zitronengrassöl . zertifiziert als organisch . wir verwenden nur die besten natürlichen zutaten . wenn möglich , verwenden wir onezerozero percent zertifizierte organische zutaten und niemals : petrochemikalien , sulfate , parabene , phthalate oder synthetische duftstoffe oder farben , tea , dea , glycol , silikon oder pegs . nur an menschen getested . in anderen worten : wir stellen nur absolut reine produkte her und garantieren mit onezerozero percent sicherheit , dass sie ihrem körper keine chemikalien zuführen .",
"was es bewirkt das waschgel auf kokosnussbasis entfernt überschüssiges hautfett , während das darin enthaltene aloe vera gel die haut erneuert . das gesichtspflege gel für eine tiefenwirksame porenreinigung . "
"stimmungsaufhellendes orangenöl für die massage ( kein ätherisches öl für duftlampen ) . ohne paraffin ohne mineralöl , ohne parabene , ohne konservierungsmittel , selbstverständlich ohne tierversuche , vegan",
"onezerozero percent natives kaltgepresstes biomandelöl aus one . kaltpressung . sanfte und schonende mechanische verarbeitung in deutschland . ",
"aloe vera , wassernabelkrautextrakt , ackerschachtelhalm extrakt , geranium extract , dandelion extract , natriummethyl two sulfolaurate dinatrium two sulfolaurate , sodium cocoyl isethionat , cocamidopropylbetain , cocamidopropylhydroxysultain , kokosglucoside , natrium chlorid , glyceryl oleat , natriumenzoat , guar hydroxypropyltrimonium chloride , tetrasodium glutamat diacetat , decyl glucoside , sodium levulinate , hydroxamsäure , sodium pca , caprylyl glycol , zitronensäure , als koscher zertifizierte pflanzliches glycerin , eukalyptusöl , pfefferminzöl , zitronengrassöl . zertifiziert als organisch . wir verwenden nur die besten natürlichen zutaten . wenn möglich , verwenden wir onezerozero percent zertifizierte organische zutaten und niemals : petrochemikalien , sulfate , parabene , phthalate oder synthetische duftstoffe oder farben , tea , dea , glycol , silikon oder pegs . nur an menschen getested . in anderen worten : wir stellen nur absolut reine produkte her und garantieren mit onezerozero percent sicherheit , dass sie ihrem körper keine chemikalien zuführen .",
"was es bewirkt das waschgel auf kokosnussbasis entfernt überschüssiges hautfett , während das darin enthaltene aloe vera gel die haut erneuert . das gesichtspflege gel für eine tiefenwirksame porenreinigung . "
"stimmungsaufhellendes orangenöl für die massage ( kein ätherisches öl für duftlampen ) . ohne paraffin ohne mineralöl , ohne parabene , ohne konservierungsmittel , selbstverständlich ohne tierversuche , vegan",
"onezerozero percent natives kaltgepresstes biomandelöl aus one . kaltpressung . sanfte und schonende mechanische verarbeitung in deutschland . ",
]
encode = tokenizer(texts, padding='max_length', max_length=200, truncation=True, return_tensors='pt')
for k in encode:
encode[k] = encode[k].to(device)
```
6. Re-run the steps below to see that the GPU usage increases every time they run
```python
model.train()
model(**encode)
!nvidia-smi
```
Eventually got the following error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-242-decae3a1d2bf> in <module>()
1 model.train()
----> 2 model(**encode)
8 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1674 ret = torch.addmm(bias, input, weight.t())
1675 else:
-> 1676 output = input.matmul(weight.t())
1677 if bias is not None:
1678 output += bias
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.17 GiB total capacity; 10.10 GiB already allocated; 13.81 MiB free; 10.74 GiB reserved in total by PyTorch)
```
However, if you modify the code in step 6 as follows:
```
model.train()
output = model(**encode)
print(output)
del output
!nvidia-smi
```
The GPU usage will stay stable and identical across runs.
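The stabilization seen with `del output` matches how autograd keeps saved activations alive exactly as long as the returned output is referenced (as a comment on this issue also explains). A torch-free sketch of that lifetime relationship — `Output` and `saved_activations` are illustrative stand-ins for the returned tensor and its retained graph buffers, not real PyTorch objects:

```python
import gc
import weakref


class Output:
    """Stand-in for the tensor returned by model(**encode); in PyTorch it
    keeps the whole autograd graph (saved activations) alive."""


saved_activations = weakref.WeakSet()


def forward():
    out = Output()
    saved_activations.add(out)  # buffers live only as long as `out` does
    return out


out = forward()
print(len(saved_activations))  # 1: graph retained while `out` is referenced

del out
gc.collect()
print(len(saved_activations))  # 0: dropping the reference frees the buffers
```

In the real model, the same effect comes from either deleting the output, wrapping the forward pass in `torch.no_grad()`, or completing a `backward()` step so the graph is released.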
I have been facing this issue when using batch_size >= 16 with the Trainer class
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The GPU usage should stay the same for every run, so that we can run a much bigger batch size.
Right now, I can only use per_device_batch_size <= 12 with the Trainer class.
Looking forward to learning from you and thank you so much!
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6354/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6353/comments | https://api.github.com/repos/huggingface/transformers/issues/6353/events | https://github.com/huggingface/transformers/issues/6353 | 675,569,125 | MDU6SXNzdWU2NzU1NjkxMjU= | 6,353 | BartModel decodes sequence of incorrect length when decoder_input_ids is specified / Output shape mismatch due to when `use_cache` True/False | {
"login": "xhluca",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhluca",
"html_url": "https://github.com/xhluca",
"followers_url": "https://api.github.com/users/xhluca/followers",
"following_url": "https://api.github.com/users/xhluca/following{/other_user}",
"gists_url": "https://api.github.com/users/xhluca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhluca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhluca/subscriptions",
"organizations_url": "https://api.github.com/users/xhluca/orgs",
"repos_url": "https://api.github.com/users/xhluca/repos",
"events_url": "https://api.github.com/users/xhluca/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhluca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patrickvonplaten Actually, going over [the source code ](https://huggingface.co/transformers/_modules/transformers/modeling_bart.html#BartModel), I found that the exact line in the definition of `class BartModel(PretrainedBartModel)` that was causing this problem is:\r\n```python\r\nuse_cache = use_cache if use_cache is not None else self.config.use_cache\r\n```\r\n\r\nIn the `forward` method. Since use_cache is set to `False` when `decoder_input_ids` is `None`, this line forces `use_cache` value to always be True if `decoder_input_ids` is a tensor. \r\n\r\nI posted my [experiments here](https://www.kaggle.com/xhlulu/bart-experiments) in case they are useful.",
"I realized that this is actually wrong:\r\n> In the forward method. Since use_cache is set to False when decoder_input_ids is None, this line forces use_cache value to always be True if decoder_input_ids is a tensor.\r\n\r\nRe-reading the code made me realize that my problem could be solved by explicitly specifying `use_cache=False` when calling `model.forward`. This is likely because when the `use_cache` attribute in `model.forward` is `None`, it falls back to `model.config.use_cache`, which is set to True by default.\r\n\r\nI'm not sure whether what we have here is the intended behavior for BART, so I'll let @sshleifer @patrickvonplaten make the decision to close this :)",
"This seems to be related to https://github.com/huggingface/transformers/issues/6348. @sshleifer do you want to take a look at this?",
"@sshleifer I think this is a problem because in the first pass when the `cache` is still empty, `use_cache=True` and `decoder_input_ids` is of length 9 then the `last_hidden_state` should also be of size 9 **and** the cache should be returned. I can take a look this week if you are very busy - let me know!",
"Yes that would be helpful @patrickvonplaten !",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@patrickvonplaten Make sure to remove `use_cache` in this PR to solve problem."
] | 1,596 | 1,607 | 1,607 | CONTRIBUTOR | null | From the [Bart docs](https://huggingface.co/transformers/model_doc/bart.html#bartmodel), the `decoder_input_ids` attribute should be a tensor of shape `(batch_size, target_sequence_length)`. If we call a `BartModel` without specifying `decoder_input_ids`, the decoded sequence length correctly matches that of `input_ids`. When it is specified, the output sequence is not of shape `target_sequence_length`.
## Environment
Name: torch
Version: 1.6.0+cu101
Name: transformers
Version: 3.0.2
Name: tokenizers
Version: 0.8.1rc1
The error can be reproduced in Colab or Kaggle. See [this notebook ](https://colab.research.google.com/gist/xhlulu/dd989fc7f96b777c01c083762375dfbe/bart-sequence-problems.ipynb)for example.
## Example
```python
import transformers as tfm
model = tfm.BartModel.from_pretrained('facebook/bart-base')
tokenizer = tfm.BartTokenizer.from_pretrained('facebook/bart-base')
input_seq = [
"What's the capital of Canada?",
"What's the capital of USA?"
]
output_seq = [
"It's Ottawa",
"It's Washington"
]
input_tokens = tokenizer.batch_encode_plus(input_seq, return_tensors='pt', padding=True)
input_ids = input_tokens['input_ids']
output_tokens = tokenizer.batch_encode_plus(output_seq, return_tensors='pt', padding=True)
output_ids = output_tokens['input_ids']
print(input_ids.size(), output_ids.size()) # Returns torch.Size([2, 9]) torch.Size([2, 5])
# Okay
outputs = model.forward(input_ids)
outputs[0].size() # Returns `torch.Size([2, 9, 768])`
# Incorrect
outputs = model.forward(input_ids, decoder_input_ids=output_ids)
outputs[0].size() # Returns torch.Size([2, 1, 768])
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6353/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6352/comments | https://api.github.com/repos/huggingface/transformers/issues/6352/events | https://github.com/huggingface/transformers/pull/6352 | 675,568,327 | MDExOlB1bGxSZXF1ZXN0NDY1MDYwODQx | 6,352 | [GPT2] Correct typo in docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6352?src=pr&el=h1) Report\n> Merging [#6352](https://codecov.io/gh/huggingface/transformers/pull/6352?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f57e39f7165fa8bd6ac911852221a76d4b79ebe&el=desc) will **decrease** coverage by `0.31%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6352?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6352 +/- ##\n==========================================\n- Coverage 79.79% 79.47% -0.32% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21701 21615 -86 \n- Misses 5495 5581 +86 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6352?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.29% <ø> (ø)` | |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6352?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6352?src=pr&el=footer). Last update [9f57e39...acc3016](https://codecov.io/gh/huggingface/transformers/pull/6352?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6352",
"html_url": "https://github.com/huggingface/transformers/pull/6352",
"diff_url": "https://github.com/huggingface/transformers/pull/6352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6352.patch",
"merged_at": 1596911849000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6351/comments | https://api.github.com/repos/huggingface/transformers/issues/6351/events | https://github.com/huggingface/transformers/issues/6351 | 675,564,939 | MDU6SXNzdWU2NzU1NjQ5Mzk= | 6,351 | Why is distillbart-cnn done with no teacher and distilbart-xsum has a teacher? | {
"login": "moyid",
"id": 46605732,
"node_id": "MDQ6VXNlcjQ2NjA1NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/46605732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moyid",
"html_url": "https://github.com/moyid",
"followers_url": "https://api.github.com/users/moyid/followers",
"following_url": "https://api.github.com/users/moyid/following{/other_user}",
"gists_url": "https://api.github.com/users/moyid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moyid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moyid/subscriptions",
"organizations_url": "https://api.github.com/users/moyid/orgs",
"repos_url": "https://api.github.com/users/moyid/repos",
"events_url": "https://api.github.com/users/moyid/events{/privacy}",
"received_events_url": "https://api.github.com/users/moyid/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It's purely empirical. `distilbart-xsum` variants perform about 1-2 ROUGE pts worse without a teacher, the gap is basically 0 for the `distilbart-cnn` variants. For translation, it seems like teacher also helps a bit.\r\n",
"In the future, you can tag me on discussion questions on discuss.huggingface.co !",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | @sshleifer Can you expand on why distillbart-xsum is done with a teacher and distillbart-cnn is not?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6351/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6350/comments | https://api.github.com/repos/huggingface/transformers/issues/6350/events | https://github.com/huggingface/transformers/pull/6350 | 675,555,063 | MDExOlB1bGxSZXF1ZXN0NDY1MDUxNzM1 | 6,350 | Add model card for electra-base-turkish-cased-ner | {
"login": "monatis",
"id": 18634956,
"node_id": "MDQ6VXNlcjE4NjM0OTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/18634956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monatis",
"html_url": "https://github.com/monatis",
"followers_url": "https://api.github.com/users/monatis/followers",
"following_url": "https://api.github.com/users/monatis/following{/other_user}",
"gists_url": "https://api.github.com/users/monatis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monatis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monatis/subscriptions",
"organizations_url": "https://api.github.com/users/monatis/orgs",
"repos_url": "https://api.github.com/users/monatis/repos",
"events_url": "https://api.github.com/users/monatis/events{/privacy}",
"received_events_url": "https://api.github.com/users/monatis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Why is the test for `build_doc` failing?",
"The CI failure is unrelated. Thanks for sharing!"
] | 1,596 | 1,598 | 1,596 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6350/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6350",
"html_url": "https://github.com/huggingface/transformers/pull/6350",
"diff_url": "https://github.com/huggingface/transformers/pull/6350.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6350.patch",
"merged_at": 1596958792000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6349/comments | https://api.github.com/repos/huggingface/transformers/issues/6349/events | https://github.com/huggingface/transformers/issues/6349 | 675,551,466 | MDU6SXNzdWU2NzU1NTE0NjY= | 6,349 | [testing] USE_CUDA default and intuitive skip decorators | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Perhaps, while we are at it, it'd be good to discuss the related skip decorators. **Please let me know whether this is different enough and should be in its own issue.**\r\n\r\nThese are the torch-related skip decorators we [currently have](https://github.com/huggingface/transformers/blob/master/src/transformers/testing_utils.py#L73):\r\n\r\n* `require_torch`\r\n* `require_multigpu`\r\n* `require_torch_and_cuda`\r\n* `require_torch_tpu`\r\n\r\nCurrently there is `require_multigpu`, but no `require_gpu` - I tried using `require_torch_and_cuda` but it only works if USE_CUDA is set. And `require_torch_and_cuda` name is non-intuitive/inconsistent next to `require_multigpu`.\r\n\r\nAnd `require_multigpu` should behave like `require_torch_and_cuda`, except require `gpus>1` - i.e. whatever the USE_CUDA discussion outcome will be - it should behave the same. Currently it **does not** respect the `USE_CUDA ` setting!\r\n\r\nThe `require_torch` decorator name is somewhat ambiguous - is it asking for just having `torch` installed or choosing torch vs tf?\r\n\r\nFinally `require_torch_tpu` is again weirdly mismatching other decorator naming - should all of them have `_torch_` in the name or some of them? `require_multigpu` is torch-specific.\r\n\r\nMy thinking is perhaps we need:\r\n1. `require_torch` - this test will run only under torch\r\n2. `require_torch_gpu` - as `require_torch` plus at least 1 gpu\r\n3. `require_torch_multigpu` - as `require_torch` plus at least 2 gpus\r\n4. `require_torch_tpu` - as `require_torch` plus at least 1 tpu\r\n\r\nthat's if we sort out `USE_CUDA` to not need `require_torch_and_cuda`.\r\n\r\nAnd perhaps there might be a situation where we want: `gpu|tpu` - that is skip this test unless at least 1 gpu or 1 tpu is available, as perhaps it'd be too slow on cpu. `require_torch_gpu_or_tpu` or `require_torch_non_cpu`? 
Is there a common name/term for an environment that has either gpu or tpu?\r\n\r\nAnd then `require_torch_cpu_only` - skip this test if either gpu or tpu is available? i.e. this test needs to be run under cpu.\r\n\r\nSo 2 more:\r\n\r\n5. `require_torch_non_cpu` - as `require_torch` plus at least 1 gpu or 1 tpu\r\n6. `require_torch_cpu_only`- as `require_torch` plus must have neither gpus nor tpus\r\n\r\nAnd as discussed at the end of the comment above, in addition to the skip decorators we will find a good use for `has_` accessors with the same names (e.g. `has_torch_gpu`), so that a test could potentially behave differently depending on the environment, which could be changed globally by `USE_CUDA` or `CUDA_VISIBLE_DEVICES`.",
"I think life would be marginally better if we used `CUDA_VISIBLE_DEVICES` and your first 4 `@require` decorators. Basically just delete `USE_CUDA`. But @julien-c has more context.\r\n",
"@julien-c suggested we check in with @LysandreJik, @patrickvonplaten, @sgugger and @thomwolf so there is a full agreement, before we make the change.\r\n",
"I agree with this change. Not sure we need decorators 5 and 6 though. I'd wait for an occasion to see if they are needed.",
"Ok for clarifying this and making it more robust. I'm also not opposed to changing the `USE_CUDA` flag to `True` by default either.",
"I agree with @sgugger here",
"@thomwolf, @julien-c asked to confirm that you're in agreement with this proposal. Thank you! ",
"I think you have waited long enough to PR this @stas00 .\r\nApologies in advance if there is already a PR that I have not seen.",
"Thank you for affirming that, @sshleifer. ",
"I agree with @sshleifer, feel free to open a PR @stas00!",
"Thank you, @LysandreJik. I will work on that once I finish sorting out the fsmt nuances."
] | 1,596 | 1,603 | 1,603 | CONTRIBUTOR | null | This library's primarily use is for gpu work, and currently many tests won't run even if gpu is available, since the current setup wants env var `USE_CUDA` to be true for anything to happen. It's easy to forget to manually add this env var to pytest command line.
To maximize the testing potential I propose that only if `USE_CUDA=False` then the test is skipped (for CI jobs that need to test library's work on cpu), otherwise if `if torch.cuda.is_available()` cuda tests can be run.
In a brief discussion @julien-c suggested that:
> The original thinking was that we wanted to make sure that when we wanted to run on GPU it actually ran on GPU. i.e. it should even fail if you do `USE_CUDA` and there's no GPU, to prevent silent failures on GPU
and the discussion stopped there. This ticket was opened to complete this discussion.
@julien-c, could you please share a specific scenario based on the design intention you shared?
also `CUDA_VISIBLE_DEVICES=""` could be used to easily emulate a non-gpu environment if need be, w/o introducing new env vars. i.e. it'd be a built-in equivalent of `USE_CUDA=False`.
Further, `USE_CUDA` is currently only used for skip decorators. This setting cannot currently be respected from within a test. e.g. in a test I'm currently working on I have:
```
if torch.cuda.is_available():
testargs += ['--fp16', '--gpus=1']
```
so it'll ignore `USE_CUDA`, as the test must always run whether there is a gpu or not, so no skip decorator was used. This ignoring conceptually won't do the right thing then as it'll run the test on gpu even if `USE_CUDA==False` (or unset). So if `USE_CUDA`-functionalty remains, there is a need for an accessor that is not a [skip decorator](https://github.com/huggingface/transformers/blob/master/src/transformers/testing_utils.py#L126).
`require_multigpu` also currently ignores `USE_CUDA`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6349/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6348/comments | https://api.github.com/repos/huggingface/transformers/issues/6348/events | https://github.com/huggingface/transformers/issues/6348 | 675,541,905 | MDU6SXNzdWU2NzU1NDE5MDU= | 6,348 | [Bart] Cannot use Bart decoder cache with torchscript | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"After having fixed the bug it would be great if this line: https://github.com/huggingface/transformers/blob/ac001c48b8df1f5aadcc8cf2c71d7c1116c05250/tests/test_modeling_common.py#L252 can be removed so that a test for Bart + `past_key_value` is enabled.",
"thanks for writing such a good issue, I'll take a look tomorrow.",
"This should be looked at again after https://github.com/huggingface/transformers/pull/7474 is merged",
"Refactor resolves the problem -> should be fine after merge"
] | 1,596 | 1,607 | 1,607 | MEMBER | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-111-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0+cpu (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Bart: @sshleifer
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
When trying to use torchscript for Bart while passing `decoder_input_ids`:
```python
from transformers import BartModel
import torch
model = BartModel.from_pretrained("sshleifer/bart-tiny-random")
input_ids = decoder_input_ids = torch.tensor([19 * [1] + [model.config.eos_token_id]])
traced_model = torch.jit.trace(model, (input_ids, decoder_input_ids))
```
the following error occurs:
```
RuntimeError: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions
```
On the other hand if one disables the past via `model.config.use_cache = False`, then no
error occurs. This could mean that the cache data structure should be updated to correctly work with Torchscript.
## Expected behavior
No error should occur when using Bart + Torchscript in the way explained above.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6348/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6347/comments | https://api.github.com/repos/huggingface/transformers/issues/6347/events | https://github.com/huggingface/transformers/issues/6347 | 675,536,130 | MDU6SXNzdWU2NzU1MzYxMzA= | 6,347 | ModuleNotFoundError: No module named 'transformers' on Google Colab | {
"login": "Mohd-Misran",
"id": 55659231,
"node_id": "MDQ6VXNlcjU1NjU5MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/55659231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mohd-Misran",
"html_url": "https://github.com/Mohd-Misran",
"followers_url": "https://api.github.com/users/Mohd-Misran/followers",
"following_url": "https://api.github.com/users/Mohd-Misran/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohd-Misran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mohd-Misran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohd-Misran/subscriptions",
"organizations_url": "https://api.github.com/users/Mohd-Misran/orgs",
"repos_url": "https://api.github.com/users/Mohd-Misran/repos",
"events_url": "https://api.github.com/users/Mohd-Misran/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mohd-Misran/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Mohd-Misran seems to be working for me \r\n. \r\n\r\nMaybe try to open a new colab notebook?\r\n",
"You need to restart your Colab runtime after installing new dependencies"
] | 1,596 | 1,597 | 1,597 | NONE | null | I installed **transformers** using the command `!pip install transformers` on **Google Colab Notebook**
But then when I try to `import transformers`, it throws an error.
This is the output of the pip install command:
Requirement already satisfied: transformers in /usr/local/lib/python3.6/dist-packages/transformers-3.0.2-py3.6.egg (3.0.2)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from transformers) (0.7)
Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.18.5)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.4)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)
Requirement already satisfied: sacremoses in /usr/local/lib/python3.6/dist-packages/sacremoses-0.0.43-py3.6.egg (from transformers) (0.0.43)
Requirement already satisfied: sentencepiece!=0.1.92 in /usr/local/lib/python3.6/dist-packages/sentencepiece-0.1.91-py3.6-linux-x86_64.egg (from transformers) (0.1.91)
Requirement already satisfied: tokenizers==0.8.1.rc1 in /usr/local/lib/python3.6/dist-packages/tokenizers-0.8.1rc1-py3.6-linux-x86_64.egg (from transformers) (0.8.1rc1)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (1.15.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.6.20)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (0.16.0) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6347/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6347/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6346/comments | https://api.github.com/repos/huggingface/transformers/issues/6346/events | https://github.com/huggingface/transformers/pull/6346 | 675,535,971 | MDExOlB1bGxSZXF1ZXN0NDY1MDM4NzA4 | 6,346 | Create README.md | {
"login": "rohanrajpal",
"id": 7023147,
"node_id": "MDQ6VXNlcjcwMjMxNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7023147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohanrajpal",
"html_url": "https://github.com/rohanrajpal",
"followers_url": "https://api.github.com/users/rohanrajpal/followers",
"following_url": "https://api.github.com/users/rohanrajpal/following{/other_user}",
"gists_url": "https://api.github.com/users/rohanrajpal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohanrajpal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohanrajpal/subscriptions",
"organizations_url": "https://api.github.com/users/rohanrajpal/orgs",
"repos_url": "https://api.github.com/users/rohanrajpal/repos",
"events_url": "https://api.github.com/users/rohanrajpal/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohanrajpal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=h1) Report\n> Merging [#6346](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f57e39f7165fa8bd6ac911852221a76d4b79ebe&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6346 +/- ##\n==========================================\n- Coverage 79.79% 79.61% -0.19% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21701 21652 -49 \n- Misses 5495 5544 +49 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6346/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.95% <0.00%> (-25.22%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6346/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6346/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6346/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=footer). Last update [9f57e39...9591bf9](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6346/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6346",
"html_url": "https://github.com/huggingface/transformers/pull/6346",
"diff_url": "https://github.com/huggingface/transformers/pull/6346.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6346.patch",
"merged_at": 1597185095000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6345/comments | https://api.github.com/repos/huggingface/transformers/issues/6345/events | https://github.com/huggingface/transformers/issues/6345 | 675,509,199 | MDU6SXNzdWU2NzU1MDkxOTk= | 6,345 | Is it necessary to provide attention_mask, or model will calculate itself? | {
"login": "saahiluppal",
"id": 47444392,
"node_id": "MDQ6VXNlcjQ3NDQ0Mzky",
"avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saahiluppal",
"html_url": "https://github.com/saahiluppal",
"followers_url": "https://api.github.com/users/saahiluppal/followers",
"following_url": "https://api.github.com/users/saahiluppal/following{/other_user}",
"gists_url": "https://api.github.com/users/saahiluppal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saahiluppal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saahiluppal/subscriptions",
"organizations_url": "https://api.github.com/users/saahiluppal/orgs",
"repos_url": "https://api.github.com/users/saahiluppal/repos",
"events_url": "https://api.github.com/users/saahiluppal/events{/privacy}",
"received_events_url": "https://api.github.com/users/saahiluppal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you do not pass an `attention_mask` then the `attention_mask` will automatically be set to all ones (`[1, 1, 1, ...,]`). Meaning that every token is attended to (no token is masked). Also check out the docs here: https://huggingface.co/transformers/glossary.html#attention-mask.\r\n\r\nIf your `input_ids` contain <PAD> tokens, then the `attention_mask` will not automatically be calculated. You can leverage the tokenizers though to automatically retrieve the correct `attention_mask`."
] | 1,596 | 1,596 | 1,596 | NONE | null | Is it necessary to provide attention_mask, or model will calculate itself? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6345/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6344/comments | https://api.github.com/repos/huggingface/transformers/issues/6344/events | https://github.com/huggingface/transformers/pull/6344 | 675,483,993 | MDExOlB1bGxSZXF1ZXN0NDY1MDAyMTU1 | 6,344 | [s2s] fix label_smoothed_nll_loss | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think `sum` is good. Ideally, we should divide by the number of non pad tokens, but I'm gunna merge this and then we can experiment with more complicated transformations. Thanks for the fix!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=h1) Report\n> Merging [#6344](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/99f73bcc71e73d747124c476f9028db752fb05f3&el=desc) will **increase** coverage by `0.11%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6344 +/- ##\n==========================================\n+ Coverage 79.47% 79.59% +0.11% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n+ Hits 21614 21646 +32 \n+ Misses 5582 5550 -32 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% 
<0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=footer). Last update [99f73bc...cfa9adc](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | MEMBER | null | Regarding issue #4576
Regarding reduction, fairseq reduces using 'sum' for both [cross_entropy](https://fairseq.readthedocs.io/en/latest/_modules/fairseq/criterions/cross_entropy.html#CrossEntropyCriterion) and [label_smoothed_cross_entropy](https://fairseq.readthedocs.io/en/latest/_modules/fairseq/criterions/label_smoothed_cross_entropy.html). In transformers, `CrossEntropy` does the default `mean` reduction. Should we do `mean` or `sum` here?
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6344/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6344",
"html_url": "https://github.com/huggingface/transformers/pull/6344",
"diff_url": "https://github.com/huggingface/transformers/pull/6344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6344.patch",
"merged_at": 1596874873000
} |
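The `sum` vs `mean` reduction question discussed in the issue above can be sketched in plain Python. This is a minimal illustration of token-level negative log-likelihood only; the per-token probabilities and pad mask below are made up for the example:

```python
import math

# Per-token probabilities the model assigned to the correct token,
# plus a mask marking which positions are real tokens (1) vs padding (0).
token_probs = [0.9, 0.6, 0.8, 0.5, 1.0, 1.0]
non_pad_mask = [1, 1, 1, 1, 0, 0]

# Negative log-likelihood per non-pad token.
nlls = [-math.log(p) for p, m in zip(token_probs, non_pad_mask) if m]

sum_loss = sum(nlls)              # fairseq-style 'sum' reduction
mean_loss = sum_loss / len(nlls)  # 'mean' over non-pad tokens

print(f"sum={sum_loss:.4f} mean={mean_loss:.4f}")
```

Dividing by the count of non-pad tokens, rather than by all positions, is the "divide by the number of non pad tokens" variant mentioned in the merge comment.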
https://api.github.com/repos/huggingface/transformers/issues/6343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6343/comments | https://api.github.com/repos/huggingface/transformers/issues/6343/events | https://github.com/huggingface/transformers/issues/6343 | 675,483,779 | MDU6SXNzdWU2NzU0ODM3Nzk= | 6,343 | The default cache directory is lack of disk capacity, I need change the configure of the default cache directory. | {
"login": "leeivan",
"id": 2181900,
"node_id": "MDQ6VXNlcjIxODE5MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2181900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leeivan",
"html_url": "https://github.com/leeivan",
"followers_url": "https://api.github.com/users/leeivan/followers",
"following_url": "https://api.github.com/users/leeivan/following{/other_user}",
"gists_url": "https://api.github.com/users/leeivan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leeivan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leeivan/subscriptions",
"organizations_url": "https://api.github.com/users/leeivan/orgs",
"repos_url": "https://api.github.com/users/leeivan/repos",
"events_url": "https://api.github.com/users/leeivan/events{/privacy}",
"received_events_url": "https://api.github.com/users/leeivan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please try to write better posts in the future. This is just lazy.\r\n\r\nYou can set the directory for a cache with the `TRANSFORMERS_CACHE` environment variable.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6343/timeline | completed | null | null |
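The fix suggested in the comments above — pointing the cache at a disk with more room via the `TRANSFORMERS_CACHE` environment variable — looks like this. The `/data/hf_cache` path is just a placeholder:

```python
import os

# Point the Transformers cache at a larger disk *before* importing transformers.
os.environ["TRANSFORMERS_CACHE"] = "/data/hf_cache"

# Anything loaded afterwards (e.g. from_pretrained) downloads into this directory.
print(os.environ["TRANSFORMERS_CACHE"])  # → /data/hf_cache
```

Equivalently, `export TRANSFORMERS_CACHE=/data/hf_cache` in the shell before launching the script.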
https://api.github.com/repos/huggingface/transformers/issues/6342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6342/comments | https://api.github.com/repos/huggingface/transformers/issues/6342/events | https://github.com/huggingface/transformers/pull/6342 | 675,479,556 | MDExOlB1bGxSZXF1ZXN0NDY0OTk4OTg2 | 6,342 | [marian] converter supports models from new Tatoeba project | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2039044877,
"node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/marian",
"name": "marian",
"color": "30cc95",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=h1) Report\n> Merging [#6342](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fb7330b30ebfbb3f07b87203f0405ee09905eeda&el=desc) will **increase** coverage by `0.99%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6342 +/- ##\n==========================================\n+ Coverage 78.42% 79.41% +0.99% \n==========================================\n Files 156 156 \n Lines 28129 28129 \n==========================================\n+ Hits 22061 22340 +279 \n+ Misses 6068 5789 -279 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (+0.18%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> 
(+0.19%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <0.00%> (+0.83%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+1.36%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <0.00%> (+1.63%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=footer). Last update [fb7330b...c3288e2](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | - no state dict change, just need to read model metadata from a new path
- we only accept models with 7 letter names... like `ara-eng`. This is the new format.
- added integration test for `ara-eng`
Done:
- [x] upload 300 models: all new ones that were not "dominated" by a model we already have, where dominated means same langpair but the first model has a higher BLEU score.
Todo:
- [ ] switch integration test for `ara-eng` -> `ar-en`.
- [ ] automated model cards with correct `tags`, more info on all possible language codes.
- [ ] automated conflict resolution: Don't convert models that are worse than predecessors.
- [ ] decide what to do about naming: move all to 3 letter/all to 2 letter?
- [ ] notebook -> pyfile
- [ ] tweet
cc @julien-c
Dict where keys are old names, and values are new names, filtered to situations where new name has higher BLEU than old name:
```
{'bg-es': 'bul-spa',
'es-eu': 'spa-eus',
'eu-es': 'eus-spa',
'es-bg': 'spa-bul',
'ilo-en': 'ilo-eng',
'es-mk': 'spa-mkd',
'es-ca': 'spa-cat',
'es-af': 'spa-afr',
'lt-es': 'lit-spa',
'bn-en': 'ben-eng',
'th-en': 'tha-eng',
'fr-ca': 'fra-cat',
'ga-en': 'gle-eng',
'en-ga': 'eng-gle',
'ko-fi': 'kor-fin',
'es-uk': 'spa-ukr',
'gl-es': 'glg-spa',
'eo-sv': 'epo-swe',
'ca-de': 'cat-deu',
'az-en': 'aze-eng',
'sv-eo': 'swe-epo',
'de-is': 'deu-isl',
'ceb-en': 'ceb-eng',
'ca-fr': 'cat-fra',
'tl-en': 'tgl-eng',
'is-de': 'isl-deu',
'ko-en': 'kor-eng',
'is-es': 'isl-spa',
'es-gl': 'spa-glg',
'bg-fr': 'bul-fra',
'de-af': 'deu-afr',
'ko-es': 'kor-spa',
'es-is': 'spa-isl',
'af-es': 'afr-spa',
'gl-en': 'glg-eng',
'fi-en': 'fin-eng',
'en-bg': 'eng-bul',
'mk-es': 'mkd-spa',
'ka-en': 'kat-eng',
'en-eu': 'eng-eus',
'de-ca': 'deu-cat',
'ar-de': 'ara-deu'}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6342/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6342",
"html_url": "https://github.com/huggingface/transformers/pull/6342",
"diff_url": "https://github.com/huggingface/transformers/pull/6342.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6342.patch",
"merged_at": 1597722943000
} |
https://api.github.com/repos/huggingface/transformers/issues/6341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6341/comments | https://api.github.com/repos/huggingface/transformers/issues/6341/events | https://github.com/huggingface/transformers/pull/6341 | 675,430,771 | MDExOlB1bGxSZXF1ZXN0NDY0OTU0ODIx | 6,341 | [s2s] tiny QOL improvement: run_eval prints scores | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=h1) Report\n> Merging [#6341](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/322dffc6c9a44fd504b24b0efcbcaa419b577a93&el=desc) will **increase** coverage by `0.13%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6341 +/- ##\n==========================================\n+ Coverage 78.37% 78.51% +0.13% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n+ Hits 21316 21354 +38 \n+ Misses 5880 5842 -38 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `97.41% <0.00%> (+32.94%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=footer). 
Last update [322dffc...37aad56](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | its annoying to have to cat a file to see the scores after calling run_eval.py | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6341/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6341",
"html_url": "https://github.com/huggingface/transformers/pull/6341",
"diff_url": "https://github.com/huggingface/transformers/pull/6341.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6341.patch",
"merged_at": 1596869156000
} |
https://api.github.com/repos/huggingface/transformers/issues/6340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6340/comments | https://api.github.com/repos/huggingface/transformers/issues/6340/events | https://github.com/huggingface/transformers/pull/6340 | 675,430,147 | MDExOlB1bGxSZXF1ZXN0NDY0OTU0MzI3 | 6,340 | PegasusForConditionalGeneration (torch version) | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=h1) Report\n> Merging [#6340](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6cb0f806efecb64df40c946dacaad0adad33d53&el=desc) will **increase** coverage by `1.80%`.\n> The diff coverage is `94.50%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6340 +/- ##\n==========================================\n+ Coverage 77.51% 79.32% +1.80% \n==========================================\n Files 150 153 +3 \n Lines 27789 27877 +88 \n==========================================\n+ Hits 21542 22113 +571 \n+ Misses 6247 5764 -483 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (-0.91%)` | :arrow_down: |\n| [src/transformers/configuration\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3BlZ2FzdXMucHk=) | `90.90% <90.90%> (ø)` | |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `93.54% <93.54%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.27% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.33% <100.00%> (+0.15%)` | :arrow_up: |\n| 
[src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.23% <100.00%> (+0.48%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.61% <100.00%> (+14.65%)` | :arrow_up: |\n| [src/transformers/modeling\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19wZWdhc3VzLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.45% <100.00%> (-2.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| ... and [27 more](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=footer). Last update [f6cb0f8...95e8544](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'm going to merge after 2 hours of docs work, then take another pass to document prepare_seq2seq_batch consistently when other tokenizers implement it."
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | This PR adds [pegasus](https://arxiv.org/abs/1912.08777), a SOTA summarization model ported from [tf1](https://github.com/google-research/pegasus) in collaboration with @JingqingZ.
More info on the model can be found in `pegasus.rst` under Files changed.
Config: [here](https://s3.amazonaws.com/models.huggingface.co/bert/google/pegasus-xsum/config.json)
#### TODO This PR:
- [x] convert to bart state dict format
- [x] working sentencepiece to
- [x] integration test with good summary on xsum data. (Haven't checked parity).
- [x] beam_alpha -> length_penalty approximation.
- [x] check xsum rouge with length penalty 1. 24.34 vs 24.56 Rouge 2 in paper (very good, no bug). Gap likely from different length penalty.
- [x] convert other checkpoints besides xsum
- [x] tokenizer must know max_source_length (`tokenizer_config.json`)
- [x] `model_doc/pegasus.rst` (document known fp16 issue)
- [x] move all checkpoints to `google/pegasus/{dataset}/`
- [ ] model_cards (S3)
#### Future PR(s):
- [ ] TF 2.0
- [ ] `tokenizer.add_tokens` doesn't work.
- [ ] support for finetuning pegasus-large (WIP see `finetune_pegasus.sh`)
- [ ] potentially add pegasus's `length_normalization` logic if it helps metrics substantially (over equivalent length_penalty).
- [ ] faster tokenizer tests (with smaller sentencepiece model.)
- [ ] try to find a clean way to add the pegasus length penalty.
- [ ] pick checkpoint for summarization pipeline default -- probably cnndm.
#### Known FP16 Issue
fp16 generation doesn't work for most sequences. We have an activation that is 101,610 in both fp32 and fp16 (the limit is 65,504).
In `#pegasus-collab`, the authors responded that they never used fp16 during pretraining/finetuning.
Things I tried that didn't help:
- never use `FusedLayerNorm`
- increase `layernorm_eps` to 1 (from 1e-5)
Things I haven't tried:
- change all softmaxes to dtype=torch.float32
- manually divide by 100 and finetune more with some loss that discourages large activations.
#### Implementation Choices
- I inherited from Bart with 0 change to bart, but added a new config/modeling file for namespace consistency/control.
- `PegasusTokenizer` inherits from `ReformerTokenizer` -- both just use a single `spiece.model`.
- added common test coverage for the tokenizer, not the model since it is 0 LOC.
- added integration tests for xsum.
### Inference API
datasets will vary between checkpoints, but otherwise, I think these are almost correct front matter
```
---
language: en
datasets:
- xsum
tags:
- summarization
---
```
This doesn't seem to be helping since [xsum](https://huggingface.co/google/pegasus-xsum) still thinks it's for mask filling.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6340/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6340/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6340",
"html_url": "https://github.com/huggingface/transformers/pull/6340",
"diff_url": "https://github.com/huggingface/transformers/pull/6340.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6340.patch",
"merged_at": 1597170684000
} |
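The "beam_alpha -> length_penalty approximation" item in the Pegasus PR above refers to mapping Pegasus's length normalization onto a division-style length penalty for beam scores. A minimal sketch of that kind of normalization follows — this is a generic illustration of the idea, not the exact formula either codebase uses:

```python
def length_normalized_score(sum_logprob, length, length_penalty=1.0):
    """Rescale a beam hypothesis score by its length.

    Dividing the summed log-probability by length ** length_penalty
    counteracts beam search's bias toward short outputs: longer
    hypotheses are divided by a larger factor, so a longer sequence
    with a lower raw score can still win.
    """
    return sum_logprob / (length ** length_penalty)

short_hyp = length_normalized_score(-4.0, length=5)   # -0.8
long_hyp = length_normalized_score(-7.0, length=10)   # -0.7 (wins despite lower raw score)
print(short_hyp, long_hyp)
```

Raising `length_penalty` above 1.0 favors longer summaries; lowering it toward 0.0 ranks hypotheses closer to their raw summed log-probabilities.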
https://api.github.com/repos/huggingface/transformers/issues/6339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6339/comments | https://api.github.com/repos/huggingface/transformers/issues/6339/events | https://github.com/huggingface/transformers/pull/6339 | 675,418,401 | MDExOlB1bGxSZXF1ZXN0NDY0OTQ0ODMw | 6,339 | refactor almost identical tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=h1) Report\n> Merging [#6339](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/322dffc6c9a44fd504b24b0efcbcaa419b577a93&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6339 +/- ##\n==========================================\n- Coverage 78.37% 78.34% -0.04% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n- Hits 21316 21307 -9 \n- Misses 5880 5889 +9 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=footer). Last update [322dffc...92d3825](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"could also modify `unwrap_schedule` and `unwrap_and_save_reload_schedule` to return a clean list of numbers, and then it'd be just:\r\n\r\n```\r\n for scheduler_func, data in scheds.items():\r\n kwargs, expected_learning_rates = data\r\n\r\n scheduler = scheduler_func(self.optimizer, **kwargs)\r\n lrs_1 = unwrap_schedule(scheduler, self.num_steps)\r\n self.assertListAlmostEqual(lrs_1, expected_learning_rates, tol=1e-2)\r\n\r\n scheduler = scheduler_func(self.optimizer, **kwargs)\r\n lrs_2 = unwrap_and_save_reload_schedule(scheduler, self.num_steps)\r\n self.assertListEqual(lrs_1, lrs_2)\r\n```\r\n\r\nbut perhaps it'd be less intuitive for those reading the test code.",
"Does this impact tracebacks in a bad way? Previously I would know which scheduler I broke if `test_warmup_constant_scheduler` failed.",
"That's super-imporant, @sshleifer, thank you for flagging that!\r\n\r\nAdded an assert msg to make it clear what fails, e.g. if I break data for the sake of demo, we now get:\r\n\r\n```\r\n for scheduler_func, data in scheds.items():\r\n kwargs, expected_learning_rates = data\r\n\r\n scheduler = scheduler_func(self.optimizer, **kwargs)\r\n lrs_1 = unwrap_schedule(scheduler, self.num_steps)\r\n self.assertEqual(len(lrs_1[0]), 1)\r\n self.assertListAlmostEqual(\r\n> [l[0] for l in lrs_1], expected_learning_rates, tol=1e-2, msg=f\"failed for {scheduler_func}\"\r\n )\r\n\r\ntests/test_optimization.py:126:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_optimization.py:92: in assertListAlmostEqual\r\n self.assertAlmostEqual(a, b, delta=tol, msg=msg)\r\nE AssertionError: 2.5 != 3.5 within 0.01 delta (1.0 difference) : failed for <function get_constant_schedule_with_warmup at 0x7f5da6f0bdd0>\r\n```",
"hmm, not sure whether the last commit, to make the assert message even more specific, was needed.\r\n\r\nAlso, alternatively, I can move the code out of unittest class and then use pytest parametrization so it'll be self-documenting on assert. Ala: https://github.com/huggingface/transformers/blob/175cd45e13b2e33d1efec9e2ac217cba99f6ae58/examples/seq2seq/test_seq2seq_examples.py#L238\r\n",
"LGTM as is, but won't merge it myself."
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | in preparation for adding more schedulers this PR refactors these almost identical tests.
Unfortunately [can't use `pytest.mark.parametrize`](https://docs.pytest.org/en/latest/unittest.html#pytest-features-in-unittest-testcase-subclasses), so the only drawback that it makes them all into a single test. It'd have been nice to parametrize instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6339/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6339",
"html_url": "https://github.com/huggingface/transformers/pull/6339",
"diff_url": "https://github.com/huggingface/transformers/pull/6339.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6339.patch",
"merged_at": 1597051880000
} |
https://api.github.com/repos/huggingface/transformers/issues/6338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6338/comments | https://api.github.com/repos/huggingface/transformers/issues/6338/events | https://github.com/huggingface/transformers/pull/6338 | 675,385,481 | MDExOlB1bGxSZXF1ZXN0NDY0OTE2Njc2 | 6,338 | remove a TODO item to use a tiny model | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=h1) Report\n> Merging [#6338](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1f8e8265188de8b76f5c28539056d6eb772e4e0f&el=desc) will **increase** coverage by `0.32%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6338 +/- ##\n==========================================\n+ Coverage 78.79% 79.12% +0.32% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n+ Hits 21430 21519 +89 \n+ Misses 5766 5677 -89 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-69.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) 
| `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=footer). Last update [1f8e826...8721f03](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | as discussed with @sshleifer, removing this TODO to switch to a tiny model, since it won't be able to test the qualitative results of the evaluation (i.e. the results are meaningless). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6338/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6338",
"html_url": "https://github.com/huggingface/transformers/pull/6338",
"diff_url": "https://github.com/huggingface/transformers/pull/6338.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6338.patch",
"merged_at": 1596850240000
} |
https://api.github.com/repos/huggingface/transformers/issues/6337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6337/comments | https://api.github.com/repos/huggingface/transformers/issues/6337/events | https://github.com/huggingface/transformers/issues/6337 | 675,381,232 | MDU6SXNzdWU2NzUzODEyMzI= | 6,337 | [CI] add manual workflow dispatch option to github actions runners | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | CONTRIBUTOR | null | https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6337/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6336/comments | https://api.github.com/repos/huggingface/transformers/issues/6336/events | https://github.com/huggingface/transformers/issues/6336 | 675,380,598 | MDU6SXNzdWU2NzUzODA1OTg= | 6,336 | broken ONNX slow test | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"This seems to be a bit of a flaky test, doesn't it?",
"There is strange try/except syntax in `_test_export ` that I think can be trivially improved."
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | ```
def test_quantize_pytorch(self):
for model in OnnxExportTestCase.MODEL_TO_TEST:
path = self._test_export(model, "pt", 12)
> quantized_path = quantize(Path(path))
```
tests/test_onnx.py:75: `path` is None
https://github.com/huggingface/transformers/runs/960368281?check_suite_focus=true | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6335/comments | https://api.github.com/repos/huggingface/transformers/issues/6335/events | https://github.com/huggingface/transformers/issues/6335 | 675,376,121 | MDU6SXNzdWU2NzUzNzYxMjE= | 6,335 | delete unused tiny models | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | CONTRIBUTOR | null | ```
[ok] bart-tiny-random/
[ok] tiny-marian-en-de/
[ok] tiny-mbart/
[deleted] distilbert_tiny_random/
[ok] tiny-ctrl/
PRE tiny-dbmdz-bert-large-cased-finetuned-conll03-english/
[ok] tiny-distilbert-base-cased-distilled-squad/
[ok] tiny-distilbert-base-cased/
[ok] tiny-distilbert-base-uncased-finetuned-sst-2-english/
[ok] tiny-distilroberta-base/
[ok] tiny-gpt2/
[ok] tiny-xlnet-base-cased/
```
and make sure the ones that remain are usable/have tokenizer files. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6335/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6335/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6334/comments | https://api.github.com/repos/huggingface/transformers/issues/6334/events | https://github.com/huggingface/transformers/pull/6334 | 675,356,363 | MDExOlB1bGxSZXF1ZXN0NDY0ODkwMDUx | 6,334 | [WIP] Avoid call to torch.triu | {
"login": "tomgrek",
"id": 2245347,
"node_id": "MDQ6VXNlcjIyNDUzNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2245347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomgrek",
"html_url": "https://github.com/tomgrek",
"followers_url": "https://api.github.com/users/tomgrek/followers",
"following_url": "https://api.github.com/users/tomgrek/following{/other_user}",
"gists_url": "https://api.github.com/users/tomgrek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomgrek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomgrek/subscriptions",
"organizations_url": "https://api.github.com/users/tomgrek/orgs",
"repos_url": "https://api.github.com/users/tomgrek/repos",
"events_url": "https://api.github.com/users/tomgrek/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomgrek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6334/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6334",
"html_url": "https://github.com/huggingface/transformers/pull/6334",
"diff_url": "https://github.com/huggingface/transformers/pull/6334.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6334.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6333/comments | https://api.github.com/repos/huggingface/transformers/issues/6333/events | https://github.com/huggingface/transformers/issues/6333 | 675,327,498 | MDU6SXNzdWU2NzUzMjc0OTg= | 6,333 | add tests/test_tokenization_reformer.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [
"I can help with this.",
"Awesome!",
"@sshleifer I put together the test code and find that the following test is failing:\r\n\r\n```\r\nself = < tests.test_tokenization_reformer.ReformerTokenizationTest\r\ntestMethod = test_torch_encode_plus_sent_to_model \r\n@slow\r\n@require_torch\r\ndef test_torch_encode_plus_sent_to_model(self):\r\n import torch\r\n from transformers import MODEL_MAPPING, TOKENIZER_MAPPING\r\n\r\n MODEL_TOKENIZER_MAPPING = merge_model_tokenizer_mappings(MODEL_MAPPING, TOKENIZER_MAPPING)\r\n\r\n tokenizers = self.get_tokenizers(do_lower_case=False)\r\n for tokenizer in tokenizers:\r\n with self.subTest(f\"{tokenizer.__class__.__name__}\"):\r\n\r\n if tokenizer.__class__ not in MODEL_TOKENIZER_MAPPING:\r\n return\r\n\r\n config_class, model_class = MODEL_TOKENIZER_MAPPING[tokenizer.__class__]\r\n config = config_class()\r\n\r\n if config.is_encoder_decoder or config.pad_token_id is None:\r\n return\r\n\r\n model = model_class(config)\r\n\r\n # Make sure the model contains at least the full vocabulary size in its embedding matrix\r\n is_using_common_embeddings = hasattr(model.get_input_embeddings(), \"weight\")\r\n \r\nassert (\r\n (model.get_input_embeddings().weight.shape[0] >= len(tokenizer))\r\n if is_using_common_embeddings\r\n else True\r\n)\r\nAssertionError:\r\nassert False\r\n```\r\n\r\nUpon further investigation I found a discrepancy between the pre-trained tokenizer and pre-trained model config around the pad token id and resulting vocab size. 
Please see the example below:\r\n\r\n`\r\nfrom transformers import ReformerTokenizer, ReformerModel\r\nmodel = ReformerModel.from_pretrained(\"google/reformer-crime-and-punishment\")\r\ntokenizer = ReformerTokenizer.from_pretrained(\"google/reformer-crime-and-punishment\")\r\nprint(tokenizer.vocab_size) 320\r\nprint(len(tokenizer)) 321\r\nprint(model.config.vocab_size) 320\r\nprint(model.get_input_embeddings().weight.shape[0]) 320\r\nprint(tokenizer.get_vocab()['<pad>']) 320\r\nprint(model.config.pad_token_id) 0\r\nprint(tokenizer.get_vocab()['<unk>']) 0\r\n`\r\n\r\nWhat is your suggestion for moving forward?",
"My suggestion would be to check in `tokenization_utils_base ` how `__len__` works, and try to make it so that ReformerTokenizer's __len__ is 320.",
"@sshleifer Test merged.",
"Thx @D-Roberts ! "
] | 1,596 | 1,598 | 1,598 | CONTRIBUTOR | null | I don't think there is any common test coverage for ReformerTokenizer. besides through integration tests.
Good source for copy/modification is `XLMRobertaTokenizationTest`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6333/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6333/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6332/comments | https://api.github.com/repos/huggingface/transformers/issues/6332/events | https://github.com/huggingface/transformers/pull/6332 | 675,320,566 | MDExOlB1bGxSZXF1ZXN0NDY0ODU4MDMx | 6,332 | [CI] Self-scheduled runner also pins torch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"merging to fix CI",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=h1) Report\n> Merging [#6332](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6695450a23545bc9d5416f39ab39609c7811c653&el=desc) will **increase** coverage by `0.57%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6332 +/- ##\n==========================================\n+ Coverage 78.54% 79.11% +0.57% \n==========================================\n Files 148 148 \n Lines 27196 27196 \n==========================================\n+ Hits 21361 21517 +156 \n+ Misses 5835 5679 -156 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-70.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% 
<0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `94.63% <0.00%> (+70.08%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=footer). Last update [6695450...b9d3b99](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | ```bash
pip install torch!=1.6.0 --no-cache-dir
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6332/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6332",
"html_url": "https://github.com/huggingface/transformers/pull/6332",
"diff_url": "https://github.com/huggingface/transformers/pull/6332.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6332.patch",
"merged_at": 1596840021000
} |
https://api.github.com/repos/huggingface/transformers/issues/6331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6331/comments | https://api.github.com/repos/huggingface/transformers/issues/6331/events | https://github.com/huggingface/transformers/issues/6331 | 675,263,507 | MDU6SXNzdWU2NzUyNjM1MDc= | 6,331 | Delete this line in label_smoothed_nll_loss | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,596 | 1,596 | 1,596 | CONTRIBUTOR | null | ```python
bs = pad_mask.long().sum()
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6331/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6330/comments | https://api.github.com/repos/huggingface/transformers/issues/6330/events | https://github.com/huggingface/transformers/issues/6330 | 675,179,514 | MDU6SXNzdWU2NzUxNzk1MTQ= | 6,330 | BertForPreTraining with NSP | {
"login": "choidongyeon",
"id": 54914459,
"node_id": "MDQ6VXNlcjU0OTE0NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/54914459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/choidongyeon",
"html_url": "https://github.com/choidongyeon",
"followers_url": "https://api.github.com/users/choidongyeon/followers",
"following_url": "https://api.github.com/users/choidongyeon/following{/other_user}",
"gists_url": "https://api.github.com/users/choidongyeon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/choidongyeon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/choidongyeon/subscriptions",
"organizations_url": "https://api.github.com/users/choidongyeon/orgs",
"repos_url": "https://api.github.com/users/choidongyeon/repos",
"events_url": "https://api.github.com/users/choidongyeon/events{/privacy}",
"received_events_url": "https://api.github.com/users/choidongyeon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Supporting the NSP objective is not on our roadmap, due to the reason you've linked and because of insufficient bandwidth. \r\n\r\nHowever, similar to the work in #6168 for SOP, we're very open to contributions and would accept a PR adding the BERT NSP objective to the datacollators/datasets.",
"Awesome, I've been working on something similar. Will open a PR, thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@choidongyeon May I ask if the work on the dataset part used with the BertForPreTraining API is finished? Any example code like run_mlm.py (is there a run_mlm_nsp.py?) would help. Looking forward to your reply, thanks!"
] | 1,596 | 1,612 | 1,603 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
I am trying to train BERT from scratch following a modification of https://huggingface.co/blog/how-to-train, where I use a BertTokenizer and BertForPreTraining. The [documentation for BertForPreTraining](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForPreTraining) states that it has two heads on top for both pre-training processes (MLM and NSP), but [the example provided](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L874-L884) only provides an example of MLM.
Based on [a comment](https://github.com/huggingface/transformers/issues/2693#issuecomment-580870278) provided by @LysandreJik in a previous issue, it seems that none of the provided datasets (e.g. LineByLineTextDataset) will handle the NSP objective, and this objective is excluded because the RoBERTa paper showed that NSP was not particularly helpful.
@LysandreJik additionally noted that anyone who wants to implement the NSP objective can do so by changing the dataset/training loop, and I was wondering if there were any plans to add support for NSP for the sake of completeness?
It seems that something similar to what is going on in a PR (https://github.com/huggingface/transformers/pull/6168) for Albert SOP can be done. Is this correct and can anyone provide me with some guidance moving forward? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6330/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6329/comments | https://api.github.com/repos/huggingface/transformers/issues/6329/events | https://github.com/huggingface/transformers/issues/6329 | 675,122,893 | MDU6SXNzdWU2NzUxMjI4OTM= | 6,329 | OSError: Model name 'lonePatient/albert_chinese_small' was not found in tokenizers model | {
"login": "SeekPoint",
"id": 18051187,
"node_id": "MDQ6VXNlcjE4MDUxMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/18051187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeekPoint",
"html_url": "https://github.com/SeekPoint",
"followers_url": "https://api.github.com/users/SeekPoint/followers",
"following_url": "https://api.github.com/users/SeekPoint/following{/other_user}",
"gists_url": "https://api.github.com/users/SeekPoint/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeekPoint/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeekPoint/subscriptions",
"organizations_url": "https://api.github.com/users/SeekPoint/orgs",
"repos_url": "https://api.github.com/users/SeekPoint/repos",
"events_url": "https://api.github.com/users/SeekPoint/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeekPoint/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,596 | 1,602 | 1,602 | NONE | null | my code:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("lonePatient/albert_chinese_small")
model = AutoModel.from_pretrained("lonePatient/albert_chinese_small")
model.save_pretrained("lonePatient+albert_chinese_small")
+++++++++++
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 633/633 [00:00<00:00, 113kB/s]
Traceback (most recent call last):
File "hg_download.py", line 30, in <module>
tokenizer = AutoTokenizer.from_pretrained("lonePatient/albert_chinese_small")
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_auto.py", line 217, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1140, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1239, in _from_pretrained
raise EnvironmentError(
OSError: Model name 'lonePatient/albert_chinese_small' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'lonePatient/albert_chinese_small' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
~/ub16_prj % | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6329/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6328/comments | https://api.github.com/repos/huggingface/transformers/issues/6328/events | https://github.com/huggingface/transformers/pull/6328 | 675,101,174 | MDExOlB1bGxSZXF1ZXN0NDY0Njc1NjAz | 6,328 | Small docfile fixes | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=h1) Report\n> Merging [#6328](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2f2aa0c89cab9a77560e6845578f917a61081c67&el=desc) will **increase** coverage by `0.27%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6328 +/- ##\n==========================================\n+ Coverage 79.14% 79.42% +0.27% \n==========================================\n Files 148 148 \n Lines 27191 27191 \n==========================================\n+ Hits 21521 21596 +75 \n+ Misses 5670 5595 -75 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | 
`90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=footer). Last update [2f2aa0c...71324f6](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,597 | COLLABORATOR | null | Nothing major, just a few fixes to make the files work with the coming notebook conversion. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6328/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6328",
"html_url": "https://github.com/huggingface/transformers/pull/6328",
"diff_url": "https://github.com/huggingface/transformers/pull/6328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6328.patch",
"merged_at": 1597052232000
} |
https://api.github.com/repos/huggingface/transformers/issues/6327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6327/comments | https://api.github.com/repos/huggingface/transformers/issues/6327/events | https://github.com/huggingface/transformers/issues/6327 | 675,080,061 | MDU6SXNzdWU2NzUwODAwNjE= | 6,327 | Batched pipeline | {
"login": "berryweinst",
"id": 35626084,
"node_id": "MDQ6VXNlcjM1NjI2MDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35626084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/berryweinst",
"html_url": "https://github.com/berryweinst",
"followers_url": "https://api.github.com/users/berryweinst/followers",
"following_url": "https://api.github.com/users/berryweinst/following{/other_user}",
"gists_url": "https://api.github.com/users/berryweinst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/berryweinst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/berryweinst/subscriptions",
"organizations_url": "https://api.github.com/users/berryweinst/orgs",
"repos_url": "https://api.github.com/users/berryweinst/repos",
"events_url": "https://api.github.com/users/berryweinst/events{/privacy}",
"received_events_url": "https://api.github.com/users/berryweinst/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, building off of the model and example [here](https://huggingface.co/twmkn9/albert-base-v2-squad2): \r\n\r\n```\r\nfrom transformers.pipelines import pipeline\r\n\r\nmodel_name = \"twmkn9/albert-base-v2-squad2\"\r\nnlp = pipeline('question-answering', model=model_name, tokenizer=model_name)\r\nQA_input = {\r\n 'question': 'Why is model conversion important?',\r\n 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'}, {\r\n 'question': 'What is the name of the repository ?',\r\n 'context': 'Pipeline have been included in the huggingface/transformers repository. '\r\n }\r\nres = nlp(QA_input, handle_impossible_answer=True)\r\nprint(res)\r\n# [{'score': 0.2479676753282547, 'start': 59, 'end': 132, 'answer': 'gives freedom to the user and let people easily switch between frameworks.'}, {'score': 0.5168691277503967, 'start': 35, 'end': 71, 'answer': 'huggingface/transformers repository.'}]\r\n\r\n\r\n```\r\n\r\n",
"Hi. \r\n\r\nI used your example for testing. It seems like even though I put multiple question-context pairs in as input, it really is just doing a one-by-one prediction on them in the background. \r\nSo for 1 example the inference time is: 0.56 sec\r\nFor 2 examples the inference time is: 1.05 sec\r\nFor 16 examples it is: 8.4 sec., etc..\r\n\r\nIs there a way to do batch inference with the model to save some time ? (I use 12 GB gpu, transformers 2.4.0 or 3.2.0)\r\n \r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Possible duplicate of #3007",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"still an open point. highly required, any information on the progress?",
"This is implemented in recent versions: https://huggingface.co/docs/transformers/master/en/main_classes/pipelines#pipeline-batching\r\n\r\ncc @Narsil ",
"For the sake of completeness: \r\n\`\`\`python\r\npipe = pipeline('question-answering', model=model_name, tokenizer=model_name)\r\nquestions = [{\"question\": \"Who am I ?\", \"context\": \"There is something about me\"}, .... ]\r\nfor answer in pipe(questions, batch_size=16):\r\n print(answer)\r\n\`\`\`"
] | 1,596 | 1,645 | 1,614 | NONE | null | Hi,
Is there a way to run batches with QuestionAnsweringPipeline rather than just one example?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6327/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6327/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6326/comments | https://api.github.com/repos/huggingface/transformers/issues/6326/events | https://github.com/huggingface/transformers/pull/6326 | 675,076,532 | MDExOlB1bGxSZXF1ZXN0NDY0NjU1MzI1 | 6,326 | Patch models | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Side note, if you rebase, you can remove those models from the special ignore list in the `check_repo` script."
] | 1,596 | 1,597 | 1,597 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6326/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6326",
"html_url": "https://github.com/huggingface/transformers/pull/6326",
"diff_url": "https://github.com/huggingface/transformers/pull/6326.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6326.patch",
"merged_at": 1597070358000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6325/comments | https://api.github.com/repos/huggingface/transformers/issues/6325/events | https://github.com/huggingface/transformers/issues/6325 | 675,010,298 | MDU6SXNzdWU2NzUwMTAyOTg= | 6,325 | Text-to-SQL Query | {
"login": "thiagomoeng",
"id": 64150563,
"node_id": "MDQ6VXNlcjY0MTUwNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/64150563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thiagomoeng",
"html_url": "https://github.com/thiagomoeng",
"followers_url": "https://api.github.com/users/thiagomoeng/followers",
"following_url": "https://api.github.com/users/thiagomoeng/following{/other_user}",
"gists_url": "https://api.github.com/users/thiagomoeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thiagomoeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thiagomoeng/subscriptions",
"organizations_url": "https://api.github.com/users/thiagomoeng/orgs",
"repos_url": "https://api.github.com/users/thiagomoeng/repos",
"events_url": "https://api.github.com/users/thiagomoeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/thiagomoeng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"pinging @mrm8488 .",
"You can try this model: https://huggingface.co/mrm8488/t5-base-finetuned-wikiSQL\r\nThanks for pinging me @patil-suraj ",
"If you go to the model hub and type 'SQL', you get [this](https://huggingface.co/models?search=Sql). There are currently 3 variants of the T5 model fine-tuned on WikiSQL (a large dataset that contains sentence - SQL pairs).",
"Hi @mrm8488, I am having an issue with the \"torch_xla\" module. Is there any way to run this model locally on Windows, without this TPU module? Thanks.",
"hi @thiagomoeng, can you try setting the \`xla_device\` argument in \`config.json\` to \`False\`?\r\n\r\n\`\`\`python3\r\nconfig = T5Config.from_pretrained(\"mrm8488/t5-base-finetuned-wikiSQL\")\r\nconfig.xla_device = False\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"mrm8488/t5-base-finetuned-wikiSQL\", config=config)\r\n\`\`\`",
"Of course I can, but the basic structure and most of the code is from the public Colab about fine-tuning T5 on TPU by @patil-suraj ",
"Hi @mrm8488, I have SQL data on Oracle. Could you give me some guidance on how to prepare this SQL data to fine-tune your model? I am a beginner at training models.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@thiagomoeng if you want to convert natural language to sql , here is one implementation : https://github.com/abhijithneilabraham/tableQA\r\n\r\nDrop me any comments on the slack channel in the readme there."
] | 1,596 | 1,605 | 1,604 | NONE | null | # ❓ Questions & Help
Hello everyone, I have a task where I want to use NLP to convert text into a SQL query. Does anyone know how to do this, or have any suggestions? Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6325/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6325/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6324/comments | https://api.github.com/repos/huggingface/transformers/issues/6324/events | https://github.com/huggingface/transformers/pull/6324 | 674,954,837 | MDExOlB1bGxSZXF1ZXN0NDY0NTU0MTQx | 6,324 | Create README.md | {
"login": "rohanrajpal",
"id": 7023147,
"node_id": "MDQ6VXNlcjcwMjMxNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7023147?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohanrajpal",
"html_url": "https://github.com/rohanrajpal",
"followers_url": "https://api.github.com/users/rohanrajpal/followers",
"following_url": "https://api.github.com/users/rohanrajpal/following{/other_user}",
"gists_url": "https://api.github.com/users/rohanrajpal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohanrajpal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohanrajpal/subscriptions",
"organizations_url": "https://api.github.com/users/rohanrajpal/orgs",
"repos_url": "https://api.github.com/users/rohanrajpal/repos",
"events_url": "https://api.github.com/users/rohanrajpal/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohanrajpal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=h1) Report\n> Merging [#6324](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7e9861f7f4ab137cf102dae9cf6957c1c402c022&el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6324 +/- ##\n==========================================\n- Coverage 79.23% 79.11% -0.13% \n==========================================\n Files 148 148 \n Lines 27195 27195 \n==========================================\n- Hits 21548 21515 -33 \n- Misses 5647 5680 +33 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.46% <0.00%> (+5.26%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.17% <0.00%> (+25.21%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=footer). Last update [7e9861f...66d050d](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,597 | 1,597 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6324/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6324",
"html_url": "https://github.com/huggingface/transformers/pull/6324",
"diff_url": "https://github.com/huggingface/transformers/pull/6324.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6324.patch",
"merged_at": 1597185499000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6323/comments | https://api.github.com/repos/huggingface/transformers/issues/6323/events | https://github.com/huggingface/transformers/issues/6323 | 674,928,799 | MDU6SXNzdWU2NzQ5Mjg3OTk= | 6,323 | Hi , I am having trouble locating the transformers/examples/summarization/bart/ file. I was wondering if it has been renamed or changed? | {
"login": "mc2259",
"id": 57819870,
"node_id": "MDQ6VXNlcjU3ODE5ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/57819870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mc2259",
"html_url": "https://github.com/mc2259",
"followers_url": "https://api.github.com/users/mc2259/followers",
"following_url": "https://api.github.com/users/mc2259/following{/other_user}",
"gists_url": "https://api.github.com/users/mc2259/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mc2259/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mc2259/subscriptions",
"organizations_url": "https://api.github.com/users/mc2259/orgs",
"repos_url": "https://api.github.com/users/mc2259/repos",
"events_url": "https://api.github.com/users/mc2259/events{/privacy}",
"received_events_url": "https://api.github.com/users/mc2259/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Think it was moved to the seq2seq directory: https://github.com/huggingface/transformers/tree/master/examples/seq2seq",
"Thanks!"
] | 1,596 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6323/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6322/comments | https://api.github.com/repos/huggingface/transformers/issues/6322/events | https://github.com/huggingface/transformers/pull/6322 | 674,887,999 | MDExOlB1bGxSZXF1ZXN0NDY0NDk4MDY3 | 6,322 | Transformer-XL: Improved tokenization with sacremoses | {
"login": "RafaelWO",
"id": 38643099,
"node_id": "MDQ6VXNlcjM4NjQzMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafaelWO",
"html_url": "https://github.com/RafaelWO",
"followers_url": "https://api.github.com/users/RafaelWO/followers",
"following_url": "https://api.github.com/users/RafaelWO/following{/other_user}",
"gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions",
"organizations_url": "https://api.github.com/users/RafaelWO/orgs",
"repos_url": "https://api.github.com/users/RafaelWO/repos",
"events_url": "https://api.github.com/users/RafaelWO/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafaelWO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`ci/circleci: check_code_quality` fails for me due to Python 3.6 is not compatible with PyTorch 1.6. Any ideas how to fix this?",
"Pinging @n1t0 and @TevenLeScao (in holidays right now, will be back next week!)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=h1) Report\n> Merging [#6322](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/930153e7d2d658267b7630a047a4bfc85b86042d?el=desc) will **increase** coverage by `0.37%`.\n> The diff coverage is `96.96%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6322 +/- ##\n==========================================\n+ Coverage 79.36% 79.74% +0.37% \n==========================================\n Files 157 157 \n Lines 28569 28587 +18 \n==========================================\n+ Hits 22675 22797 +122 \n+ Misses 5894 5790 -104 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `41.74% <96.96%> (-0.75%)` | :arrow_down: |\n| [src/transformers/commands/env.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9lbnYucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |\n| [src/transformers/commands/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9fX2luaXRfXy5weQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |\n| [src/transformers/commands/transformers\\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: |\n| [src/transformers/commands/download.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9kb3dubG9hZC5weQ==) | `0.00% <0.00%> (-65.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (-55.89%)` | :arrow_down: |\n| [src/transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9ydW4ucHk=) | `0.00% <0.00%> (-53.34%)` | :arrow_down: |\n| [src/transformers/commands/user.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy91c2VyLnB5) | `0.00% <0.00%> (-36.56%)` | :arrow_down: |\n| ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=footer). Last update [930153e...3efbfcf](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,596 | 1,598 | 1,598 | CONTRIBUTOR | null | Fixes #5136
As explained in the above issue, this PR fixes the tokenization of the TransfoXLTokenizer by using the sacremoses library with an extended feature of tokenizing comma-separated and floating point numbers. That way the input text is tokenized the same way as in the WikiText-103 dateset used for pretraining.
Changes in a nutshell:
* The TransfoXLTokenizer is now using sacremoses for tokenization
* Added tokenization of comma-separated and floating point numbers.
* Removed prepare_for_tokenization() from tokenization_transfo_xl.py because punctuation is handled by sacremoses
* Added corresponding tests
* Removed test comapring TransfoXLTokenizer and TransfoXLTokenizerFast (as discussed in #5302)
* Added deprecation warning to TransfoXLTokenizerFast (as discussed in #5302)
@TevenLeScao | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6322/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6322",
"html_url": "https://github.com/huggingface/transformers/pull/6322",
"diff_url": "https://github.com/huggingface/transformers/pull/6322.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6322.patch",
"merged_at": 1598622978000
} |