url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/6721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6721/comments | https://api.github.com/repos/huggingface/transformers/issues/6721/events | https://github.com/huggingface/transformers/pull/6721 | 685,598,753 | MDExOlB1bGxSZXF1ZXN0NDczMjk5NzY0 | 6,721 | Add model card for singbert large | {
"login": "zyuanlim",
"id": 7169731,
"node_id": "MDQ6VXNlcjcxNjk3MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7169731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyuanlim",
"html_url": "https://github.com/zyuanlim",
"followers_url": "https://api.github.com/users/zyuanlim/followers",
"following_url": "https://api.github.com/users/zyuanlim/following{/other_user}",
"gists_url": "https://api.github.com/users/zyuanlim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyuanlim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyuanlim/subscriptions",
"organizations_url": "https://api.github.com/users/zyuanlim/orgs",
"repos_url": "https://api.github.com/users/zyuanlim/repos",
"events_url": "https://api.github.com/users/zyuanlim/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyuanlim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@JetRunner same model but bert large version, thank you!!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=h1) Report\n> Merging [#6721](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d17cce227022594ba84dbb92bafc802fb41434df?el=desc) will **increase** coverage by `0.49%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6721 +/- ##\n==========================================\n+ Coverage 79.00% 79.50% +0.49% \n==========================================\n Files 156 156 \n Lines 28406 28406 \n==========================================\n+ Hits 22443 22583 +140 \n+ Misses 5963 5823 -140 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% 
<0.00%> (+1.95%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.75%)` | :arrow_up: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=footer). Last update [d17cce2...cd3dbfc](https://codecov.io/gh/huggingface/transformers/pull/6721?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Model card for singbert large, similar to [singbert](https://github.com/huggingface/transformers/tree/master/model_cards/zanelim/singbert) but based on the BERT large model.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6721/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6721",
"html_url": "https://github.com/huggingface/transformers/pull/6721",
"diff_url": "https://github.com/huggingface/transformers/pull/6721.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6721.patch",
"merged_at": 1598371884000
} |
https://api.github.com/repos/huggingface/transformers/issues/6720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6720/comments | https://api.github.com/repos/huggingface/transformers/issues/6720/events | https://github.com/huggingface/transformers/issues/6720 | 685,595,620 | MDU6SXNzdWU2ODU1OTU2MjA= | 6,720 | Resuming training from a checkpoint on Windows does not resume at the correct global_step | {
"login": "jncasey",
"id": 31020859,
"node_id": "MDQ6VXNlcjMxMDIwODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jncasey",
"html_url": "https://github.com/jncasey",
"followers_url": "https://api.github.com/users/jncasey/followers",
"following_url": "https://api.github.com/users/jncasey/following{/other_user}",
"gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jncasey/subscriptions",
"organizations_url": "https://api.github.com/users/jncasey/orgs",
"repos_url": "https://api.github.com/users/jncasey/repos",
"events_url": "https://api.github.com/users/jncasey/events{/privacy}",
"received_events_url": "https://api.github.com/users/jncasey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seems wrong indeed. Would you mind fixing with a PR?",
"I'm embarrassed to say I'm not super familiar with git or open source collaboration in general, so even though it seems super trivial, I'm worried I'll mess something up. ",
"I can fix this but it'll be next week as I'm off tonight and tying a few other loose ends.",
"Thank you! I'm in no personal rush – I got my aborted training to resume. I just wanted someone on the project to know about the problem/solution I found. "
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | ### Who can help
Trainer: @sgugger
## Information
I've noticed that resuming training from a checkpoint on Windows fails to start at the right global step (though learning rate, loss, etc. continue from where the training left off). The same doesn't happen under macOS.
## Solution?
I think the issue is in Trainer.train, specifically this line:
`self.global_step = int(model_path.split("-")[-1].split("/")[0])`
It seems to work as expected after swapping `"/"` for `os.sep`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6720/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6719/comments | https://api.github.com/repos/huggingface/transformers/issues/6719/events | https://github.com/huggingface/transformers/pull/6719 | 685,573,476 | MDExOlB1bGxSZXF1ZXN0NDczMjc4NDUy | 6,719 | [Albert] Add position ids to allowed uninitialized weights | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=h1) Report\n> Merging [#6719](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/625318f52516b413126be1bb1cb6818231d2eca6?el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6719 +/- ##\n==========================================\n- Coverage 79.49% 79.48% -0.01% \n==========================================\n Files 156 156 \n Lines 28405 28406 +1 \n==========================================\n- Hits 22581 22579 -2 \n- Misses 5824 5827 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6719/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `83.53% <100.00%> (+0.03%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6719/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.76%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=footer). Last update [625318f...8f67175](https://codecov.io/gh/huggingface/transformers/pull/6719?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | MEMBER | null | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #6700
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6719/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6719",
"html_url": "https://github.com/huggingface/transformers/pull/6719",
"diff_url": "https://github.com/huggingface/transformers/pull/6719.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6719.patch",
"merged_at": 1598369932000
} |
https://api.github.com/repos/huggingface/transformers/issues/6718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6718/comments | https://api.github.com/repos/huggingface/transformers/issues/6718/events | https://github.com/huggingface/transformers/pull/6718 | 685,454,632 | MDExOlB1bGxSZXF1ZXN0NDczMTc3NTgx | 6,718 | Create model card for lordtt13/COVID-SciBERT | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=h1) Report\n> Merging [#6718](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/625318f52516b413126be1bb1cb6818231d2eca6?el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6718 +/- ##\n==========================================\n- Coverage 79.49% 79.43% -0.06% \n==========================================\n Files 156 156 \n Lines 28405 28405 \n==========================================\n- Hits 22581 22564 -17 \n- Misses 5824 5841 +17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | 
`77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6718/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=footer). Last update [625318f...c44165d](https://codecov.io/gh/huggingface/transformers/pull/6718?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6718/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6718",
"html_url": "https://github.com/huggingface/transformers/pull/6718",
"diff_url": "https://github.com/huggingface/transformers/pull/6718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6718.patch",
"merged_at": 1598476946000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6717/comments | https://api.github.com/repos/huggingface/transformers/issues/6717/events | https://github.com/huggingface/transformers/pull/6717 | 685,445,761 | MDExOlB1bGxSZXF1ZXN0NDczMTcwMDQ2 | 6,717 | Fix TF optimizer | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=h1) Report\n> Merging [#6717](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/841f07156948a28f8cc9182bb14c911f9e63b0e7?el=desc) will **decrease** coverage by `0.33%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6717 +/- ##\n==========================================\n- Coverage 79.77% 79.44% -0.34% \n==========================================\n Files 156 156 \n Lines 28392 28392 \n==========================================\n- Hits 22650 22556 -94 \n- Misses 5742 5836 +94 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.65% <50.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=footer). Last update [841f071...5281631](https://codecov.io/gh/huggingface/transformers/pull/6717?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,600 | 1,598 | CONTRIBUTOR | null | Fix #6560. Now the parameter `experimental_aggregate_gradients` should be correctly taken into account.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6717",
"html_url": "https://github.com/huggingface/transformers/pull/6717",
"diff_url": "https://github.com/huggingface/transformers/pull/6717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6717.patch",
"merged_at": 1598454765000
} |
https://api.github.com/repos/huggingface/transformers/issues/6716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6716/comments | https://api.github.com/repos/huggingface/transformers/issues/6716/events | https://github.com/huggingface/transformers/pull/6716 | 685,431,066 | MDExOlB1bGxSZXF1ZXN0NDczMTU3NDg4 | 6,716 | Fix ONNX test_quantize unittest | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=h1) Report\n> Merging [#6716](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/abc0202194674ae5e241e547f3af34b4226bdc72?el=desc) will **increase** coverage by `1.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6716 +/- ##\n==========================================\n+ Coverage 78.98% 80.01% +1.03% \n==========================================\n Files 156 156 \n Lines 28398 28398 \n==========================================\n+ Hits 22429 22724 +295 \n+ Misses 5969 5674 -295 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% 
<0.00%> (+2.60%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+5.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=footer). Last update [abc0202...b0d57be](https://codecov.io/gh/huggingface/transformers/pull/6716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Lets see what happens!"
] | 1,598 | 1,598 | 1,598 | MEMBER | null | `quantize` was silently catching exceptions; it now lets them propagate to the caller to avoid a dead code path.
It also introduces the extra dependency group ["onnxruntime"], required to quantize a model, and installs that extra when running slow tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6716",
"html_url": "https://github.com/huggingface/transformers/pull/6716",
"diff_url": "https://github.com/huggingface/transformers/pull/6716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6716.patch",
"merged_at": 1598376281000
} |
https://api.github.com/repos/huggingface/transformers/issues/6715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6715/comments | https://api.github.com/repos/huggingface/transformers/issues/6715/events | https://github.com/huggingface/transformers/pull/6715 | 685,410,326 | MDExOlB1bGxSZXF1ZXN0NDczMTM5NzM3 | 6,715 | tensor.nonzero() is deprecated in PyTorch 1.6 | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=h1) Report\n> Merging [#6715](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b105f2c6b3986e7170be353b1684861a3f70991b?el=desc) will **increase** coverage by `0.50%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6715 +/- ##\n==========================================\n+ Coverage 79.14% 79.65% +0.50% \n==========================================\n Files 156 156 \n Lines 28245 28245 \n==========================================\n+ Hits 22355 22499 +144 \n+ Misses 5890 5746 -144 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.95%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6715/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=footer). Last update [b105f2c...cccb59f](https://codecov.io/gh/huggingface/transformers/pull/6715?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | MEMBER | null | It also seems to make the api-inference crash ...
Signed-off-by: Morgan Funtowicz <[email protected]>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6715/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6715",
"html_url": "https://github.com/huggingface/transformers/pull/6715",
"diff_url": "https://github.com/huggingface/transformers/pull/6715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6715.patch",
"merged_at": 1598357574000
} |
https://api.github.com/repos/huggingface/transformers/issues/6714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6714/comments | https://api.github.com/repos/huggingface/transformers/issues/6714/events | https://github.com/huggingface/transformers/issues/6714 | 685,394,740 | MDU6SXNzdWU2ODUzOTQ3NDA= | 6,714 | RuntimeError: forward() Expected a value of type ‘Tensor’ for argument ‘input_ids’ but instead found type ‘list’ while loading a torchscript model following the documentation | {
"login": "xf05888",
"id": 33285394,
"node_id": "MDQ6VXNlcjMzMjg1Mzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/33285394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xf05888",
"html_url": "https://github.com/xf05888",
"followers_url": "https://api.github.com/users/xf05888/followers",
"following_url": "https://api.github.com/users/xf05888/following{/other_user}",
"gists_url": "https://api.github.com/users/xf05888/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xf05888/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xf05888/subscriptions",
"organizations_url": "https://api.github.com/users/xf05888/orgs",
"repos_url": "https://api.github.com/users/xf05888/repos",
"events_url": "https://api.github.com/users/xf05888/events{/privacy}",
"received_events_url": "https://api.github.com/users/xf05888/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @xf05888, \r\n\r\nThanks for submitting the issue! I can reproduce it. I updated the docs here: #6714 . This should solve your problem I think. The input lists of tensors has to be unpacked before using it.\r\n"
] | 1,598 | 1,598 | 1,598 | NONE | null | # ❓ Questions & Help
I followed the documentation [here](https://huggingface.co/transformers/torchscript.html) to convert transformers model to torchscript, modified the code for `ALBERT` and then ran them on Google Colab (also my local machine):
**To reproduce:**
**Step 1**
```
from transformers import AlbertModel, AlbertTokenizer, AlbertConfig
import torch
enc = AlbertTokenizer.from_pretrained("albert-xxlarge-v2")
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)
masked_index = 8
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]
config = AlbertConfig(torchscript=True)
model = AlbertModel(config)
model.eval()
model = AlbertModel.from_pretrained("albert-xxlarge-v2", torchscript=True)
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "albert-xxlarge-v2.pt")
```
I successfully exported the model, but the second code cell threw out a `RuntimeError`:
**Step 2**
**Code:**
```
loaded_model = torch.jit.load("albert-xxlarge-v2.pt")
loaded_model.eval()
all_encoder_layers, pooled_output = loaded_model(dummy_input)
```
**Error message:**
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-7-2e61b1def246> in <module>()
2 loaded_model.eval()
3
----> 4 all_encoder_layers, pooled_output = loaded_model(dummy_input)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
RuntimeError: forward() Expected a value of type 'Tensor' for argument 'input_ids' but instead found type 'list'.
Position: 1
Value: [tensor([[ 2, 72, 23, 2170, 27674, 13, 60, 3, 4, 27674,
23, 21, 10956, 7911, 3]]), tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]])]
Declaration: forward(__torch__.transformers.modeling_albert.AlbertModel self, Tensor input_ids, Tensor attention_mask) -> ((Tensor, Tensor))
Cast error details: Unable to cast Python instance to C++ type (compile in debug mode for details)
```
In order to ensure this isn't caused by my edits to the code, I also ran the original code there to export `BERT` to TorchScript and **got the same error at the same step**.
**Software Versions:**
```
Name: torch
Version: 1.6.0+cu101
Name: transformers
Version: 3.0.2
```
Is this a problem with the documentation? And how to fix it. Thanks for any advice.
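For what it's worth, the traced `forward` takes `input_ids` and `attention_mask` as separate positional tensors, so the list has to be unpacked before the call — e.g. `loaded_model(*dummy_input)` or `loaded_model(tokens_tensor, segments_tensors)`. Below is a minimal pure-Python sketch of the argument-binding issue (no torch needed; the names and values are illustrative only):

```python
def forward(input_ids, attention_mask):
    # Stand-in for the traced model's forward(input_ids, attention_mask):
    # each argument must arrive as its own positional value.
    return input_ids, attention_mask

dummy_input = [[2, 72, 23], [0, 0, 1]]  # [tokens_tensor, segments_tensors]

# forward(dummy_input) would bind the whole list to input_ids, which is
# what triggers the TorchScript type error; unpacking with * passes each
# element as its own positional argument:
ids, mask = forward(*dummy_input)
print(ids)   # [2, 72, 23]
print(mask)  # [0, 0, 1]
```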
## Details
It shouldn’t throw a runtime error, and the loaded model should be usable normally.
Original Link to the Huggingface Forum: [https://discuss.huggingface.co/t/get-runtimeerror-forward-expected-a-value-of-type-tensor-for-argument-input-ids-but-instead-found-type-list-while-loading-torchscript-models-converting-from-transformers/828](https://discuss.huggingface.co/t/get-runtimeerror-forward-expected-a-value-of-type-tensor-for-argument-input-ids-but-instead-found-type-list-while-loading-torchscript-models-converting-from-transformers/828) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6714/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6714/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6713/comments | https://api.github.com/repos/huggingface/transformers/issues/6713/events | https://github.com/huggingface/transformers/pull/6713 | 685,389,591 | MDExOlB1bGxSZXF1ZXN0NDczMTIxODc3 | 6,713 | Fix the TF Trainer gradient accumulation and the TF NER example | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=h1) Report\n> Merging [#6713](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/841f07156948a28f8cc9182bb14c911f9e63b0e7?el=desc) will **increase** coverage by `0.26%`.\n> The diff coverage is `20.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6713 +/- ##\n==========================================\n+ Coverage 79.77% 80.04% +0.26% \n==========================================\n Files 156 156 \n Lines 28392 28393 +1 \n==========================================\n+ Hits 22650 22727 +77 \n+ Misses 5742 5666 -76 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.21% <0.00%> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <100.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | 
:arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6713/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=footer). Last update [841f071...b4b51de](https://codecov.io/gh/huggingface/transformers/pull/6713?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,600 | 1,598 | CONTRIBUTOR | null | This PR fixes #6479 and also fixes the NER example, which had not been working for a few weeks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6713/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6713",
"html_url": "https://github.com/huggingface/transformers/pull/6713",
"diff_url": "https://github.com/huggingface/transformers/pull/6713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6713.patch",
"merged_at": 1598532335000
} |
https://api.github.com/repos/huggingface/transformers/issues/6712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6712/comments | https://api.github.com/repos/huggingface/transformers/issues/6712/events | https://github.com/huggingface/transformers/issues/6712 | 685,322,595 | MDU6SXNzdWU2ODUzMjI1OTU= | 6,712 | longformer padding logging | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! This should be manageable with the logger:\r\n\r\n```py\r\nimport transformers\r\nimport logging\r\n\r\nlogger = logging.getLogger(\"transformers\")\r\nlogger.setLevel(logging.ERROR)\r\n```",
"thank you! sounds great. However, I do think that you can perhaps just notify this behavior at the start of training rather than logging every step.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | CONTRIBUTOR | null | # 🚀 Feature request
The longformer model logs the extra padding length that it adds (when the padding length is bigger than zero). Can you please add a verbose or non-verbose option to control this behavior? It basically creates a log every forward pass.
the code is in:
module - transformers/modeling_longformer.py
function - _pad_to_window_size
line - 555
Thank you so much ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6712/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6711/comments | https://api.github.com/repos/huggingface/transformers/issues/6711/events | https://github.com/huggingface/transformers/issues/6711 | 685,277,627 | MDU6SXNzdWU2ODUyNzc2Mjc= | 6,711 | Pegasus finetuning: OOM | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @laibamehnaz can you also post env info ",
"have seen this issue with colab, when the RAM usage suddenly increases colab just crashes it , were you using colab ?\r\n\r\nJust to confirm can you try using less examples, you can control the number of examples using `--n_train`\r\n\r\nOR\r\n\r\nMy guess is at that point when it crashed it may have received a longer sentence which could have resulted in the whole batch being large. If running on single GPU, you can use `sortish_sampler` which samples the longer batches first, so we can catch these types of errors early, can be enabled using `--sortish_sampler` ",
"Yes, I am using Colab. \r\nSure, I will check with --sortish_sampler. ",
"So I tried what you said but still get the same error. Tried with lesser training examples, still getting the same error. Tried with fewer validation examples as well. It seems like this error comes every time the first validation loop is ended, no matter the size. ",
"I see. Could you post the env info, GPU, RAM etc ? and the specific command that you ran which resulted in this error ? I will try to reproduce it ",
"GPU: Tesla K80\r\nRAM: 12GB\r\n\r\n./finetune_pegasus_xsum.sh \\\r\n --data_dir ./data/ \\\r\n --output_dir ./output/ \\\r\n --train_batch_size=2 \\\r\n --eval_batch_size=2 \\\r\n --val_check_interval 0.5 \\\r\n --num_train_epochs 3 \\\r\n --gradient_accumulation_steps 128 \\\r\n --model_name_or_path google/pegasus-xsum",
"I'd add `--freeze_embeds --sortish_sampler`.\r\nLMK how it goes, happy to help!",
"I have tried with them as well. Same issue :(",
"Hi @sshleifer , \r\nLooks like I hadn't tried with --freeze_embeds. \r\nWorks well now. Thanks a lot. \r\n\r\nEven though it leads to full completion, I still get something like this : \r\n\r\n`Epoch 3: 100% 300/300 [09:26<00:00, 1.89s/it, loss=97.408, v_num=6]\r\n./finetune_pegasus_xsum.sh: line 16: 551 Killed `\r\n\r\nAnd, it doesn't generate summaries for the test set even with --do_predict\r\n",
"@sshleifer similar issue here #6665",
"\r\n`--do_predict` doesn't work (there is a PL bug), you have to use `run_eval.py` to evaluate.\r\n\r\nHere is command I ran to evaluate `pegasus-xsum` on xsum/test data:\r\n\r\n```bash\r\nmkdir gens\r\nexport DATA_DIR=xsum\r\npython run_eval.py google/pegasus-xsum \\\r\n $DATA_DIR/test.source gens/peg_xsum_test_generation.txt \\\r\n --reference_path $DATA_DIR/test.target \\\r\n --score_path gens/peg_xsum_rouge.txt --task summarization \\\r\n --device cuda \\\r\n --bs 8\r\n```",
"I have seen \"killed\" at/towards the end of training a few times and just ignore it.",
"Alright. Thank you so much :))",
"@sshleifer I got core dumped with this, even with --freeze-embeds on 16GB P100\r\n```bash\r\n!bash finetune_pegasus_xsum.sh \\\r\n --train_batch_size 2 \\\r\n --eval_batch_size 4 \\\r\n --model_name_or_path google/pegasus-large \\\r\n --n_train 256 \\\r\n --n_val 256 \\\r\n --output_dir xsum_pegasus_test_4 \\\r\n --data_dir xsum \\\r\n --gpus 1 \\\r\n --sortish_sampler \\\r\n --val_check_interval 0.02\r\n```\r\n\r\n```\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_io.py\", line 273, in save_checkpoint\r\n self._atomic_save(checkpoint, filepath)\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_io.py\", line 264, in _atomic_save\r\n torch.save(checkpoint, tmp_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 365, in save\r\n return\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 258, in __exit__\r\n self.file_like.write_end_of_file()\r\nRuntimeError: [enforce fail at inline_container.cc:262] . unexpected pos 869515520 vs 869515408\r\nterminate called after throwing an instance of 'c10::Error'\r\n what(): [enforce fail at inline_container.cc:262] . 
unexpected pos 869515520 vs 869515408\r\nframe #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::string const&, void const*) + 0x47 (0x7f71b2235fd7 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)\r\nframe #1: <unknown function> + 0x228ff30 (0x7f71eb51af30 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)\r\nframe #2: <unknown function> + 0x228c163 (0x7f71eb517163 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)\r\nframe #3: caffe2::serialize::PyTorchStreamWriter::writeRecord(std::string const&, void const*, unsigned long, bool) + 0x17b (0x7f71eb51c10b in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)\r\nframe #4: caffe2::serialize::PyTorchStreamWriter::writeEndOfFile() + 0xe1 (0x7f71eb51cca1 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)\r\nframe #5: caffe2::serialize::PyTorchStreamWriter::~PyTorchStreamWriter() + 0x115 (0x7f71eb51d495 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)\r\nframe #6: <unknown function> + 0x5a35e3 (0x7f71f9e0b5e3 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nframe #7: <unknown function> + 0x273c00 (0x7f71f9adbc00 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nframe #8: <unknown function> + 0x274e4e (0x7f71f9adce4e in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nframe #9: python3() [0x588a98]\r\nframe #10: python3() [0x5ad558]\r\nframe #11: python3() [0x5ad56e]\r\nframe #12: python3() [0x5ad56e]\r\nframe #13: python3() [0x5ad56e]\r\nframe #14: python3() [0x5ad56e]\r\nframe #15: python3() [0x5ad56e]\r\nframe #16: python3() [0x5ad56e]\r\nframe #17: python3() [0x5ad56e]\r\nframe #18: python3() [0x5ad56e]\r\nframe #19: python3() [0x5ad56e]\r\nframe #20: python3() [0x5ad56e]\r\nframe #21: python3() [0x5ad56e]\r\nframe #22: python3() [0x5ad56e]\r\nframe #23: python3() [0x5ad56e]\r\nframe #24: python3() [0x5ad56e]\r\nframe #25: 
python3() [0x5ad56e]\r\nframe #26: python3() [0x5ad56e]\r\nframe #27: python3() [0x56b636]\r\n<omitting python frames>\r\nframe #33: __libc_start_main + 0xe7 (0x7f72058bdb97 in /lib/x86_64-linux-gnu/libc.so.6)\r\n\r\nfinetune_pegasus_xsum.sh: line 14: 2967 Aborted (core dumped) python finetune.py --learning_rate=1e-4 --do_train --do_predict --n_val 1000 --val_check_interval 0.25 --max_source_length 512 --max_target_length 56 --freeze_embeds --max_target_length 56 --label_smoothing 0.1 \"$@\"\r\n```\r\n\r\nGPU: P100 (16GB)\r\nRAM: 12 GB\r\n\r\n[colab](https://colab.research.google.com/drive/1d0FfFNGUkRrgqkbUQRQEB0VqMzuVET7Z?usp=sharing)",
"Crazy traceback. Is that torch 1.6 @patil-suraj ?",
"Yes, 1.6.0+cu101",
"Also `--freeze_embeds` is already present in `finetune_pegasus_xsum.sh` [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh#L13), so I'm a bit confused about how adding extra `--freeze_embeds` solved @laibamehnaz 's issue.",
"Yes, i was very confused about that too.",
"I tried with just 100 training examples, and added an extra --freeze_embeds and it worked. Now I am trying on the entire dataset and checking.",
"LMK how it goes, I'm thinking that this is not GPU OOM. This issue was previously observed when RAM usage suddenly increased on colab",
"Same issue again :/\r\n`Epoch 0: 50% 3415/6831 [49:00<49:00, 1.16it/s, loss=88.546, v_num=7]/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.\r\n warnings.warn(SAVE_STATE_WARNING, UserWarning)\r\ntcmalloc: large alloc 1202077696 bytes == 0x181364000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4\r\ntcmalloc: large alloc 1502601216 bytes == 0x212e2000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4\r\ntcmalloc: large alloc 1878253568 bytes == 0x7f9ef00c2000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4\r\ntcmalloc: large alloc 2347819008 bytes == 0x7f9e641b4000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4\r\ntcmalloc: large alloc 2934775808 bytes == 0x7f9db52e2000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 
0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4\r\ntcmalloc: large alloc 3668475904 bytes == 0x7f9e641b4000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4\r\ntcmalloc: large alloc 4585594880 bytes == 0x7f9ca3db8000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4\r\ntcmalloc: large alloc 5731999744 bytes == 0x7f9db52e2000 @ 0x7fa01e612615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7fa0128cb950 0x7fa0128cfbf7 0x7fa012c007e8 0x7fa012bb61b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4\r\n./finetune_pegasus_xsum.sh: line 16: 453 Killed `",
"Again after validation loop ?",
"Yes, exactly after the first validation loop.",
"Did you notice the RAM usage ?\r\nSeems related to serialisation or RAM ",
"No, I didn't. I don't think I can check now, right?",
"Yes, need to check when it's executing. can check ram usage in top right corner of colab when it's executing",
"Sure, lemme run it again and check. Will let you know.",
"Same thing again, fails right after the first validation loop. \r\nRAM usage at the exact end of validation loop : 7.86GB/12.72GB and then the same error as before. \r\n \r\n",
"@laibamehnaz \r\n`tcmalloc` is google's fancy `malloc` alternative and throws this error when it thinks that the requested memory might exceed the available memory.\r\n\r\n`tcmalloc: large alloc 5731999744 bytes` means it's trying to alloc ~5.73GB, so I think memory usage is peaking up when saving the checkpoint (which is large for pegasus, ~5GB) which is resulting in this error. \r\n\r\nStrangely, I ran this multiple times with 100 XSUM examples on the same K80 and 12 GB RAM instance and didn't see this error \r\nThis is the command that I used\r\n```bash\r\n!bash finetune_pegasus_xsum.sh \\\r\n --model_name_or_path google/pegasus-xsum \\\r\n --data_dir xsum \\\r\n --output_dir xsum_pegasus_test_4 \\\r\n --train_batch_size 2 \\\r\n --eval_batch_size 2 \\\r\n --num_train_epochs 1 \\\r\n --n_train 100 \\\r\n --n_val 100 \\\r\n --sortish_sampler \\\r\n --gpus 1 \\\r\n --val_check_interval 0.25 \\\r\n --gradient_accumulation_steps 4 \\\r\n```\r\n\r\n[colab](https://colab.research.google.com/drive/1iyhtazbj6yPKPWHuy02vwjDPpH4R5p-G?usp=sharing)\r\n\r\nmaybe switching to higher RAM instance should solve this issue. But let's wait for @sshleifer's answer",
"Right right, I understand. Actually, I am running this on my own dataset, and not on XSUM."
] | 1,598 | 1,598 | 1,598 | NONE | null | Epoch 0: 91% 5747/6331 [39:52<04:03, 2.40it/s, loss=75.765, v_num=2]/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.
warnings.warn(SAVE_STATE_WARNING, UserWarning)
tcmalloc: large alloc 1083260928 bytes == 0x1aece0000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4
tcmalloc: large alloc 1354080256 bytes == 0x21e5c000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4
tcmalloc: large alloc 1692606464 bytes == 0x7f10651ce000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4
tcmalloc: large alloc 2115764224 bytes == 0x7f0fe700e000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4
tcmalloc: large alloc 2644705280 bytes == 0x7f0f495de000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4
tcmalloc: large alloc 3305881600 bytes == 0x7f0fe700e000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4
tcmalloc: large alloc 4132356096 bytes == 0x7f0e530f2000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4
tcmalloc: large alloc 5165449216 bytes == 0x7f0f495de000 @ 0x7f144f09c615 0x591f47 0x4cc229 0x4cc38b 0x566c91 0x5a4df1 0x630b1d 0x7f1443355950 0x7f1443359bf7 0x7f144368a7e8 0x7f14436401b3 0x50a47f 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 0x50a64d 0x50c1f4 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4
./finetune_pegasus_xsum.sh: line 15: 876 Killed
I appreciate any help. Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6711/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6710/comments | https://api.github.com/repos/huggingface/transformers/issues/6710/events | https://github.com/huggingface/transformers/pull/6710 | 685,219,280 | MDExOlB1bGxSZXF1ZXN0NDcyOTc5NDU5 | 6,710 | [squad] make examples and dataset accessible from SquadDataset object | {
"login": "lazovich",
"id": 678679,
"node_id": "MDQ6VXNlcjY3ODY3OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/678679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lazovich",
"html_url": "https://github.com/lazovich",
"followers_url": "https://api.github.com/users/lazovich/followers",
"following_url": "https://api.github.com/users/lazovich/following{/other_user}",
"gists_url": "https://api.github.com/users/lazovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lazovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lazovich/subscriptions",
"organizations_url": "https://api.github.com/users/lazovich/orgs",
"repos_url": "https://api.github.com/users/lazovich/repos",
"events_url": "https://api.github.com/users/lazovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/lazovich/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=h1) Report\n> Merging [#6710](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0344428f7955675847ef95ddcb4980236b6f8721?el=desc) will **decrease** coverage by `1.27%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6710 +/- ##\n==========================================\n- Coverage 80.06% 78.78% -1.28% \n==========================================\n Files 156 156 \n Lines 28386 28391 +5 \n==========================================\n- Hits 22726 22367 -359 \n- Misses 5660 6024 +364 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/datasets/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL3NxdWFkLnB5) | `44.31% <0.00%> (-2.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.34% <0.00%> (-1.63%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | 
`85.71% <0.00%> (-0.76%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+6.20%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=footer). Last update [0344428...20d0228](https://codecov.io/gh/huggingface/transformers/pull/6710?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sgugger @LysandreJik thanks so much for the comments/suggestions! I have updated the code to include support for the legacy cache format. I had a question on one comment, but if there are any other changes needed please let me know. "
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | In order to do evaluation on the SQuAD dataset using squad_evaluate, the user needs access to both the examples loaded in the dataset and the TensorDataset that contains values like unique_id and the like that are used in constructing the list of SquadResult objects. This PR surfaces the examples and dataset to the user so that they can access it directly.
For example of why access to those is needed, see how evaluation is currently done in examples/run_squad.py. The SquadDataset object attempts to wrap up some of this functionality, but without access to examples and dataset the evaluation is not possible. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6710/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6710",
"html_url": "https://github.com/huggingface/transformers/pull/6710",
"diff_url": "https://github.com/huggingface/transformers/pull/6710.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6710.patch",
"merged_at": 1598376777000
} |
https://api.github.com/repos/huggingface/transformers/issues/6709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6709/comments | https://api.github.com/repos/huggingface/transformers/issues/6709/events | https://github.com/huggingface/transformers/issues/6709 | 685,164,847 | MDU6SXNzdWU2ODUxNjQ4NDc= | 6,709 | consolidate tf activation functions | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Done by @jplu ! https://github.com/huggingface/transformers/blob/master/src/transformers/activations_tf.py#L50"
] | 1,598 | 1,602 | 1,602 | CONTRIBUTOR | null | e.g.
```
def gelu(x):
"""Gaussian Error Linear Unit.
This is a smoother version of the RELU.
Original paper: https://arxiv.org/abs/1606.08415
Args:
x: float Tensor to perform activation.
Returns:
`x` with the GELU activation applied.
"""
cdf = 0.5 * (1.0 + tf.tanh((np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3)))))
return x * cdf
```
See ACT2FN for torch.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6709/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6709/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6708/comments | https://api.github.com/repos/huggingface/transformers/issues/6708/events | https://github.com/huggingface/transformers/pull/6708 | 685,160,768 | MDExOlB1bGxSZXF1ZXN0NDcyOTI5NjYz | 6,708 | [bart] rename self-attention -> attention | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=h1) Report\n> Merging [#6708](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f58903bb62870342eae52f5a02c9105ec6f9b1e?el=desc) will **increase** coverage by `0.12%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6708 +/- ##\n==========================================\n+ Coverage 80.02% 80.15% +0.12% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n+ Hits 22876 22912 +36 \n+ Misses 5710 5674 -36 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.06% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: 
|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (+7.18%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6708/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=footer). Last update [0f58903...4db250c](https://codecov.io/gh/huggingface/transformers/pull/6708?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Makes more sense since also used from cross-attention.
Does not change the state dict, or break slow tests/backwards compatibility.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6708/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6708",
"html_url": "https://github.com/huggingface/transformers/pull/6708",
"diff_url": "https://github.com/huggingface/transformers/pull/6708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6708.patch",
"merged_at": 1598738589000
} |
https://api.github.com/repos/huggingface/transformers/issues/6707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6707/comments | https://api.github.com/repos/huggingface/transformers/issues/6707/events | https://github.com/huggingface/transformers/pull/6707 | 685,145,658 | MDExOlB1bGxSZXF1ZXN0NDcyOTE3NDE3 | 6,707 | copy of #6654 for easier pulling | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,598 | 1,651 | 1,598 | CONTRIBUTOR | null | ```bash
git fetch
git checkout batch-parity-cleaner
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6707/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6707",
"html_url": "https://github.com/huggingface/transformers/pull/6707",
"diff_url": "https://github.com/huggingface/transformers/pull/6707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6707.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6706/comments | https://api.github.com/repos/huggingface/transformers/issues/6706/events | https://github.com/huggingface/transformers/pull/6706 | 685,134,432 | MDExOlB1bGxSZXF1ZXN0NDcyOTA4MjI3 | 6,706 | ci/gh/self-scheduled: add newline to make examples tests run even if src/ tests fail | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=h1) Report\n> Merging [#6706](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ebc9699fad3889a6e882dce1e4c465232d73438?el=desc) will **increase** coverage by `0.64%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6706 +/- ##\n==========================================\n+ Coverage 78.80% 79.44% +0.64% \n==========================================\n Files 156 156 \n Lines 28386 28386 \n==========================================\n+ Hits 22369 22552 +183 \n+ Misses 6017 5834 -183 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=footer). Last update [0ebc969...1887abe](https://codecov.io/gh/huggingface/transformers/pull/6706?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I'm pretty sure that the tests stop as soon as there's a failure in one of the items. It would seem logical since there could be a relationship between tests, e.g., a test needing the output of the previous test to continue.\r\n\r\nStill merging because it looks more consistent this way!"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | At the moment, the examples tests don't run if the src/ tests fail.
This is not the case for self-push, I don't think. I don't know why.
Maybe you have an idea @julien-c ?
https://github.com/huggingface/transformers/runs/1024137280?check_suite_focus=true

Gonna merge this tomorrow because it can't hurt, then try harder/Stack Overflow if it doesn't work.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6706/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6706",
"html_url": "https://github.com/huggingface/transformers/pull/6706",
"diff_url": "https://github.com/huggingface/transformers/pull/6706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6706.patch",
"merged_at": 1598351190000
} |
https://api.github.com/repos/huggingface/transformers/issues/6705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6705/comments | https://api.github.com/repos/huggingface/transformers/issues/6705/events | https://github.com/huggingface/transformers/issues/6705 | 685,133,158 | MDU6SXNzdWU2ODUxMzMxNTg= | 6,705 | PegasusXSUMIntegrationTest.test_pegasus_xsum_summary | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | self-scheduled. cause of force_bos | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6705/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6705/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6704/comments | https://api.github.com/repos/huggingface/transformers/issues/6704/events | https://github.com/huggingface/transformers/pull/6704 | 685,131,970 | MDExOlB1bGxSZXF1ZXN0NDcyOTA2MzE2 | 6,704 | [s2s] round bleu, rouge to 4 digits | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=h1) Report\n> Merging [#6704](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ebc9699fad3889a6e882dce1e4c465232d73438?el=desc) will **increase** coverage by `1.26%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6704 +/- ##\n==========================================\n+ Coverage 78.80% 80.06% +1.26% \n==========================================\n Files 156 156 \n Lines 28386 28386 \n==========================================\n+ Hits 22369 22727 +358 \n+ Misses 6017 5659 -358 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| 
[src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.24% <0.00%> (-3.53%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6704/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=footer). Last update [0ebc969...c0bbc79](https://codecov.io/gh/huggingface/transformers/pull/6704?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Current output is ridiculous to copy paste into the forums/github/slack:
```
{ "rouge1": 43.38281995775625,
"rouge2": 20.59345268595595,
"rougeL": 30.00905449498762
}
```
also `calculate_bleu_score` -> `calculate_bleu` for consistency with `calculate_rouge` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6704/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6704",
"html_url": "https://github.com/huggingface/transformers/pull/6704",
"diff_url": "https://github.com/huggingface/transformers/pull/6704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6704.patch",
"merged_at": 1598329992000
} |
https://api.github.com/repos/huggingface/transformers/issues/6703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6703/comments | https://api.github.com/repos/huggingface/transformers/issues/6703/events | https://github.com/huggingface/transformers/pull/6703 | 685,089,471 | MDExOlB1bGxSZXF1ZXN0NDcyODcwNTIw | 6,703 | Adding model cards for 5 models | {
"login": "AMontgomerie",
"id": 7648722,
"node_id": "MDQ6VXNlcjc2NDg3MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7648722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AMontgomerie",
"html_url": "https://github.com/AMontgomerie",
"followers_url": "https://api.github.com/users/AMontgomerie/followers",
"following_url": "https://api.github.com/users/AMontgomerie/following{/other_user}",
"gists_url": "https://api.github.com/users/AMontgomerie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AMontgomerie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AMontgomerie/subscriptions",
"organizations_url": "https://api.github.com/users/AMontgomerie/orgs",
"repos_url": "https://api.github.com/users/AMontgomerie/repos",
"events_url": "https://api.github.com/users/AMontgomerie/events{/privacy}",
"received_events_url": "https://api.github.com/users/AMontgomerie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=h1) Report\n> Merging [#6703](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/640550fc7a1e311915ead1bcca6dacea0c503faf?el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6703 +/- ##\n==========================================\n- Coverage 77.85% 77.75% -0.10% \n==========================================\n Files 146 146 \n Lines 26326 26326 \n==========================================\n- Hits 20496 20471 -25 \n- Misses 5830 5855 +25 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (+5.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+29.90%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=footer). Last update [640550f...8293ed6](https://codecov.io/gh/huggingface/transformers/pull/6703?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Adding model cards for:
- iarfmoose/bert-base-cased-qa-evaluator
- iarfmoose/roberta-base-bulgarian-pos
- iarfmoose/roberta-base-bulgarian
- iarfmoose/roberta-small-bulgarian-pos
- iarfmoose/roberta-small-bulgarian | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6703/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6703",
"html_url": "https://github.com/huggingface/transformers/pull/6703",
"diff_url": "https://github.com/huggingface/transformers/pull/6703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6703.patch",
"merged_at": 1598476856000
} |
https://api.github.com/repos/huggingface/transformers/issues/6702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6702/comments | https://api.github.com/repos/huggingface/transformers/issues/6702/events | https://github.com/huggingface/transformers/issues/6702 | 685,081,664 | MDU6SXNzdWU2ODUwODE2NjQ= | 6,702 | Questions on the date of Wikipedia dumps for pretrained checkpoints (BERT and RoBERTa). | {
"login": "woojeongjin",
"id": 37932936,
"node_id": "MDQ6VXNlcjM3OTMyOTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/37932936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/woojeongjin",
"html_url": "https://github.com/woojeongjin",
"followers_url": "https://api.github.com/users/woojeongjin/followers",
"following_url": "https://api.github.com/users/woojeongjin/following{/other_user}",
"gists_url": "https://api.github.com/users/woojeongjin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/woojeongjin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/woojeongjin/subscriptions",
"organizations_url": "https://api.github.com/users/woojeongjin/orgs",
"repos_url": "https://api.github.com/users/woojeongjin/repos",
"events_url": "https://api.github.com/users/woojeongjin/events{/privacy}",
"received_events_url": "https://api.github.com/users/woojeongjin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | # ❓ Questions & Help
Hi, I really appreciate Hugging Face and the fantastic code for pretrained LMs.
I am trying to figure out the date of the Wikipedia dumps used for the pretrained checkpoints (BERT and RoBERTa).
## Details
I'd like to know the date (month, year) of the Wikipedia dumps that were used for the current pretrained checkpoints of BERT-base uncased, RoBERTa-base, and RoBERTa-large.
I am looking for an older version of pretrained checkpoints that were trained on a Wikipedia dump before 2019.
If available, is there a way to get the older version of pretrained checkpoints (before 2019)?
Thanks!
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6702/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6701/comments | https://api.github.com/repos/huggingface/transformers/issues/6701/events | https://github.com/huggingface/transformers/issues/6701 | 685,036,226 | MDU6SXNzdWU2ODUwMzYyMjY= | 6,701 | NER GermEval preprocessor not working as documented | {
"login": "jdruvini",
"id": 70177735,
"node_id": "MDQ6VXNlcjcwMTc3NzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/70177735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jdruvini",
"html_url": "https://github.com/jdruvini",
"followers_url": "https://api.github.com/users/jdruvini/followers",
"following_url": "https://api.github.com/users/jdruvini/following{/other_user}",
"gists_url": "https://api.github.com/users/jdruvini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jdruvini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jdruvini/subscriptions",
"organizations_url": "https://api.github.com/users/jdruvini/orgs",
"repos_url": "https://api.github.com/users/jdruvini/repos",
"events_url": "https://api.github.com/users/jdruvini/events{/privacy}",
"received_events_url": "https://api.github.com/users/jdruvini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @jdruvini, the script was added in [Transformers 3.0.0](https://github.com/huggingface/transformers/releases/tag/v3.0.0) so unfortunately it is only working with more recent versions of Transformers. Could you try to update your version to at least 3.0 to give it a try 🤔",
"Indeed... My bad, sorry.",
"The GermEval 2014 files have been moved and the curl commands in the README do not work. The new location is:\r\nhttps://drive.google.com/drive/folders/1kC0I2UGl2ltrluI9NqDjaQJGw5iliw_J\r\n\r\nJD"
] | 1,598 | 1,598 | 1,598 | NONE | null | ## Environment info
- `transformers` version: 2.5.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@stefan-it
## Information
Following https://github.com/huggingface/transformers/blob/master/examples/token-classification/README.md
The problem arises when using:
python3 scripts/preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt
python3 scripts/preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt
python3 scripts/preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt
The commands above produce the following output:
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "scripts/preprocess.py", line 13, in <module>
max_len -= tokenizer.num_special_tokens_to_add()
AttributeError: 'BertTokenizer' object has no attribute 'num_special_tokens_to_add'
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6701/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6700/comments | https://api.github.com/repos/huggingface/transformers/issues/6700/events | https://github.com/huggingface/transformers/issues/6700 | 685,028,257 | MDU6SXNzdWU2ODUwMjgyNTc= | 6,700 | Some weights of AlbertModel were not initialized ['albert.embeddings.position_ids'] | {
"login": "vgaraujov",
"id": 43154161,
"node_id": "MDQ6VXNlcjQzMTU0MTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/43154161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vgaraujov",
"html_url": "https://github.com/vgaraujov",
"followers_url": "https://api.github.com/users/vgaraujov/followers",
"following_url": "https://api.github.com/users/vgaraujov/following{/other_user}",
"gists_url": "https://api.github.com/users/vgaraujov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vgaraujov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vgaraujov/subscriptions",
"organizations_url": "https://api.github.com/users/vgaraujov/orgs",
"repos_url": "https://api.github.com/users/vgaraujov/repos",
"events_url": "https://api.github.com/users/vgaraujov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vgaraujov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello! Which checkpoint are you trying to load? Is it one of your checkpoints or is it one of the checkpoints hosted on the modelhub?",
"This is probably because of this PR: https://github.com/huggingface/transformers/pull/5773 , but should not pose a real problem. I guess we just have to add the position ids as \"allowed\" non-initialized weights.",
"@LysandreJik I used model hub checkpoints. I runned the following lines with pytorch:\r\n\r\n`from transformers import AlbertForPreTraining`\r\n`model = AlbertForPreTraining.from_pretrained('albert-base-v2')`\r\n",
"Hey @vgaraujov, I can reproduce - this PR: #6700 should fix it. Thanks for reporting it :-) ",
"@patrickvonplaten Thank you for your help! 💯 ",
"Position_ids seems unnecessary to be saved? Why not use register_buffer with persistent=False",
"> Position_ids seems unnecessary to be saved? Why not use register_buffer with persistent=False\r\n\r\nIt's a fantastic suggestion, @ruotianluo!\r\n\r\nBut, alas, It can't be used at the moment, since:\r\n1. this functionality [was added just a few months ago](https://github.com/pytorch/pytorch/pull/37191) (can't require recent `torch`)\r\n2. [it doesn't yet work with torchscript](https://github.com/pytorch/pytorch/issues/45012)\r\n\r\nSee the following solution: https://github.com/huggingface/transformers/pull/7224"
] | 1,598 | 1,600 | 1,598 | NONE | null | Hello!
There seems to be a problem with the current code to load a pre-trained Albert model. This warning appears in any configuration of the Albert model:
`Some weights of AlbertModel were not initialized from the model checkpoint at albert-base-v2 and are newly initialized: ['albert.embeddings.position_ids']`
`You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.`
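For context, here is a minimal torch sketch (a hypothetical stand-in module, not the actual Albert code) of why a newly registered buffer such as `position_ids` triggers this warning: registered buffers are part of the `state_dict`, so a checkpoint saved before the buffer existed carries no value for it and it gets reported as "newly initialized" at load time.

```python
import torch
from torch import nn

class ToyEmbeddings(nn.Module):  # hypothetical stand-in for an embeddings module
    def __init__(self, max_position_embeddings=512):
        super().__init__()
        # Registered buffers are saved/loaded with the state_dict, so a
        # checkpoint created before this buffer was added has no entry for it.
        self.register_buffer(
            "position_ids", torch.arange(max_position_embeddings).expand((1, -1))
        )

print("position_ids" in ToyEmbeddings().state_dict())  # True
```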
I found this happens only when I install it from source. Models load correctly (without the warning) when installing the library with pip. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6700/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6699/comments | https://api.github.com/repos/huggingface/transformers/issues/6699/events | https://github.com/huggingface/transformers/pull/6699 | 684,938,374 | MDExOlB1bGxSZXF1ZXN0NDcyNzQzMzQw | 6,699 | More tests to Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=h1) Report\n> Merging [#6699](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6b4c617666fd26646d44d54f0c45dfe1332b12ca?el=desc) will **decrease** coverage by `0.42%`.\n> The diff coverage is `70.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6699 +/- ##\n==========================================\n- Coverage 79.44% 79.01% -0.43% \n==========================================\n Files 156 156 \n Lines 28386 28388 +2 \n==========================================\n- Hits 22551 22432 -119 \n- Misses 5835 5956 +121 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.64% <70.00%> (+2.91%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| 
[src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-1.96%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.50%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6699/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=footer). Last update [6b4c617...f7790fa](https://codecov.io/gh/huggingface/transformers/pull/6699?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | COLLABORATOR | null | While doing so, I realized there were some problems with the seed (in particular for HP search), so I added a few tests of that too. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6699/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6699",
"html_url": "https://github.com/huggingface/transformers/pull/6699",
"diff_url": "https://github.com/huggingface/transformers/pull/6699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6699.patch",
"merged_at": 1598353656000
} |
https://api.github.com/repos/huggingface/transformers/issues/6698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6698/comments | https://api.github.com/repos/huggingface/transformers/issues/6698/events | https://github.com/huggingface/transformers/pull/6698 | 684,920,735 | MDExOlB1bGxSZXF1ZXN0NDcyNzI4NDk3 | 6,698 | [fixdoc] Add import to pegasus usage doc | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6698/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6698",
"html_url": "https://github.com/huggingface/transformers/pull/6698",
"diff_url": "https://github.com/huggingface/transformers/pull/6698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6698.patch",
"merged_at": 1598298897000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6697/comments | https://api.github.com/repos/huggingface/transformers/issues/6697/events | https://github.com/huggingface/transformers/issues/6697 | 684,912,014 | MDU6SXNzdWU2ODQ5MTIwMTQ= | 6,697 | words of overflowing_tokens in function truncate_sequences is not in right order | {
"login": "cin-levi",
"id": 61211179,
"node_id": "MDQ6VXNlcjYxMjExMTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/61211179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cin-levi",
"html_url": "https://github.com/cin-levi",
"followers_url": "https://api.github.com/users/cin-levi/followers",
"following_url": "https://api.github.com/users/cin-levi/following{/other_user}",
"gists_url": "https://api.github.com/users/cin-levi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cin-levi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cin-levi/subscriptions",
"organizations_url": "https://api.github.com/users/cin-levi/orgs",
"repos_url": "https://api.github.com/users/cin-levi/repos",
"events_url": "https://api.github.com/users/cin-levi/events{/privacy}",
"received_events_url": "https://api.github.com/users/cin-levi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | https://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/src/transformers/tokenization_utils_base.py#L2570
```
if not overflowing_tokens:
    window_len = min(len(pair_ids), stride + 1)
else:
    window_len = 1
overflowing_tokens.extend(pair_ids[-window_len:])
pair_ids = pair_ids[:-1]
```
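To make the resulting order concrete, here is a standalone replay of that loop (with a hypothetical helper name, not the library API):

```python
# Hypothetical standalone replay of the truncation loop above (not the transformers API).
def collect_overflow(pair_ids, num_tokens_to_remove, stride=0):
    overflowing_tokens = []
    for _ in range(num_tokens_to_remove):
        if not overflowing_tokens:
            window_len = min(len(pair_ids), stride + 1)
        else:
            window_len = 1
        overflowing_tokens.extend(pair_ids[-window_len:])
        pair_ids = pair_ids[:-1]
    return overflowing_tokens, pair_ids

# With stride=0, the overflow comes out reversed relative to pair_ids:
print(collect_overflow([1, 2, 3, 4, 5], 3))  # ([5, 4, 3], [1, 2])
```

With `stride=1` the first iteration grabs a two-token window, so the result (`[4, 5, 4, 3]`) is neither the original order nor a simple reversal.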
In my understanding, overflowing_tokens should be a subsequence of the second sequence (pair_ids). But in this code, the order of tokens in overflowing_tokens is not the same as in pair_ids (and when window_len != 1 it is not simply the reversed pair_ids either). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6697/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6696/comments | https://api.github.com/repos/huggingface/transformers/issues/6696/events | https://github.com/huggingface/transformers/pull/6696 | 684,880,421 | MDExOlB1bGxSZXF1ZXN0NDcyNjk0NTU2 | 6,696 | Use separate tqdm progressbars | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=h1) Report\n> Merging [#6696](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6b4c617666fd26646d44d54f0c45dfe1332b12ca?el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `80.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6696 +/- ##\n==========================================\n- Coverage 79.44% 79.41% -0.04% \n==========================================\n Files 156 156 \n Lines 28386 28390 +4 \n==========================================\n- Hits 22551 22545 -6 \n- Misses 5835 5845 +10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6696/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `50.88% <80.00%> (+0.15%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6696/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=footer). Last update [6b4c617...96d9c81](https://codecov.io/gh/huggingface/transformers/pull/6696?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | COLLABORATOR | null | Currently, we close the progress bars and break the inner loops when training is complete, which leaves the progress bars indicating one step less than was actually done. The user might believe the last step was not done (even if it was), so this PR iterates over generators separate from the progress bars and updates the progress bars manually (I tried to only do the last update manually, but tqdm doesn't cooperate with that approach).
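A minimal sketch of the pattern (a generic loop, not the actual Trainer code): the bar is created with an explicit total and advanced by hand, so its count always matches the steps actually run, even if an inner loop breaks early.

```python
from tqdm.auto import tqdm

num_epochs, steps_per_epoch = 2, 5
# Iterate over plain ranges; the progress bar is driven manually.
train_pbar = tqdm(total=num_epochs * steps_per_epoch)
for epoch in range(num_epochs):
    for step in range(steps_per_epoch):
        # ... training step ...
        train_pbar.update(1)  # advance by one completed step
train_pbar.close()
```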
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6696/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6696",
"html_url": "https://github.com/huggingface/transformers/pull/6696",
"diff_url": "https://github.com/huggingface/transformers/pull/6696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6696.patch",
"merged_at": 1598353618000
} |
https://api.github.com/repos/huggingface/transformers/issues/6695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6695/comments | https://api.github.com/repos/huggingface/transformers/issues/6695/events | https://github.com/huggingface/transformers/pull/6695 | 684,845,090 | MDExOlB1bGxSZXF1ZXN0NDcyNjY1ODk2 | 6,695 | Fix hyperparameter_search doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,598 | 1,598 | 1,598 | COLLABORATOR | null | Fixes a few typos in the doc. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6695/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6695",
"html_url": "https://github.com/huggingface/transformers/pull/6695",
"diff_url": "https://github.com/huggingface/transformers/pull/6695.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6695.patch",
"merged_at": 1598317448000
} |
https://api.github.com/repos/huggingface/transformers/issues/6694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6694/comments | https://api.github.com/repos/huggingface/transformers/issues/6694/events | https://github.com/huggingface/transformers/pull/6694 | 684,830,721 | MDExOlB1bGxSZXF1ZXN0NDcyNjUzOTU4 | 6,694 | Move unused args to kwargs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=h1) Report\n> Merging [#6694](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/912a21ec78998a5e35751132c328e7aee8e9f47f?el=desc) will **decrease** coverage by `0.73%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6694 +/- ##\n==========================================\n- Coverage 79.68% 78.95% -0.74% \n==========================================\n Files 156 156 \n Lines 28386 28386 \n==========================================\n- Hits 22619 22411 -208 \n- Misses 5767 5975 +208 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.17% <ø> (-37.57%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| 
[src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6694/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=footer). Last update [912a21e...cd84cb9](https://codecov.io/gh/huggingface/transformers/pull/6694?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | COLLABORATOR | null | Those arguments are popped from the kwargs since they are specific to optuna for now.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6694/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6694",
"html_url": "https://github.com/huggingface/transformers/pull/6694",
"diff_url": "https://github.com/huggingface/transformers/pull/6694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6694.patch",
"merged_at": 1598289604000
} |
https://api.github.com/repos/huggingface/transformers/issues/6693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6693/comments | https://api.github.com/repos/huggingface/transformers/issues/6693/events | https://github.com/huggingface/transformers/issues/6693 | 684,829,226 | MDU6SXNzdWU2ODQ4MjkyMjY= | 6,693 | Longformer finetuning on TPUs IndexError: tuple index out of range | {
"login": "wassimseif",
"id": 7282773,
"node_id": "MDQ6VXNlcjcyODI3NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7282773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wassimseif",
"html_url": "https://github.com/wassimseif",
"followers_url": "https://api.github.com/users/wassimseif/followers",
"following_url": "https://api.github.com/users/wassimseif/following{/other_user}",
"gists_url": "https://api.github.com/users/wassimseif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wassimseif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wassimseif/subscriptions",
"organizations_url": "https://api.github.com/users/wassimseif/orgs",
"repos_url": "https://api.github.com/users/wassimseif/repos",
"events_url": "https://api.github.com/users/wassimseif/events{/privacy}",
"received_events_url": "https://api.github.com/users/wassimseif/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @wassimseif, sadly neither Longformer nor Reformer works on PyTorch/XLA . There is just too much dynamic tensor reshaping happening. I think @ibeltagy made Longformer work on PyTorch/XLA when respecting certain limitations (only local attention)",
"Hey @patrickvonplaten,\r\nUnderstood. Is there some wiki that specifies which model works on XLA & which don't ?",
"@wassimseif, running longformer on pytroch-xla is tracked in this issue https://github.com/allenai/longformer/issues/101. I am aiming to make that code available soon, probably this week. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi. Got the same error. Any update on this issue? Thanks!"
] | 1,598 | 1,616 | 1,604 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?):1.6.0+cu101
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes. XLA
### Who can help
Longformer/Reformer: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): longformer: allenai/longformer-large-4096
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
My Model
```python
class LongFormerBaseUncased(nn.Module):
    def __init__(self):
        super(LongFormerBaseUncased, self).__init__()
        self.bert = transformers.LongformerModel.from_pretrained(
            "allenai/longformer-large-4096",
            gradient_checkpointing=True
        )
        self.bert_drop = nn.Dropout(config.dropout)
        self.out = nn.Linear(1024, config.output_num_classes)

    def forward(self, ids, mask):
        _, o2 = self.bert(ids, attention_mask=mask)
        bo = self.bert_drop(o2)
        output = self.out(bo)
        return output
```
```python
tokenizer = transformers.LongformerTokenizer.from_pretrained(
    "allenai/longformer-base-4096"
)
text = "Very Long text"
tokenized = self.tokenizer.tokenize(text)
inputs = self.tokenizer.encode_plus(
    tokenized,
    is_pretokenized=True,
    max_length=4096,
    pad_to_max_length=True,
    truncation=True,
)
ids = inputs["input_ids"]
mask = inputs["attention_mask"]
ids = ids.to(device, dtype=torch.long)
mask = mask.to(device, dtype=torch.long)
targets = targets.to(device, dtype=torch.float)
# This throws the error
outputs = model(ids=ids, mask=mask)
```
Error
```
Exception in device=TPU:0: tuple index out of range
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 228, in _start_fn
fn(gindex, *args)
File "<ipython-input-14-9a008098ce7f>", line 3, in _mp_fn
a = run()
File "<ipython-input-12-9c37f47d0144>", line 156, in run
train_fn(train_data_loader, model, optimizer, device, scheduler)
File "<ipython-input-12-9c37f47d0144>", line 26, in train_fn
outputs = model(ids=ids, mask=mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-9-b68f74a484cf>", line 12, in forward
_, o2 = self.bert(ids, attention_mask = mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py", line 1004, in forward
output_hidden_states=output_hidden_states,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py", line 692, in forward
create_custom_forward(layer_module), hidden_states, attention_mask,
File "/usr/local/lib/python3.6/dist-packages/torch/utils/checkpoint.py", line 163, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/checkpoint.py", line 74, in forward
outputs = run_function(*args)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py", line 687, in custom_forward
return module(*inputs, output_attentions)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py", line 658, in forward
self_attn_outputs = self.attention(hidden_states, attention_mask, output_attentions=output_attentions,)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 577, in __call__
result = self.forward(*input, **kwargs)
IndexError: tuple index out of range
An exception has occurred, use %tb to see the full traceback.
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6693/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6692/comments | https://api.github.com/repos/huggingface/transformers/issues/6692/events | https://github.com/huggingface/transformers/pull/6692 | 684,803,483 | MDExOlB1bGxSZXF1ZXN0NDcyNjMxMjg2 | 6,692 | Add "tie_word_embeddings" config param | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`tie_word_embeddings` doesn't sound a good name to me since it may be confused with freezing the word embeddings.",
"Maybe `embedding_as_softmax_weights` or something like that?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=h1) Report\n> Merging [#6692](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/625318f52516b413126be1bb1cb6818231d2eca6?el=desc) will **decrease** coverage by `0.41%`.\n> The diff coverage is `81.25%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6692 +/- ##\n==========================================\n- Coverage 79.49% 79.08% -0.42% \n==========================================\n Files 156 156 \n Lines 28405 28399 -6 \n==========================================\n- Hits 22581 22458 -123 \n- Misses 5824 5941 +117 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `96.16% <ø> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.69% <50.00%> (ø)` | |\n| [src/transformers/configuration\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `89.09% <60.00%> (-3.22%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% 
<100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `83.43% <100.00%> (-0.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6692/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=footer). Last update [625318f...c723d16](https://codecov.io/gh/huggingface/transformers/pull/6692?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> `tie_word_embeddings` doesn't sound a good name to me since it may be confused with freezing the word embeddings.\r\n\r\nI think the name `tie_word_embeddings` is alright because it quite clear that it is a flag to me and it forces the output word embeddings to point to the same graph node where the input word embeddings poitns to for which \"tying' is a fitting word IMO. ",
"> > `tie_word_embeddings` doesn't sound a good name to me since it may be confused with freezing the word embeddings.\r\n> \r\n> I think the name `tie_word_embeddings` is alright because it quite clear that it is a flag to me and it forces the output word embeddings to point to the same graph node where the input word embeddings poitns to for which \"tying' is a fitting word IMO.\r\n\r\nFair enough!",
"What about the first part of `tie_weights` - the proposed PR, leaves it unmodified:\r\n```\r\n def tie_weights(self):\r\n [...]\r\n output_embeddings = self.get_output_embeddings()\r\n if output_embeddings is not None:\r\n self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())\r\n```\r\nthis needs to be configurable too.\r\n\r\nThe sub-class override with `pass()` in `reformer` removed both calls and not just `self._tie_or_clone_weights`, so this PR changes the behavior of `reformer` which will now tie input and output embeddings. I will need the same original behavior (neither of 2 calls) for fairseq transformer port.\r\n\r\nI thought the conclusion in https://github.com/huggingface/transformers/issues/6628 was about a config option to activate or not `tie_weights`, but ended up with only one of its internal calls. But, I think this change is good - as it gives more refined control, though also need the first call to be configurable.\r\n\r\nPerhaps it could be: `config.tie_in_out_embeddings` for the first call?\r\n",
"> What about the first part of `tie_weights` - the proposed PR, leaves it unmodified:\r\n> \r\n> ```\r\n> def tie_weights(self):\r\n> [...]\r\n> output_embeddings = self.get_output_embeddings()\r\n> if output_embeddings is not None:\r\n> self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())\r\n> ```\r\n> \r\n> this needs to be configurable too.\r\n> \r\n> The sub-class override with `pass()` in `reformer` removed both calls and not just `self._tie_or_clone_weights`, so this PR changes the behavior of `reformer` which will now tie input and output embeddings. I will need the same original behavior (neither of 2 calls) for fairseq transformer port.\r\n> \r\n> I thought the conclusion in #6628 was about a config option to activate or not `tie_weights`, but ended up with only one of its internal calls. But, I think this change is good - as it gives more refined control, though also need the first call to be configurable.\r\n> \r\n> Perhaps it could be: `config.tie_in_out_embeddings` for the first call?\r\n\r\nSorry I don't really follow here. Could you explain how this PR breaks backward compatibility for Reformer\r\nand what first part of `tie_weights` is not modified? ",
"Here is a stripped down version of the code:\r\n\r\nBefore this PR:\r\n\r\n```\r\n# src/transformers/modeling_utils.py\r\n def tie_weights(self):\r\n self._tie_or_clone_weights(...) # part 1\r\n self._tie_encoder_decoder_weights(...) # part 2\r\n\r\n# src/transformers/modeling_reformer.py\r\n def tie_weights(self): pass\r\n```\r\nAfter this PR:\r\n```\r\n# src/transformers/modeling_utils.py\r\n def tie_weights(self):\r\n self._tie_or_clone_weights(...) # part 1\r\n if self.config.tie_word_embeddings: # part 2\r\n self._tie_encoder_decoder_weights(...)\r\n\r\n# src/transformers/modeling_reformer.py\r\n```\r\nI removed all the other option checks to just show the gist of the change.\r\n\r\nAs you can see the first part of `tie_weights`, i.e. `_tie_or_clone_weights`, will now be called by reformer whereas it was not getting called before this PR when it overridden the whole `tie_weights` method. i.e. the first part also needs a config option.\r\n\r\nPlease let me know whether this clarification was useful, @patrickvonplaten. \r\n",
"> # src/transformers/modeling_utils.py\r\n> def tie_weights(self):\r\n> self._tie_or_clone_weights(...) # part 1\r\n> if self.config.tie_word_embeddings: # part 2\r\n> self._tie_encoder_decoder_weights(...)\r\n> \r\n> # src/transformers/modeling_reformer.py\r\n\r\nI think it should be rather (after this PR):\r\n\r\n```\r\n# src/transformers/modeling_utils.py\r\n def tie_weights(self):\r\n if self.config.tie_word_embeddings:\r\n self._tie_or_clone_weights(...) # part 1\r\n self._tie_encoder_decoder_weights(...) # part2\r\n\r\n# src/transformers/modeling_reformer.py\r\n```",
"Weird. I see what happened. github was showing only part of the code in the diff, not showing the `if self.config.is_encoder_decoder and self.config.tie_encoder_decoder` so I thought the other part will always be run. \r\n`tie_encoder_decoder` and `is_encoder_decoder` are False (by default) for reformer therefore the other part won't be run either, so it's still a noop for reformer.\r\n\r\nAnd then I mixed up the 2 parts when trying to explain what I thought I saw.\r\n\r\nI will checkout the whole PR code in the future and not look at the partial picture shown in github.\r\n\r\nSo all is good.\r\n\r\nThank you for bearing with me.\r\n\r\nThank you for this fix, @patrickvonplaten "
] | 1,598 | 1,598 | 1,598 | MEMBER | null | As discussed in #6628, this PR makes the word embedding tying dependent on a new parameter:
`config.tie_word_embeddings` which is set to `True` by default and set to `False` for Reformer.
Also, some unnecessary code is removed in Albert and a similar param is deprecated in Transfo-XL.
I don't see how this PR could break backwards compatibility as the `tie_word_embeddings` param did not exist before and is set to `True` by default.
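To illustrate what "tying" means mechanically, here is a minimal pure-Python sketch (illustrative only — the `TinyLM` class and its attributes are made up for this example and are not part of the `transformers` codebase): when tied, the output projection references the very same object as the input embedding matrix, so the two can never diverge; a `tie_word_embeddings=False` model like Reformer keeps them independent.

```python
class TinyLM:
    """Toy model illustrating the tie_word_embeddings flag (not real transformers code)."""

    def __init__(self, vocab_size, dim, tie_word_embeddings=True):
        # Input embedding matrix: one row per vocabulary entry.
        self.input_embeddings = [[0.0] * dim for _ in range(vocab_size)]
        if tie_word_embeddings:
            # Tied: both names point at the one matrix; updating one updates both.
            self.output_embeddings = self.input_embeddings
        else:
            # Untied (e.g. Reformer): an independent output matrix.
            self.output_embeddings = [[0.0] * dim for _ in range(vocab_size)]


tied = TinyLM(vocab_size=10, dim=4, tie_word_embeddings=True)
untied = TinyLM(vocab_size=10, dim=4, tie_word_embeddings=False)
print(tied.output_embeddings is tied.input_embeddings)      # True
print(untied.output_embeddings is untied.input_embeddings)  # False
```

With real models the same idea applies to `nn.Embedding`/`nn.Linear` weight tensors rather than plain lists.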
Thanks a lot for posting the issue, @stas00. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6692/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6692",
"html_url": "https://github.com/huggingface/transformers/pull/6692",
"diff_url": "https://github.com/huggingface/transformers/pull/6692.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6692.patch",
"merged_at": 1598432302000
} |
https://api.github.com/repos/huggingface/transformers/issues/6691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6691/comments | https://api.github.com/repos/huggingface/transformers/issues/6691/events | https://github.com/huggingface/transformers/pull/6691 | 684,790,176 | MDExOlB1bGxSZXF1ZXN0NDcyNjIwMzIy | 6,691 | Last fix for Ray HP search | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=h1) Report\n> Merging [#6691](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3a7fdd3f5214d1ec494379e7c65b4eb08146ddb0?el=desc) will **increase** coverage by `0.48%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6691 +/- ##\n==========================================\n+ Coverage 78.95% 79.44% +0.48% \n==========================================\n Files 156 156 \n Lines 28384 28386 +2 \n==========================================\n+ Hits 22412 22551 +139 \n+ Misses 5972 5835 -137 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `50.73% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | 
:arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.95%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.50%)` | :arrow_up: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6691/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=footer). Last update [3a7fdd3...0081e9d](https://codecov.io/gh/huggingface/transformers/pull/6691?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6691/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6691",
"html_url": "https://github.com/huggingface/transformers/pull/6691",
"diff_url": "https://github.com/huggingface/transformers/pull/6691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6691.patch",
"merged_at": 1598285701000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6690/comments | https://api.github.com/repos/huggingface/transformers/issues/6690/events | https://github.com/huggingface/transformers/pull/6690 | 684,745,835 | MDExOlB1bGxSZXF1ZXN0NDcyNTg0MTcw | 6,690 | Add DPR to models summary | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=h1) Report\n> Merging [#6690](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f?el=desc) will **decrease** coverage by `0.71%`.\n> The diff coverage is `78.15%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6690 +/- ##\n==========================================\n- Coverage 80.37% 79.65% -0.72% \n==========================================\n Files 156 156 \n Lines 28058 28248 +190 \n==========================================\n- Hits 22552 22502 -50 \n- Misses 5506 5746 +240 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `48.91% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | |\n| 
[src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `95.55% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <ø> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (ø)` | |\n| ... and [40 more](https://codecov.io/gh/huggingface/transformers/pull/6690/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=footer). Last update [16e3894...f62c9fa](https://codecov.io/gh/huggingface/transformers/pull/6690?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | MEMBER | null | I created a `retrieval-based-models` section for models like DPR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6690/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6690",
"html_url": "https://github.com/huggingface/transformers/pull/6690",
"diff_url": "https://github.com/huggingface/transformers/pull/6690.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6690.patch",
"merged_at": 1598342249000
} |
https://api.github.com/repos/huggingface/transformers/issues/6689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6689/comments | https://api.github.com/repos/huggingface/transformers/issues/6689/events | https://github.com/huggingface/transformers/pull/6689 | 684,733,314 | MDExOlB1bGxSZXF1ZXN0NDcyNTczOTMz | 6,689 | Add tokenizer to Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=h1) Report\n> Merging [#6689](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/abc0202194674ae5e241e547f3af34b4226bdc72?el=desc) will **decrease** coverage by `1.73%`.\n> The diff coverage is `55.55%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6689 +/- ##\n==========================================\n- Coverage 78.98% 77.24% -1.74% \n==========================================\n Files 156 156 \n Lines 28398 28405 +7 \n==========================================\n- Hits 22429 21941 -488 \n- Misses 5969 6464 +495 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.66% <55.55%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `58.88% <0.00%> (-36.67%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-29.33%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6689/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=footer). Last update [abc0202...54feec2](https://codecov.io/gh/huggingface/transformers/pull/6689?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Nice! I like it. Ok for me to do the same on the TF one :+1: "
] | 1,598 | 1,598 | 1,598 | COLLABORATOR | null | Not entirely sure about this change as there is a trade-off API complexity/ease of use.
This PR adds `tokenizer` as an optional argument to `Trainer` (if this is approved, will do the same for `TFTrainer`, I have a few recent changes to port there but was mainly waiting for @jplu to be back from vacation to make the two APIs on par).
The benefit is that:
- we can have a smart default `data_collator` that will automatically pad examples if the tokenizer is provided, so the user doesn't have to learn about data_collators for simple examples.
- we can save the tokenizer along with the model directly inside `Trainer` for the intermediate checkpoints, so a checkpoint folder can be used directly with our scripts when resuming an interrupted training.
As for the bad part, it's just that it adds a new argument to `Trainer`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6689/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6689",
"html_url": "https://github.com/huggingface/transformers/pull/6689",
"diff_url": "https://github.com/huggingface/transformers/pull/6689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6689.patch",
"merged_at": 1598356029000
} |
https://api.github.com/repos/huggingface/transformers/issues/6688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6688/comments | https://api.github.com/repos/huggingface/transformers/issues/6688/events | https://github.com/huggingface/transformers/issues/6688 | 684,729,296 | MDU6SXNzdWU2ODQ3MjkyOTY= | 6,688 | Question Answering demonstrator for contribute model stopped working | {
"login": "mfebIBM",
"id": 50590088,
"node_id": "MDQ6VXNlcjUwNTkwMDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/50590088?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfebIBM",
"html_url": "https://github.com/mfebIBM",
"followers_url": "https://api.github.com/users/mfebIBM/followers",
"following_url": "https://api.github.com/users/mfebIBM/following{/other_user}",
"gists_url": "https://api.github.com/users/mfebIBM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfebIBM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfebIBM/subscriptions",
"organizations_url": "https://api.github.com/users/mfebIBM/orgs",
"repos_url": "https://api.github.com/users/mfebIBM/repos",
"events_url": "https://api.github.com/users/mfebIBM/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfebIBM/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Same issue as #6226. We are currently working on a fix, will post an update here.",
"Interestingly, this was working last Thursday. #6226 was from 21 days ago.",
"@mfebIBM It should be working right now, if you want to give it a try, let us know 👍.\r\n\r\nSorry for the inconvenience.",
"Yes. Working now. Thanks!",
"I'm closing, don't hesitate to reopen if anything goes wrong."
] | 1,598 | 1,598 | 1,598 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
All of this is run on the huggingface platform for contributed models.
Processing of the model on other hosts works correctly, using the versions described below. Was there an upgrade to the deployed transformers demonstration code that broke the loading of contributed q/a models?
- `transformers` version: 2.2.1
- Platform: ubuntu 18.04
- Python version: 3.7.7
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@mfuntowicz
## Information
Model I am using (Bert, XLNet ...): Contributed model `mfeb/albert-xxlarge-v2-squad2` based on Albert xxlarge v2, pretrained with SQuAD2.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
* [x] web demonstrator for question answering, using the contributed model
The tasks I am working on is:
* [x] an official GLUE/SQuAD task: SQuAD 2
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Visit https://huggingface.co/mfeb/albert-xxlarge-v2-squad2
2. press `compute` button
3. See the following message:
```
Model name 'mfeb/albert-xxlarge-v2-squad2' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'mfeb/albert-xxlarge-v2-squad2' was a path or url to a directory containing vocabulary files named ['spiece.model'], but couldn't find such vocabulary files at this path or url.
```
This seems to imply that the code that is performing run_squad does not recognize that the model is a contributed model (rather than one of the recognized, provided models).
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
An answer to the question: `London`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6688/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6688/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6687/comments | https://api.github.com/repos/huggingface/transformers/issues/6687/events | https://github.com/huggingface/transformers/pull/6687 | 684,708,610 | MDExOlB1bGxSZXF1ZXN0NDcyNTUyODE5 | 6,687 | Typo fix in longformer documentation | {
"login": "arnavsharma93",
"id": 1503614,
"node_id": "MDQ6VXNlcjE1MDM2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1503614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnavsharma93",
"html_url": "https://github.com/arnavsharma93",
"followers_url": "https://api.github.com/users/arnavsharma93/followers",
"following_url": "https://api.github.com/users/arnavsharma93/following{/other_user}",
"gists_url": "https://api.github.com/users/arnavsharma93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnavsharma93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnavsharma93/subscriptions",
"organizations_url": "https://api.github.com/users/arnavsharma93/orgs",
"repos_url": "https://api.github.com/users/arnavsharma93/repos",
"events_url": "https://api.github.com/users/arnavsharma93/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnavsharma93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thank you :-) "
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6687/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6687",
"html_url": "https://github.com/huggingface/transformers/pull/6687",
"diff_url": "https://github.com/huggingface/transformers/pull/6687.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6687.patch",
"merged_at": 1598362743000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6686/comments | https://api.github.com/repos/huggingface/transformers/issues/6686/events | https://github.com/huggingface/transformers/pull/6686 | 684,651,398 | MDExOlB1bGxSZXF1ZXN0NDcyNTAzOTU0 | 6,686 | Update repo to isort v5 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=h1) Report\n> Merging [#6686](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a779ad7ecb9e5215b6bd1cfa0153469d37e4274?el=desc) will **decrease** coverage by `0.49%`.\n> The diff coverage is `92.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6686 +/- ##\n==========================================\n- Coverage 79.65% 79.16% -0.50% \n==========================================\n Files 156 156 \n Lines 28250 28254 +4 \n==========================================\n- Hits 22503 22366 -137 \n- Misses 5747 5888 +141 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/commands/user.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy91c2VyLnB5) | `0.00% <ø> (ø)` | |\n| [src/transformers/data/test\\_generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `82.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.68% <ø> (ø)` | |\n| 
[src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <ø> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <ø> (ø)` | |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <ø> (-12.22%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <ø> (ø)` | |\n| ... and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6686/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=footer). Last update [1a779ad...dbaad9c](https://codecov.io/gh/huggingface/transformers/pull/6686?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | COLLABORATOR | null | Since isort now works properly with black, we can use the latest version. It also comes with new functionality (hence the large diff), mainly:
- it can deal with the `__init__` files
- it can deal with imports in if/else blocks.
This will fix #6681
Also, v5 does not use the recursive flag anymore, so I removed it from the make style and make quality commands. For users with an old isort version, this will result in make style/make quality having no impact.
It's very likely that users who have suggested a PR will need to rebase after this is merged, update isort, then run a new `make style`. We can also assist by force-pushing on their branches.
Commented on the changes I made manually; all the others come from the new version of isort. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6686/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6686",
"html_url": "https://github.com/huggingface/transformers/pull/6686",
"diff_url": "https://github.com/huggingface/transformers/pull/6686.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6686.patch",
"merged_at": 1598281382000
} |
https://api.github.com/repos/huggingface/transformers/issues/6685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6685/comments | https://api.github.com/repos/huggingface/transformers/issues/6685/events | https://github.com/huggingface/transformers/pull/6685 | 684,639,280 | MDExOlB1bGxSZXF1ZXN0NDcyNDkzNzUz | 6,685 | Fixed DataCollatorForLanguageModeling not accepting lists of lists | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=h1) Report\n> Merging [#6685](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a779ad7ecb9e5215b6bd1cfa0153469d37e4274?el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `75.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6685 +/- ##\n==========================================\n- Coverage 79.65% 79.64% -0.01% \n==========================================\n Files 156 156 \n Lines 28250 28254 +4 \n==========================================\n+ Hits 22503 22504 +1 \n- Misses 5747 5750 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.70% <75.00%> (-1.21%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=footer). Last update [1a779ad...dd1b689](https://codecov.io/gh/huggingface/transformers/pull/6685?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | As discussed on Slack, currently `DataCollatorForLanguageModeling` and `DataCollatorForPermutationLanguageModeling` cannot take in lists of lists, as opposed to `default_data_collator`. This fixes this issue by calling `torch.Tensor` beforehand if a list of lists is detected. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6685/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6685/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6685",
"html_url": "https://github.com/huggingface/transformers/pull/6685",
"diff_url": "https://github.com/huggingface/transformers/pull/6685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6685.patch",
"merged_at": 1598275904000
} |
https://api.github.com/repos/huggingface/transformers/issues/6684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6684/comments | https://api.github.com/repos/huggingface/transformers/issues/6684/events | https://github.com/huggingface/transformers/issues/6684 | 684,626,304 | MDU6SXNzdWU2ODQ2MjYzMDQ= | 6,684 | missing reference `from model_bertabs import BertAbsSummarizer` | {
"login": "MarcoGorelli",
"id": 33491632,
"node_id": "MDQ6VXNlcjMzNDkxNjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/33491632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarcoGorelli",
"html_url": "https://github.com/MarcoGorelli",
"followers_url": "https://api.github.com/users/MarcoGorelli/followers",
"following_url": "https://api.github.com/users/MarcoGorelli/following{/other_user}",
"gists_url": "https://api.github.com/users/MarcoGorelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarcoGorelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarcoGorelli/subscriptions",
"organizations_url": "https://api.github.com/users/MarcoGorelli/orgs",
"repos_url": "https://api.github.com/users/MarcoGorelli/repos",
"events_url": "https://api.github.com/users/MarcoGorelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarcoGorelli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | [This line](https://github.com/huggingface/transformers/blob/1a779ad7ecb9e5215b6bd1cfa0153469d37e4274/examples/seq2seq/bertabs/convert_bertabs_original_pytorch_checkpoint.py#L28) reads
> from model_bertabs import BertAbsSummarizer
yet neither `model_bertabs` nor `BertAbsSummarizer` can be found in the repository | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6684/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6683/comments | https://api.github.com/repos/huggingface/transformers/issues/6683/events | https://github.com/huggingface/transformers/pull/6683 | 684,605,241 | MDExOlB1bGxSZXF1ZXN0NDcyNDY1MDYx | 6,683 | Don't reset the dataset type + plug for rm unused columns | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=h1) Report\n> Merging [#6683](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a779ad7ecb9e5215b6bd1cfa0153469d37e4274?el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `14.28%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6683 +/- ##\n==========================================\n- Coverage 79.65% 79.60% -0.05% \n==========================================\n Files 156 156 \n Lines 28250 28256 +6 \n==========================================\n- Hits 22503 22494 -9 \n- Misses 5747 5762 +15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.63% <0.00%> (-0.53%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.34% <100.00%> (+0.08%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.51%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=footer). 
Last update [1a779ad...eea7e8d](https://codecov.io/gh/huggingface/transformers/pull/6683?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | COLLABORATOR | null | This PR avoids resetting the dataset type when removing columns, and also introduces a field in `TrainingArguments` to disable that behavior (in case the user wants to use some of those fields in an elaborate data collator). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6683/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6683",
"html_url": "https://github.com/huggingface/transformers/pull/6683",
"diff_url": "https://github.com/huggingface/transformers/pull/6683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6683.patch",
"merged_at": 1598275323000
} |
https://api.github.com/repos/huggingface/transformers/issues/6682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6682/comments | https://api.github.com/repos/huggingface/transformers/issues/6682/events | https://github.com/huggingface/transformers/pull/6682 | 684,601,416 | MDExOlB1bGxSZXF1ZXN0NDcyNDYxODYz | 6,682 | Fix PL token classification examples | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=h1) Report\n> Merging [#6682](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0e42a7bed3de9271ae39c575d7eeb54cf985921?el=desc) will **increase** coverage by `0.51%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6682 +/- ##\n==========================================\n+ Coverage 79.14% 79.66% +0.51% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n+ Hits 22358 22503 +145 \n+ Misses 5890 5745 -145 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | 
:arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.95%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+12.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.93% <0.00%> (+64.09%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6682/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=footer). Last update [d0e42a7...2f843e6](https://codecov.io/gh/huggingface/transformers/pull/6682?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@stefan-it At the bottom of the germeval [page](https://sites.google.com/site/germeval2014ner/data?authuser=0) I found the new location for the datasets (see Downloads section)",
"would love tests!",
"I will start working on them @sshleifer",
"I updated the urls a while ago in this PR https://github.com/huggingface/transformers/pull/6571 😅",
"> I updated the urls a while ago in this PR #6571 😅\r\n\r\nApologies @stefan-it, I didn't know about it. There was an error in PL version of the training so I thought why not fix the dataset URL as well. I am following you now so I'll know more about your PRs. Will you fix the NLP dataset germeval_14 as well?",
"@vblagoje Thanks for this!!\r\n\r\nplease reach out to me on [Lightning's Slack](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A) (username same as here on github) or email me: `nate [at] pytorchlightning.ai`. I'm about to make updates across examples this week and would love to sync up with you on this. "
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | This PR fixes the following:
- fetches germeval_14 dataset from a new [location](https://drive.google.com/drive/folders/1kC0I2UGl2ltrluI9NqDjaQJGw5iliw_J) where it has been moved recently (cc @stefan-it)
- correctly implements `def get_dataloader(self, mode: int, batch_size: int, shuffle: bool = False) -> DataLoader:` from BaseTransformer PL parent class (cc @sshleifer )
I have verified both normal and PL training work as expected. Will add tests as we rework examples to use datasets. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6682/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6682/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6682",
"html_url": "https://github.com/huggingface/transformers/pull/6682",
"diff_url": "https://github.com/huggingface/transformers/pull/6682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6682.patch",
"merged_at": 1598283006000
} |
https://api.github.com/repos/huggingface/transformers/issues/6681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6681/comments | https://api.github.com/repos/huggingface/transformers/issues/6681/events | https://github.com/huggingface/transformers/issues/6681 | 684,571,058 | MDU6SXNzdWU2ODQ1NzEwNTg= | 6,681 | BUILD upgrade to isort v5 | {
"login": "MarcoGorelli",
"id": 33491632,
"node_id": "MDQ6VXNlcjMzNDkxNjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/33491632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MarcoGorelli",
"html_url": "https://github.com/MarcoGorelli",
"followers_url": "https://api.github.com/users/MarcoGorelli/followers",
"following_url": "https://api.github.com/users/MarcoGorelli/following{/other_user}",
"gists_url": "https://api.github.com/users/MarcoGorelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MarcoGorelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MarcoGorelli/subscriptions",
"organizations_url": "https://api.github.com/users/MarcoGorelli/orgs",
"repos_url": "https://api.github.com/users/MarcoGorelli/repos",
"events_url": "https://api.github.com/users/MarcoGorelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/MarcoGorelli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"which linked PR are you referring to?",
"this one https://github.com/timothycrosley/isort/pull/1000 (it's linked to here https://huggingface.co/transformers/contributing.html )"
] | 1,598 | 1,598 | 1,598 | NONE | null | The contributing guide says
> Right now, we need an unreleased version of isort to avoid a bug:
However, it looks like the linked PR has been incorporated into the latest version of isort. I could submit a PR to address this if it would be welcome | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6681/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6680/comments | https://api.github.com/repos/huggingface/transformers/issues/6680/events | https://github.com/huggingface/transformers/issues/6680 | 684,565,360 | MDU6SXNzdWU2ODQ1NjUzNjA= | 6,680 | Tokenizers works different between NFD/NFKD and NFC/NFKC normalize functions in lowercase Turkish(and probably some other languages) | {
"login": "abdullaholuk-loodos",
"id": 70137509,
"node_id": "MDQ6VXNlcjcwMTM3NTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/70137509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abdullaholuk-loodos",
"html_url": "https://github.com/abdullaholuk-loodos",
"followers_url": "https://api.github.com/users/abdullaholuk-loodos/followers",
"following_url": "https://api.github.com/users/abdullaholuk-loodos/following{/other_user}",
"gists_url": "https://api.github.com/users/abdullaholuk-loodos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abdullaholuk-loodos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abdullaholuk-loodos/subscriptions",
"organizations_url": "https://api.github.com/users/abdullaholuk-loodos/orgs",
"repos_url": "https://api.github.com/users/abdullaholuk-loodos/repos",
"events_url": "https://api.github.com/users/abdullaholuk-loodos/events{/privacy}",
"received_events_url": "https://api.github.com/users/abdullaholuk-loodos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Did you experiment with the FastTokenizers from https://github.com/huggingface/tokenizers?\r\n\r\ncc @n1t0 ",
"Yes, it is same.\r\n\r\n```\r\nfrom transformers import BertTokenizerFast\r\n\r\nTEXT = \"ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR\"\r\n\r\nbt = BertTokenizerFast.from_pretrained(\"bert-base-turkish-uncased\")\r\nprint(bt.tokenize(TEXT))\r\n```\r\n\r\n['co', '##cuk', 'san', '##li', '##ur', '##fa', \"'\", 'dan', 'gelenleri', 'o', '##gun', 'olarak', 'yiyor']\r\n\r\nBut it should be: ['çocuk', 'şanlıurfa', \"'\", 'dan', 'gelenleri', 'öğün', 'olarak', 'yiyor']\r\n\r\nWe developed custom normalization module [here](https://github.com/Loodos/turkish-language-models/blob/master/text_normalization.py). For now, we use tokenizers like this:\r\n```\r\nfrom transformers import BertTokenizerFast\r\nfrom text_normalization import TextNormalization\r\n\r\nbt = BertTokenizerFast.from_pretrained(\"loodos/bert-base-turkish-uncased\", do_lower_case=False)\r\n\r\nnorm = TextNormalization()\r\nTEXT = \"ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR\"\r\nLOWER = norm.normalize(TEXT)\r\n\r\nprint(bt.tokenize(LOWER))\r\n```\r\nand it gives : ['çocuk', 'şanlıurfa', \"'\", 'dan', 'gelenleri', 'öğün', 'olarak', 'yiyor']\r\n\r\nCould you please add config parameters in tokenizer_config.json for:\r\n* unicodedata normalization function type(NFD, NFKD, NFC, NFKC)\r\n* is_turkish(I->ı, İ->i)",
"Hello Sir, \r\n\r\nIs there any update about this issue?",
"Hi @abdullaholuk-loodos,\r\n\r\nBertTokenizer is based on WordPiece which is a subword segmentation algorithm. It may split a word into more than one piece. In this way, out-of-vocabulary words can be represented. You should not expect to see exact word tokens.",
"Hi @erncnerky, thanks for reply.\r\n\r\nYou misunderstood me. I am not mentioning about subword segmentation algorithm. I am talking about normalization algorithm before tokenization.\r\n\r\nWhen do_lower_case=True, tokenizer calls _run_strip_accents(self, text) function. https://github.com/huggingface/transformers/blob/447808c85f0e6d6b0aeeb07214942bf1e578f9d2/src/transformers/models/bert/tokenization_bert.py#L420\r\n\r\nThis function, calls text = unicodedata.normalize(\"NFD\", text) normalization function. NFD normalization is not proper for Turkish because of \"ç Ç, ü Ü, ş Ş, ğ Ğ, i İ, ı I\" characters. When you change NFD to NFC or NFKC result changes. NFD normalization adds some invisible characters to text when special Turkish characters that I mentioned. NFC normalization does not add these invisible characters. These invisible characters causes different tokenizations. \r\n\r\nCorpus normalized with NFKC normalization, then subword algorithm run. So it is correct. No invisible characters. But at inference, NFD normalization changes text for Turkish and causes wrong text with invisible characters.\r\n\r\n Please try that:\r\n```\r\nTEXT1 = \"ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR\"\r\n \r\nbt = AutoTokenizer.from_pretrained(\"loodos/bert-base-turkish-uncased\", do_lower_case=True)\r\nprint(bt.tokenize(TEXT1)) \r\n\r\nTEXT2 = \"çocuk şanlıurfa'dan gelenleri öğün olarak yiyor\"\r\n\r\nbt = AutoTokenizer.from_pretrained(\"loodos/bert-base-turkish-uncased\", do_lower_case=False)\r\nprint(bt.tokenize(TEXT2))\r\n```\r\nAs you see, TEXT2 is correct lowercase TEXT1, but results are different because of _run_strip_accents's NFD before tokenization.\r\n\r\nIt is also same with albert tokenizer's keep_accent=False parameter.\r\n\r\nFYI, @julien-c \r\nFYI, @n1t0 ",
"> Hi @erncnerky, thanks for reply.\r\n> \r\n> You misunderstood me. I am not mentioning about subword segmentation algorithm. I am talking about normalization algorithm before tokenization.\r\n> \r\n> When do_lower_case=True, tokenizer calls _run_strip_accents(self, text) function.\r\n> \r\n> https://github.com/huggingface/transformers/blob/447808c85f0e6d6b0aeeb07214942bf1e578f9d2/src/transformers/models/bert/tokenization_bert.py#L420\r\n> \r\n> This function, calls text = unicodedata.normalize(\"NFD\", text) normalization function. NFD normalization is not proper for Turkish because of \"ç Ç, ü Ü, ş Ş, ğ Ğ, i İ, ı I\" characters. When you change NFD to NFC or NFKC result changes. NFD normalization adds some invisible characters to text when special Turkish characters that I mentioned. NFC normalization does not add these invisible characters. These invisible characters causes different tokenizations.\r\n> \r\n> Corpus normalized with NFKC normalization, then subword algorithm run. So it is correct. No invisible characters. But at inference, NFD normalization changes text for Turkish and causes wrong text with invisible characters.\r\n> \r\n> Please try that:\r\n> \r\n> TEXT1 = \"ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR\"\r\n> \r\n> bt = AutoTokenizer.from_pretrained(\"loodos/bert-base-turkish-uncased\", do_lower_case=True)\r\n> printf(bt.tokenize(TEXT1))\r\n> \r\n> TEXT2 = \"çocuk şanlıurfa'dan gelenleri öğün olarak yiyor\"\r\n> \r\n> bt = AutoTokenizer.from_pretrained(\"loodos/bert-base-turkish-uncased\", do_lower_case=False)\r\n> printf(bt.tokenize(TEXT2))\r\n> \r\n> As you see, TEXT2 is correct lowercase TEXT1, but results are different because of _run_strip_accents's NFD before tokenization.\r\n> \r\n> It is also same with albert tokenizer's keep_accent=False parameter.\r\n> \r\n> FYI, @julien-c\r\n\r\nI had seen the problem. 
Since you gave exact word tokens which are not mostly expected especially for the morphologically rich languages such as Turkish, I wrote the comment. ",
"> \r\n> \r\n> > Hi @erncnerky, thanks for reply.\r\n> > You misunderstood me. I am not mentioning about subword segmentation algorithm. I am talking about normalization algorithm before tokenization.\r\n> > When do_lower_case=True, tokenizer calls _run_strip_accents(self, text) function.\r\n> > https://github.com/huggingface/transformers/blob/447808c85f0e6d6b0aeeb07214942bf1e578f9d2/src/transformers/models/bert/tokenization_bert.py#L420\r\n> > \r\n> > This function, calls text = unicodedata.normalize(\"NFD\", text) normalization function. NFD normalization is not proper for Turkish because of \"ç Ç, ü Ü, ş Ş, ğ Ğ, i İ, ı I\" characters. When you change NFD to NFC or NFKC result changes. NFD normalization adds some invisible characters to text when special Turkish characters that I mentioned. NFC normalization does not add these invisible characters. These invisible characters causes different tokenizations.\r\n> > Corpus normalized with NFKC normalization, then subword algorithm run. So it is correct. No invisible characters. But at inference, NFD normalization changes text for Turkish and causes wrong text with invisible characters.\r\n> > Please try that:\r\n> > TEXT1 = \"ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR\"\r\n> > bt = AutoTokenizer.from_pretrained(\"loodos/bert-base-turkish-uncased\", do_lower_case=True)\r\n> > printf(bt.tokenize(TEXT1))\r\n> > TEXT2 = \"çocuk şanlıurfa'dan gelenleri öğün olarak yiyor\"\r\n> > bt = AutoTokenizer.from_pretrained(\"loodos/bert-base-turkish-uncased\", do_lower_case=False)\r\n> > printf(bt.tokenize(TEXT2))\r\n> > As you see, TEXT2 is correct lowercase TEXT1, but results are different because of _run_strip_accents's NFD before tokenization.\r\n> > It is also same with albert tokenizer's keep_accent=False parameter.\r\n> > FYI, @julien-c\r\n> \r\n> I had seen the problem. 
Since you gave exact word tokens which are not mostly expected especially for the morphologically rich languages such as Turkish, I wrote the comment.\r\n\r\nThank you for your interest.\r\n\r\nCould you mention admins and like issue for taking attention to issue?\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"is there any changes ?",
"Any workarounds so far? I came across the same issue."
] | 1,598 | 1,625 | 1,614 | CONTRIBUTOR | null | Transformers: 3.0.2
Tokenizers: 0.8.1
Hi. First of all, thanks for this great library. This is my first issue here. I am working at Loodos Tech as an NLP R&D Engineer in Turkey. We are pretraining and finetuning Turkish BERT/ALBERT/ELECTRA models and publishing them.
I found a bug in Tokenizers for Turkish (and possibly some other languages that use a non-ASCII alphabet).
For example,
TEXT = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR"
bt = BertTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=True)
assert bt.tokenize(TEXT) == ['co', '##cuk', 'san', '##li', '##ur', '##fa', "'", 'dan', 'gelenleri', 'o', '##gun', 'olarak', 'yiyor']
But it should be,
assert bt.tokenize(TEXT) == ['çocuk', 'şanlıurfa', "'", 'dan', 'gelenleri', 'öğün', 'olarak', 'yiyor']
Same for ALBERT tokenizer,
TEXT = "ÇOCUK ŞANLIURFA'DAN GELENLERİ ÖĞÜN OLARAK YİYOR"
at = AlbertTokenizer.from_pretrained("loodos/albert-base-turkish-uncased", do_lower_case=True, keep_accents=False)
assert at.tokenize(TEXT) == ['▁c', 'oc', 'uk', '▁san', 'li', 'urfa', "'", 'dan', '▁gelenleri', '▁o', 'gun', '▁olarak', '▁yiyor']
But it should be,
assert at.tokenize(TEXT) == ['▁çocuk', '▁şanlıurfa', "'", 'dan', '▁gelenleri', '▁öğün', '▁olarak', '▁yiyor']
This is caused by two things:
1- The vocabulary and sentence piece model are created with **NFC/NFKC** normalization, but the tokenizer uses **NFD/NFKD**. NFD/NFKD normalization changes text with the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information. Some tokens are never trained (like "şanlıurfa", "öğün", "çocuk", etc.). NFD/NFKD normalization is not proper for Turkish.
For BERT and ELECTRA, tokenizers executes this code when **do_lower_case = True**:
def _run_strip_accents(self, text):
"""Strips accents from a piece of text."""
text = unicodedata.normalize("NFD", text)
output = []
for char in text:
cat = unicodedata.category(char)
if cat == "Mn":
continue
output.append(char)
return "".join(output)
For ALBERT, tokenizers executes this code when **keep_accents = False**:
if not self.keep_accents:
outputs = unicodedata.normalize("NFKD", outputs)
outputs = "".join([c for c in outputs if not unicodedata.combining(c)])
2- 'I' is not the uppercase of 'i' in Turkish. Python's default lowercase or casefold functions do not handle this (see: https://stackoverflow.com/questions/19030948/python-utf-8-lowercase-turkish-specific-letter)
if is_turkish:
lower = lower.replace('\u0049', '\u0131') # I -> ı
lower = lower.replace('\u0130', '\u0069') # İ -> i
This normalization function error probably affects some other languages too. For ASCII, NFD and NFC both work the same, but for Turkish they don't.
Could you please give optional parameters for normalization function and is_turkish? We need NFKC normalization and casefold with I->ı.
Thanks... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6680/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6680/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6679/comments | https://api.github.com/repos/huggingface/transformers/issues/6679/events | https://github.com/huggingface/transformers/pull/6679 | 684,551,472 | MDExOlB1bGxSZXF1ZXN0NDcyNDE5Mzg0 | 6,679 | Add Mirror Option for Downloads | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=h1) Report\n> Merging [#6679](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8fcbe486e1592321e868f872545c8fd9d359a515?el=desc) will **decrease** coverage by `1.53%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6679 +/- ##\n==========================================\n- Coverage 80.93% 79.39% -1.54% \n==========================================\n Files 168 168 \n Lines 32179 32182 +3 \n==========================================\n- Hits 26044 25552 -492 \n- Misses 6135 6630 +495 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <ø> (+0.27%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.45% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <100.00%> (-0.61%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <100.00%> (+0.02%)` | :arrow_up: |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |\n| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6679/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=footer). Last update [8fcbe48...6bb70c7](https://codecov.io/gh/huggingface/transformers/pull/6679?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c I updated the doc to not encourage the users to use this option."
] | 1,598 | 1,600 | 1,600 | CONTRIBUTOR | null | This PR will integrate a mirror download source kindly provided by Tsinghua University. This will enormously accelerate downloads from China. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6679/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6679",
"html_url": "https://github.com/huggingface/transformers/pull/6679",
"diff_url": "https://github.com/huggingface/transformers/pull/6679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6679.patch",
"merged_at": 1600098623000
} |
https://api.github.com/repos/huggingface/transformers/issues/6678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6678/comments | https://api.github.com/repos/huggingface/transformers/issues/6678/events | https://github.com/huggingface/transformers/issues/6678 | 684,524,381 | MDU6SXNzdWU2ODQ1MjQzODE= | 6,678 | Can't load config for New Community Model | {
"login": "donal1",
"id": 30771534,
"node_id": "MDQ6VXNlcjMwNzcxNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/30771534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donal1",
"html_url": "https://github.com/donal1",
"followers_url": "https://api.github.com/users/donal1/followers",
"following_url": "https://api.github.com/users/donal1/following{/other_user}",
"gists_url": "https://api.github.com/users/donal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donal1/subscriptions",
"organizations_url": "https://api.github.com/users/donal1/orgs",
"repos_url": "https://api.github.com/users/donal1/repos",
"events_url": "https://api.github.com/users/donal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/donal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Edit so it appears to work with: \r\n\r\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\r\ntokenizer = AutoTokenizer.from_pretrained(\"donal/Pro_Berta\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"donal/Pro_Berta\")\r\n\r\nCould just be an issue with the API? maybe needs some time to load correctly?\r\n\r\n",
"They fixed it"
] | 1,598 | 1,598 | 1,598 | NONE | null | API says "Can't load config for 'donal/Pro_Berta'. Make sure that: - 'donal/Pro_Berta' is a correct model identifier listed on 'https://huggingface.co/models' - or 'donal/Pro_Berta' is the correct path to a directory containing a config.json file". But I followed the instructions to the letter. Do not know what's the issue. Please fix.
https://huggingface.co/donal/Pro_Berta?text=The+goal+of+life+is+%3Cmask%3E.
Pinging @mfuntowicz, @julien-c,
All the files seem to be in the right place.
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/merges.txt
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/special_tokens_map.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/training_args.bin
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/pytorch_model.bin
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/config.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/tokenizer_config.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/vocab.json | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6678/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6677/comments | https://api.github.com/repos/huggingface/transformers/issues/6677/events | https://github.com/huggingface/transformers/pull/6677 | 684,490,474 | MDExOlB1bGxSZXF1ZXN0NDcyMzY3Nzc2 | 6,677 | Batch encore plus and overflowing tokens fails when non existing overflowing tokens for a sequence | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I face this issue as well, and I agree this PR will fix it. Thanks for the PR, I can now fix it on my local :P",
"@LysandreJik I encountered the same issue, glad you found a way to fix it.\r\nsome tests are failing - maybe that's the reason this PR is not being merged?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=h1) Report\n> Merging [#6677](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ed71c21d6afcbfa2d8e5bb03acbb88ae0e0ea56a?el=desc) will **increase** coverage by `0.33%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6677 +/- ##\n==========================================\n+ Coverage 79.51% 79.85% +0.33% \n==========================================\n Files 164 164 \n Lines 31022 31023 +1 \n==========================================\n+ Hits 24668 24773 +105 \n+ Misses 6354 6250 -104 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-5.02%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> 
(-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=footer). Last update [ed71c21...4e4bfb3](https://codecov.io/gh/huggingface/transformers/pull/6677?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,599 | 1,599 | MEMBER | null | closes #6632 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6677/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6677/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6677",
"html_url": "https://github.com/huggingface/transformers/pull/6677",
"diff_url": "https://github.com/huggingface/transformers/pull/6677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6677.patch",
"merged_at": 1599648917000
} |
https://api.github.com/repos/huggingface/transformers/issues/6676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6676/comments | https://api.github.com/repos/huggingface/transformers/issues/6676/events | https://github.com/huggingface/transformers/pull/6676 | 684,478,909 | MDExOlB1bGxSZXF1ZXN0NDcyMzU4MjMx | 6,676 | Allow numpy array as tokenizer input | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=h1) Report\n> Merging [#6676](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f230a640941ef11b077c953cbda01aa981e1ec9a?el=desc) will **increase** coverage by `0.94%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6676 +/- ##\n==========================================\n+ Coverage 79.00% 79.94% +0.94% \n==========================================\n Files 156 156 \n Lines 28248 28249 +1 \n==========================================\n+ Hits 22317 22584 +267 \n+ Misses 5931 5665 -266 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <ø> (ø)` | |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.32% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) 
| `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.16% <0.00%> (-0.26%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6676/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=footer). Last update [16e3894...f693155](https://codecov.io/gh/huggingface/transformers/pull/6676?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Let me know when you release 0.9.0 @n1t0 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@lhoestq I don't know if this is still relevant, but this has definitely been released now!",
"Cool ! Will update the PR tomorrow thanks :) ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,598 | 1,614 | 1,614 | MEMBER | null | Tokenizers allow numpy arrays since https://pypi.org/project/tokenizers/0.9.0.dev0/ thanks to @n1t0
This is related to issue #5729 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6676/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6676",
"html_url": "https://github.com/huggingface/transformers/pull/6676",
"diff_url": "https://github.com/huggingface/transformers/pull/6676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6676.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6675/comments | https://api.github.com/repos/huggingface/transformers/issues/6675/events | https://github.com/huggingface/transformers/issues/6675 | 684,456,391 | MDU6SXNzdWU2ODQ0NTYzOTE= | 6,675 | ner example failed on examples/token-classification % bash run.sh | {
"login": "SeekPoint",
"id": 18051187,
"node_id": "MDQ6VXNlcjE4MDUxMTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/18051187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeekPoint",
"html_url": "https://github.com/SeekPoint",
"followers_url": "https://api.github.com/users/SeekPoint/followers",
"following_url": "https://api.github.com/users/SeekPoint/following{/other_user}",
"gists_url": "https://api.github.com/users/SeekPoint/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeekPoint/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeekPoint/subscriptions",
"organizations_url": "https://api.github.com/users/SeekPoint/orgs",
"repos_url": "https://api.github.com/users/SeekPoint/repos",
"events_url": "https://api.github.com/users/SeekPoint/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeekPoint/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"I also get this same error on this example.",
"Hi @loveJasmine and @isoboroff ,\r\n\r\nthis should be fixed in latest `master` version (I'm currently training a model with this example script, preprocessing was fine) :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,605 | 1,605 | NONE | null | (.venvpy36) examples/token-classification % bash run.sh
08/24/2020 15:56:12 - INFO - filelock - Lock 5754732216 acquired on ./cached_train_BertTokenizer_128.lock
08/24/2020 15:56:12 - INFO - utils_ner - Creating features from dataset file at .
08/24/2020 15:56:12 - INFO - utils_ner - Saving features into cached file ./cached_train_BertTokenizer_128
08/24/2020 15:56:12 - INFO - filelock - Lock 5754732216 released on ./cached_train_BertTokenizer_128.lock
08/24/2020 15:56:12 - INFO - filelock - Lock 5754732048 acquired on ./cached_dev_BertTokenizer_128.lock
08/24/2020 15:56:12 - INFO - utils_ner - Creating features from dataset file at .
08/24/2020 15:56:12 - INFO - utils_ner - Writing example 0 of 9
08/24/2020 15:56:12 - INFO - filelock - Lock 5754732048 released on ./cached_dev_BertTokenizer_128.lock
Traceback (most recent call last):
File "run_ner.py", line 304, in <module>
main()
File "run_ner.py", line 189, in main
if training_args.do_eval
File "/Users/yuanke/ghSrc/transformers/examples/token-classification/utils_ner.py", line 127, in __init__
pad_token_label_id=self.pad_token_label_id,
File "/Users/yuanke/ghSrc/transformers/examples/token-classification/utils_ner.py", line 305, in convert_examples_to_features
label_ids.extend([label_map[label]] + [pad_token_label_id] * (len(word_tokens) - 1))
KeyError: '[null,"AIzaSyCF97XfLoejM9NhWDAZeOcjC6kOEsEmv6A","897606708560-a63d8ia0t9dhtpdt4i3djab2m42see7o.apps.googleusercontent.com",null,null,"v2",null,null,null,null,null,null,null,"https://content.googleapis.com","SITES_%s",null,null,null,null,null,0,null,null,null,["AHKXmL0ZzONWw2TXF2GVALSixIY_wY8DFDhrOeiPL5czjvgRVJRjibFVAqFSDdzkAGNCFzy2FNRZ",1,"CJDXlfOos-sCFZWnIwAdnZoJHA",1598254209133000,[5703022,5703839,5704621,5705837,5705841,5706601,5706832,5706836,5707711,5709888,5710567,5710768,5710806,5711078,5711206,5711530,5711563,5711808,5711866,5711929,5712328,5713049,5714628,14100031,14100834,14100854,14101054,14101218,14101254,14101334,14101346,14101350,14101354,14101358,14101374,14101378,14101386,14101410,14101418,14101430,14101442,14101446,14101458,14101462,14101474,14101492]'
- `transformers` version:
latest
- Platform: Mac
- Python version:
py3.6
- PyTorch version (GPU?):
1.4
- Tensorflow version (GPU?):
- Using GPU in script?:
no
- Using distributed or parallel set-up in script?:
no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6675/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6674/comments | https://api.github.com/repos/huggingface/transformers/issues/6674/events | https://github.com/huggingface/transformers/pull/6674 | 684,392,311 | MDExOlB1bGxSZXF1ZXN0NDcyMjg1MjM2 | 6,674 | Add model card for singbert. | {
"login": "zyuanlim",
"id": 7169731,
"node_id": "MDQ6VXNlcjcxNjk3MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7169731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyuanlim",
"html_url": "https://github.com/zyuanlim",
"followers_url": "https://api.github.com/users/zyuanlim/followers",
"following_url": "https://api.github.com/users/zyuanlim/following{/other_user}",
"gists_url": "https://api.github.com/users/zyuanlim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyuanlim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyuanlim/subscriptions",
"organizations_url": "https://api.github.com/users/zyuanlim/orgs",
"repos_url": "https://api.github.com/users/zyuanlim/repos",
"events_url": "https://api.github.com/users/zyuanlim/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyuanlim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@JetRunner Just a final addition of couple of more examples and customized the widget inputs, good to go :)",
"Not sure if I should bring this up here or raise a new issue, but when i tested my widget (hosted inference), i got this error: \r\n```\r\nCan't load config for 'zanelim/singbert'. Make sure that: - 'zanelim/singbert' is a correct model identifier listed on 'https://huggingface.co/models' - or 'zanelim/singbert' is the correct path to a directory containing a config.json file\r\n```\r\nHowever the config file is there as confirmed by\r\n```\r\n>>> transformers-cli s3 ls\r\nsingbert/config.json 2020-08-24T05:54:28.000Z \"6004cb287370530537f4076c9cf7fdbe\" 471 \r\nsingbert/pytorch_model.bin 2020-08-24T05:53:37.000Z \"c060a644d84e55b2f93aa67fb1f35956\" 440509997 \r\nsingbert/special_tokens_map.json 2020-08-24T05:54:23.000Z \"8b3fb1023167bb4ab9d70708eb05f6ec\" 112 \r\nsingbert/tf_model.h5 2020-08-24T05:52:23.000Z \"45a8eea544f73079768bb136fe3d0a27\" 536061440 \r\nsingbert/tokenizer_config.json 2020-08-24T05:54:25.000Z \"8b3fb1023167bb4ab9d70708eb05f6ec\" 112 \r\nsingbert/vocab.txt 2020-08-24T05:53:32.000Z \"767659dd848f37f6937a0ffb833ee6b1\" 224170 \r\n```",
"> Not sure if I should bring this up here or raise a new issue, but when i tested my widget (hosted inference), i got this error: \n> \n> ```\n> \n> Can't load config for 'zanelim/singbert'. Make sure that: - 'zanelim/singbert' is a correct model identifier listed on 'https://huggingface.co/models' - or 'zanelim/singbert' is the correct path to a directory containing a config.json file\n> \n> ```\n> \n> However the config file is there as confirmed by\n> \n> ```\n> \n> >>> transformers-cli s3 ls\n> \n> singbert/config.json 2020-08-24T05:54:28.000Z \"6004cb287370530537f4076c9cf7fdbe\" 471 \n> \n> singbert/pytorch_model.bin 2020-08-24T05:53:37.000Z \"c060a644d84e55b2f93aa67fb1f35956\" 440509997 \n> \n> singbert/special_tokens_map.json 2020-08-24T05:54:23.000Z \"8b3fb1023167bb4ab9d70708eb05f6ec\" 112 \n> \n> singbert/tf_model.h5 2020-08-24T05:52:23.000Z \"45a8eea544f73079768bb136fe3d0a27\" 536061440 \n> \n> singbert/tokenizer_config.json 2020-08-24T05:54:25.000Z \"8b3fb1023167bb4ab9d70708eb05f6ec\" 112 \n> \n> singbert/vocab.txt 2020-08-24T05:53:32.000Z \"767659dd848f37f6937a0ffb833ee6b1\" 224170 \n> \n> ```\n\nDon't worry. It's sometimes flaky. As long as you can load the model with from_pretrained and you are good to go. 😉"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Adding a model card for singbert- bert for singlish and manglish. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6674/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6674",
"html_url": "https://github.com/huggingface/transformers/pull/6674",
"diff_url": "https://github.com/huggingface/transformers/pull/6674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6674.patch",
"merged_at": 1598321354000
} |
https://api.github.com/repos/huggingface/transformers/issues/6673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6673/comments | https://api.github.com/repos/huggingface/transformers/issues/6673/events | https://github.com/huggingface/transformers/issues/6673 | 684,298,123 | MDU6SXNzdWU2ODQyOTgxMjM= | 6,673 | New training arg: warmup_ratio | {
"login": "GanjinZero",
"id": 19466330,
"node_id": "MDQ6VXNlcjE5NDY2MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/19466330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GanjinZero",
"html_url": "https://github.com/GanjinZero",
"followers_url": "https://api.github.com/users/GanjinZero/followers",
"following_url": "https://api.github.com/users/GanjinZero/following{/other_user}",
"gists_url": "https://api.github.com/users/GanjinZero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GanjinZero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GanjinZero/subscriptions",
"organizations_url": "https://api.github.com/users/GanjinZero/orgs",
"repos_url": "https://api.github.com/users/GanjinZero/repos",
"events_url": "https://api.github.com/users/GanjinZero/events{/privacy}",
"received_events_url": "https://api.github.com/users/GanjinZero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Well, I think it is a useful feature since PLMs like RoBERTa require setting task-specific warmup-steps, which is annoying. But the design of merge warmup-steps and warmup-ratio together may not be a good idea. ",
"Hi @TobiasLee,\r\nAgreed with not merging both `warmup_steps` and `warmup_ratio` into a single parameter. It seems cleaner to give higher precedence to one over the other in case both are given by user. \r\n\r\nI have raised a [PR](https://github.com/huggingface/transformers/pull/10229) with the same implemented. \r\nYour review is appreciated!"
] | 1,598 | 1,613 | 1,613 | NONE | null | # 🚀 Feature request
When training or fine-tuning a transformer model, people usually warm up for 10% of the training steps. For now, transformers only provides the parameter warmup_steps. A warmup_ratio parameter would be helpful: it means warming up for some percentage of the total training steps.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
To use the warmup_steps parameter, people need to know the total number of training steps. When people use the training epoch parameter instead of max_steps, it is hard to know the total training steps. A warmup_ratio parameter frees people from having to know the total training steps. Another reason for a warmup_ratio parameter is that it helps people write less hard-coded configuration. People have different total training steps for different datasets, but people usually set warmup_ratio to 10% by default.
Original usage may look like this:
`
python run_ner.py --data_dir some_data_dir \
--model_name_or_path some_model \
--output_dir some_output_dir \
--max_seq_length 512 \
--num_train_epochs 10 \
--warmup_steps 35 \
--per_device_train_batch_size 8 \
--do_train \
`
New usage may look like this:
`
python run_ner.py --data_dir some_data_dir \
--model_name_or_path some_model \
--output_dir some_output_dir \
--max_seq_length 512 \
--num_train_epochs 10 \
--warmup_ratio 0.1 \
--per_device_train_batch_size 8 \
--do_train \
`
Also, we can merge warmup_steps and warmup_ratio into one parameter. If the user inputs a number 0 <= x < 1, it will be considered as warmup_ratio. If the user inputs an integer, it will be considered as warmup_steps.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I can submit a PR to complete this feature. If a similar feature is already in this repo, please just close this issue.
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6673/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6673/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6672/comments | https://api.github.com/repos/huggingface/transformers/issues/6672/events | https://github.com/huggingface/transformers/issues/6672 | 684,228,929 | MDU6SXNzdWU2ODQyMjg5Mjk= | 6,672 | TFTrainer with TPUs: Here's a suggestion on getting it to work | {
"login": "alexorona",
"id": 11825654,
"node_id": "MDQ6VXNlcjExODI1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexorona",
"html_url": "https://github.com/alexorona",
"followers_url": "https://api.github.com/users/alexorona/followers",
"following_url": "https://api.github.com/users/alexorona/following{/other_user}",
"gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexorona/subscriptions",
"organizations_url": "https://api.github.com/users/alexorona/orgs",
"repos_url": "https://api.github.com/users/alexorona/repos",
"events_url": "https://api.github.com/users/alexorona/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexorona/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"cc @jplu for when he comes back",
"Thanks a lot @alexorona for this explicit issue.\r\n\r\nIndeed the TF Trainer was working with usual TPU creation on GCP but not with Colab and investigating on this was one of my priotities when back from vacation. Apparently, TPU on Colab is more restrictive which is good to have a better implementation :+1: \r\n\r\nThere are two small things we still have to think about from your proposal:\r\n\r\n1) The preprocessing is not anymore in the Trainer. A solution might be to make the function static to be used without the need to instanciate the Trainer.\r\n2) We have to be careful when calling `training_args.training_batch_size` only when not running on TPU and use the `tpu_num_cores` argument instead.",
"Can you train on TPU without loading tfrecords from a remote bucket?\r\n\r\nI usually got an error `[local] not supported` and assumed that TPU does not support loading datasets directly.",
"You cannot load/save data on your local environment, everything must be in a GCS bucket.",
"There is some sample notebook on how to finetune TFGPT2LMHeadModel and about making that tfrecords. ",
"@jplu Great points, Julien. The proposal above is just a temporary work-around. From a user perspective, there really aren't any options in `get_train_tfdataset` that haven't already been declared elsewhere, so this is a routine task with no value in exposing it to the user. Therefore, it should be hidden _somewhere_. The question is whether that _somewhere_ is in the `TFTrainer` or in `TFTrainingArguments`. From a library management perspective, there are a lot of considerations, including how similar `TFTrainer` and `TFTrainingArguments` are to `Trainer` and `TrainingArguments` for pytorch. You want these classes to behave as similarly as possible. With that in mind, here are the options from best to worst:\r\n1. See if there's a way to modify the current `TFTrainingArguments` tpu initialization procedure so that `get_train_tfdataset` can be left in `TFTrainer`. The model is still likely to be initialized outside of the scope, so a full-proof way of dealing with this is to re-initialize the model when `trainer.train()` is called by adding something like this in `train.train()`:\r\n```\r\nwith args.strategy.scope():\r\n self.model = self.model\r\n```\r\n2. Barring that, it might be possible to initialize the strategy when `TFTrainingArguments` is first declared. In that case, `get_train_tfdataset` could be placed inside of `TFTrainingArguments`. We'd also need to know in the documentation that the model has to be loaded after `TFTrainingArguments` and with the clause `with training_args.strategy.scope():` coming before the line that loads the model.\r\n\r\n@volker42maru I haven't had any problems with loading TF data records directly. Can you restructure so that the dataset is something like `tf.data.Dataset.from_tensor_slices((train_inputs, train_labels))`? Are you sure your batch size is equal to at least the the number of tensor cores and you're calling` strategy.experimental_distribute_dataset(dataset)` somewhere? 
I've been able to load and transform data just fine on Colab. You can also mount Google Drive and use it as a disk with:\r\n```\r\nfrom google.colab import drive\r\ndrive.mount('/content/drive')\r\n```",
"@jplu @alexorona \r\nHey i want small help please see my colab notebook. i am trying to finetune gpt2 its showing training but not returning loss and please confirm me that its using tpu or not.\r\nhttps://colab.research.google.com/drive/1IqXH0_VZ8LqgnbgjP3GqRXecChVitcea?usp=sharing\r\nIf i am doing something wrong please let me know.",
"Thanks @alexorona! I'm gonna investigate to be able to create the strategy in `trainer.train()` once all the datasets have been created and not in `TFTrainingArguments` anymore.",
"I tried a fix in the PR #6880.\r\n\r\n@alexorona Can you try it and tell me if it works for your env?",
"@jplu I'm getting that `Unable to parse tensor proto` error. Did you merge the changes into TF Trainer already?",
"No it is not merged, that's why if you could try the PR, it would be nice to have your feedback if it works or not.",
"@jplu Your approach looks great! I setup two dev notebook so you can see the remaining challenges:\r\n- It looks like there's an expected `args.training_args` attribute that isn't there. Maybe the changes to `TFTrainingArguments` didn't make it to the fork? I had to revert most instances of `self.args.training_args` to `self.args` to get to the next step.\r\n- `from_pt=bool(\".bin\" in self.args.model_name_or_path)` was throwing an error, but this might be due to `TFTrainingArguments` problem above.\r\n- The current strategy implementation is running out of resources on models we know it can train on (gpt2-medium 1024 tokens). This is speculation, but it might be because `strategy.experimental_distribute_dataset(ds`) isn't being used in `get_train_tfdataset` anymore.\r\n- `tb_writer` was also causing problems, so I had to comment that out too\r\n- `self.model.ckpt_manager.save()` is throwing `File system scheme '[local]' not implemented` \r\n- Special tokens are sometimes added to the model, especially when using this for dialogue-style generation. Maybe add a parameter `tokenizer_length = None` to the class on `__init__` and then replace with this: \r\n\r\n```\r\n if not self.model:\r\n with self.strategy.scope():\r\n self.model = self.model_class.from_pretrained(\r\n self.args.model_name_or_path,\r\n # from_pt=bool(\".bin\" in self.args.model_name_or_path),\r\n config=self.config,\r\n cache_dir=self.argscache_dir,\r\n )\r\n if self.tokenizer_length:\r\n self.model.resize_token_embeddings(self.tokenizer_length)\r\n```",
"> * It looks like there's an expected `args.training_args` attribute that isn't there. Maybe the changes to `TFTrainingArguments` didn't make it to the fork? I had to revert most instances of `self.args.training_args` to `self.args` to get to the next step.\r\n> * `from_pt=bool(\".bin\" in self.args.model_name_or_path)` was throwing an error, but this might be due to `TFTrainingArguments` problem above.\r\n\r\nYes, you have to modify your main file, look at the `run_tf_ner.py` example to see what has changed.\r\n\r\n> * The current strategy implementation is running out of resources on models we know it can train on (gpt2-medium 1024 tokens). This is speculation, but it might be because `strategy.experimental_distribute_dataset(ds`) isn't being used in `get_train_tfdataset` anymore.\r\n\r\n`strategy.experimental_distribute_dataset(ds)` are now respectively in the `train` and `prediction_loop` methods.\r\n\r\n> * `tb_writer` was also causing problems, so I had to comment that out too\r\n> * `self.model.ckpt_manager.save()` is throwing `File system scheme '[local]' not implemented`\r\n\r\nYou have to give the log/input/output directories as a GCS path and not a local path.\r\n\r\n> * Special tokens are sometimes added to the model, especially when using this for dialogue-style generation. Maybe add a parameter `tokenizer_length = None` to the class on `__init__` and then replace with this:\r\n> \r\n> ```\r\n> if not self.model:\r\n> with self.strategy.scope():\r\n> self.model = self.model_class.from_pretrained(\r\n> self.args.model_name_or_path,\r\n> # from_pt=bool(\".bin\" in self.args.model_name_or_path),\r\n> config=self.config,\r\n> cache_dir=self.argscache_dir,\r\n> )\r\n> if self.tokenizer_length:\r\n> self.model.resize_token_embeddings(self.tokenizer_length)\r\n> ```\r\n\r\nThis is a temporary solution, the model will soon be created into a `model_init` closure that one has to provide to the Trainer as argument.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,605 | 1,605 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
## Analysis and Temporary Solution
The approach in `TFTrainer` and `TFTrainingArguments` is really good, but it's not working right now on TPUs. It looks like we need to do some work on updating the trainer. There are a number of errors on this, the most common being gradient accumulation (#6479) and `Unable to parse tensor proto`. Since Julien is on vacation, here are some things I did to get it to train on Colab with TPUs. It's hacky, but should get it to work if you're anxious to use TPUs until Julien has a fix:
- The `strategy` loading order in `TFTrainingArguments` and `TFTrainer` doesn't play well with a typical workflow (process data, create `training_args`, load model and pass to `TFTrainer`). The model needs to be loaded after the strategy has been initialized, and right now the strategy is being initialized inside of `TFTrainer`.
- Shuffle, batch etc. need to be called prior to instantiating the strategy. I think this has something to do with the way the strategy is defined in `TFTrainingArguments`.
- Calling `training_args.training_batch_size` automatically calculates the number of TPU cores. Unfortunately, this causes the strategy to initialize, so this cannot be used to calculate `total_train_batch_size` with the current strategy implementation because it will prematurely initialize before shuffle, batch, etc. are done.
- To avoid the `Unable to parse tensor proto`, shuffle, batch etc. will need to be pulled from `TFTrainer`. They're handled by the `TFTrainer` method `get_train_tfdataset`. With the current strategy implementation in TFTrainingArguments, you'll need to do that after shuffle, batch and before loading the model.
## Example with GPT2
Here's a example implementing the above changes:
```
# Note: you'll need to build transformers from source
# Grab a temporary version of TFTrainer with get_train_tfdataset pulled out
# (run in a shell or Colab cell first): git clone https://github.com/alexorona/lm_tf_trainer
from lm_tf_trainer import LMTFTrainer
# Pulled out of TFTrainer
def get_train_tfdataset(train_dataset,
training_args,
train_batch_size,
gradient_accumulation_steps,
dataloader_drop_last = False,
seed = 40):
total_train_batch_size = train_batch_size * gradient_accumulation_steps
num_train_examples = tf.data.experimental.cardinality(train_dataset).numpy()
if num_train_examples < 0:
raise ValueError("The training dataset must have an asserted cardinality")
ds = (
train_dataset.repeat()
.shuffle(num_train_examples, seed=seed)
        .batch(total_train_batch_size, drop_remainder=dataloader_drop_last)
.prefetch(tf.data.experimental.AUTOTUNE)
)
return training_args.strategy.experimental_distribute_dataset(ds), num_train_examples
# Get Training Args
training_args = TFTrainingArguments(...) # Create a normal training_args object
# Manual settings to avoid prematurely initializing the strategy
tpu_cores = 8
train_batch_size = tpu_cores * training_args.per_device_train_batch_size
# Formatting tf dataset from lists of different kinds of inputs
input_ids = tf.convert_to_tensor(train_input_ids) # train_input_ids[0] is a list of input ids and train_input_ids is a list of lists
attention_mask = tf.convert_to_tensor(train_attention_mask) # as above
token_type_ids = tf.convert_to_tensor(train_token_type_ids) # as above
train_labels = tf.convert_to_tensor(train_labels) # as above
train_inputs = {'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}
train_dataset = tf.data.Dataset.from_tensor_slices((train_inputs, train_labels))
# Now, call the function to do shuffle, batch and initialize the strategy;
# it returns both the distributed dataset and num_train_examples
train_ds, num_train_examples = get_train_tfdataset(train_dataset = train_dataset,
                               training_args = training_args,
                               train_batch_size = train_batch_size,
                               gradient_accumulation_steps = training_args.gradient_accumulation_steps
                               )
# Then, load the model with the strategy
with training_args.strategy.scope():
model = TFGPT2LMHeadModel.from_pretrained('gpt2-medium')
# Now, train it
trainer = LMTFTrainer(args = training_args,
model = model,
num_train_examples = num_train_examples,
total_train_batch_size = 8,
train_dataset = train_ds)
trainer.train()
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6672/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6672/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6671/comments | https://api.github.com/repos/huggingface/transformers/issues/6671/events | https://github.com/huggingface/transformers/issues/6671 | 684,163,825 | MDU6SXNzdWU2ODQxNjM4MjU= | 6,671 | Value Error & dev file parameter: run_squad.py BERT QA finetuning | {
"login": "haenvely",
"id": 34908281,
"node_id": "MDQ6VXNlcjM0OTA4Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/34908281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haenvely",
"html_url": "https://github.com/haenvely",
"followers_url": "https://api.github.com/users/haenvely/followers",
"following_url": "https://api.github.com/users/haenvely/following{/other_user}",
"gists_url": "https://api.github.com/users/haenvely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haenvely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haenvely/subscriptions",
"organizations_url": "https://api.github.com/users/haenvely/orgs",
"repos_url": "https://api.github.com/users/haenvely/repos",
"events_url": "https://api.github.com/users/haenvely/events{/privacy}",
"received_events_url": "https://api.github.com/users/haenvely/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | # ❓ Questions & Help
Hello, I have a question about implementing run_squad.py finetuning.
I have 2 questions.
**1. --dev_file parameter**
I have 3 datasets: train, dev, and predict files.
However, I discovered that the run_squad.py fine-tuning script does not seem to provide a `--dev_file` parameter, so I can only use 2 datasets (train and predict).
Is there any way to use dev_file for evaluation? Or how can I check the accuracy score?
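On question 1: if the script only accepts a predict file, one option is to pass your dev file as the predict file and score the resulting predictions yourself. A minimal exact-match scorer (a simplification of the official SQuAD metric, with made-up example data) could look like:

```python
import string

def normalize(text):
    """Lowercase, strip punctuation and extra whitespace (simplified SQuAD normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(predictions, references):
    """Percentage of predictions that match their reference after normalization."""
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

preds = ["Albert Einstein", "in 1905."]
refs = ["albert einstein", "1905"]
print(exact_match(preds, refs))  # 50.0
```

The official evaluation also computes a token-level F1 over the normalized answers, which this sketch omits.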
**2. Value Error**
I just ran the run_squad.py script using the train and predict files, and I got several errors while running it.
Due to a CUDA out-of-memory error, I set 'max_seq_length' to 64 and 'per_gpu_train_batch_size' to 2. That resolved the out-of-memory error, but instead raised 'ValueError: expected sequence of length 64 at dim 1 (got 878)'.
Could you give me some ideas on how to fix this error?
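For what it's worth, that ValueError typically means the features being batched have unequal lengths, i.e. some examples were never truncated or padded to `max_seq_length` before being stacked into a tensor. A minimal pure-Python sketch of the invariant the features must satisfy (illustrative only, not the script's actual preprocessing code):

```python
def pad_or_truncate(token_ids, max_seq_length=64, pad_id=0):
    """Force an example to exactly max_seq_length token ids."""
    token_ids = token_ids[:max_seq_length]  # truncate long examples (e.g. 878 tokens)
    return token_ids + [pad_id] * (max_seq_length - len(token_ids))  # pad short ones

ragged_batch = [[101, 7592, 102], list(range(878))]  # 3 tokens vs 878 tokens
fixed_batch = [pad_or_truncate(seq) for seq in ragged_batch]
assert all(len(seq) == 64 for seq in fixed_batch)  # now safe to stack into a tensor
```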
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6671/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6670/comments | https://api.github.com/repos/huggingface/transformers/issues/6670/events | https://github.com/huggingface/transformers/issues/6670 | 684,076,930 | MDU6SXNzdWU2ODQwNzY5MzA= | 6,670 | Pretrained GPT2DoubleHeadsModel | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"One of the two output heads is the language modeling head, which is tied to the embeddings. This is already trained, as the embeddings were trained during pre-training.\r\n\r\nThe second output head is a multiple choice head, which was not pre-trained. You would need to fine-tune it on a multiple choice dataset so that it works in your case.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | #6667
Hello,
This is a follow-up post for #6667.
So just to make sure I am understanding this correctly: the main body (excluding the 2 output heads) of the pre-trained `GPT2DoubleHeadsModel` does consist of pre-trained weights and biases, but the 2 output heads are not pre-trained?
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6670/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6669/comments | https://api.github.com/repos/huggingface/transformers/issues/6669/events | https://github.com/huggingface/transformers/issues/6669 | 684,074,179 | MDU6SXNzdWU2ODQwNzQxNzk= | 6,669 | Inconsistent handling of empty string in tokenizers | {
"login": "thomlake",
"id": 404526,
"node_id": "MDQ6VXNlcjQwNDUyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/404526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomlake",
"html_url": "https://github.com/thomlake",
"followers_url": "https://api.github.com/users/thomlake/followers",
"following_url": "https://api.github.com/users/thomlake/following{/other_user}",
"gists_url": "https://api.github.com/users/thomlake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomlake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomlake/subscriptions",
"organizations_url": "https://api.github.com/users/thomlake/orgs",
"repos_url": "https://api.github.com/users/thomlake/repos",
"events_url": "https://api.github.com/users/thomlake/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomlake/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After looking into this a bit, I believe it is isolated to the tokenizers package. If `BertTokenizerFast` is replaced with `BertTokenizer` above, no error is raised. I've opened an issue with a proposed solution there."
] | 1,598 | 1,600 | 1,600 | NONE | null | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no (issue with tokenizer)
- Using distributed or parallel set-up in script?: no (issue with tokenizer)
### Who can help
@mfuntowicz
## Information
I'm encountering inconsistent handling of empty strings with `BertTokenizerFast` when tokenizing pairs. In particular, I'm observing an error when one string in a text pair is empty AND truncation is performed using the `longest_first` strategy. This issue only manifests when truncation actually occurs. If one of the strings is empty, and the other is short enough that truncation does not occur (or both strings are empty), then no error occurs (see example below). I haven't checked other tokenizers to see if they exhibit similar behavior.
## Example
```python
from transformers import BertTokenizerFast
tokz = BertTokenizerFast.from_pretrained('bert-base-uncased')
empty = ''
short = 'the ' * 509
long = 'the ' * 510
# Case 1: no truncation, no error
tokz(empty, empty, padding=True, truncation='longest_first', return_tensors='pt', max_length=512)
# Case 2: no truncation, no error
tokz(empty, short, padding=True, truncation='longest_first', return_tensors='pt', max_length=512)
# Case 3: truncation, no error
tokz(long, long, padding=True, truncation='longest_first', return_tensors='pt', max_length=512)
# Case 4: truncation, Truncation error
tokz(empty, long, padding=True, truncation='longest_first', return_tensors='pt', max_length=512)
```
## Possible Cause
This appears to be due to logic in the tokenizers package that throws an error if any of the strings has length 0 after truncation.
https://github.com/huggingface/tokenizers/blob/331e3ffc257ec2792ad88f6ff820d335859ed775/tokenizers/src/utils/truncation.rs#L100
I assume there are some checks occurring that prevent this code path from being hit in the other cases above, but I wasn't able to identify where.
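To make the failure mode concrete, here is a simplified pure-Python mimic of the `longest_first` strategy with that zero-length check (an illustration of the logic only, not the actual Rust implementation; the three-special-token budget assumes a BERT pair `[CLS] a [SEP] b [SEP]`):

```python
def longest_first(len_a, len_b, max_len=512, num_special=3):
    """Trim one token at a time from the longer side, as longest_first does."""
    budget = max_len - num_special  # room left after [CLS] a [SEP] b [SEP]
    while len_a + len_b > budget:
        if len_a >= len_b:
            len_a -= 1
        else:
            len_b -= 1
        if len_a == 0 or len_b == 0:
            # mirrors the zero-length check that raises the Truncation error
            raise ValueError("max length too low to respect the constraints")
    return len_a, len_b

print(longest_first(0, 509))    # case 2: fits without truncation, empty side tolerated
print(longest_first(510, 510))  # case 3: both sides truncated, neither hits zero
try:
    longest_first(0, 510)       # case 4: truncation runs while one side is already empty
except ValueError as e:
    print("error:", e)
```

This reproduces the observed pattern: the error fires only when the loop actually runs while one side is empty.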
## Stacktrace
```
Exception Traceback (most recent call last)
<ipython-input-22-dda0aff18100> in <module>
----> 1 tokz('', 'word ' * 510, padding=True, truncation='longest_first', return_tensors='pt', max_length=512)
~/anaconda3/envs/aq/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
1667 return_length=return_length,
1668 verbose=verbose,
-> 1669 **kwargs,
1670 )
1671
~/anaconda3/envs/aq/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in encode_plus(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
1736 verbose=verbose,
-> 1737 **kwargs,
1738 )
1739
~/anaconda3/envs/aq/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
418 return_length=return_length,
419 verbose=verbose,
--> 420 **kwargs,
421 )
422
~/anaconda3/envs/aq/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
329 *batch_text_or_text_pairs[0],
330 add_special_tokens=add_special_tokens,
--> 331 is_pretokenized=is_pretokenized,
332 )
333 else:
~/anaconda3/envs/aq/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py in encode(self, sequence, pair, is_pretokenized, add_special_tokens)
210 raise ValueError("encode: `sequence` can't be `None`")
211
--> 212 return self._tokenizer.encode(sequence, pair, is_pretokenized, add_special_tokens)
213
214 def encode_batch(
Exception: Truncation error: Specified max length is too low to respect the various constraints
```
## To reproduce
See example above
## Expected behavior
The handling of empty strings (cases 1, 2, and 4) should be consistent (either empty strings are OK, or they result in an error).
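Until the behavior is made consistent, a defensive workaround on the caller's side is to substitute a placeholder for any empty member of the pair before tokenizing (a sketch only; the choice of placeholder token is an arbitrary assumption):

```python
def non_empty_pair(text_a, text_b, placeholder="[UNK]"):
    """Ensure neither side of a text pair is the empty string before tokenization."""
    return (text_a or placeholder, text_b or placeholder)

pair = non_empty_pair("", "the " * 510)
assert pair[0] == "[UNK]"          # empty side replaced
assert pair[1].startswith("the ")  # non-empty side untouched
```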
edit: grammar | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6669/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6668/comments | https://api.github.com/repos/huggingface/transformers/issues/6668/events | https://github.com/huggingface/transformers/issues/6668 | 684,068,283 | MDU6SXNzdWU2ODQwNjgyODM= | 6,668 | Zero-Shot-Classification: multi_class or multi_label? | {
"login": "taufique74",
"id": 7470463,
"node_id": "MDQ6VXNlcjc0NzA0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7470463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taufique74",
"html_url": "https://github.com/taufique74",
"followers_url": "https://api.github.com/users/taufique74/followers",
"following_url": "https://api.github.com/users/taufique74/following{/other_user}",
"gists_url": "https://api.github.com/users/taufique74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taufique74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taufique74/subscriptions",
"organizations_url": "https://api.github.com/users/taufique74/orgs",
"repos_url": "https://api.github.com/users/taufique74/repos",
"events_url": "https://api.github.com/users/taufique74/events{/privacy}",
"received_events_url": "https://api.github.com/users/taufique74/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @joeddav ",
"Yeah I think `multi_label` probably would have been better in retrospect given that many do seem to make this distinction between terms. Since it's already being widely used, though, I think we'll keep it as is for the moment. If it turns out to cause consistent confusion for people we can look at changing it later on.",
"Leaving a comment and hoping this to be reopened.\r\n\r\nThere are two major classification tasks (see for instance: https://towardsdatascience.com/journey-to-the-center-of-multi-label-classification-384c40229bff#:~:text=Difference%20between%20multi%2Dclass%20classification,the%20tasks%20are%20somehow%20related.):\r\nA multiclass classification problem is a classification task with more than two classes, that makes the assumption that each sample is assigned to one and only one label: an animal can be wither a cat or a dog.\r\n\r\nA multilabel classification assigns a set of target labels to each sample. A text might have multiple categories, for instance politics, finance and education, or none of these.\r\n\r\nThe multi_class option here, works exactly opposite. You have to set it to multi_class=False to make it behave like multi class classification problem, and multi_class=True to make it multi label. It is really confusing.\r\n\r\nSwitching the behaviour would probably lead to even more confusion. My suggestion would be to depreciate \"multi_class=True/False\" and instead add the parameter \"multi=label(default)/class. Probably not ideal, but less confusing.",
"@peregilk I think you're right, it's consistently been a little bit confusing. I don't think the parameter names need to **exactly** reflect the vernacular, but I do think we can deprecate `multi_class` and rename it to `multi_label` while keeping the behavior the same. I think that's the solution with the least confusion. I'll send a PR. The only case where it's not multi-class is when only a single label is passed, in which case the `multi_class` argument is treated as true since it doesn't make sense to softmax over a single label.",
"That sounds like a great solution. Thanks. "
] | 1,598 | 1,615 | 1,615 | NONE | null | https://github.com/huggingface/transformers/blob/068df740bd73b95e9a1e233e47608df942fda9da/src/transformers/pipelines.py#L1048
Since we're allowing multiple labels to be true, shouldn't this be called `multi_label` instead of `multi_class`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6668/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6667/comments | https://api.github.com/repos/huggingface/transformers/issues/6667/events | https://github.com/huggingface/transformers/issues/6667 | 684,048,406 | MDU6SXNzdWU2ODQwNDg0MDY= | 6,667 | Why does the median cross entropy loss change when I change the random seed? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! The `GPT2DoubleHeadsModel` model has a multiple choice head which generally isn't initialized from GPT-2 checkpoints. The `gpt2` checkpoint doesn't contain weights for that head as the pre-training didn't involve a multiple-choice task.\r\n\r\nIf you're on the latest version of `transformers` you should see such a warning:\r\n\r\n```\r\nSome weights of GPT2DoubleHeadsModel were not initialized from the model checkpoint at gpt2 and are newly initialized: [... 'multiple_choice_head.summary.weight', 'multiple_choice_head.summary.bias', 'lm_head.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n``` \r\n\r\nYou should first fine-tune your model on a multiple-choice task."
] | 1,598 | 1,598 | 1,598 | NONE | null | Hello,
I've noticed that, when I use the *pre-trained* `GPT2DoubleHeadsModel` to process multiple choice questions, the median of the cross entropy loss generated for the same set of multiple choice questions changes when I change my random seed (NOTE: I changed my random seed *before* loading the pre-trained BPE GPT-2 tokenizer and loading the pre-trained `GPT2DoubleHeadsModel` ... I also did `my_gpt2_model.eval()` before evaluating the loss to prevent dropout).
Why does this occur? I thought the parameters of both the pre-trained model and the tokenizer were fixed, so to me, the cross entropy loss should be the same regardless of the random seed.
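A likely explanation, consistent with the answer on #6667 that the multiple-choice head is newly (randomly) initialized rather than pre-trained: re-seeding before `from_pretrained` changes the fresh head's initial weights, so identical hidden states yield different scores and losses. A stand-in sketch using only the standard library (not the actual model code; the sizes and init scale are arbitrary):

```python
import random

def fresh_head_weights(seed, size=8):
    """Stand-in for a randomly initialized multiple-choice head."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 0.02) for _ in range(size)]

def head_score(weights, hidden_state=1.0):
    # same frozen "hidden state" every time; only the head weights vary
    return sum(w * hidden_state for w in weights)

# different seeds -> different head weights -> different scores (and losses)
assert head_score(fresh_head_weights(1)) != head_score(fresh_head_weights(2))
# same seed -> fully reproducible
assert head_score(fresh_head_weights(1)) == head_score(fresh_head_weights(1))
```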
For more information, below is my code:
```python
# for our main experiment, we use G1G2, G4G5, G7G8, G10G12 files
def fill_MC_loss_tensor( ...):

    for m in range(num_mc_questions):

        # make an empty list to store the mc_loss
        mc_loss_list = []

        # Turn on the evaluation mode
        best_model_gpt2DoubleHeadsModel.eval()

        # for each layer j = 1,...,12, extract the hidden states at the layer j
        input_hidden_state = best_model_gpt2DoubleHeadsModel(
            input_ids,
            token_type_ids = token_type_ids,
            attention_mask = attention_mask)[3][0][:,:,:].detach()

        for j in range(nlayer):

            layer_hidden_state = best_model_gpt2DoubleHeadsModel.transformer.h[j](input_hidden_state)

            # feed the hidden states from each layer directly into the multiple-choice head
            mc_logits = best_model_gpt2DoubleHeadsModel.multiple_choice_head(
                layer_hidden_state[0]).squeeze(-1).detach()

            del layer_hidden_state
            gc.collect()

            # define the loss function
            loss_fct = CrossEntropyLoss()

            # calculate the mc_loss
            mc_loss = loss_fct(mc_logits.view(-1, mc_logits.size(-1)),
                               mc_labels.view(-1))

            # store the mc_loss in a list
            mc_loss_list = mc_loss_list + [mc_loss.tolist()]

            del mc_logits
            gc.collect()

        mc_loss_tensor[m,:] = torch.tensor(mc_loss_list)
        print('m={}'.format(m))

    return mc_loss_tensor


# main function for analysis
def main_function(...):

    # set initial seed
    seed(125)
    num_iter = 200

    # define mc_loss_tensor_num_iter
    mc_loss_tensor_num_iter = torch.zeros(num_iter, int(num_mc_questions), nlayer)
    mc_loss_tensor_num_iter[mc_loss_tensor_num_iter == 0] = nan

    for i in range(num_iter):

        # change seed at each iteration
        s = randint(1,999999)
        seed(s)

        # import the pre-trained HuggingFace GPT2Tokenizer
        gpt2_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

        # make a dictionary of special tokens
        special_tokens_dict = {'pad_token': '<pad>'}

        # add the special tokens to the tokenizer
        gpt2_tokenizer.add_special_tokens(special_tokens_dict)
        assert gpt2_tokenizer.pad_token == '<pad>'

        # get the encoding for the special tokens
        pub2_pad_token_id = gpt2_tokenizer.convert_tokens_to_ids('<pad>')
        pub2_eos_token_id = gpt2_tokenizer.convert_tokens_to_ids(gpt2_tokenizer.eos_token)

        # sanity check
        len(gpt2_tokenizer)  # note: original size of the tokenizer is 50257 + <pad> = 50258

        # get the pre-trained HuggingFace GPT2DoubleHeadsModel and
        # resize the token embeddings after adding the special token
        best_model_gpt2DoubleHeadsModel = GPT2DoubleHeadsModel.from_pretrained(
            'gpt2', output_hidden_states = True)
        best_model_gpt2DoubleHeadsModel.resize_token_embeddings(len(gpt2_tokenizer))

        #######

        # make an empty tensor to store mc loss
        mc_loss_tensor = torch.zeros(num_mc_questions, nlayer).float()
        mc_loss_tensor[mc_loss_tensor == 0] = nan
        mc_loss_tensor = fill_MC_loss_tensor(...)

        if torch.isnan(mc_loss_tensor).any().tolist():
            sys.exit('nan found in mc_loss_tensor')

        mc_loss_tensor_num_iter[i,:,:] = mc_loss_tensor
        print('i={}'.format(i))

    return mc_loss_tensor_num_iter


# for each of the 200 iteration, the computed median
# (median over all questions)
# cross entropy loss are different,
# for the same layer.
>>> main_function(...)
```
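Worth noting: the `gpt2` checkpoint contains no weights for the multiple-choice head, so that head is randomly initialized each time `from_pretrained` runs — a different seed therefore means different head weights and a different loss. A toy stdlib sketch of the principle (`init_head` here is hypothetical; in torch the analogous control is calling `torch.manual_seed(...)` before loading):

```python
import random

def init_head(seed_value):
    """Toy stand-in for a head whose weights are not in the checkpoint:
    they are drawn fresh from the RNG state at load time."""
    random.seed(seed_value)
    return [random.random() for _ in range(4)]

# same seed -> identical "head weights" (reproducible runs)
assert init_head(125) == init_head(125)
# different seed -> different weights, hence different losses downstream
assert init_head(125) != init_head(126)
```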
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6667/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6666/comments | https://api.github.com/repos/huggingface/transformers/issues/6666/events | https://github.com/huggingface/transformers/pull/6666 | 684,030,105 | MDExOlB1bGxSZXF1ZXN0NDcyMDExNDk0 | 6,666 | added multiple model_cards for below models | {
"login": "sagorbrur",
"id": 10723655,
"node_id": "MDQ6VXNlcjEwNzIzNjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10723655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagorbrur",
"html_url": "https://github.com/sagorbrur",
"followers_url": "https://api.github.com/users/sagorbrur/followers",
"following_url": "https://api.github.com/users/sagorbrur/following{/other_user}",
"gists_url": "https://api.github.com/users/sagorbrur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagorbrur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagorbrur/subscriptions",
"organizations_url": "https://api.github.com/users/sagorbrur/orgs",
"repos_url": "https://api.github.com/users/sagorbrur/repos",
"events_url": "https://api.github.com/users/sagorbrur/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagorbrur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=h1) Report\n> Merging [#6666](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f230a640941ef11b077c953cbda01aa981e1ec9a?el=desc) will **increase** coverage by `0.62%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6666 +/- ##\n==========================================\n+ Coverage 79.00% 79.62% +0.62% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n+ Hits 22317 22493 +176 \n+ Misses 5931 5755 -176 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.16% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> 
(+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=footer). Last update [16e3894...6d88a30](https://codecov.io/gh/huggingface/transformers/pull/6666?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Hi,
I added model_cards for these models
* codeswitch-hineng-ner-lince
* codeswitch-hineng-pos-lince
* codeswitch-nepeng-lid-lince
* codeswitch-spaeng-ner-lince
* codeswitch-spaeng-pos-lince
I also updated the model cards for these two models:
* codeswitch-hineng-lid-lince
* codeswitch-spaeng-lid-lince
Please check. If possible please merge.
thanks and regards
Sagor | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6666/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6666",
"html_url": "https://github.com/huggingface/transformers/pull/6666",
"diff_url": "https://github.com/huggingface/transformers/pull/6666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6666.patch",
"merged_at": 1598260113000
} |
https://api.github.com/repos/huggingface/transformers/issues/6665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6665/comments | https://api.github.com/repos/huggingface/transformers/issues/6665/events | https://github.com/huggingface/transformers/issues/6665 | 684,012,121 | MDU6SXNzdWU2ODQwMTIxMjE= | 6,665 | Finetune.sh showing killed | {
"login": "mc2259",
"id": 57819870,
"node_id": "MDQ6VXNlcjU3ODE5ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/57819870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mc2259",
"html_url": "https://github.com/mc2259",
"followers_url": "https://api.github.com/users/mc2259/followers",
"following_url": "https://api.github.com/users/mc2259/following{/other_user}",
"gists_url": "https://api.github.com/users/mc2259/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mc2259/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mc2259/subscriptions",
"organizations_url": "https://api.github.com/users/mc2259/orgs",
"repos_url": "https://api.github.com/users/mc2259/repos",
"events_url": "https://api.github.com/users/mc2259/events{/privacy}",
"received_events_url": "https://api.github.com/users/mc2259/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, can you post the full command and your env info",
"lets all hang out in #6711 "
] | 1,598 | 1,598 | 1,598 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I am getting this very strange issue while running the finetuning script and have absolutely no idea what is going wrong.
./finetune.sh: line 14: 9040 Killed python finetune.py --learning_rate=3e-5 --fp16 --gpus 1 --do_train --do_predict --n_val 1000 --val_check_interval 0.1 "$@"
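A bare `Killed` from the shell, with no Python traceback, usually means the kernel's out-of-memory killer stopped the process (often from running out of host RAM while loading the dataset or model). A minimal check on a Linux host — a sketch, assuming `dmesg` is available:

```shell
# the kernel log records OOM kills when they happen
dmesg 2>/dev/null | grep -iE 'killed process|out of memory' \
  || echo "no OOM messages visible (dmesg may need elevated privileges)"
```

If an OOM kill shows up there, reducing memory pressure (smaller batch size, shorter max sequence length, fewer dataloader workers — via whatever flags the script exposes) is the usual fix.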
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6665/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6664/comments | https://api.github.com/repos/huggingface/transformers/issues/6664/events | https://github.com/huggingface/transformers/pull/6664 | 684,007,730 | MDExOlB1bGxSZXF1ZXN0NDcxOTk1NTcz | 6,664 | convert_BertForQuestionAnswering_pytorch_checkpoint_to_tf | {
"login": "kbrajwani",
"id": 29722986,
"node_id": "MDQ6VXNlcjI5NzIyOTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/29722986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kbrajwani",
"html_url": "https://github.com/kbrajwani",
"followers_url": "https://api.github.com/users/kbrajwani/followers",
"following_url": "https://api.github.com/users/kbrajwani/following{/other_user}",
"gists_url": "https://api.github.com/users/kbrajwani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kbrajwani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kbrajwani/subscriptions",
"organizations_url": "https://api.github.com/users/kbrajwani/orgs",
"repos_url": "https://api.github.com/users/kbrajwani/repos",
"events_url": "https://api.github.com/users/kbrajwani/events{/privacy}",
"received_events_url": "https://api.github.com/users/kbrajwani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,598 | 1,598 | 1,598 | NONE | null | check src/transformers/convert_BertForQuestionAnswering_pytorch_checkpoint_to_tf.py file
i only make this file which can convert BertForQuestionAnswering pytorch models into in TFBertForQuestionAnswering. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6664/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6664",
"html_url": "https://github.com/huggingface/transformers/pull/6664",
"diff_url": "https://github.com/huggingface/transformers/pull/6664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6664.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6663/comments | https://api.github.com/repos/huggingface/transformers/issues/6663/events | https://github.com/huggingface/transformers/pull/6663 | 683,993,431 | MDExOlB1bGxSZXF1ZXN0NDcxOTg1NDM3 | 6,663 | added model_card for model codeswitch-hineng-lid-lince and codeswitch-spaeng-lid-lince | {
"login": "sagorbrur",
"id": 10723655,
"node_id": "MDQ6VXNlcjEwNzIzNjU1",
"avatar_url": "https://avatars.githubusercontent.com/u/10723655?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sagorbrur",
"html_url": "https://github.com/sagorbrur",
"followers_url": "https://api.github.com/users/sagorbrur/followers",
"following_url": "https://api.github.com/users/sagorbrur/following{/other_user}",
"gists_url": "https://api.github.com/users/sagorbrur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sagorbrur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sagorbrur/subscriptions",
"organizations_url": "https://api.github.com/users/sagorbrur/orgs",
"repos_url": "https://api.github.com/users/sagorbrur/repos",
"events_url": "https://api.github.com/users/sagorbrur/events{/privacy}",
"received_events_url": "https://api.github.com/users/sagorbrur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Looks great! Maybe also add a link to https://github.com/sagorbrur/codeswitch?",
"Hi @julien-c,\r\nthank you so much. \r\nregards\r\nSagor"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Hi,
Thank you so much for this beautiful, awesome repo.
I added model card for model `codeswitch-hineng-lid-lince` and `codeswitch-spaeng-lid-lince`.
Please check and if possible please merge.
thanks and regards
Sagor | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6663/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6663",
"html_url": "https://github.com/huggingface/transformers/pull/6663",
"diff_url": "https://github.com/huggingface/transformers/pull/6663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6663.patch",
"merged_at": 1598112802000
} |
https://api.github.com/repos/huggingface/transformers/issues/6662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6662/comments | https://api.github.com/repos/huggingface/transformers/issues/6662/events | https://github.com/huggingface/transformers/issues/6662 | 683,991,814 | MDU6SXNzdWU2ODM5OTE4MTQ= | 6,662 | Integer division of tensors using div or / is no longer supported torch | {
"login": "dhruvrnaik",
"id": 22565320,
"node_id": "MDQ6VXNlcjIyNTY1MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/22565320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhruvrnaik",
"html_url": "https://github.com/dhruvrnaik",
"followers_url": "https://api.github.com/users/dhruvrnaik/followers",
"following_url": "https://api.github.com/users/dhruvrnaik/following{/other_user}",
"gists_url": "https://api.github.com/users/dhruvrnaik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhruvrnaik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhruvrnaik/subscriptions",
"organizations_url": "https://api.github.com/users/dhruvrnaik/orgs",
"repos_url": "https://api.github.com/users/dhruvrnaik/repos",
"events_url": "https://api.github.com/users/dhruvrnaik/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhruvrnaik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | https://github.com/huggingface/transformers/blob/97bb2497abbbf978a0f78f1d414a7b45539e795b/examples/seq2seq/bertabs/modeling_bertabs.py#L885
Line throws an error "integer division of tensors using div or / is no longer supported torch" when executing `bertabs/run_summarization.py`
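For context, Python-style floor division and the old truncating integer division agree for non-negative operands (which is the case for the ids/indices divided at that line); they only differ for negative values. A plain-Python illustration of the distinction:

```python
# floor vs. truncating division agree for non-negative operands ...
for a, b in [(7, 2), (0, 3), (10, 5)]:
    assert a // b == int(a / b)   # floor == trunc when a >= 0, b > 0

# ... and only diverge when signs differ
assert (-7) // 2 == -4    # floor rounds toward -infinity
assert int(-7 / 2) == -3  # trunc rounds toward zero
```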
Is replacing `.div()` with `.floor_divide()` the correct solution here? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6662/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6661/comments | https://api.github.com/repos/huggingface/transformers/issues/6661/events | https://github.com/huggingface/transformers/issues/6661 | 683,947,254 | MDU6SXNzdWU2ODM5NDcyNTQ= | 6,661 | Sequence packing | {
"login": "saareliad",
"id": 22762845,
"node_id": "MDQ6VXNlcjIyNzYyODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saareliad",
"html_url": "https://github.com/saareliad",
"followers_url": "https://api.github.com/users/saareliad/followers",
"following_url": "https://api.github.com/users/saareliad/following{/other_user}",
"gists_url": "https://api.github.com/users/saareliad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saareliad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saareliad/subscriptions",
"organizations_url": "https://api.github.com/users/saareliad/orgs",
"repos_url": "https://api.github.com/users/saareliad/repos",
"events_url": "https://api.github.com/users/saareliad/events{/privacy}",
"received_events_url": "https://api.github.com/users/saareliad/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | # 🚀 Feature request
Add sequence packing support
## Motivation
Faster training, higher utilization, replicate experiments.
See https://github.com/google-research/text-to-text-transfer-transformer/issues/365
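At its core, packing is greedy bin-filling of token-id sequences into fixed-length rows; a minimal pure-Python sketch (function name hypothetical — real implementations, such as the mesh-tensorflow helper linked in this issue, also emit per-row segment and position ids so attention cannot cross example boundaries):

```python
def pack_sequences(seqs, max_len):
    """Greedy first-fit packing of token-id sequences into rows of
    at most max_len tokens."""
    packs = []
    for seq in seqs:
        for pack in packs:
            if len(pack) + len(seq) <= max_len:
                pack.extend(seq)
                break
        else:
            packs.append(list(seq))
    return packs

packed = pack_sequences([[1, 2], [3, 4, 5], [6]], max_len=4)
# -> [[1, 2, 6], [3, 4, 5]]  ([1, 2] and [6] share a row)
```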
## Your contribution
I think it makes sense doing something similar to other frameworks which already have this implemented (e.g https://github.com/tensorflow/mesh/blob/6a812c8bb847e081e976533ed497c7c5016bb1ec/mesh_tensorflow/transformer/dataset.py#L474-L504 ) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6661/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6661/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6660/comments | https://api.github.com/repos/huggingface/transformers/issues/6660/events | https://github.com/huggingface/transformers/pull/6660 | 683,930,564 | MDExOlB1bGxSZXF1ZXN0NDcxOTM4OTU0 | 6,660 | Create PULL_REQUEST_TEMPLATE.md | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=h1) Report\n> Merging [#6660](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f94151dc7809128b40ab68ba164742fe1c5b4e6?el=desc) will **increase** coverage by `0.64%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6660 +/- ##\n==========================================\n+ Coverage 79.01% 79.65% +0.64% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n+ Hits 22320 22502 +182 \n+ Misses 5928 5746 -182 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6660/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6660/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6660/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=footer). 
Last update [0f94151...73edfdc](https://codecov.io/gh/huggingface/transformers/pull/6660?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Proposing to copy this neat feature from pytorch. This is a small template that lets a PR submitter tell which issue that PR closes.
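For illustration, such a template is typically just a short markdown stub; the contents below follow pytorch's version and are illustrative, not necessarily the exact file added in this PR:

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md -->
Fixes #{issue number}
```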
Here is an example of it in action, the end of the top post: https://github.com/huggingface/transformers/pull/6659 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6660/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6660",
"html_url": "https://github.com/huggingface/transformers/pull/6660",
"diff_url": "https://github.com/huggingface/transformers/pull/6660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6660.patch",
"merged_at": 1598286638000
} |
https://api.github.com/repos/huggingface/transformers/issues/6659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6659/comments | https://api.github.com/repos/huggingface/transformers/issues/6659/events | https://github.com/huggingface/transformers/pull/6659 | 683,929,497 | MDExOlB1bGxSZXF1ZXN0NDcxOTM4MTAz | 6,659 | [doc] remove BartForConditionalGeneration.generate | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=h1) Report\n> Merging [#6659](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f94151dc7809128b40ab68ba164742fe1c5b4e6?el=desc) will **increase** coverage by `0.61%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6659 +/- ##\n==========================================\n+ Coverage 79.01% 79.62% +0.61% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n+ Hits 22320 22493 +173 \n+ Misses 5928 5755 -173 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=footer). Last update [0f94151...2b5ac03](https://codecov.io/gh/huggingface/transformers/pull/6659?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | As suggested here: https://github.com/huggingface/transformers/issues/6651#issuecomment-678594233
this removes a generic `generate` doc with a large group of generate examples, none of which is relevant to BART. The BART class preamble doc already provides the examples.
Fixes #6651 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6659/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6659",
"html_url": "https://github.com/huggingface/transformers/pull/6659",
"diff_url": "https://github.com/huggingface/transformers/pull/6659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6659.patch",
"merged_at": 1598287355000
} |
https://api.github.com/repos/huggingface/transformers/issues/6658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6658/comments | https://api.github.com/repos/huggingface/transformers/issues/6658/events | https://github.com/huggingface/transformers/pull/6658 | 683,928,572 | MDExOlB1bGxSZXF1ZXN0NDcxOTM3MzU4 | 6,658 | wip: add from scratch arg to lightning_base | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,598 | 1,601 | 1,601 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6658/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6658",
"html_url": "https://github.com/huggingface/transformers/pull/6658",
"diff_url": "https://github.com/huggingface/transformers/pull/6658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6658.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6657/comments | https://api.github.com/repos/huggingface/transformers/issues/6657/events | https://github.com/huggingface/transformers/issues/6657 | 683,889,083 | MDU6SXNzdWU2ODM4ODkwODM= | 6,657 | Error while loading pretrained model with "return_dict=True" | {
"login": "tuner007",
"id": 46425391,
"node_id": "MDQ6VXNlcjQ2NDI1Mzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuner007",
"html_url": "https://github.com/tuner007",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuner007/subscriptions",
"organizations_url": "https://api.github.com/users/tuner007/orgs",
"repos_url": "https://api.github.com/users/tuner007/repos",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuner007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I believe that parameter is only available on `master` right now, so you should install `transformers` from the `master` branch to use it (`pip install git+https://github.com/huggingface/transformers`). It'll be available in version `3.1.0` which will be released in a couple of days.",
"Working for me after the upgrade to 3.1.0 - thanks @LysandreJik ",
"> pip install git+https://github.com/huggingface/transformers\r\n\r\nThanks, this helped me too.\r\nSeems like ```transformers``` latest version must be important , especially stable and master branch one"
] | 1,598 | 1,631 | 1,598 | CONTRIBUTOR | null | # ❓ Questions & Help
torch: 1.6.0+cu101
Transformers: 3.0.2
**Error with "return_dict=True"**
```
from transformers import BertTokenizer, BertForPreTraining
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForPreTraining.from_pretrained('bert-base-uncased', return_dict=True)
```
```
TypeError Traceback (most recent call last)
<ipython-input-3-5eca8cb45c88> in <module>()
2 import torch
3 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
----> 4 model = BertForPreTraining.from_pretrained('bert-base-uncased', return_dict=True)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
670
671 # Instantiate model.
--> 672 model = cls(config, *model_args, **model_kwargs)
673
674 if state_dict is None and not from_tf:
TypeError: __init__() got an unexpected keyword argument 'return_dict'
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6657/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6657/timeline | completed | null | null |
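The `return_dict` failure in issue #6657 above is purely a version problem: the keyword landed on `master` after the 3.0.2 release and shipped in 3.1.0. A minimal sketch of gating the kwarg on the installed version — the helper below is hypothetical, not part of `transformers` — looks like this:

```python
def supports_return_dict(transformers_version: str) -> bool:
    """Return True if `from_pretrained(..., return_dict=True)` is expected to work.

    Per the thread above, `return_dict` is available from transformers 3.1.0
    onward, so anything >= (3, 1) should accept it. Pre-release or local
    suffixes (e.g. "3.1.0rc1") are stripped before comparing.
    """
    parts = []
    for piece in transformers_version.split(".")[:2]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts) >= (3, 1)


kwargs = {}
if supports_return_dict("3.1.0"):
    kwargs["return_dict"] = True  # only forward the kwarg when it is supported
print(kwargs)  # {'return_dict': True}
```

With a guard like this, the same loading code can run against both 3.0.2 and 3.1.0 instead of raising `TypeError: __init__() got an unexpected keyword argument 'return_dict'`.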
https://api.github.com/repos/huggingface/transformers/issues/6656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6656/comments | https://api.github.com/repos/huggingface/transformers/issues/6656/events | https://github.com/huggingface/transformers/pull/6656 | 683,857,992 | MDExOlB1bGxSZXF1ZXN0NDcxODgwMTU0 | 6,656 | Add bibtex for new paper | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=h1) Report\n> Merging [#6656](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f94151dc7809128b40ab68ba164742fe1c5b4e6?el=desc) will **increase** coverage by `1.26%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6656 +/- ##\n==========================================\n+ Coverage 79.01% 80.27% +1.26% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n+ Hits 22320 22677 +357 \n+ Misses 5928 5571 -357 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+1.62%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `97.08% <0.00%> (+19.70%)` | 
:arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `92.17% <0.00%> (+71.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=footer). Last update [0f94151...5d78b0b](https://codecov.io/gh/huggingface/transformers/pull/6656?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Just checking, there is no anonymity period, right?",
"no sasha tweeted it https://twitter.com/srush_nlp/status/1283433427212079104?s=20"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | and link
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6656/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6656",
"html_url": "https://github.com/huggingface/transformers/pull/6656",
"diff_url": "https://github.com/huggingface/transformers/pull/6656.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6656.patch",
"merged_at": 1598191421000
} |
https://api.github.com/repos/huggingface/transformers/issues/6655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6655/comments | https://api.github.com/repos/huggingface/transformers/issues/6655/events | https://github.com/huggingface/transformers/issues/6655 | 683,833,637 | MDU6SXNzdWU2ODM4MzM2Mzc= | 6,655 | Error when wandb is installed | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What version of wandb do you have?\r\nAlso would you have a simple example to reproduce this issue?",
"It could be a problem with your virtual environment.\r\nMaybe this issue will help: https://github.com/wandb/client/issues/539",
"@borisdayma Thank you for your reply. I'm using wandb 0.9.5. Here is some sample code:\r\n```python\r\nfrom transformers import Trainer, TrainingArguments, GPT2LMHeadModel, GPT2Tokenizer\r\nimport torch\r\nfrom torch.utils.data import Dataset\r\n\r\n\r\nclass SDAbstractsDataset(Dataset):\r\n    def __init__(self):\r\n        prompt1 = 'We present an update on the results of the Double Chooz experiment. Double Chooz searches for the neutrino mixing angle, θ13, in the three-neutrino mixing matrix via the disappearance of electron antineutrinos produced by the dual 4.27 GW/th Chooz B Reactors. Here we discuss updated oscillation fit results using both the rate and the shape of the anti-neutrino energy spectrum. In the most recent oscillation analysis we included data with neutron captures on Gadolinium and Hydrogen along with the reactor off data that we collected. This is an important step in our multi-year program to establish the value of θ13.'\r\n        prompt2 = 'The paper covers detailed discussion on novel control system developed for adaptive fluid-based shock-absorbers serving for mitigation of unknown impact excitations. In order to provide complete independence of the control system from the loading conditions, the Hybrid Prediction Control (HPC) was elaborated. The proposed method is an extension of previously introduced kinematic feedback control which ensures optimal path finding, tracking and path update in case of high disturbance or sudden change of loading conditions. Implementation of the presented control system allows to obtain self-adaptive fluid-based absorbers providing robust impact mitigation. In contrast to previously developed methods of Adaptive Impact Absorption, the proposed control strategy does not require prior knowledge of impact excitation or its preliminary identification. 
The independence of applied control system from parameters of impact loading results in the capability of automatic path correction in the case of disturbance occurrence and re-adaptation to a number of subsequent impacts. The successful operation of the self-adaptive system is investigated with the use of numerical examples involving double-chamber pneumatic shock-absorber equipped with controllable valve. Efficiency of the HPC is proved by comparison with passive absorber as well as device equipped with adaptive and optimal control modules.'\r\n        prompt3 = 'This study aimed to produce biosurfactant from Pseudozyma tsukubaensis using cassava wastewater and an inoculum (biomass) for galactooligosaccharides synthesis from lactose as an integrated system. First, the use of cassava wastewater as a low cost culture medium by P. tsukubaensis to produce biomass and biosurfactant was evaluated and optimized. Then, the microbial cells (biomass) obtained from the optimized process were used to produce galactooligosaccharides from lactose. The optimum conditions for biosurfactant and biomass synthesis were found to be 80% (v/v) of cassava wastewater at 30°C and 200rpm for 48h. The highest concentration of biosurfactant, that is, minimum surface tension value and maximum biomass concentration predicted were experimentally confirmed as 26.87mN/m and 10.5g/L, respectively. The biosurfactant obtained showed good thermal (121°C/1h), pH (2–11) and ionic strength (0–25% NaCl) stability. Excellent emulsifier activity was also verified, suggesting a potential application in enhanced oil recovery. Galactooligosaccharides synthesized by the Kluyveromyces genus have been extensively investigated, however, few studies have reported transgalactosylation ability by other yeast genera. 
The transgalactosylation activity of the yeast biomass at optimized conditions from 40% (w/w) lactose resulted in galactooligosaccharides production of 73.12g/L and a yield of 18.28% (w/w) at pH 8.0 and 30°C in 24h. This research showed the technical feasibility of an integrated process: biosurfactant and GOS production from P. tsukubaensis, which takes advantage of the remarkable metabolism of this microorganism. To the best of our knowledge, this is the first study reporting the potential of P. tsukubaensis to produce two economical biotechnological products of increase interest as an integrated process.'\r\n        prompt4 = 'Advantages of a fuzzy predictive control algorithm are discussed in the paper. The fuzzy predictive algorithm is a combination of a DMC (Dynamic Matrix Control) algorithm and Takagi–Sugeno fuzzy modeling, thus it inherits advantages of both techniques. The algorithm is numerically effective. It is in fact generalization of the standard DMC algorithm widely used in the industry, thus the existing implementations of the DMC algorithm can be extended using the presented fuzzy approach. A simple and easy to apply method of fuzzy predictive control algorithms synthesis is presented in the paper. It can be easy applied also in the case of Multiple Input Multiple Output (MIMO) control plants. Moreover, information about measured disturbance can be included in the algorithms in an easy way. 
The advantages of the fuzzy predictive control algorithm are demonstrated in the example control systems of two nonlinear chemical reactors: the first one – with inverse response and the second one – a MIMO plant with time delay.'\r\n        self.data_list = [prompt1, prompt2, prompt3, prompt4]\r\n\r\n    def __len__(self):\r\n        return len(self.data_list)\r\n\r\n    def __getitem__(self, idx):\r\n        if torch.is_tensor(idx):\r\n            idx = idx.tolist()\r\n        abstract_text = self.data_list[idx]\r\n        return abstract_text\r\n\r\n\r\ndef sd_data_collator(dataset_samples_list):\r\n    tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')\r\n    tokenizer.pad_token = tokenizer.eos_token\r\n\r\n    encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True)\r\n\r\n    batch = {}\r\n    batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']])\r\n    batch['past'] = None\r\n    batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']])\r\n    batch['position_ids'] = None\r\n    batch['head_mask'] = None\r\n    batch['inputs_embeds'] = None\r\n    batch['labels'] = None\r\n    batch['use_cache'] = True\r\n    return batch\r\n\r\n\r\noutput_dir = 'YOUR_OUTPUT_DIR'\r\nlogging_dir = 'YOUR_LOGGING_DIR'\r\ntraining_args = TrainingArguments(\r\n    output_dir=output_dir,\r\n    do_train=True,\r\n    logging_dir=logging_dir,\r\n    save_steps=50,\r\n    per_device_train_batch_size=2\r\n)\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\nsd_dataset = SDAbstractsDataset()\r\n\r\ntrainer = Trainer(\r\n    model=model,\r\n    args=training_args,\r\n    train_dataset=sd_dataset,\r\n    data_collator=sd_data_collator\r\n)\r\n\r\n#trainer.train()\r\n\r\n```",
"I ran it in colab: see [notebook](https://colab.research.google.com/gist/borisdayma/ccff2200853253a3dcbf39d1100a7da0/welcome-to-colaboratory.ipynb)\r\n\r\nThere does not seem to be any issue related to wandb.\r\nMaybe try my previous link as it may be due to issues in your local environment.",
"@borisdayma that link solved my problem! Thanks for your help!"
] | 1,598 | 1,598 | 1,598 | NONE | null | ## System Summary
Pop!_OS 20.04
Pytorch: 1.5.1
Transformers: 3.0.2
Tokenizers: 0.8.1rc1
Python: 3.7.6
Pretrained Model: GPT2
Pretrained Tokenizer: GPT2
## Question
Training without `wandb` works fine, but after I `pip install wandb`, with nothing else changed in my code, running training produces the following error:
```python
I0821 15:44:17.531560 46912496399424 file_utils.py:39] PyTorch version 1.5.1 available.
I0821 15:44:21.471980 46912496399424 file_utils.py:55] TensorFlow version 2.0.0 available.
Traceback (most recent call last):
File "run_finetune_gpt2.py", line 5, in <module>
from transformers import TrainingArguments, Trainer, GPT2Tokenizer
File "/path/to/venv/my-venv/lib/python3.6/site-packages/transformers/__init__.py", line 158, in <module>
from .trainer_utils import EvalPrediction, set_seed
File "/path/to/venv/my-venv/lib/python3.6/site-packages/transformers/trainer_utils.py", line 11, in <module>
import wandb
File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/__init__.py", line 60, in <module>
from wandb.apis import InternalApi, PublicApi, CommError
File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/apis/__init__.py", line 116, in <module>
from .public import Api as PublicApi
File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/apis/public.py", line 28, in <module>
from wandb.summary import HTTPSummary
File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/summary.py", line 15, in <module>
from wandb.meta import Meta
File "/path/to/venv/my-venv/lib/python3.6/site-packages/wandb/meta.py", line 6, in <module>
import pynvml
File "/cm/local/apps/cuda/libs/current/pynvml/pynvml.py", line 1671
print c_count.value
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(c_count.value)?
```
Any thoughts? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6655/timeline | completed | null | null |
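The `SyntaxError` in issue #6655 above comes from a stale, Python-2-only `pynvml.py` shipped with the cluster's CUDA module (`/cm/local/apps/cuda/...`) shadowing the pip-installed package on `sys.path` — the environment problem described in the linked wandb issue. A small sketch of why the import dies: Python 3 cannot even *compile* a Python-2 `print` statement, so the failure happens at import time, before wandb runs any code.

```python
# The offending line from the stale pynvml.py, as shown in the traceback.
legacy_source = "print c_count.value\n"


def compiles_under_py3(source: str) -> bool:
    """Check whether `source` parses as Python 3 (compile() does not execute it)."""
    try:
        compile(source, "<pynvml.py>", "exec")
        return True
    except SyntaxError:
        return False


assert not compiles_under_py3(legacy_source)          # Python-2 print statement
assert compiles_under_py3("print(c_count.value)\n")   # Python-3 spelling parses fine
```

Inspecting `pynvml.__file__` (or the path in the traceback, as here) is usually enough to spot which copy of the module is actually being imported.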
https://api.github.com/repos/huggingface/transformers/issues/6654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6654/comments | https://api.github.com/repos/huggingface/transformers/issues/6654/events | https://github.com/huggingface/transformers/pull/6654 | 683,831,777 | MDExOlB1bGxSZXF1ZXN0NDcxODU4NTQx | 6,654 | prepare_seq2seq_batch makes labels/ decoder_input_ids made later. | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=h1) Report\n> Merging [#6654](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bd7be9a4268221d2a0000c7e8033aaeb365c03b?el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6654 +/- ##\n==========================================\n- Coverage 79.74% 79.70% -0.05% \n==========================================\n Files 157 157 \n Lines 28479 28477 -2 \n==========================================\n- Hits 22712 22697 -15 \n- Misses 5767 5780 +13 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.57% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.15% <100.00%> (+32.48%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <100.00%> (+1.51%)` | :arrow_up: |\n| 
[src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.23% <100.00%> (+49.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.32% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6654/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=footer). Last update [4bd7be9...08ddfd4](https://codecov.io/gh/huggingface/transformers/pull/6654?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Added common tokenizer tests @LysandreJik "
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | `src/` changes:
- when `tgt_texts` is supplied, `prepare_seq2seq_batch` names the tensor that used to be called `decoder_input_ids` as `labels`.
- This change helps metrics for models whose tokenizers do not add bos to the beginning of target sequences, like Marian and Pegasus, without affecting metrics for other models (bart).
This branch was originally called "Fairseq batch equivalence", because it makes batches that look identical to fairseq's for mbart (and bart).
- tokenization testing file for bart.
- lots of cleanup and testing.
`examples/seq2seq` changes:
- `examples/seq2seq/finetune.py` (and eventually Seq2SeqTrainer) makes decoder_input_ids by shifting tokens right.
- this enables Marian finetuning and distillation, with a few extra changes.
- add `--label_smoothing` option to seq2seq/distillation.py
- rename `Seq2SeqDataset` -> `LegacySeq2SeqDataset` and `TranslationDataset`-> `Seq2SeqDataset`. The new `Seq2SeqDataset` calls `prepare_seq2seq_batch`. The choice of which dataset to use is determined based on whether the tokenizer has a `prepare_seq2seq_batch` method.
**Problem:**
Previously on master, if the target language sequence was
"Șeful ONU declară că nu există soluții militare în Siria", and the tokenizer was Marian, lm_labels would become "ONU declară că nu există soluții militare în Siria", and the model would learn to skip the first token (or not generate bos).
Generations would then start very strangely, for example:
`", fostul şef al personalului prezidenţial din Brazilia, va participa la un proces"`
now: `"Fostul şef al personalului prezidenţial al Braziliei va fi judecat".`
(same thing is happening for pegasus #6711)
### Metrics
**mbart en-> ro**: no change
marian: master: 23 BLEU, this branch: 25
(en ro distillation/no teacher/3 dec layers)
distilbart-cnn-12-3: no change (within 0.01 ROUGE 2)
master + label smoothing: `{'rouge1': 43.2764, 'rouge2': 20.4969, 'rougeL': 29.9210}`
this branch + label smoothing: `{"rouge1": 43.1997, "rouge2": 20.4879, "rougeL": 30.1607}`
### TODO:
- check t5-base
- check pegasus
If you want to test whether this branch makes truncation go away, the easiest way is to pull the mirror branch with
```bash
git fetch
git checkout batch-parity-cleaner
```
cc @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6654/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6654",
"html_url": "https://github.com/huggingface/transformers/pull/6654",
"diff_url": "https://github.com/huggingface/transformers/pull/6654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6654.patch",
"merged_at": 1598627718000
} |
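The `examples/seq2seq` change in PR #6654 above — "makes decoder_input_ids by shifting tokens right" from the new `labels` tensor — can be sketched with plain lists. This is a simplified, list-based stand-in for the tensor op in the library; the real implementation works on padded tensors and uses each model's actual decoder-start token:

```python
def shift_tokens_right(labels, decoder_start_token_id):
    """Prepend the decoder start token and drop the last position, so the
    decoder is fed token t-1 when it is trained to predict token t."""
    return [[decoder_start_token_id] + row[:-1] for row in labels]


labels = [[54, 17, 89, 2]]  # target ids, ending in eos (2 here)
decoder_input_ids = shift_tokens_right(labels, decoder_start_token_id=0)
print(decoder_input_ids)  # [[0, 54, 17, 89]]
```

Deriving `decoder_input_ids` this way, instead of stripping the first target token, is what lets tokenizers that do not add bos to targets (Marian, Pegasus) train without learning to skip the first token.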
https://api.github.com/repos/huggingface/transformers/issues/6653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6653/comments | https://api.github.com/repos/huggingface/transformers/issues/6653/events | https://github.com/huggingface/transformers/issues/6653 | 683,821,829 | MDU6SXNzdWU2ODM4MjE4Mjk= | 6,653 | old nlp causes error that pip install -e. can't fix. | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Fixed with `pip install nlp --upgrade`.\r\nThis problem only happens if you do have nlp installed, but an old version.\r\nwe could make assertions about` nlp.__version___` to avoid this.",
"I face this even now after running pip install on requirements.txt . I just upgrade the version for pyarrow and it works fine then.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | CONTRIBUTOR | null | ```
Traceback (most recent call last):
File "finetune.py", line 15, in <module>
from lightning_base import BaseTransformer, add_generic_args, generic_train
File "/home/shleifer/transformers_fork/examples/lightning_base.py", line 10, in <mo
dule>
from transformers import (
File "/home/shleifer/transformers_fork/src/transformers/__init__.py", line 23, in <
module>
from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/home/shleifer/transformers_fork/src/transformers/configuration_albert.py", line 18, in <module>
from .configuration_utils import PretrainedConfig
File "/home/shleifer/transformers_fork/src/transformers/configuration_utils.py", line 25, in <module>
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/home/shleifer/transformers_fork/src/transformers/file_utils.py", line 68, in <module>
import nlp # noqa: F401
File "/home/shleifer/miniconda/envs/torch1.5/lib/python3.8/site-packages/nlp/__init__.py", line 41, in <module>
raise ImportWarning(
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition.
If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6653/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6652/comments | https://api.github.com/repos/huggingface/transformers/issues/6652/events | https://github.com/huggingface/transformers/issues/6652 | 683,789,447 | MDU6SXNzdWU2ODM3ODk0NDc= | 6,652 | ['encoder.version', 'decoder.version'] are unexpected when loading a pretrained BART model | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah I think the clean solution is `authorized_extra_keys` but I could also just reconvert the models.\r\nWe could also leave the warning.\r\nWhat do you think @sgugger ?",
"IMHO, that warning makes the library look somewhat amateurish, as it makes the user wonder whether something is wrong, for absolutely no reason.\r\n\r\nAs I'm the one who is bothered - If I can be of help resolving this please don't hesitate to delegate this to me.",
"The cleanest would be to reconvert the models and remove the keys we don't need, I think. Adding the `authorized_extra_keys` works too, but then using it too much could have unexpected consequences resulting in bugs, so I'd only go down that road if there is no other option.",
"The simplest and cleanest way would probably to simply remove these two variables from the state dict, wouldn't it? If reconverting the checkpoint you should check that it is exactly the same as the previous one, which sounds like more of a pain and more error prone than simply doing\r\n\r\n```py\r\n!wget https://cdn.huggingface.co/facebook/bart-large/pytorch_model.bin\r\n\r\nweights = torch.load('/path/to/pytorch_model.bin')\r\ndel weights['encoder.version']\r\ndel weights['decoder.version']\r\ntorch.save(weights, 'new_pytorch_model.bin')\r\n```",
"Done. Also converted weights to fp16."
] | 1,598 | 1,599 | 1,599 | CONTRIBUTOR | null | Using an example from the bart doc:
https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration
```
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
TXT = "My friends are <mask> but they eat too many carbs."
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
input_ids = tokenizer([TXT], return_tensors='pt')['input_ids']
logits = model(input_ids)[0]
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
print(tokenizer.decode(predictions).split())
```
gives:
```
Some weights of the model checkpoint at facebook/bart-large were not used
when initializing BartForConditionalGeneration:
['encoder.version', 'decoder.version']
- This IS expected if you are initializing BartForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BartForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
test:9: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1597302504919/work/torch/csrc/utils/python_arg_parser.cpp:864.)
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
['good', 'great', 'all', 'really', 'very']
```
well, there is one more issue: a weird deprecated `nonzero()` invocation, which has to do with an undocumented requirement to pass the `as_tuple` arg since pytorch 1.5: https://github.com/pytorch/pytorch/issues/43425
we have `authorized_missing_keys`:
`authorized_missing_keys = [r"final_logits_bias", r"encoder\.version", r"decoder\.version"]`
https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L942
which correctly updates `missing_keys` - should there also be an `authorized_unexpected_keys` which would clean up `unexpected_keys`?
(note: I re-edited this issue once I understood it better to save reader's time, the history is there if someone needs it)
And found another variety of it: for `['model.encoder.version', 'model.decoder.version']`
```
tests/test_modeling_bart.py::BartModelIntegrationTests::test_mnli_inference Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartForSequenceClassification: ['model.encoder.version', 'model.decoder.version']
- This IS expected if you are initializing BartForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BartForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
PASSED
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6652/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6651/comments | https://api.github.com/repos/huggingface/transformers/issues/6651/events | https://github.com/huggingface/transformers/issues/6651 | 683,787,399 | MDU6SXNzdWU2ODM3ODczOTk= | 6,651 | [doc] bart doc examples aren't for bart | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Those examples are from the `generate` method doc which is a generic method shared by all generative models.",
"Thank you, @patil-suraj. Indeed, since `BartForConditionalGeneration` uses super-super class's `generate` it ends up having that generic signature in its docs.\r\n\r\nWhat I'm trying to say is that these example are confusing to the user since not only they are irrelevant to someone trying to use BART, there isn't even an example of using bart in that part of the doc (there is one earlier in the class signature).\r\n\r\ne.g. they don't show up in other similar classes which have `generate`\r\nhttps://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration\r\nhttps://huggingface.co/transformers/model_doc/marian.html\r\nbut there they have their own `generate` methods, so this doesn't happen.\r\n\r\nI'm trying to flag a poor user experience and asking whether perhaps there is a better way to do it?\r\n\r\nOne possible suggestion:\r\n- remove the 5 examples from `generate` at https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L215\r\n- replace with a note - see this class' pre-amble documentation for examples.",
"feel free to send a PR deleting generate here\r\nhttps://github.com/huggingface/transformers/blob/master/docs/source/model_doc/bart.rst#L35\r\n",
"Thank you, Sam.\r\nhttps://github.com/huggingface/transformers/pull/6659\r\n\r\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | If you look at the very end of this section https://huggingface.co/transformers/model_doc/bart.html#transformers.BartForConditionalGeneration.generate
there are 5 examples of using `generate`, none of which is for BART. Is this an accidental copy-n-paste issue and they should just be removed?
There are examples of generate for BART in the pre-amble of the class:
https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6651/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6650/comments | https://api.github.com/repos/huggingface/transformers/issues/6650/events | https://github.com/huggingface/transformers/pull/6650 | 683,695,373 | MDExOlB1bGxSZXF1ZXN0NDcxNzQ1MTE2 | 6,650 | Add model card for electricidad-base-generator | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=h1) Report\n> Merging [#6650](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e8c494da78077a91071a00ab2b73717deda24be?el=desc) will **increase** coverage by `0.43%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6650 +/- ##\n==========================================\n+ Coverage 79.20% 79.64% +0.43% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n+ Hits 22374 22497 +123 \n+ Misses 5874 5751 -123 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% 
<0.00%> (+2.98%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.26% <0.00%> (+24.27%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `100.00% <0.00%> (+36.00%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `56.16% <0.00%> (+41.74%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6650/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=footer). Last update [9e8c494...ab682f0](https://codecov.io/gh/huggingface/transformers/pull/6650?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | I works like a charm!
Look at the output of the example code! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6650/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6650",
"html_url": "https://github.com/huggingface/transformers/pull/6650",
"diff_url": "https://github.com/huggingface/transformers/pull/6650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6650.patch",
"merged_at": 1598033896000
} |
https://api.github.com/repos/huggingface/transformers/issues/6649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6649/comments | https://api.github.com/repos/huggingface/transformers/issues/6649/events | https://github.com/huggingface/transformers/pull/6649 | 683,684,459 | MDExOlB1bGxSZXF1ZXN0NDcxNzM2MTAz | 6,649 | [Doc model summary] add MBart model summary | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=h1) Report\n> Merging [#6649](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e8c494da78077a91071a00ab2b73717deda24be?el=desc) will **increase** coverage by `0.42%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6649 +/- ##\n==========================================\n+ Coverage 79.20% 79.62% +0.42% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n+ Hits 22374 22493 +119 \n+ Misses 5874 5755 -119 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% 
<0.00%> (+2.98%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.26% <0.00%> (+24.27%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `100.00% <0.00%> (+36.00%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `56.16% <0.00%> (+41.74%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6649/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=footer). Last update [9e8c494...00c14dd](https://codecov.io/gh/huggingface/transformers/pull/6649?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer applied the suggestions.",
"thx suraj!"
] | 1,598 | 1,598 | 1,598 | MEMBER | null | add model summary for MBart
@sshleifer , @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6649/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6649",
"html_url": "https://github.com/huggingface/transformers/pull/6649",
"diff_url": "https://github.com/huggingface/transformers/pull/6649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6649.patch",
"merged_at": 1598031780000
} |
https://api.github.com/repos/huggingface/transformers/issues/6648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6648/comments | https://api.github.com/repos/huggingface/transformers/issues/6648/events | https://github.com/huggingface/transformers/pull/6648 | 683,681,085 | MDExOlB1bGxSZXF1ZXN0NDcxNzMzMzYz | 6,648 | Remove hard-coded uses of float32 to fix mixed precision use | {
"login": "schmidek",
"id": 442328,
"node_id": "MDQ6VXNlcjQ0MjMyOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/442328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schmidek",
"html_url": "https://github.com/schmidek",
"followers_url": "https://api.github.com/users/schmidek/followers",
"following_url": "https://api.github.com/users/schmidek/following{/other_user}",
"gists_url": "https://api.github.com/users/schmidek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/schmidek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schmidek/subscriptions",
"organizations_url": "https://api.github.com/users/schmidek/orgs",
"repos_url": "https://api.github.com/users/schmidek/repos",
"events_url": "https://api.github.com/users/schmidek/events{/privacy}",
"received_events_url": "https://api.github.com/users/schmidek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=h1) Report\n> Merging [#6648](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e8c494da78077a91071a00ab2b73717deda24be?el=desc) will **increase** coverage by `1.07%`.\n> The diff coverage is `54.54%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6648 +/- ##\n==========================================\n+ Coverage 79.20% 80.27% +1.07% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n+ Hits 22374 22677 +303 \n+ Misses 5874 5571 -303 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <16.66%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: 
|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+2.98%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6648/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=footer). Last update [9e8c494...6ac59c3](https://codecov.io/gh/huggingface/transformers/pull/6648?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Remove hard-coded uses of float32 from the tensorflow implementation of BERT and ELECTRA.
Fixes #3320 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6648/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6648/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6648",
"html_url": "https://github.com/huggingface/transformers/pull/6648",
"diff_url": "https://github.com/huggingface/transformers/pull/6648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6648.patch",
"merged_at": 1598341353000
} |
https://api.github.com/repos/huggingface/transformers/issues/6647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6647/comments | https://api.github.com/repos/huggingface/transformers/issues/6647/events | https://github.com/huggingface/transformers/issues/6647 | 683,671,250 | MDU6SXNzdWU2ODM2NzEyNTA= | 6,647 | mbart broken in summarization pipeline | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"works on master."
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | ```
summarizer = pipeline("summarization", model="facebook/mbart-large-cc25", tokenizer="facebook/mbart-large-cc25")
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6647/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6646/comments | https://api.github.com/repos/huggingface/transformers/issues/6646/events | https://github.com/huggingface/transformers/issues/6646 | 683,662,338 | MDU6SXNzdWU2ODM2NjIzMzg= | 6,646 | Error when loading my trained model | {
"login": "yanchao-yu",
"id": 5929774,
"node_id": "MDQ6VXNlcjU5Mjk3NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5929774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanchao-yu",
"html_url": "https://github.com/yanchao-yu",
"followers_url": "https://api.github.com/users/yanchao-yu/followers",
"following_url": "https://api.github.com/users/yanchao-yu/following{/other_user}",
"gists_url": "https://api.github.com/users/yanchao-yu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanchao-yu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanchao-yu/subscriptions",
"organizations_url": "https://api.github.com/users/yanchao-yu/orgs",
"repos_url": "https://api.github.com/users/yanchao-yu/repos",
"events_url": "https://api.github.com/users/yanchao-yu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanchao-yu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Does \r\n```\r\nself._model = BertForQuestionAnswering.from_pretrained(`./model/trained_squad/`)\r\n```\r\nwork?",
"How silly I am! Thanks a lot. It works for me. ",
"When I take out `from_tf=True` it then says `[Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack'] found in directory distilbert-somm or `from_tf` and `from_flax` set to False.]()`",
"amazingly the following worked\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"...\", use_auth_token=\"<key>\", from_tf=True,)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"...\", from_tf=True, use_auth_token=True)\r\n\r\ni.e. the first use_auth_token requires the actual key, while the second has to be \"True\""
] | 1,598 | 1,645 | 1,598 | NONE | null | Hello,
I tried to train a question-answering model using `bert-base-uncased` on SQuAD v1.1. The training process seems to have completed successfully. However, when I load the trained model, it fails with `File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (file signature not found)`
Here is my configuration for the training process:
```
model_name_or_path: bert-base-uncased
do_train: True
do_eval: True
overwrite_output_dir: True
num_train_epochs: 10
per_device_train_batch_size: 12
per_device_eval_batch_size: 12
warmup_steps: 100
weight_decay: 0.01
learning_rate: 3e-5
evaluate_during_training: True
save_steps: 5000
```
And here is what I stored in my model directory:
```
checkpoint-10000 checkpoint-35000 checkpoint-55000 pytorch_model.bin
checkpoint-15000 checkpoint-40000 checkpoint-60000 special_tokens_map.json
checkpoint-20000 checkpoint-45000 checkpoint-65000 tokenizer_config.json
checkpoint-25000 checkpoint-5000 checkpoint-70000 training_args.bin
checkpoint-30000 checkpoint-50000 config.json vocab.txt
```
I tried to load my model by
```
self._model = BertForQuestionAnswering.from_pretrained("./model/trained_squad/", from_tf=True)
```
I would appreciate it if anyone could give me a clue about what is happening here. Is there anything wrong with my training process?
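A minimal stdlib-only sketch of why this particular error appears: HDF5 files carry an 8-byte magic signature that a pickled PyTorch checkpoint lacks, so opening `pytorch_model.bin` as a TensorFlow `.h5` file cannot work. The "checkpoint" below is just a plain pickle standing in for a real one, purely for illustration:

```python
import os
import pickle
import tempfile

# HDF5 files (e.g. TensorFlow's tf_model.h5) begin with this 8-byte magic
# signature; h5py raises "file signature not found" when it is missing.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"

def looks_like_hdf5(path):
    with open(path, "rb") as f:
        return f.read(8) == HDF5_MAGIC

tmp_dir = tempfile.mkdtemp()
bin_path = os.path.join(tmp_dir, "pytorch_model.bin")
with open(bin_path, "wb") as f:
    # Stand-in for a real torch checkpoint: a pickle, not an HDF5 file.
    pickle.dump({"layer.weight": [0.0, 1.0]}, f)

print(looks_like_hdf5(bin_path))  # -> False, so loading it with from_tf=True fails
```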
Best,
Yanchao
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6646/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6645/comments | https://api.github.com/repos/huggingface/transformers/issues/6645/events | https://github.com/huggingface/transformers/issues/6645 | 683,643,290 | MDU6SXNzdWU2ODM2NDMyOTA= | 6,645 | New config param for cross-attention dimensionality | {
"login": "Squire-tomsk",
"id": 5622473,
"node_id": "MDQ6VXNlcjU2MjI0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5622473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Squire-tomsk",
"html_url": "https://github.com/Squire-tomsk",
"followers_url": "https://api.github.com/users/Squire-tomsk/followers",
"following_url": "https://api.github.com/users/Squire-tomsk/following{/other_user}",
"gists_url": "https://api.github.com/users/Squire-tomsk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Squire-tomsk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Squire-tomsk/subscriptions",
"organizations_url": "https://api.github.com/users/Squire-tomsk/orgs",
"repos_url": "https://api.github.com/users/Squire-tomsk/repos",
"events_url": "https://api.github.com/users/Squire-tomsk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Squire-tomsk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"is this different than d_model/hidden_size?",
"Yes it is. As I understand from the T5 and TransformerXL configs, this param is named \`n_embd\` in the GPT2 config (embedding and hidden size dimensionality). It has an impact on self-attention dimensionality:\r\nQ (n_embd x n_embd) \r\nK (n_embd x n_embd) \r\nV (n_embd x n_embd). \r\n\r\nFor now the cross-attention dimensionality is the same, but I want:\r\nQ (n_embd x n_embd)\r\nK (n_cross x n_embd) \r\nV (n_cross x n_embd) \r\nin cross-attention.",
"Hey @Squire-tomsk, thanks for your issue. This can easily be resolved by introducing a new config parameter for each model that can be used as a decoder in the `EncoderDecoder` Framework, so probably in `configuration_utils.py`. By default it can be set to `d_model` or `hidden_size`, if not further specifiied.\r\n\r\nIMO, adding another config param for this case is OK. What do you think? @sshleifer, @sgugger, @LysandreJik ? ",
"As long as the parameter is set to the right default and doesn't break backward compatibility, I have no objection.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi,\r\n\r\nThis feature has recently been implemented in #13874 . We called the attribute `cross_attention_hidden_size`.\r\n\r\nIn short, there are 2 options:\r\n- either you set it to `None`, in which case the `encoder_hidden_states` will be projected using a single linear layer, to match the hidden size of the decoder (in case they don't match).\r\n- either you set it to the size of the encoder, in which case the decoder will project the `encoder_hidden_states` to the same dimension as the decoder when creating `keys` and `values` in each cross-attention layer. This is the case for the recently added TrOCR model."
] | 1,598 | 1,636 | 1,610 | NONE | null | # 🚀 Feature request
Add a new config param `n_cross` for each model that has cross-attention. This param will determine the dimensionality of the key and value matrices in cross-attention.
## Motivation
I have a pretrained encoder (hidden size 512) and want to combine it with GPT-2 medium (hidden size 1024) via cross-attention. Currently I can't do this because, according to this PR https://github.com/huggingface/transformers/commit/1d6e71e1167dea9e026391ec5a1a2d7ec33d22af, the key and value matrices of cross-attention have the same dimensionality as self-attention.
In code, it could look like:
```
config.is_cross_attention = True
config.n_cross = 512
```
And in doc string:
```
n_inner (:obj:`int`, optional, defaults to None):
Dimensionality of the inner feed-forward layers. :obj:`None` will set it to 4 times n_embd.
n_cross (:obj:`int`, optional, defaults to None):
Dimensionality of the cross-attention input. :obj:`None` will set it to the same value as n_embd.
```
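To make the shape change concrete, here is a framework-free, pure-Python sketch of the projection shapes this parameter would enable (toy sizes standing in for 1024/512; illustration only, not the actual implementation):

```python
# Toy sizes: n_embd stands in for the decoder hidden size (e.g. 1024),
# n_cross for the encoder hidden size (e.g. 512).
n_embd, n_cross = 8, 4
tgt_len, src_len = 3, 5  # decoder / encoder sequence lengths

def zeros(rows, cols):
    return [[0.0] * cols for _ in range(rows)]

def matmul(a, b):
    # Naive matrix multiply over nested lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def transpose(m):
    return [list(col) for col in zip(*m)]

hidden = zeros(tgt_len, n_embd)    # decoder hidden states
enc_out = zeros(src_len, n_cross)  # encoder hidden states (smaller dim)

W_q = zeros(n_embd, n_embd)   # Q projection: (n_embd x n_embd)
W_k = zeros(n_cross, n_embd)  # K projection: (n_cross x n_embd)  <- the new shape
W_v = zeros(n_cross, n_embd)  # V projection: (n_cross x n_embd)  <- the new shape

q = matmul(hidden, W_q)               # (tgt_len, n_embd)
k = matmul(enc_out, W_k)              # (src_len, n_embd)
v = matmul(enc_out, W_v)              # (src_len, n_embd)
scores = matmul(q, transpose(k))      # (tgt_len, src_len)
context = matmul(scores, v)           # (tgt_len, n_embd)
print(len(context), len(context[0]))  # -> 3 8
```

With `n_cross` left at :obj:`None`, W_k and W_v would keep their current (n_embd x n_embd) shapes, so backward compatibility is preserved.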
## Your contribution
Actually, I have already fixed this in my fork, but I haven't written any tests or docs yet. I would be able to prepare a PR within the next two weeks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6645/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6645/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6644/comments | https://api.github.com/repos/huggingface/transformers/issues/6644/events | https://github.com/huggingface/transformers/pull/6644 | 683,613,849 | MDExOlB1bGxSZXF1ZXN0NDcxNjc3NjE3 | 6,644 | Dataset and DataCollator for BERT Next Sentence Prediction (NSP) task | {
"login": "mojave-pku",
"id": 26648528,
"node_id": "MDQ6VXNlcjI2NjQ4NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mojave-pku",
"html_url": "https://github.com/mojave-pku",
"followers_url": "https://api.github.com/users/mojave-pku/followers",
"following_url": "https://api.github.com/users/mojave-pku/following{/other_user}",
"gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions",
"organizations_url": "https://api.github.com/users/mojave-pku/orgs",
"repos_url": "https://api.github.com/users/mojave-pku/repos",
"events_url": "https://api.github.com/users/mojave-pku/events{/privacy}",
"received_events_url": "https://api.github.com/users/mojave-pku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=h1) Report\n> Merging [#6644](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/41aa2b4ef1b9c2d14d5b06af2e0faa10592779dd?el=desc) will **decrease** coverage by `0.27%`.\n> The diff coverage is `12.40%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6644 +/- ##\n==========================================\n- Coverage 79.64% 79.36% -0.28% \n==========================================\n Files 157 156 -1 \n Lines 28564 28384 -180 \n==========================================\n- Hits 22750 22528 -222 \n- Misses 5814 5856 +42 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `56.97% <10.81%> (-34.86%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `57.14% <12.12%> (-32.57%)` | :arrow_down: |\n| [src/transformers/data/datasets/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | 
`66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `65.03% <0.00%> (-0.49%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.26% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `81.88% <0.00%> (-0.29%)` | :arrow_down: |\n| ... and [133 more](https://codecov.io/gh/huggingface/transformers/pull/6644/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=footer). Last update [41aa2b4...ec89daf](https://codecov.io/gh/huggingface/transformers/pull/6644?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hey so I have a PR out for the same task: https://github.com/huggingface/transformers/pull/6376\r\n\r\nI'm mostly just writing this comment so that I can keep track of what the reviewers have to say and what happens with the NSP task.",
"Hi, @sgugger ! I add dict inputs support like `DataCollatorForLanguageModeling` according to your suggestion, but now there is a conflict in `src/transformers/__init__.py`. Do I need to resolve it or leave it to you?",
"I can take care of the final merge once this is all good and @LysandreJik approved, it's due to a new version of isort.",
"Could we add a test for this? I just merged `master` in to make sure it has the latest changes.",
"After @LysandreJik merge the `master` branch, many files need to be reformatted. \r\nTo clearly show the codes I modified, I did not include the changes caused by `make style` of other files in those commits, so `check_code_quality` will not pass."
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Add `DataCollatorForNextSencencePrediction` and `TextDatasetForNextSencencePrediction` to support mlm and next sentence prediction objectives together. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6644/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6644",
"html_url": "https://github.com/huggingface/transformers/pull/6644",
"diff_url": "https://github.com/huggingface/transformers/pull/6644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6644.patch",
"merged_at": 1598876700000
} |
https://api.github.com/repos/huggingface/transformers/issues/6643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6643/comments | https://api.github.com/repos/huggingface/transformers/issues/6643/events | https://github.com/huggingface/transformers/issues/6643 | 683,595,738 | MDU6SXNzdWU2ODM1OTU3Mzg= | 6,643 | bert finetuning for multilingual question answering | {
"login": "haenvely",
"id": 34908281,
"node_id": "MDQ6VXNlcjM0OTA4Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/34908281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haenvely",
"html_url": "https://github.com/haenvely",
"followers_url": "https://api.github.com/users/haenvely/followers",
"following_url": "https://api.github.com/users/haenvely/following{/other_user}",
"gists_url": "https://api.github.com/users/haenvely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haenvely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haenvely/subscriptions",
"organizations_url": "https://api.github.com/users/haenvely/orgs",
"repos_url": "https://api.github.com/users/haenvely/repos",
"events_url": "https://api.github.com/users/haenvely/events{/privacy}",
"received_events_url": "https://api.github.com/users/haenvely/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @haenvely , if you have the dataset in the same format as SQuAD then you can use the \`run_squad\` script. You can specify the train and eval files using the \`--train_file\` and \`--predict_file\` arguments.\r\n\r\nBut you'll need a model which is pre-trained on the language that you want.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | Hi, I'm just learning BERT now, and want to use PyTorch to apply BERT to question answering.
I browsed the models, and it seems that run_squad.py is appropriate for the question-answering task.
But I want to apply it to question-answering tasks in other languages. Can I just use run_squad.py and switch the train/dev file paths to datasets in those languages?
And what if I want to apply the task to some small dataset (written in another language, so it would be a real, specific task)? Would the process be fine-tuning with "run_squad.py" on that language's dataset, and then fine-tuning again on the small dataset?
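A minimal, self-contained sketch of the SQuAD v1.1 JSON layout such a train/dev file would need (toy placeholder data; `answer_start` is a character offset into `context`, and any language works as long as the model was pretrained on it):

```python
import json
import os
import tempfile

context = "Hugging Face is based in New York City."
answer = "New York City"

# Toy train file in SQuAD v1.1 layout (placeholder data).
squad_example = {
    "version": "1.1",
    "data": [{
        "title": "placeholder",
        "paragraphs": [{
            "context": context,
            "qas": [{
                "id": "q1",
                "question": "Where is Hugging Face based?",
                "answers": [{
                    "text": answer,
                    "answer_start": context.index(answer),
                }],
            }],
        }],
    }],
}

train_file = os.path.join(tempfile.mkdtemp(), "train-toy.json")
with open(train_file, "w", encoding="utf-8") as f:
    json.dump(squad_example, f, ensure_ascii=False)

# Sanity check: the character offset really points at the answer span.
start = squad_example["data"][0]["paragraphs"][0]["qas"][0]["answers"][0]["answer_start"]
print(context[start:start + len(answer)])  # -> New York City
```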
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6643/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6642/comments | https://api.github.com/repos/huggingface/transformers/issues/6642/events | https://github.com/huggingface/transformers/pull/6642 | 683,592,039 | MDExOlB1bGxSZXF1ZXN0NDcxNjU5MjI2 | 6,642 | fix order of input/target of cross_entropy | {
"login": "HHoofs",
"id": 4730933,
"node_id": "MDQ6VXNlcjQ3MzA5MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4730933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HHoofs",
"html_url": "https://github.com/HHoofs",
"followers_url": "https://api.github.com/users/HHoofs/followers",
"following_url": "https://api.github.com/users/HHoofs/following{/other_user}",
"gists_url": "https://api.github.com/users/HHoofs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HHoofs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HHoofs/subscriptions",
"organizations_url": "https://api.github.com/users/HHoofs/orgs",
"repos_url": "https://api.github.com/users/HHoofs/repos",
"events_url": "https://api.github.com/users/HHoofs/events{/privacy}",
"received_events_url": "https://api.github.com/users/HHoofs/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=h1) Report\n> Merging [#6642](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0e42a7bed3de9271ae39c575d7eeb54cf985921?el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6642 +/- ##\n=======================================\n Coverage 79.14% 79.14% \n=======================================\n Files 156 156 \n Lines 28248 28248 \n=======================================\n Hits 22358 22358 \n Misses 5890 5890 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.45% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=footer). Last update [d0e42a7...502f692](https://codecov.io/gh/huggingface/transformers/pull/6642?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"As far as I can see it, this snippet in the Readme is not working correctly."
] | 1,598 | 1,609 | 1,603 | NONE | null | https://pytorch.org/docs/stable/nn.functional.html#cross-entropy | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6642/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6642",
"html_url": "https://github.com/huggingface/transformers/pull/6642",
"diff_url": "https://github.com/huggingface/transformers/pull/6642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6642.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6641/comments | https://api.github.com/repos/huggingface/transformers/issues/6641/events | https://github.com/huggingface/transformers/pull/6641 | 683,580,882 | MDExOlB1bGxSZXF1ZXN0NDcxNjQ5NzM0 | 6,641 | Dataset and DataCollator for BERT Next Sentence Prediction (NSP) task | {
"login": "mojave-pku",
"id": 26648528,
"node_id": "MDQ6VXNlcjI2NjQ4NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mojave-pku",
"html_url": "https://github.com/mojave-pku",
"followers_url": "https://api.github.com/users/mojave-pku/followers",
"following_url": "https://api.github.com/users/mojave-pku/following{/other_user}",
"gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions",
"organizations_url": "https://api.github.com/users/mojave-pku/orgs",
"repos_url": "https://api.github.com/users/mojave-pku/repos",
"events_url": "https://api.github.com/users/mojave-pku/events{/privacy}",
"received_events_url": "https://api.github.com/users/mojave-pku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | Add `DataCollatorForNextSencencePrediction` and `TextDatasetForNextSencencePrediction` to support mlm and next sentence prediction objectives together. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6641/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6641",
"html_url": "https://github.com/huggingface/transformers/pull/6641",
"diff_url": "https://github.com/huggingface/transformers/pull/6641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6641.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6640/comments | https://api.github.com/repos/huggingface/transformers/issues/6640/events | https://github.com/huggingface/transformers/pull/6640 | 683,564,036 | MDExOlB1bGxSZXF1ZXN0NDcxNjM1ODAx | 6,640 | [Docs model summaries] Add pegasus to docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=h1) Report\n> Merging [#6640](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0e42a7bed3de9271ae39c575d7eeb54cf985921?el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6640 +/- ##\n==========================================\n- Coverage 79.14% 79.12% -0.03% \n==========================================\n Files 156 156 \n Lines 28248 28248 \n==========================================\n- Hits 22358 22351 -7 \n- Misses 5890 5897 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-2.01%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=footer). Last update [d0e42a7...773081d](https://codecov.io/gh/huggingface/transformers/pull/6640?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I guess we could in general link the model summaries to the model doc as well or vice-versa for each model? ",
"We also should probably automatically link each model card to its doc\r\n\r\nIn the meantime, I think duplication is fine:)"
] | 1,598 | 1,598 | 1,598 | MEMBER | null | Stitched together a short model summary for Pegasus. Would be great if @sshleifer and @sgugger can take a look :-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6640/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6640",
"html_url": "https://github.com/huggingface/transformers/pull/6640",
"diff_url": "https://github.com/huggingface/transformers/pull/6640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6640.patch",
"merged_at": 1598019731000
} |
https://api.github.com/repos/huggingface/transformers/issues/6639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6639/comments | https://api.github.com/repos/huggingface/transformers/issues/6639/events | https://github.com/huggingface/transformers/issues/6639 | 683,537,915 | MDU6SXNzdWU2ODM1Mzc5MTU= | 6,639 | Run_glue.py, how can I continue previous fine-tuning training? | {
"login": "lzl19971215",
"id": 63151530,
"node_id": "MDQ6VXNlcjYzMTUxNTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/63151530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lzl19971215",
"html_url": "https://github.com/lzl19971215",
"followers_url": "https://api.github.com/users/lzl19971215/followers",
"following_url": "https://api.github.com/users/lzl19971215/following{/other_user}",
"gists_url": "https://api.github.com/users/lzl19971215/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lzl19971215/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lzl19971215/subscriptions",
"organizations_url": "https://api.github.com/users/lzl19971215/orgs",
"repos_url": "https://api.github.com/users/lzl19971215/repos",
"events_url": "https://api.github.com/users/lzl19971215/events{/privacy}",
"received_events_url": "https://api.github.com/users/lzl19971215/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If you're using the Trainer API, it's all automatic by running the same command as before.",
"Thanks a lot!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,604 | 1,604 | NONE | null | My previous fine-tuning training on glue's SST had been shut down, and now I want to continue training from the latest checkpoint. How can I do this?
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6639/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6638/comments | https://api.github.com/repos/huggingface/transformers/issues/6638/events | https://github.com/huggingface/transformers/issues/6638 | 683,484,731 | MDU6SXNzdWU2ODM0ODQ3MzE= | 6,638 | Running squad_convert_examples_to_features causes warnings. | {
"login": "dominichillier",
"id": 35426768,
"node_id": "MDQ6VXNlcjM1NDI2NzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/35426768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dominichillier",
"html_url": "https://github.com/dominichillier",
"followers_url": "https://api.github.com/users/dominichillier/followers",
"following_url": "https://api.github.com/users/dominichillier/following{/other_user}",
"gists_url": "https://api.github.com/users/dominichillier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dominichillier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dominichillier/subscriptions",
"organizations_url": "https://api.github.com/users/dominichillier/orgs",
"repos_url": "https://api.github.com/users/dominichillier/repos",
"events_url": "https://api.github.com/users/dominichillier/events{/privacy}",
"received_events_url": "https://api.github.com/users/dominichillier/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Found this issue looking for the same problem.\r\n\r\nI'm using a \`question_answering\` pipeline (which calls the \`squad_convert_examples_to_features\` under the hood) and it's really annoying when building an information retrieval system to get the warning each time the QnA inference is made... \r\n\r\nI tried to disable it with \`logging.getLogger(\"transformers.tokenization_utils_base\").setLevel(logging.ERROR)\` without success.\r\n\r\nDo you have any clues how to disable it or to change the code so it's not deprecated anymore ? \r\n\r\nThanks in advance",
"The version v4.0.0-rc-1 that will be released today or tomorrow will not have this warning anymore.",
"You can also disable this warning with\r\n\r\n```python\r\nimport warnings\r\nwarnings.simplefilter(\"ignore\")\r\n```"
] | 1,598 | 1,606 | 1,604 | NONE | null | Running the function squad_convert_examples_to_features from data/processors/squad.py causes a warning. It is very annoying to have to disable warnings when running the function many times :)
The warning reports that the `max_len` attribute (in `tokenization_utils_base.py`) is deprecated:
`tokenization_utils_base.py:1320: FutureWarning: The max_len attribute has been deprecated and will be removed in a future version, use model_max_length instead.`
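As a stopgap on my side (a sketch, not something the library documents for this case), a filter targeted at just this message keeps other warnings visible:

```python
import warnings

# Silence only this specific FutureWarning; the `message` argument is a
# regex matched against the start of the warning text quoted above.
warnings.filterwarnings(
    "ignore",
    message="The max_len attribute has been deprecated",
    category=FutureWarning,
)
```

Still, it would be nicer if the deprecated attribute were simply not used internally.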
Code to reproduce.
```python
from transformers import AutoTokenizer
from transformers.data import SquadExample, squad_convert_examples_to_features
tokenizer = AutoTokenizer.from_pretrained('a-ware/roberta-large-squadv2')
example = SquadExample(None,'what is test','test is good',None,None,None)
features = squad_convert_examples_to_features(
examples=[example],
tokenizer=tokenizer,
max_seq_length=512,
doc_stride=128,
max_query_length=64,
is_training=False,
tqdm_enabled=False
)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6638/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6637/comments | https://api.github.com/repos/huggingface/transformers/issues/6637/events | https://github.com/huggingface/transformers/pull/6637 | 683,436,570 | MDExOlB1bGxSZXF1ZXN0NDcxNTI4OTg4 | 6,637 | Add typing.overload for convert_ids_tokens | {
"login": "tamuhey",
"id": 24998666,
"node_id": "MDQ6VXNlcjI0OTk4NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/24998666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tamuhey",
"html_url": "https://github.com/tamuhey",
"followers_url": "https://api.github.com/users/tamuhey/followers",
"following_url": "https://api.github.com/users/tamuhey/following{/other_user}",
"gists_url": "https://api.github.com/users/tamuhey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tamuhey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tamuhey/subscriptions",
"organizations_url": "https://api.github.com/users/tamuhey/orgs",
"repos_url": "https://api.github.com/users/tamuhey/repos",
"events_url": "https://api.github.com/users/tamuhey/events{/privacy}",
"received_events_url": "https://api.github.com/users/tamuhey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=h1) Report\n> Merging [#6637](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bdf7e5de92d76ff6dd7cee317ffa43bed8c5d233?el=desc) will **increase** coverage by `0.80%`.\n> The diff coverage is `71.42%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6637 +/- ##\n==========================================\n+ Coverage 79.47% 80.28% +0.80% \n==========================================\n Files 156 156 \n Lines 28245 28251 +6 \n==========================================\n+ Hits 22448 22681 +233 \n+ Misses 5797 5570 -227 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <71.42%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.17% <0.00%> (-12.53%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.24% <0.00%> (-3.53%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6637/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=footer). 
Last update [bdf7e5d...0d86fc4](https://codecov.io/gh/huggingface/transformers/pull/6637?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM but will let others chime in",
"@LysandreJik What else should I do to merge this PR?"
] | 1,598 | 1,598 | 1,598 | CONTRIBUTOR | null | The annotation of `convert_ids_tokens` is not sufficient. When `ids` is `List[str]` or `str`, the return types are always `List[int]` and `int`, respectively.
It can be solved with [typing.overload](https://docs.python.org/3/library/typing.html#typing.overload) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6637/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6637/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6637",
"html_url": "https://github.com/huggingface/transformers/pull/6637",
"diff_url": "https://github.com/huggingface/transformers/pull/6637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6637.patch",
"merged_at": 1598345828000
} |
https://api.github.com/repos/huggingface/transformers/issues/6636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6636/comments | https://api.github.com/repos/huggingface/transformers/issues/6636/events | https://github.com/huggingface/transformers/issues/6636 | 683,409,175 | MDU6SXNzdWU2ODM0MDkxNzU= | 6,636 | Pre-training a language model on a large dataset | {
"login": "go-inoue",
"id": 20531705,
"node_id": "MDQ6VXNlcjIwNTMxNzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/20531705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/go-inoue",
"html_url": "https://github.com/go-inoue",
"followers_url": "https://api.github.com/users/go-inoue/followers",
"following_url": "https://api.github.com/users/go-inoue/following{/other_user}",
"gists_url": "https://api.github.com/users/go-inoue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/go-inoue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/go-inoue/subscriptions",
"organizations_url": "https://api.github.com/users/go-inoue/orgs",
"repos_url": "https://api.github.com/users/go-inoue/repos",
"events_url": "https://api.github.com/users/go-inoue/events{/privacy}",
"received_events_url": "https://api.github.com/users/go-inoue/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe @LysandreJik can help here? ",
"Same question...",
"Linked to https://github.com/huggingface/transformers/issues/6873",
"@go-inoue For large datasets, it is recommended to use mmap. I like Apache Arrow, which is also used in the huggingface datasets library. Megatron LM also uses mmap, but with different implementations.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,598 | 1,608 | 1,608 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hi,
I'm getting a memory error when I run the example code for language modeling. I'm interested in pre-training a RoBERTa model using 25GB of text data on a virtual machine with a `v3-8` TPU on Google Cloud Platform.
I'm using the following command with `transformers/examples/xla_spawn.py` and `transformers/examples/run_language_modeling.py`.
```
python xla_spawn.py --num_cores 8 \
run_language_modeling.py \
--output_dir=[*****] \
--config_name=[*****] \
--tokenizer_name=[*****] \
--do_train \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 128 \
--learning_rate 6e-4 \
--weight_decay 0.01 \
--adam_epsilon 1e-6 \
--adam_beta1 0.9 \
--adam_beta2 0.98 \
--max_steps 500_000 \
--warmup_steps 24_000 \
--save_total_limit 5 \
--save_steps=100_000 \
--block_size=512 \
--train_data_file=[*****] \
--mlm \
--line_by_line
```
When I run this, I get the following error.
```
08/20/2020 15:21:07 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at [*****]
Traceback (most recent call last):
File "xla_spawn.py", line 72, in <module>
main()
File "xla_spawn.py", line 68, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn
start_method=start_method)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 108, in join
(error_index, name)
Exception: process 0 terminated with signal SIGKILL
```
It looks like the script gets killed while it's loading the training data [here](https://github.com/huggingface/transformers/blob/573bdb0a5d2897ff6c7520ebb38693c7acfbf17e/src/transformers/data/datasets/language_modeling.py#L89-L92).
```python
with open(file_path, encoding="utf-8") as f:
lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]
```
When I run the above block of code separately with `transformers/examples/xla_spawn.py`, I get an error.
```
Traceback (most recent call last):
File "xla_spawn.py", line 72, in <module>
main()
File "xla_spawn.py", line 68, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn
start_method=start_method)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 108, in join
(error_index, name)
Exception: process 0 terminated with signal SIGKILL
```
When I run the above block of code separately using `n1-highmem-16 (16 vCPUs, 104 GB memory)` without TPU, I still get an error.
```
Traceback (most recent call last):
File "debug_load.py", line 7, in <module>
lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
MemoryError
```
Has anyone successfully reproduced the original RoBERTa model or pretrained a language model with a large dataset using Huggingface's transformers (with TPU)? If so, what are the specifications of your machine? Has this code (`transformers/examples/run_language_modeling.py`) been tested on a large dataset?
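For reference, the direction I'm experimenting with to avoid the `f.read().splitlines()` blow-up (a rough sketch, untested at the 25GB scale): index the byte offset of every non-empty line once, then seek and read single lines on demand, so only the offsets live in RAM:

```python
def index_line_offsets(file_path):
    """Record the byte offset of each non-empty line without loading the file."""
    offsets = []
    with open(file_path, "rb") as f:
        while True:
            pos = f.tell()
            line = f.readline()
            if not line:  # EOF
                break
            if line.strip():
                offsets.append(pos)
    return offsets


def read_line_at(file_path, offset):
    """Fetch one line lazily; a Dataset __getitem__ could call this per index."""
    with open(file_path, "rb") as f:
        f.seek(offset)
        return f.readline().decode("utf-8").rstrip("\r\n")
```

A map-style dataset could keep just these offsets and call `read_line_at` plus the tokenizer inside `__getitem__`, instead of materializing every line up front.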
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: https://discuss.huggingface.co/t/pre-training-a-language-model-on-a-large-dataset/790 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6636/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6636/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6635/comments | https://api.github.com/repos/huggingface/transformers/issues/6635/events | https://github.com/huggingface/transformers/issues/6635 | 683,401,448 | MDU6SXNzdWU2ODM0MDE0NDg= | 6,635 | How to convert tokenizer output to train_dataset which is required by Trainer API | {
"login": "questpavan",
"id": 63842917,
"node_id": "MDQ6VXNlcjYzODQyOTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/63842917?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/questpavan",
"html_url": "https://github.com/questpavan",
"followers_url": "https://api.github.com/users/questpavan/followers",
"following_url": "https://api.github.com/users/questpavan/following{/other_user}",
"gists_url": "https://api.github.com/users/questpavan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/questpavan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/questpavan/subscriptions",
"organizations_url": "https://api.github.com/users/questpavan/orgs",
"repos_url": "https://api.github.com/users/questpavan/repos",
"events_url": "https://api.github.com/users/questpavan/events{/privacy}",
"received_events_url": "https://api.github.com/users/questpavan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @questpavan ,\r\nThis awesome [tutorial](https://huggingface.co/transformers/master/custom_datasets.html) walks you through how you can fine-tune transformer models using a custom dataset. It also covers pre-processing, how to create the dataset, etc.",
"Thanks @patil-suraj. It helped a lot. "
] | 1,597 | 1,598 | 1,598 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I tried doing tokenisation using the documentation of huggingface transformers.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
encoded_input = tokenizer(batch_of_sequences)
```
The pretrained tokenizer gives as output a dictionary containing three keys:
```
encoded_input = {
'input_ids': [[],[],[]],
'token_type_ids': [[],[],[]],
'attention_mask': [[],[],[]]
}
```
The Trainer API requires train and eval datasets of type `torch.utils.data.Dataset`.
How can we use this output to create the training dataset required by the Trainer API?
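Here is the kind of wrapper I tried to sketch (hypothetical class name; a DataLoader only needs `__len__` and `__getitem__`, though in practice each value should be converted to `torch.tensor`, as the official custom-datasets tutorial does):

```python
class TokenizedDataset:
    """Map-style dataset over the tokenizer's output dict (sketch only).

    Values are kept as plain Python lists here; when feeding the Trainer,
    each value should be wrapped in torch.tensor inside __getitem__.
    """

    def __init__(self, encodings, labels=None):
        self.encodings = encodings
        self.labels = labels

    def __len__(self):
        return len(self.encodings["input_ids"])

    def __getitem__(self, idx):
        item = {key: values[idx] for key, values in self.encodings.items()}
        if self.labels is not None:
            item["labels"] = self.labels[idx]
        return item
```

Is this the intended pattern, or is there a helper in the library for it?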
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**:
https://stackoverflow.com/questions/63519373/how-to-convert-tokenizer-output-to-train-dataset-which-is-required-by-trainer-ap | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6635/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6634/comments | https://api.github.com/repos/huggingface/transformers/issues/6634/events | https://github.com/huggingface/transformers/pull/6634 | 683,400,181 | MDExOlB1bGxSZXF1ZXN0NDcxNDk4ODM5 | 6,634 | Fix error class instantiation | {
"login": "tamuhey",
"id": 24998666,
"node_id": "MDQ6VXNlcjI0OTk4NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/24998666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tamuhey",
"html_url": "https://github.com/tamuhey",
"followers_url": "https://api.github.com/users/tamuhey/followers",
"following_url": "https://api.github.com/users/tamuhey/following{/other_user}",
"gists_url": "https://api.github.com/users/tamuhey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tamuhey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tamuhey/subscriptions",
"organizations_url": "https://api.github.com/users/tamuhey/orgs",
"repos_url": "https://api.github.com/users/tamuhey/repos",
"events_url": "https://api.github.com/users/tamuhey/events{/privacy}",
"received_events_url": "https://api.github.com/users/tamuhey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=h1) Report\n> Merging [#6634](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e5f452275b3d963bdff5b9c01346bef62032a150?el=desc) will **increase** coverage by `0.19%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6634 +/- ##\n==========================================\n+ Coverage 79.43% 79.62% +0.19% \n==========================================\n Files 156 156 \n Lines 28245 28245 \n==========================================\n+ Hits 22436 22491 +55 \n+ Misses 5809 5754 -55 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `26.88% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | 
`84.21% <0.00%> (-2.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.83% <0.00%> (+6.20%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6634/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=footer). Last update [e5f4522...1431eff](https://codecov.io/gh/huggingface/transformers/pull/6634?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,599 | 1,599 | CONTRIBUTOR | null | The lines I fixed are bugs, causing `TypeError: 'ModuleNotFoundError' object is not callable` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6634/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6634",
"html_url": "https://github.com/huggingface/transformers/pull/6634",
"diff_url": "https://github.com/huggingface/transformers/pull/6634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6634.patch",
"merged_at": 1599046593000
} |
https://api.github.com/repos/huggingface/transformers/issues/6633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6633/comments | https://api.github.com/repos/huggingface/transformers/issues/6633/events | https://github.com/huggingface/transformers/pull/6633 | 683,356,998 | MDExOlB1bGxSZXF1ZXN0NDcxNDYzNDY2 | 6,633 | model card for Spanish electra base | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=h1) Report\n> Merging [#6633](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e5f452275b3d963bdff5b9c01346bef62032a150?el=desc) will **increase** coverage by `0.84%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6633 +/- ##\n==========================================\n+ Coverage 79.43% 80.28% +0.84% \n==========================================\n Files 156 156 \n Lines 28245 28245 \n==========================================\n+ Hits 22436 22676 +240 \n+ Misses 5809 5569 -240 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> 
(+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.16% <0.00%> (+32.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.82% <0.00%> (+34.35%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6633/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=footer). Last update [e5f4522...017da63](https://codecov.io/gh/huggingface/transformers/pull/6633?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6633/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6633",
"html_url": "https://github.com/huggingface/transformers/pull/6633",
"diff_url": "https://github.com/huggingface/transformers/pull/6633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6633.patch",
"merged_at": 1598000670000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6632/comments | https://api.github.com/repos/huggingface/transformers/issues/6632/events | https://github.com/huggingface/transformers/issues/6632 | 683,315,207 | MDU6SXNzdWU2ODMzMTUyMDc= | 6,632 | Error on `PreTrainedTokenizerBase.batch_encode_plus` with `return_overflowing_tokens=True, truncation=True` | {
"login": "tamuhey",
"id": 24998666,
"node_id": "MDQ6VXNlcjI0OTk4NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/24998666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tamuhey",
"html_url": "https://github.com/tamuhey",
"followers_url": "https://api.github.com/users/tamuhey/followers",
"following_url": "https://api.github.com/users/tamuhey/following{/other_user}",
"gists_url": "https://api.github.com/users/tamuhey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tamuhey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tamuhey/subscriptions",
"organizations_url": "https://api.github.com/users/tamuhey/orgs",
"repos_url": "https://api.github.com/users/tamuhey/repos",
"events_url": "https://api.github.com/users/tamuhey/events{/privacy}",
"received_events_url": "https://api.github.com/users/tamuhey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Try `padding=True` ?",
"@patil-suraj Same error occurs",
"@mfuntowicz the issue here is that one of the strings returns an `overflowing_tokens` value (as it overflows) while the other doesn't. The resulting batch contains `overflowing_tokens` with a single list, rather than two (one for each string).\r\n\r\nHere's a proposed fix https://github.com/huggingface/transformers/pull/6677."
] | 1,597 | 1,599 | 1,599 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2 (master branch)
- Platform: macOS-10.14.6-x86_64-i386-64bit
- Python version: 3.8.1
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
tokenizers: @mfuntowicz
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the below code
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.batch_encode_plus(
["foo", "bar " * 1000], return_overflowing_tokens=True, truncation=True, padding=True
)
```
raises the following error:
```
Traceback (most recent call last):
File "foo.py", line 4, in <module>
tokenizer.batch_encode_plus(
File "/Users/user/work/transformers/src/transformers/tokenization_utils_base.py", line 2121, in batch_encode_plus
return self._batch_encode_plus(
File "/Users/user/work/transformers/src/transformers/tokenization_utils.py", line 534, in _batch_encode_plus
batch_outputs = self._batch_prepare_for_model(
File "/Users/user/work/transformers/src/transformers/tokenization_utils.py", line 606, in _batch_prepare_for_model
batch_outputs = self.pad(
File "/Users/user/work/transformers/src/transformers/tokenization_utils_base.py", line 2305, in pad
assert all(
AssertionError: Some items in the output dictionnary have a different batch size than others.
```
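For illustration only (not part of the original report): the comments on this issue trace the error to `overflowing_tokens` having fewer entries than the batch. Below is a minimal, hypothetical sketch of normalizing such an output dict so every key has one entry per input — the helper name and the empty-list padding convention are assumptions, not the library's actual fix:

```python
# Hypothetical sketch, NOT the actual transformers fix: pad the
# "overflowing_tokens" entry with empty lists so every key in the
# batch output has one item per input sequence.
def align_batch_outputs(batch_outputs, batch_size):
    overflow = batch_outputs.get("overflowing_tokens", [])
    if len(overflow) < batch_size:
        # Inputs that did not overflow contribute an empty list.
        overflow = overflow + [[] for _ in range(batch_size - len(overflow))]
        batch_outputs["overflowing_tokens"] = overflow
    return batch_outputs
```

With this normalization, a batch where only one input overflows keeps a two-entry `overflowing_tokens` list instead of a one-entry one, so the batch-size assertion would pass.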
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
No error | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6632/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6631/comments | https://api.github.com/repos/huggingface/transformers/issues/6631/events | https://github.com/huggingface/transformers/issues/6631 | 683,204,998 | MDU6SXNzdWU2ODMyMDQ5OTg= | 6,631 | fine tuning with Chinese data LCQMC val_acc not increase | {
"login": "ares5221",
"id": 13331671,
"node_id": "MDQ6VXNlcjEzMzMxNjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/13331671?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ares5221",
"html_url": "https://github.com/ares5221",
"followers_url": "https://api.github.com/users/ares5221/followers",
"following_url": "https://api.github.com/users/ares5221/following{/other_user}",
"gists_url": "https://api.github.com/users/ares5221/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ares5221/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ares5221/subscriptions",
"organizations_url": "https://api.github.com/users/ares5221/orgs",
"repos_url": "https://api.github.com/users/ares5221/repos",
"events_url": "https://api.github.com/users/ares5221/events{/privacy}",
"received_events_url": "https://api.github.com/users/ares5221/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,604 | 1,604 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hello everyone,
When I fine-tune the Hugging Face bert-base-chinese model on the LCQMC dataset,
the train_acc rises normally (e.g. 0.6 -> 0.8 -> 0.9), but val_acc does not increase; it always stays at the same value, e.g. 0.56.
Do you know what is happening?
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6631/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6630/comments | https://api.github.com/repos/huggingface/transformers/issues/6630/events | https://github.com/huggingface/transformers/issues/6630 | 683,195,044 | MDU6SXNzdWU2ODMxOTUwNDQ= | 6,630 | Tokenize got an unexpected keyword argument 'pad_to_max_length', 'return_attention_mask' | {
"login": "Prithvi103",
"id": 12830451,
"node_id": "MDQ6VXNlcjEyODMwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/12830451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Prithvi103",
"html_url": "https://github.com/Prithvi103",
"followers_url": "https://api.github.com/users/Prithvi103/followers",
"following_url": "https://api.github.com/users/Prithvi103/following{/other_user}",
"gists_url": "https://api.github.com/users/Prithvi103/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Prithvi103/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Prithvi103/subscriptions",
"organizations_url": "https://api.github.com/users/Prithvi103/orgs",
"repos_url": "https://api.github.com/users/Prithvi103/repos",
"events_url": "https://api.github.com/users/Prithvi103/events{/privacy}",
"received_events_url": "https://api.github.com/users/Prithvi103/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Your GPU and CPU environments might have different versions of transformers. Could you try updating to master?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,597 | 1,604 | 1,604 | NONE | null | This works fine when I run on my GPU but gives the above error when I try to run on my CPU. They both have the same environment setup.
Error:
File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 786, in encode_plus
first_ids = get_input_ids(text)
File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 778, in get_input_ids
return self.convert_tokens_to_ids(self.tokenize(text, **kwargs))
File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 649, in tokenize
tokenized_text = split_on_tokens(added_tokens, text)
File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 646, in split_on_tokens
else [token] for token in tokenized_text), [])
File "C:\ProgramData\Anaconda3\lib\site-packages\transformers\tokenization_utils.py", line 646, in <genexpr>
else [token] for token in tokenized_text), [])
TypeError: _tokenize() got an unexpected keyword argument 'pad_to_max_length'
environment:
Python 3.7
transformers 3.0.2
torch 1.5.1
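Not part of the original report — the first reply suggests the two machines may run different library versions. A small, hypothetical helper for checking that (package names are illustrative); run it on both machines and diff the output:

```python
# Hypothetical sketch: report installed package versions so the GPU and
# CPU environments can be compared line by line.
import importlib
import sys

def collect_versions(names=("transformers", "torch")):
    versions = {"python": sys.version.split()[0]}
    for name in names:
        try:
            versions[name] = importlib.import_module(name).__version__
        except Exception:
            # Missing package or one without __version__.
            versions[name] = "not installed"
    return versions

if __name__ == "__main__":
    for pkg, ver in sorted(collect_versions().items()):
        print(pkg, ver)
```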
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6630/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6630/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6629/comments | https://api.github.com/repos/huggingface/transformers/issues/6629/events | https://github.com/huggingface/transformers/pull/6629 | 683,191,027 | MDExOlB1bGxSZXF1ZXN0NDcxMzIwNjk1 | 6,629 | Remove accidental comment | {
"login": "josephrocca",
"id": 1167575,
"node_id": "MDQ6VXNlcjExNjc1NzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1167575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josephrocca",
"html_url": "https://github.com/josephrocca",
"followers_url": "https://api.github.com/users/josephrocca/followers",
"following_url": "https://api.github.com/users/josephrocca/following{/other_user}",
"gists_url": "https://api.github.com/users/josephrocca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josephrocca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josephrocca/subscriptions",
"organizations_url": "https://api.github.com/users/josephrocca/orgs",
"repos_url": "https://api.github.com/users/josephrocca/repos",
"events_url": "https://api.github.com/users/josephrocca/events{/privacy}",
"received_events_url": "https://api.github.com/users/josephrocca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=h1) Report\n> Merging [#6629](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e5f452275b3d963bdff5b9c01346bef62032a150?el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6629 +/- ##\n==========================================\n- Coverage 79.43% 79.18% -0.25% \n==========================================\n Files 156 156 \n Lines 28245 28245 \n==========================================\n- Hits 22436 22366 -70 \n- Misses 5809 5879 +70 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <ø> (ø)` | |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> 
(-4.51%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `89.97% <0.00%> (-3.80%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=footer). Last update [e5f4522...e054289](https://codecov.io/gh/huggingface/transformers/pull/6629?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6629/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6629",
"html_url": "https://github.com/huggingface/transformers/pull/6629",
"diff_url": "https://github.com/huggingface/transformers/pull/6629.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6629.patch",
"merged_at": 1598000853000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6628/comments | https://api.github.com/repos/huggingface/transformers/issues/6628/events | https://github.com/huggingface/transformers/issues/6628 | 683,184,448 | MDU6SXNzdWU2ODMxODQ0NDg= | 6,628 | PreTrainedModel's tie_weights invocation needs to be configurable | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @stas00 - I think I agree with you here! I think this is not limited to Encoder Decoder models only, so I think a better configuration parameter would be `tie_word_embeddings`. `tie_word_embeddings` could be set to `True` by default and then set to `False` for the respective classes (such as Reformer, ....). What do you think? \r\n\r\nOne thing, I'm wondering: For an encoder-decoder model, I think this variable should only apply to the decoder part (and tie its input and output word embeddings) and the encoder embeddings should be set equal to the decoder input embeddings by design in the `modeling_<model_name>.py` file (as it's done in `modeling_t5.py` for example).\r\n\r\nOverall, I agree these `def tie_weights(self): pass` are not great. Thanks for opening this issue. @sgugger, @LysandreJik, @sshleifer, @thomwolf - could you add your opinion here as well? ",
"I think it's okay to control the tying in the init with a new param. For encoder/decoder models, I don't have enough experience with those to know the best default.",
"> Hey @stas00 - I think I agree with you here! I think this is not limited to Encoder Decoder models only, so I think a better configuration parameter would be `tie_word_embeddings`. `tie_word_embeddings` could be set to `True` by default and then set to `False` for the respective classes (such as Reformer, ....). What do you think?\r\n\r\nIf you think this is a clear enough that works for me. And yes, `True` by default, since most current classes use it.\r\n\r\nAnd while at it, perhaps, rename the method `tie_weights` to `tie_word_embeddings` to match the config option? the current `tie_weights` method name is not descriptive enough to tell which weights it's about tie, IMHO.\r\n\r\n> One thing, I'm wondering: For an encoder-decoder model, I think this variable should only apply to the decoder part (and tie its input and output word embeddings) and the encoder embeddings should be set equal to the decoder input embeddings by design in the `modeling_<model_name>.py` file (as it's done in `modeling_t5.py` for example).\r\n\r\nI haven't delved into t5 yet, but the fairseq transformer, unlike most (all?) translators we currently have, has different input and output vocabs, and they are of different sizes, so you can't share the two. If I look at t5 it has shared embeds:\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L659\r\nI guess it's only the reformer that overrides `tie_weights` at the moment (all 5 matches are in `modeling_reformer`), but it does have a single vocab. \r\n\r\nSo we have 2 different issues here:\r\n1. tie in/out weights: yes/no\r\n2. vocabs: shared/not shared\r\n\r\nBut here we are looking at just issue 1.\r\n\r\nI haven't gotten yet to the juicy part yet, just trying to match the pretrained weights to the model and adjusting a copy of BART to the weights, I will be able to give a more intelligent follow up once I step through the whole process, and have a better understanding of what ties where.\r\n",
"This sounds like a good idea. I would advocate for a `tie_word_embeddings` parameter in the configuration as @patrickvonplaten suggested, but I would keep `tie_weights` as the method that does the weight tying rather than renaming that method as well. Just a quick glance at the configuration tells you which weights it's about to tie, and it will be able to handle other cases of weight tying that we might encounter in the future without the need of adding additional new methods.",
"Awesome, I will open a PR. I actually need this feature for the `EncoderDecoderModel` as well.",
"Fixed in https://github.com/huggingface/transformers/pull/6692"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | `PreTrainedModel` defines `tie_weights` method and then in [one place suggests](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L512)
> Takes care of tying weights embeddings afterwards if the model class has a :obj:`tie_weights()` method.
But since the super-class has it defined, it's always there.
So the only way for a sub-class to avoid this "tying" is to override it with:
```
def tie_weights(self): pass
```
if nothing else happens that comment needs to be edited to suggest a noop override in the sub-class.
But it took some hunting to get there, so a better solution is needed.
Most likely, currently, most (all?) models in transformers with encoder/decoder share token embed weights, hence the issue didn't come up. I'm working on porting a fairseq transformer and there the enc/dec token embeds aren't shared.
I propose a solution which adds a new param to `PretrainedConfig`, say: `is_enc_dec_sharing_embeds=True` and let the subclass override those, then add at the start of `tie_weights` in `modeling_utils.py`
```
def tie_weights(self):
if not self.config.is_enc_dec_sharing_embeds:
return
```
that way it's easy to quickly become aware that an action needs to be taken and set the desired behavior from within the subclass.
Thoughts?
If the proposed solution is agreeable, please let me know whether the config param name should be `is_enc_dec_sharing_embeds` or something different, and I will submit a PR.
Thank you.
**edit:**
OK, having had a closer look:
```
grep -r -A 2 'def tie_weights' src/transformers | grep pass | wc -l
```
we have 5 sub-classes that override it with a no-op so only some rely on the default. Bad superclass, no cookies for you.
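To make the shape of the proposal concrete, here is a minimal runnable sketch. The class names mirror the real ones but the bodies are simplified stand-ins, and the flag name follows the placeholder proposed above — it is not an actual transformers attribute:

```python
# Hypothetical, simplified stand-ins for the real classes; the config
# flag name is the placeholder from the proposal above.
class PretrainedConfig:
    def __init__(self, is_enc_dec_sharing_embeds=True):
        self.is_enc_dec_sharing_embeds = is_enc_dec_sharing_embeds

class PreTrainedModel:
    def __init__(self, config):
        self.config = config
        self.weights_tied = False

    def tie_weights(self):
        # Subclasses opt out via the config flag instead of overriding
        # this method with a no-op.
        if not self.config.is_enc_dec_sharing_embeds:
            return
        self.weights_tied = True
```

A fairseq-style port with separate encoder/decoder vocabs would then just set `is_enc_dec_sharing_embeds=False` in its config instead of adding another `def tie_weights(self): pass` override.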
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6628/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6627/comments | https://api.github.com/repos/huggingface/transformers/issues/6627/events | https://github.com/huggingface/transformers/issues/6627 | 683,177,318 | MDU6SXNzdWU2ODMxNzczMTg= | 6,627 | BartTokenizerFast cannot decode PyTorch tensors | {
"login": "setu4993",
"id": 1833708,
"node_id": "MDQ6VXNlcjE4MzM3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1833708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/setu4993",
"html_url": "https://github.com/setu4993",
"followers_url": "https://api.github.com/users/setu4993/followers",
"following_url": "https://api.github.com/users/setu4993/following{/other_user}",
"gists_url": "https://api.github.com/users/setu4993/gists{/gist_id}",
"starred_url": "https://api.github.com/users/setu4993/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/setu4993/subscriptions",
"organizations_url": "https://api.github.com/users/setu4993/orgs",
"repos_url": "https://api.github.com/users/setu4993/repos",
"events_url": "https://api.github.com/users/setu4993/events{/privacy}",
"received_events_url": "https://api.github.com/users/setu4993/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Faster snippet w same error\r\n```python\r\nimport torch\r\ntokenizer = BartTokenizerFast.from_pretrained(\"sshleifer/distilbart-xsum-1-1\")\r\nids = torch.tensor([1,2,3], dtype=torch.long)\r\ntokenizer.decode(ids)\r\n```",
"Any update on this?",
"Hi, neither the slow or fast tokenizers can decode torch tensors. They can decode lists of Python integers, as it is stated in the [docs](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.decode).",
"@LysandreJik : That's not entirely accurate.\r\n\r\nThis works:\r\n\r\n```python\r\nfrom transformers import BartTokenizer\r\ntokenizer = BartTokenizer.from_pretrained(\"facebook/bart-large\")\r\nimport torch\r\nids = torch.tensor([1,2,3], dtype=torch.long)\r\ntokenizer.decode(ids)\r\n```\r\n\r\nBut this doesn't:\r\n\r\n```python\r\nfrom transformers import BartTokenizerFast\r\ntokenizer = BartTokenizerFast.from_pretrained(\"facebook/bart-large\")\r\nimport torch\r\nids = torch.tensor([1,2,3], dtype=torch.long)\r\ntokenizer.decode(ids)\r\n```\r\n\r\nI understand the docs say it should only decode lists, but slow tokenizers do also decode tensors.",
"The above @setu4993 @LysandreJik seems to give same code twice. You have to pass in a list of integers into the decode function. Official doc says decode function can process Torch.tensors, but it does not work well in all cases. Instead, give this a try\r\n\r\n```python\r\nfrom transformers import BartTokenizerFast\r\ntokenizer = BartTokenizerFast.from_pretrained(\"facebook/bart-large\")\r\nimport torch\r\nids = torch.tensor([1,2,3], dtype=torch.long)\r\ntokenizer.decode(ids.tolist())\r\n```\r\n\r\nIf tensor is [[...]], instead of [...], do\r\n\r\n```python\r\nfrom transformers import BartTokenizerFast\r\ntokenizer = BartTokenizerFast.from_pretrained(\"facebook/bart-large\")\r\nimport torch\r\nids = torch.tensor([1,2,3], dtype=torch.long)\r\ntokenizer.decode(*ids.tolist())\r\n```"
] | 1,597 | 1,662 | 1,600 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: MacOS and Linux
- Python version: 3.6 and 3.7
- PyTorch version (GPU?): 1.6.0 (no and yes)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
examples/seq2seq: @sshleifer
(Discovered in #6610.)
## Information
Model I am using (Bert, XLNet ...): Bart.
Any Bart model (reproduced with distilbart-cnn-12-6 and distilbart-xsum-1-1).
## To reproduce
Steps to reproduce the behavior:
```python
In [1]: from transformers import BartTokenizerFast, BartForConditionalGeneration
In [2]: model = BartForConditionalGeneration.from_pretrained("sshleifer/distilbart-xsum-1-1")
In [3]: tokenizer = BartTokenizerFast.from_pretrained("sshleifer/distilbart-xsum-1-1")
In [4]: input_ids = tokenizer("This is a test string.", return_tensors="pt")
In [5]: input_ids
Out[5]: {'input_ids': tensor([[ 0, 713, 16, 10, 1296, 6755, 4, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1]])}
In [6]: summary_ids = model.generate(input_ids['input_ids'], num_beams=4, max_length=5, early_stopping=True)
In [7]: print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-d476aca57720> in <module>
----> 1 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
<ipython-input-7-d476aca57720> in <listcomp>(.0)
----> 1 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
~/.pyenv/versions/finetuning-bart/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces)
437 self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True
438 ) -> str:
--> 439 text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
440
441 if clean_up_tokenization_spaces:
~/.pyenv/versions/finetuning-bart/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py in decode(self, ids, skip_special_tokens)
265 raise ValueError("None input is not valid. Should be a list of integers.")
266
--> 267 return self._tokenizer.decode(ids, skip_special_tokens=skip_special_tokens)
268
269 def decode_batch(
TypeError:
In [8]:
```
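Until the fast tokenizer accepts tensors, one workaround is to coerce the ids to plain Python ints before calling `decode`. A hypothetical helper (pure Python; `to_int_list` is not part of the library, it just works for any iterable of integer-like values, including a 1-D tensor):

```python
from typing import Iterable, List


def to_int_list(ids: Iterable) -> List[int]:
    """Coerce tensor-like / numpy / list ids to a plain List[int] for decode()."""
    return [int(i) for i in ids]


# Hypothetical usage with the repro above:
# tokenizer.decode(to_int_list(summary_ids[0]), skip_special_tokens=True)

print(to_int_list([1, 2, 3]))                # [1, 2, 3]
print(to_int_list(x * 2 for x in range(3)))  # [0, 2, 4]
```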
## Expected behavior
Fast tokenizer should be able to decode without producing an error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6627/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6626/comments | https://api.github.com/repos/huggingface/transformers/issues/6626/events | https://github.com/huggingface/transformers/pull/6626 | 683,160,891 | MDExOlB1bGxSZXF1ZXN0NDcxMjk1NTkx | 6,626 | Specify config filename in HfArgumentParser | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=h1) Report\n> Merging [#6626](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc820476a5c72060f810f825298befd5ec85da4d?el=desc) will **decrease** coverage by `0.11%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6626 +/- ##\n==========================================\n- Coverage 79.98% 79.87% -0.12% \n==========================================\n Files 153 153 \n Lines 28005 28007 +2 \n==========================================\n- Hits 22401 22371 -30 \n- Misses 5604 5636 +32 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `67.74% <0.00%> (-1.49%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (+1.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `87.73% <0.00%> 
(+63.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=footer). Last update [bc82047...cf515d1](https://codecov.io/gh/huggingface/transformers/pull/6626?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks like a nice addition, LGTM",
"Thanks for adding this!"
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | Currently, HfArgumentParser will load arguments from a config file if that config file has the same name as the script being run. So `train.py` would have a corresponding `train.args`. This extends the method to load from any config file that is specified. So `train.py` could use a `bert-large.args` or a `bert-small.args`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6626/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6626/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6626",
"html_url": "https://github.com/huggingface/transformers/pull/6626",
"diff_url": "https://github.com/huggingface/transformers/pull/6626.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6626.patch",
"merged_at": 1598268478000
} |
https://api.github.com/repos/huggingface/transformers/issues/6625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6625/comments | https://api.github.com/repos/huggingface/transformers/issues/6625/events | https://github.com/huggingface/transformers/issues/6625 | 683,126,850 | MDU6SXNzdWU2ODMxMjY4NTA= | 6,625 | **Specifically for Pegasus-arxiv** - PegasusForConditionalGeneration - Error in loading state dictionary | {
"login": "suchig",
"id": 37094536,
"node_id": "MDQ6VXNlcjM3MDk0NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/37094536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suchig",
"html_url": "https://github.com/suchig",
"followers_url": "https://api.github.com/users/suchig/followers",
"following_url": "https://api.github.com/users/suchig/following{/other_user}",
"gists_url": "https://api.github.com/users/suchig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suchig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suchig/subscriptions",
"organizations_url": "https://api.github.com/users/suchig/orgs",
"repos_url": "https://api.github.com/users/suchig/repos",
"events_url": "https://api.github.com/users/suchig/events{/privacy}",
"received_events_url": "https://api.github.com/users/suchig/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"works for me with\r\n```\r\n- Python version: 3.7.4\r\n- PyTorch version (GPU?): 1.5.1 (True)\r\n```\r\nTry\r\n```\r\nmname = \"google/pegasus-arxiv\"\r\nmodel = PegasusForConditionalGeneration.from_pretrained(mname, force_download=True)\r\n```\r\n",
"> works for me with\r\n> \r\n> ```\r\n> - Python version: 3.7.4\r\n> - PyTorch version (GPU?): 1.5.1 (True)\r\n> ```\r\n> \r\n> Try\r\n> \r\n> ```\r\n> mname = \"google/pegasus-arxiv\"\r\n> model = PegasusForConditionalGeneration.from_pretrained(mname, force_download=True)\r\n> ```\r\n\r\nOk. It works now. Even though I did not use force_download, it downloaded a fresh copy of checkpoint. This can be closed."
] | 1,597 | 1,598 | 1,598 | NONE | null | Please note that this specifically happens only for **pegasus-arxiv** (reopening issue #6609). Before you close, please confirm that this works for pegasus-arxiv, because I am getting the error below even after a fresh git fetch of transformers and removing all checkpoints from the cache.
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-1034-azure-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): N/A
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@sshleifer
## Information
Model I am using (Bert, XLNet ...): **google/pegasus-arxiv**
## To reproduce
```python
mname = "google/pegasus-arxiv"
model = PegasusForConditionalGeneration.from_pretrained(mname)
```
throws the following error:
File "/anaconda/envs/py37_default/lib/python3.7/site-packages/transformers/modeling_utils.py", line 894, in from_pretrained
model.__class__.__name__, "\n\t".join(error_msgs)
RuntimeError: Error(s) in loading state_dict for PegasusForConditionalGeneration:
size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]).
size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([1024, 1024]). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6625/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6624/comments | https://api.github.com/repos/huggingface/transformers/issues/6624/events | https://github.com/huggingface/transformers/issues/6624 | 683,019,897 | MDU6SXNzdWU2ODMwMTk4OTc= | 6,624 | Bart: make decoder_input_ids correctly if labels specified. | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes, definitely. Better to handle it in the model. Since lots of recent issues were due to incorrect or not shifting labels",
"But then `prepare_seq2seq_batch` should return `labels` instead of `decoder_input_ids`",
"yep. I'll take a stab."
] | 1,597 | 1,598 | 1,598 | CONTRIBUTOR | null | Should call `shift_tokens_right(labels)`, like T5.
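For reference, a pure-Python sketch of the right-shift idea (this is the T5-style conceptual version — prepend the decoder start token and drop the last label; the real `shift_tokens_right` in transformers operates on tensors and also handles padding and the eos token, so this is not its exact implementation):

```python
from typing import List


def shift_tokens_right_sketch(labels: List[int], decoder_start_token_id: int) -> List[int]:
    """Conceptual right-shift: decoder inputs are the labels shifted one position,
    with the start token prepended and the final label dropped."""
    return [decoder_start_token_id] + labels[:-1]


labels = [713, 16, 10, 2]  # hypothetical label ids ending in an eos id
print(shift_tokens_right_sketch(labels, 0))  # [0, 713, 16, 10]
```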
cc @patil-suraj: does that make sense? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6624/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6623/comments | https://api.github.com/repos/huggingface/transformers/issues/6623/events | https://github.com/huggingface/transformers/pull/6623 | 683,015,306 | MDExOlB1bGxSZXF1ZXN0NDcxMTczNzEz | 6,623 | Fix confusing warnings during TF2 import from PyTorch | {
"login": "jcrocholl",
"id": 118312,
"node_id": "MDQ6VXNlcjExODMxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/118312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcrocholl",
"html_url": "https://github.com/jcrocholl",
"followers_url": "https://api.github.com/users/jcrocholl/followers",
"following_url": "https://api.github.com/users/jcrocholl/following{/other_user}",
"gists_url": "https://api.github.com/users/jcrocholl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcrocholl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcrocholl/subscriptions",
"organizations_url": "https://api.github.com/users/jcrocholl/orgs",
"repos_url": "https://api.github.com/users/jcrocholl/repos",
"events_url": "https://api.github.com/users/jcrocholl/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcrocholl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This fixes the confusing warnings mentioned in #5588",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=h1) Report\n> Merging [#6623](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/86c07e634f3624cdf3f9e4e81ca53b808c4b22c6?el=desc) will **decrease** coverage by `0.84%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6623 +/- ##\n==========================================\n- Coverage 80.03% 79.18% -0.85% \n==========================================\n Files 156 156 \n Lines 28217 28217 \n==========================================\n- Hits 22584 22345 -239 \n- Misses 5633 5872 +239 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <66.66%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `14.42% <0.00%> (-42.89%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.00% <0.00%> (-36.00%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.99% <0.00%> (-24.28%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| 
[src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.44% <0.00%> (-0.14%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6623/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=footer). Last update [86c07e6...15f3e23](https://codecov.io/gh/huggingface/transformers/pull/6623?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hello thanks for the PR! LGTM.\r\n\r\nAsking @LysandreJik to review."
] | 1,597 | 1,599 | 1,599 | CONTRIBUTOR | null | 1. Swapped missing_keys and unexpected_keys.
2. A copy-and-paste error caused these warnings to say "from TF 2.0" when they should say "from PyTorch". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6623/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6623",
"html_url": "https://github.com/huggingface/transformers/pull/6623",
"diff_url": "https://github.com/huggingface/transformers/pull/6623.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6623.patch",
"merged_at": 1599730320000
} |
https://api.github.com/repos/huggingface/transformers/issues/6622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6622/comments | https://api.github.com/repos/huggingface/transformers/issues/6622/events | https://github.com/huggingface/transformers/pull/6622 | 682,952,860 | MDExOlB1bGxSZXF1ZXN0NDcxMTE3OTQ3 | 6,622 | Move threshold up for flaky test with Electra | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=h1) Report\n> Merging [#6622](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c9454507cf57d38fd863c2544300c88583fc60e3?el=desc) will **increase** coverage by `0.68%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6622 +/- ##\n==========================================\n+ Coverage 79.01% 79.69% +0.68% \n==========================================\n Files 156 156 \n Lines 28217 28217 \n==========================================\n+ Hits 22295 22487 +192 \n+ Misses 5922 5730 -192 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `83.42% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (+1.30%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: 
|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6622/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=footer). Last update [9539583...ef1053f](https://codecov.io/gh/huggingface/transformers/pull/6622?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks for taking care of it!"
] | 1,597 | 1,598 | 1,597 | COLLABORATOR | null | As discussed on slack, this should take care of the Electra flaky test with PT/TF equivalence.
@LysandreJik just a ping for when you're back so you're aware. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6622/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6622",
"html_url": "https://github.com/huggingface/transformers/pull/6622",
"diff_url": "https://github.com/huggingface/transformers/pull/6622.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6622.patch",
"merged_at": 1597946381000
} |