Dataset columns (each row below is one GitHub issue/PR from huggingface/transformers, pipe-delimited in this order):

| column | type |
|---|---|
| url | string (62–66 chars) |
| repository_url | string (1 class) |
| labels_url | string (76–80 chars) |
| comments_url | string (71–75 chars) |
| events_url | string (69–73 chars) |
| html_url | string (50–56 chars) |
| id | int64 (377M–2.15B) |
| node_id | string (18–32 chars) |
| number | int64 (1–29.2k) |
| title | string (1–487 chars) |
| user | dict |
| labels | list |
| state | string (2 classes) |
| locked | bool (2 classes) |
| assignee | dict |
| assignees | list |
| comments | sequence |
| created_at | int64 (1.54k–1.71k) |
| updated_at | int64 (1.54k–1.71k) |
| closed_at | int64 (1.54k–1.71k, nullable ⌀) |
| author_association | string (4 classes) |
| active_lock_reason | string (2 classes) |
| body | string (0–234k chars, nullable ⌀) |
| reactions | dict |
| timeline_url | string (71–75 chars) |
| state_reason | string (3 classes) |
| draft | bool (2 classes) |
| pull_request | dict |

---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/3113 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3113/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3113/comments | https://api.github.com/repos/huggingface/transformers/issues/3113/events | https://github.com/huggingface/transformers/pull/3113 | 574,999,270 | MDExOlB1bGxSZXF1ZXN0MzgzMjI1MzY1 | 3,113 | model cards for both aubmindlab/bert-base-arabert models | {
"login": "WissamAntoun",
"id": 44616226,
"node_id": "MDQ6VXNlcjQ0NjE2MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44616226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WissamAntoun",
"html_url": "https://github.com/WissamAntoun",
"followers_url": "https://api.github.com/users/WissamAntoun/followers",
"following_url": "https://api.github.com/users/WissamAntoun/following{/other_user}",
"gists_url": "https://api.github.com/users/WissamAntoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WissamAntoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WissamAntoun/subscriptions",
"organizations_url": "https://api.github.com/users/WissamAntoun/orgs",
"repos_url": "https://api.github.com/users/WissamAntoun/repos",
"events_url": "https://api.github.com/users/WissamAntoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WissamAntoun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=h1) Report\n> Merging [#3113](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b?src=pr&el=desc) will **decrease** coverage by `0.51%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3113 +/- ##\n==========================================\n- Coverage 78.35% 77.84% -0.52% \n==========================================\n Files 98 98 \n Lines 16422 16422 \n==========================================\n- Hits 12868 12784 -84 \n- Misses 3554 3638 +84\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3113/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-27.6%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3113/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.61% <0%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=footer). Last update [e9e6efd...07a4c8b](https://codecov.io/gh/huggingface/transformers/pull/3113?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3113/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3113",
"html_url": "https://github.com/huggingface/transformers/pull/3113",
"diff_url": "https://github.com/huggingface/transformers/pull/3113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3113.patch",
"merged_at": 1583341480000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3112/comments | https://api.github.com/repos/huggingface/transformers/issues/3112/events | https://github.com/huggingface/transformers/pull/3112 | 574,967,370 | MDExOlB1bGxSZXF1ZXN0MzgzMTk3Njcz | 3,112 | Adds failing tests for the fast tokenizers | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [
"I can't assign reviewers, but you asked me in #3058 to ping @LysandreJik and @mfuntowicz when I do this.",
"I added a test for the issue from #3088.",
"Added another test for #3091.",
"@dirkgr thanks for taking the time to include your tests into ours. \r\n\r\nIt will definitively help making sure everything is working as expected on your side 👍 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This can be closed now, right?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,595 | 1,595 | CONTRIBUTOR | null | This ports over some of the tests that started failing on the AllenNLP side when the new fast tokenizers came out.
Note: These tests are failing right now. They will need updates to the fast tokenizers before this can be merged. Maybe it would be better to merge this branch into the branch where the tokenizers are being fixed? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3112/reactions",
"total_count": 12,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3112/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3112",
"html_url": "https://github.com/huggingface/transformers/pull/3112",
"diff_url": "https://github.com/huggingface/transformers/pull/3112.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3112.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3111/comments | https://api.github.com/repos/huggingface/transformers/issues/3111/events | https://github.com/huggingface/transformers/pull/3111 | 574,964,663 | MDExOlB1bGxSZXF1ZXN0MzgzMTk1MzI5 | 3,111 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"You have to add the thumbnail",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=h1) Report\n> Merging [#3111](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b?src=pr&el=desc) will **decrease** coverage by `0.52%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3111 +/- ##\n==========================================\n- Coverage 78.35% 77.83% -0.53% \n==========================================\n Files 98 98 \n Lines 16422 16422 \n==========================================\n- Hits 12868 12782 -86 \n- Misses 3554 3640 +86\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0%> (-27.6%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3111/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.29% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=footer). Last update [e9e6efd...3060b8c](https://codecov.io/gh/huggingface/transformers/pull/3111?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, Julien. Can you add the default thumbnail you usually add to models? (name of the model + Huggingface logo)"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | - Thumbnail is not set! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3111/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3111",
"html_url": "https://github.com/huggingface/transformers/pull/3111",
"diff_url": "https://github.com/huggingface/transformers/pull/3111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3111.patch",
"merged_at": 1583341566000
} |
https://api.github.com/repos/huggingface/transformers/issues/3110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3110/comments | https://api.github.com/repos/huggingface/transformers/issues/3110/events | https://github.com/huggingface/transformers/pull/3110 | 574,919,248 | MDExOlB1bGxSZXF1ZXN0MzgzMTU2NTM5 | 3,110 | BartForSequenceClassification: fix num_labels, add test | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=h1) Report\n> Merging [#3110](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c5af879b6d45c879c987154f66d4ea978925fb2?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3110 +/- ##\n==========================================\n+ Coverage 77.83% 77.84% +<.01% \n==========================================\n Files 98 98 \n Lines 16422 16422 \n==========================================\n+ Hits 12782 12783 +1 \n+ Misses 3640 3639 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.38% <100%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.45% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=footer). Last update [5c5af87...0893268](https://codecov.io/gh/huggingface/transformers/pull/3110?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3110",
"html_url": "https://github.com/huggingface/transformers/pull/3110",
"diff_url": "https://github.com/huggingface/transformers/pull/3110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3110.patch",
"merged_at": 1583268870000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3109/comments | https://api.github.com/repos/huggingface/transformers/issues/3109/events | https://github.com/huggingface/transformers/pull/3109 | 574,905,015 | MDExOlB1bGxSZXF1ZXN0MzgzMTQ0OTc2 | 3,109 | [WIP] Add tests that ensure that copied functions remain in sync | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | CONTRIBUTOR | null | Adding some tests that use ast to check that functions that were originally copied stay in sync.
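One way such a sync test can work (a guess at the approach from this description; the PR's actual implementation may differ) is to compare the `ast` dumps of the two function sources:

```python
import ast
import inspect
import textwrap

def asts_match(fn_a, fn_b) -> bool:
    """Return True if two functions have structurally identical source."""
    def dump(fn):
        # dedent so methods' indented source still parses; ast.dump ignores
        # formatting and comments, so only the structure is compared
        return ast.dump(ast.parse(textwrap.dedent(inspect.getsource(fn))))
    return dump(fn_a) == dump(fn_b)
```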
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3109/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3109",
"html_url": "https://github.com/huggingface/transformers/pull/3109",
"diff_url": "https://github.com/huggingface/transformers/pull/3109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3109.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3108/comments | https://api.github.com/repos/huggingface/transformers/issues/3108/events | https://github.com/huggingface/transformers/issues/3108 | 574,810,265 | MDU6SXNzdWU1NzQ4MTAyNjU= | 3,108 | BART: <mask> token ID is outside vocab bounds | {
"login": "tomhosking",
"id": 9419158,
"node_id": "MDQ6VXNlcjk0MTkxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomhosking",
"html_url": "https://github.com/tomhosking",
"followers_url": "https://api.github.com/users/tomhosking/followers",
"following_url": "https://api.github.com/users/tomhosking/following{/other_user}",
"gists_url": "https://api.github.com/users/tomhosking/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomhosking/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomhosking/subscriptions",
"organizations_url": "https://api.github.com/users/tomhosking/orgs",
"repos_url": "https://api.github.com/users/tomhosking/repos",
"events_url": "https://api.github.com/users/tomhosking/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomhosking/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"if you install from master it seems to work on 'bart-large'. \r\nSeems like it's only an issue on 'bart-large-cnn'\r\n```\r\ntokenizer = BartTokenizer.from_pretrained('bart-large')\r\nmodel = BartForMaskedLM.from_pretrained('bart-large',output_past=True)\r\n\r\nARTICLE_TO_SUMMARIZE = \"My friends are <mask> but they eat too many carbs.\"\r\ninputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')\r\ngenerated_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_return_sequences=4)\r\nprint([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in generated_ids])\r\n```\r\noutput: \r\n```\r\n['My kids are good, but they eat too many carbs. My friends are good.', 'My kids are good, but they eat too many carbs. My friends are good.', 'My kids are good, but they eat too many carbs. My friends are good.', 'My kids are good, but they eat too many carbs. My friends are good.']\r\n```",
"Bart-large-cnn doesn't have a mask_token_id, which is admittedly confusing.\r\n\r\nthis is how I would do mask filling\r\n```python\r\nmodel = BartForMaskedLM.from_pretrained('bart-large')\r\ntokenizer = AutoTokenizer.from_pretrained('bart-large')\r\nARTICLE_TO_SUMMARIZE = \"My friends are <mask> but they eat too many carbs.\"\r\ninputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], return_tensors='pt')\r\ninput_ids = inputs['input_ids']\r\n#generated_ids = model(, attention_mask=inputs['attention_mask'])[0]\r\nlogits = model(input_ids)[0]\r\nmasked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()\r\nprobs = logits[0, masked_index].softmax(dim=0)\r\nvalues, predictions = probs.topk(10)\r\ntokenizer.decode(predictions).split()\r\n# ['good', 'great', 'all', 'really', 'very', 'healthy', 'also', 'not', 'the', 'doing']\r\n```\r\n",
"One liner courtesy of @julien-c\r\n```\r\nfrom transformers import pipeline\r\nnlp = pipeline('fill-mask', 'bart-large')\r\nnlp(\"My friends are <mask> but they eat too many carbs.\")\r\n```",
"Thanks @sshleifer, that will do the trick!\r\n\r\nThe following does work:\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained('bart-large-cnn')\r\ntokenizer.mask_token_id\r\n>>> 50264\r\n```\r\n...which is a bit counterintuitive as it implies that `<mask>` _is_ available. It's also not clear from the docs that `bart-large` can be used successfully with `BartForMaskedLM`. "
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoModelWithLMHead, AutoTokenizer
from transformers.configuration_bart import BartConfig
config = BartConfig(vocab_size=50264, output_past=True)
model = AutoModelWithLMHead.from_pretrained('bart-large-cnn', config=config)
tokenizer = AutoTokenizer.from_pretrained('bart-large-cnn')
ARTICLE_TO_SUMMARIZE = "My friends are <mask> but they eat too many carbs."
inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
generated_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_return_sequences=4)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in generated_ids])
```
## Expected behavior
I'd expect some sort of infilling to occur, but instead I see the error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-bad65359ada6> in <module>
10 inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
11
---> 12 generated_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_return_sequences=4)
13 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in generated_ids])
~/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py in decorate_no_grad(*args, **kwargs)
47 def decorate_no_grad(*args, **kwargs):
48 with self:
---> 49 return func(*args, **kwargs)
50 return decorate_no_grad
51
~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in generate(self, input_ids, attention_mask, max_length, num_beams, repetition_penalty, length_penalty, num_return_sequences, min_len, no_repeat_ngram_size)
1106 input_ids, decoder_cache, decoder_input_ids, attention_mask,
1107 )
-> 1108 outputs = self(**model_inputs)
1109 lprobs = F.log_softmax(outputs[0][:, -1, :], dim=-1)
1110
~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_cached_states, lm_labels, **unused)
932 encoder_outputs=encoder_outputs,
933 decoder_attention_mask=decoder_attention_mask,
--> 934 decoder_cached_states=decoder_cached_states,
935 )
936 lm_logits = self.lm_head.forward(outputs[0])
~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, encoder_outputs, decoder_attention_mask, decoder_cached_states)
837 assert decoder_input_ids is not None
838 if encoder_outputs is None:
--> 839 encoder_outputs = self.encoder.forward(input_ids=input_ids, attention_mask=attention_mask)
840 assert isinstance(encoder_outputs, tuple)
841 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask)
272 During training might not be of length n_layers because of layer dropout.
273 """
--> 274 inputs_embeds = self.embed_tokens(input_ids)
275 embed_pos = self.embed_positions(input_ids)
276 x = inputs_embeds + embed_pos
~/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/.local/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/.local/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: index out of range: Tried to access index 50264 out of table with 50263 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
Looks to me like the `<mask>` token ID (50264) is out of bounds?
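As a quick sanity check (an illustrative sketch, not part of the original report), the mismatch can be confirmed by comparing the tokenizer's mask id against the model's configured vocabulary size:

```python
from transformers import AutoTokenizer, BartForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('bart-large-cnn')
model = BartForMaskedLM.from_pretrained('bart-large-cnn')

# token ids are 0-indexed, so every id must be strictly smaller
# than the embedding table size for the lookup to succeed
print(tokenizer.mask_token_id)   # 50264 per this report
print(model.config.vocab_size)   # must exceed 50264, or <mask> is out of bounds
```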
## Environment info
- `transformers` version: a088d75e510d5641808ccd72f5dca4df36d95b8e
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.3.1 (Y)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3108/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3107/comments | https://api.github.com/repos/huggingface/transformers/issues/3107/events | https://github.com/huggingface/transformers/issues/3107 | 574,780,410 | MDU6SXNzdWU1NzQ3ODA0MTA= | 3,107 | BertTokenizer.save_pretrained() ignores do_lower_case | {
"login": "yoptar",
"id": 5615053,
"node_id": "MDQ6VXNlcjU2MTUwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5615053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoptar",
"html_url": "https://github.com/yoptar",
"followers_url": "https://api.github.com/users/yoptar/followers",
"following_url": "https://api.github.com/users/yoptar/following{/other_user}",
"gists_url": "https://api.github.com/users/yoptar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoptar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoptar/subscriptions",
"organizations_url": "https://api.github.com/users/yoptar/orgs",
"repos_url": "https://api.github.com/users/yoptar/repos",
"events_url": "https://api.github.com/users/yoptar/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoptar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"When I copy/paste your code, I get this in `dumped/model/`:\r\n`vocab.txt`, `tokenizer_config.json` and `special_tokens_map.json`\r\n\r\nThe tokenizer config does save the lowercase, and the output of the code is two `False`.\r\n\r\nBelow the code snippet I had to actually run it:\r\n\r\n```py\r\nimport transformers\r\nimport os\r\n\r\ntokenizer = transformers.BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)\r\nprint(tokenizer.basic_tokenizer.do_lower_case)\r\nos.makedirs(\"~/dumped/model\", exist_ok=True)\r\ntokenizer.save_pretrained('~/dumped/model')\r\ntokenizer = transformers.BertTokenizer.from_pretrained('~/dumped/model')\r\nprint(tokenizer.basic_tokenizer.do_lower_case)\r\n\r\n```",
"Hi @LysandreJik,\r\n\r\nIt does work when you initialize the model `.from_pretrained('bert-base-cased')`, because the pretrained tokenizer already has a configuration that is saved afterwards.\r\n\r\nI was talking about a case when you do not have a config and load only the local `vocab.txt` file:\r\n```python3\r\nimport os\r\n\r\nfrom transformers import BertTokenizer\r\n\r\n# Download the `bert-base-cased` tokenizer and initialized from pretrained\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\n# A configuration is already there, `do_lower_case` is `False`\r\nprint(tokenizer.basic_tokenizer.do_lower_case)\r\nos.makedirs(\"~/dumped/model\", exist_ok=True)\r\n# Save it locally\r\ntokenizer.save_pretrained('~/dumped/model')\r\n\r\n# We can see that the config file has data\r\nwith open(\"~/dumped/model/tokenizer_config.json\") as f:\r\n print(f.read())\r\n\r\n# Initialize as if we only have a local `vocab.txt` which is my case\r\ntokenizer = BertTokenizer('~/dumped/model/vocab.txt', do_lower_case=False)\r\nprint(tokenizer.basic_tokenizer.do_lower_case)\r\ntokenizer.save_pretrained('~/dumped/model')\r\n\r\n# After saving the config is empty\r\nwith open(\"~/dumped/model/tokenizer_config.json\") as f:\r\n print(f.read())\r\n\r\n# And after initializing from pretrained `do_lower_case` is `True`\r\ntokenizer = BertTokenizer.from_pretrained('~/dumped/model')\r\nprint(tokenizer.basic_tokenizer.do_lower_case)\r\n```",
"Hmm, that makes sense. Indeed, that seems problematic. Thanks for opening this issue, we're looking into it!",
"@LysandreJik Looked a bit into it quickly, and here is the deal:\r\n\r\n`.save_pretrained` from `BertTokenizer` is inherited from `PreTrainedTokenizer`, and save config based on the `self.init_kwargs` dict.\r\n\r\nHowever, `do_lower_case` [is not passed to the `super().__init__()`](https://github.com/huggingface/transformers/blob/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b/src/transformers/tokenization_bert.py#L177) in the Bert tokenizer.\r\n\r\nAdding it to the `kwargs` passed to `super()` should do the trick. I can send a PR if that helps!\r\n\r\n\r\n",
"@RaphaelMeudec, it would not help: the config is empty even though `unt_token`, `sep_token`, etc., are passed to `super().__init__()`.\r\nAlso, I've tried it :slightly_smiling_face: ",
"@yoptar Indeed! What I can't explain is that `self.init_kwargs` is initialized [here](https://github.com/huggingface/transformers/blob/e9e6efdc452b74947d40a5a2e8af2fc444c63b5b/src/transformers/tokenization_utils.py#L327) but remains untouched in the rest of the code? \r\n\r\nI've been able to make your code work as expected by initializing `self.init_kwargs = kwargs` (instead of `{}`) and passing `do_lower_case=do_lower_case` in BertTokenizer `super()` resulting in:\r\n\r\n```\r\nFalse\r\n{\"do_lower_case\": false, \"max_len\": 512}\r\nFalse\r\n{\"unk_token\": \"[UNK]\", \"sep_token\": \"[SEP]\", \"pad_token\": \"[PAD]\", \"cls_token\": \"[CLS]\", \"mask_token\": \"[MASK]\", \"do_lower_case\": false}\r\nFalse\r\n```\r\n\r\n@LysandreJik Do you have more insights on how `self.init_kwargs` is modified in the current code?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | CONTRIBUTOR | null | # 🐛 Bug
## Information
When saving a tokenizer for the purpose of sharing, its `init` arguments are not saved to the config.
## To reproduce
Steps to reproduce the behavior:
Initialize a tokenizer with `do_lower_case=False`, save pretrained, initialize from pretrained. The default `do_lower_case=True` will not be overwritten and further tokenization will be incorrect.
```python3
In[1]: import transformers
In[2]: tokenizer = transformers.BertTokenizer('my/model/vocab.txt', do_lower_case=False)
In[3]: tokenizer.basic_tokenizer.do_lower_case
Out[3]: False
In[4]: tokenizer.save_pretrained('dumped/model/')
Out[4]:
('dumped/model/vocab.txt',
'dumped/model/special_tokens_map.json',
'dumped/model/added_tokens.json')
In[5]: tokenizer = transformers.BertTokenizer.from_pretrained('dumped/model/')
In[6]: tokenizer.basic_tokenizer.do_lower_case
Out[6]: True
```
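Until this is fixed, one workaround (a sketch of a manual fix, not an official API) is to write the missing `tokenizer_config.json` yourself, since `from_pretrained` reads its init kwargs back from that file:

```python
import json

# persist the init kwarg that save_pretrained() currently drops
with open('dumped/model/tokenizer_config.json', 'w') as f:
    json.dump({'do_lower_case': False}, f)
```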
## Expected behavior
```python3
In[1]: import transformers
In[2]: tokenizer = transformers.BertTokenizer('my/model/vocab.txt', do_lower_case=False)
In[3]: tokenizer.basic_tokenizer.do_lower_case
Out[3]: False
In[4]: tokenizer.save_pretrained('dumped/model/')
Out[4]:
('dumped/model/vocab.txt',
'dumped/model/special_tokens_map.json',
'dumped/model/added_tokens.json')
In[5]: tokenizer = transformers.BertTokenizer.from_pretrained('dumped/model/')
In[6]: tokenizer.basic_tokenizer.do_lower_case
Out[6]: False
```
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3107/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3106/comments | https://api.github.com/repos/huggingface/transformers/issues/3106/events | https://github.com/huggingface/transformers/pull/3106 | 574,767,209 | MDExOlB1bGxSZXF1ZXN0MzgzMDMyNjE5 | 3,106 | fix beam_search behavior when sampling | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> is do_sample=True tested anywhere?\r\n\r\nYes for the randomly initialized models, with dummy input and also for some Integration tests",
"Good to merge for me"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | This PR aims to fix the beam search behavior when sampling for language generation.
Admittedly, when doing beam_search decoding for language generation one would usually use greedy decoding (do_sample=False), so this case should not come up very often, but its behavior should nevertheless be consistent.
It's hard to see what happens when doing beam_search decoding with do_sample=True, so here is a quick example. Running this code:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
inputs_dict = tokenizer.encode_plus('The dog', return_tensors='pt')
outputs = model.generate(inputs_dict['input_ids'], num_beams=3, max_length=10)
```
and putting the following print statement:
`print("Sorted hyps: {}".format([x[1] for x in sorted_hyps]))`
after line: https://github.com/huggingface/transformers/blob/a088d75e510d5641808ccd72f5dca4df36d95b8e/src/transformers/modeling_utils.py#L1087
would print the following beam hypothesis before this PR:
```
# printed sorted_hyps from line: 1088
# Beam_idx: 1 - tensor([ 464, 3290, 635, 468, 257, 2041, 3895, 326, 481, 1037])
# Beam_idx: 2 - tensor([ 464, 3290, 635, 468, 257, 2041, 3895, 326, 481, 1037])
# Beam_idx: 3 - tensor([ 464, 3290, 635, 468, 257, 2041, 3895, 326, 481, 1037])
=> Result: best beam hypothesis: "The dog, named T.H., was recently"
```
All three hypotheses are identical, down to the very last word, and they always will be. The reason is that we currently sample word_idx only in the interval [0, vocab_size)
(see https://github.com/huggingface/transformers/blob/a088d75e510d5641808ccd72f5dca4df36d95b8e/src/transformers/modeling_utils.py#L975),
which forces every beam_idx computed in this line:
https://github.com/huggingface/transformers/blob/a088d75e510d5641808ccd72f5dca4df36d95b8e/src/transformers/modeling_utils.py#L1023
to always equal 0. This means we only ever consider the first (0-idx) beam and disregard all other beams, no matter what.
After this PR we sample from `[0, num_beams * vocab_size)` (as is done in greedy decoding), so that beam_idx can fall anywhere in `[0, num_beams)` - as it should. The same print statement would then produce:
```
# printed sorted_hyps from line: 1088
# Beam_idx: 1 - tensor([ 464, 3290, 373, 788, 3888, 284, 257, 6716, 2156, 351])
# Beam_idx: 2 - tensor([ 464, 3290, 373, 788, 3888, 284, 257, 6716, 2156, 11])
# Beam_idx: 3 - tensor([ 464, 3290, 373, 788, 3888, 284, 257, 6716, 2156, 1566])
=> Result: best beam hypothesis: "The dog was then moved to a nearby house until"
```
I discussed with @thomwolf and think this is the best solution for beam_search sampling for language generation.
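To make the index bookkeeping concrete, here is a small standalone sketch (illustrative only, with arbitrary shapes; not code from this PR) of how sampling over the flattened `num_beams * vocab_size` scores recovers both a beam index and a word index:

```python
import torch
import torch.nn.functional as F

num_beams, vocab_size = 3, 50257
scores = torch.randn(num_beams, vocab_size)  # next-token scores per beam

# flatten so every (beam, token) pair competes in a single distribution
probs = F.softmax(scores.view(-1), dim=-1)
choice = torch.multinomial(probs, num_samples=1).item()

beam_idx = choice // vocab_size  # now ranges over [0, num_beams)
word_idx = choice % vocab_size   # token id within the chosen beam
```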
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3106/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3106",
"html_url": "https://github.com/huggingface/transformers/pull/3106",
"diff_url": "https://github.com/huggingface/transformers/pull/3106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3106.patch",
"merged_at": 1583332252000
} |
https://api.github.com/repos/huggingface/transformers/issues/3105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3105/comments | https://api.github.com/repos/huggingface/transformers/issues/3105/events | https://github.com/huggingface/transformers/pull/3105 | 574,750,323 | MDExOlB1bGxSZXF1ZXN0MzgzMDE4NTkz | 3,105 | Change back pipeline signatures | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=h1) Report\n> Merging [#3105](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b31f7150190cdf13950607f8ee1efe11b352c909?src=pr&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3105 +/- ##\n==========================================\n+ Coverage 77.6% 77.62% +0.02% \n==========================================\n Files 98 98 \n Lines 16250 16230 -20 \n==========================================\n- Hits 12611 12599 -12 \n+ Misses 3639 3631 -8\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3105/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `70.95% <ø> (+0.52%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=footer). Last update [b31f715...da52b39](https://codecov.io/gh/huggingface/transformers/pull/3105?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | As discussed with @julien-c in the merged #3055. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3105/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3105",
"html_url": "https://github.com/huggingface/transformers/pull/3105",
"diff_url": "https://github.com/huggingface/transformers/pull/3105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3105.patch",
"merged_at": 1583533579000
} |
https://api.github.com/repos/huggingface/transformers/issues/3104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3104/comments | https://api.github.com/repos/huggingface/transformers/issues/3104/events | https://github.com/huggingface/transformers/issues/3104 | 574,720,365 | MDU6SXNzdWU1NzQ3MjAzNjU= | 3,104 | BART -- RuntimeError: expected device cuda:0 but got device cpu | {
"login": "adihaviv",
"id": 3625128,
"node_id": "MDQ6VXNlcjM2MjUxMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3625128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adihaviv",
"html_url": "https://github.com/adihaviv",
"followers_url": "https://api.github.com/users/adihaviv/followers",
"following_url": "https://api.github.com/users/adihaviv/following{/other_user}",
"gists_url": "https://api.github.com/users/adihaviv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adihaviv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adihaviv/subscriptions",
"organizations_url": "https://api.github.com/users/adihaviv/orgs",
"repos_url": "https://api.github.com/users/adihaviv/repos",
"events_url": "https://api.github.com/users/adihaviv/events{/privacy}",
"received_events_url": "https://api.github.com/users/adihaviv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Can you reproduce with the code on master?\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install .\r\n```\r\n(duplicate of https://github.com/huggingface/transformers/issues/3079)\r\n",
"Closing for now, reopen if this is broken on the latest code. Otherwise, it will be in the next pip release. Thanks!"
] | 1,583 | 1,583 | 1,583 | NONE | null | # 🐛 Bug
@sshleifer
I'm using the BART model (bart-large). When I try to use BartForMaskedLM I get the error above. The reason is that _combine_masks (line 146 in modeling_bart) creates a tensor without specifying a device, so by default it ends up on CPU. To reproduce, simply run a BartForMaskedLM model on GPU.
Can you help? Am I missing anything?
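For what it's worth, the usual fix for this class of bug (a sketch only; `_combine_masks`'s real signature may differ) is to create new tensors on the device of an existing input instead of relying on the CPU default:

```python
import torch

def combine_masks_sketch(input_ids: torch.Tensor) -> torch.Tensor:
    # hypothetical illustration: inherit the device from a tensor that
    # is already placed correctly, so CUDA inputs yield CUDA masks
    tgt_len = input_ids.size(1)
    return torch.zeros(tgt_len, tgt_len, device=input_ids.device)
```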
Additional details:
----------------------
- `transformers` version: 2.5.1
- Python version: 3.7.4
- PyTorch version (GPU?): 1.3.0 (with GPU)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Stack trace:
File "/specific/netapp5_2/gamir/adi/git/BERTese/lama/training.py", line 151, in train_and_eval
outputs = model(b_in_tensor, lm_labels=b_label_tensor)
File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 925, in forward
decoder_cached_states=decoder_cached_states,
File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 844, in forward
decoder_cached_states=decoder_cached_states,
File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 499, in forward
need_attn_weights=self.output_attentions,
File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 372, in forward
attn_mask=attention_mask,
File "/specific/netapp5_2/gamir/adi/miniconda3/envs/trans_py37/lib/python3.7/site-packages/transformers/modeling_bart.py", line 629, in forward
attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attn_mask
RuntimeError: expected device cuda:0 but got device cpu
Thanks,
Adi. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3104/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3103/comments | https://api.github.com/repos/huggingface/transformers/issues/3103/events | https://github.com/huggingface/transformers/pull/3103 | 574,697,751 | MDExOlB1bGxSZXF1ZXN0MzgyOTc1Mzg4 | 3,103 | Support keras JSON/HDF5 serialization of main layers | {
"login": "gthb",
"id": 153580,
"node_id": "MDQ6VXNlcjE1MzU4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/153580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gthb",
"html_url": "https://github.com/gthb",
"followers_url": "https://api.github.com/users/gthb/followers",
"following_url": "https://api.github.com/users/gthb/following{/other_user}",
"gists_url": "https://api.github.com/users/gthb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gthb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gthb/subscriptions",
"organizations_url": "https://api.github.com/users/gthb/orgs",
"repos_url": "https://api.github.com/users/gthb/repos",
"events_url": "https://api.github.com/users/gthb/events{/privacy}",
"received_events_url": "https://api.github.com/users/gthb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=h1) Report\n> Merging [#3103](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a088d75e510d5641808ccd72f5dca4df36d95b8e?src=pr&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3103 +/- ##\n==========================================\n+ Coverage 77.82% 77.88% +0.05% \n==========================================\n Files 98 98 \n Lines 16422 16461 +39 \n==========================================\n+ Hits 12781 12821 +40 \n+ Misses 3641 3640 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `85.53% <100%> (+0.08%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `96.16% <100%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `89.03% <100%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.2% <100%> (+0.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.07% <100%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `91.18% <100%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `99.57% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.61% <0%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=footer). Last update [a088d75...4c91a3a](https://codecov.io/gh/huggingface/transformers/pull/3103?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Crud, as the test shows, of course load doesn't work in this first-stab implementation, because the layer classes get instantiated with a `dict` instead of a `PretrainedConfig` instance. So needs more work.",
"With this change, seven of the 11 `TF*MainLayer` classes pass the test (saving, loading and producing same output for same input after the Keras save/load roundtrip). So I'm just not marking the other four `@keras_serializable` as I haven't gotten it working for those. Specifically:\r\n\r\n* TFT5MainLayer does not accept the same model inputs directly, as produced by `self.model_tester.prepare_config_and_inputs_for_common()`, so calling it fails\r\n* `TFXLMMainLayer`, `TFOpenAIGPTMainLayer`, and `TFDistilBertMainLayer` all fail the test (if I add the `@keras_serializable` attribute to them) in the same way: by outputting a tensor of shape `(7, 32)` after the save/load round-trip, which is just the first row of the `(13, 7, 32)` tensor that's output before. I haven't figured out the cause of this.",
"Ok this looks good to me. \r\n\r\nDo you want to make `make style` and `make quality` to pass the code quality checks (see our [contributor guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) if needed), and I guess we can merge it.",
"Fixed quality thing, added @functools.wraps (really shouldn't wrap without that as it causes confusing function metadata), added a docstring to the `keras_serializable` decorator, and changed to use a name specific to this library rather than the general name `config`, to be clearer where `transformers` is used along with other things in a Keras model. @thomwolf OK with all that?",
"Ok this is good to me.\r\nThanks a lot for the awesome work on this.\r\n\r\nMerging @LysandreJik @julien-c ",
"This landed in 2.6.0 but is missing in the release notes there https://github.com/huggingface/transformers/releases/tag/v2.6.0",
"Indeed, that is my fault, I'm sorry for missing it.\r\n\r\nI've added it to the v2.7.0 release notes of this morning: https://github.com/huggingface/transformers/releases/tag/v2.7.0\r\n\r\nThanks again for your contribution @gthb !"
] | 1,583 | 1,585 | 1,583 | CONTRIBUTOR | null | Fixes #3101 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3103/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3103",
"html_url": "https://github.com/huggingface/transformers/pull/3103",
"diff_url": "https://github.com/huggingface/transformers/pull/3103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3103.patch",
"merged_at": 1583495953000
} |
https://api.github.com/repos/huggingface/transformers/issues/3102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3102/comments | https://api.github.com/repos/huggingface/transformers/issues/3102/events | https://github.com/huggingface/transformers/pull/3102 | 574,670,141 | MDExOlB1bGxSZXF1ZXN0MzgyOTUyNTQx | 3,102 | bert-base-arabic model card | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=h1) Report\n> Merging [#3102](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eec5ec807135ae61fa2266f3c7ad947cc207abda?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3102 +/- ##\n==========================================\n- Coverage 77.59% 77.59% -0.01% \n==========================================\n Files 98 98 \n Lines 16250 16250 \n==========================================\n- Hits 12610 12609 -1 \n- Misses 3640 3641 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3102/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.86% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=footer). Last update [eec5ec8...2538e35](https://codecov.io/gh/huggingface/transformers/pull/3102?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for sharing this is awesome!"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Please add model card file to the newly added asafaya/bert-base-arabic model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3102/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3102",
"html_url": "https://github.com/huggingface/transformers/pull/3102",
"diff_url": "https://github.com/huggingface/transformers/pull/3102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3102.patch",
"merged_at": 1583245769000
} |
https://api.github.com/repos/huggingface/transformers/issues/3101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3101/comments | https://api.github.com/repos/huggingface/transformers/issues/3101/events | https://github.com/huggingface/transformers/issues/3101 | 574,669,114 | MDU6SXNzdWU1NzQ2NjkxMTQ= | 3,101 | Keras layers should override get_config to be JSON-serializable | {
"login": "gthb",
"id": 153580,
"node_id": "MDQ6VXNlcjE1MzU4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/153580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gthb",
"html_url": "https://github.com/gthb",
"followers_url": "https://api.github.com/users/gthb/followers",
"following_url": "https://api.github.com/users/gthb/following{/other_user}",
"gists_url": "https://api.github.com/users/gthb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gthb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gthb/subscriptions",
"organizations_url": "https://api.github.com/users/gthb/orgs",
"repos_url": "https://api.github.com/users/gthb/repos",
"events_url": "https://api.github.com/users/gthb/events{/privacy}",
"received_events_url": "https://api.github.com/users/gthb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for investigating this and submitting a fix, it's awesome.\r\n\r\nI respond in the PR it-self"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | # 🚀 Feature request
Support JSON serialization of Keras layers by overriding `get_config`, so that they can be sent to TensorBoard to display a conceptual graph of the model.
## Motivation
### 1. Without this, can't write model graph to TensorBoard
From https://github.com/tensorflow/tensorflow/blob/d1786ea19eb41922c0d433d71ca13b123b69b4be/tensorflow/python/ops/summary_ops_v2.py#L1004-L1009
> Writing the Keras model configuration allows the TensorBoard graph plugin to render a conceptual graph, as opposed to graph of ops. In case the model fails to serialize as JSON, it ignores and returns False.
### 2. Without this, can't save model with Keras `model.save`
The base class `get_config` method actually refuses to run if the subclass initializer has positional arguments; from `tensorflow/python/keras/engine/base_layer.py`:
```python
@base_layer_utils.default
def get_config(self):
[...]
if len(extra_args) > 1 and hasattr(self.get_config, '_is_default'):
raise NotImplementedError('Layer %s has arguments in `__init__` and '
'therefore must override `get_config`.' %
self.__class__.__name__)
```
and all the `TF*MainLayer` classes have a `config` positional argument, so this says they “must” all override `get_config`.
And sure enough, if I make a simple Keras model using a TFBertMainLayer inside:
```python
import tensorflow as tf
from transformers import TFBertMainLayer, BertConfig
def create_model(max_sequence_len: int) -> tf.keras.Model:
cfg = BertConfig.from_pretrained('bert-base-cased')
bert = TFBertMainLayer(cfg)
input_ids = tf.keras.Input(shape=(max_sequence_len,), dtype=tf.int32, name='wp_input_token_ids')
input_mask = tf.keras.Input(shape=(max_sequence_len,), dtype=tf.bool, name='wp_input_mask')
pooled = bert(input_ids, input_mask)[1]
out = tf.keras.layers.Dense(units=3, activation='softmax',
kernel_initializer=tf.keras.initializers.glorot_uniform(),
use_bias=False,
name='classification'
)(pooled)
return tf.keras.Model(inputs=[input_ids, input_mask], outputs=[out])
model = create_model(40)
model.save(filepath="tf_model.h5")
```
... then `model.save` fails:
```
Traceback (most recent call last):
File "trysave.py", line 32, in <module>
model.save(filepath="tf_model.h5")
File ".../tensorflow_core/python/keras/engine/network.py", line 1008, in save
signatures, options)
File ".../tensorflow_core/python/keras/saving/save.py", line 112, in save_model
model, filepath, overwrite, include_optimizer)
File ".../tensorflow_core/python/keras/saving/hdf5_format.py", line 99, in save_model_to_hdf5
model_metadata = saving_utils.model_metadata(model, include_optimizer)
File ".../tensorflow_core/python/keras/saving/saving_utils.py", line 172, in model_metadata
raise e
File ".../tensorflow_core/python/keras/saving/saving_utils.py", line 169, in model_metadata
model_config['config'] = model.get_config()
File ".../tensorflow_core/python/keras/engine/network.py", line 918, in get_config
return copy.deepcopy(get_network_config(self))
File ".../tensorflow_core/python/keras/engine/network.py", line 1993, in get_network_config
layer_config = serialize_layer_fn(layer)
File ".../tensorflow_core/python/keras/utils/generic_utils.py", line 198, in serialize_keras_object
config = instance.get_config()
File ".../tensorflow_core/python/keras/engine/base_layer.py", line 499, in get_config
raise NotImplementedError('Layers with arguments in `__init__` must '
NotImplementedError: Layers with arguments in `__init__` must override `get_config`.
```
## Your contribution
I got this working for the one layer I was experimenting with, like this:
```patch
diff --git a/src/transformers/modeling_tf_bert.py b/src/transformers/modeling_tf_bert.py
index 19046235..74ad621c 100644
--- a/src/transformers/modeling_tf_bert.py
+++ b/src/transformers/modeling_tf_bert.py
@@ -21,6 +21,7 @@ import logging
import numpy as np
import tensorflow as tf
+from . import PretrainedConfig
from .configuration_bert import BertConfig
from .file_utils import MULTIPLE_CHOICE_DUMMY_INPUTS, add_start_docstrings, add_start_docstrings_to_callable
from .modeling_tf_utils import TFPreTrainedModel, get_initializer, shape_list
@@ -474,12 +475,20 @@ class TFBertNSPHead(tf.keras.layers.Layer):
class TFBertMainLayer(tf.keras.layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
+ if isinstance(config, dict):
+ config = PretrainedConfig.from_dict(config)
+ self.config = config
self.num_hidden_layers = config.num_hidden_layers
self.embeddings = TFBertEmbeddings(config, name="embeddings")
self.encoder = TFBertEncoder(config, name="encoder")
self.pooler = TFBertPooler(config, name="pooler")
+ def get_config(self):
+ cfg = super().get_config()
+ cfg['config'] = self.config.to_dict()
+ return cfg
+
def get_input_embeddings(self):
return self.embeddings
```
and I didn't need to modify any other layer classes, just the main layer.
So maybe it's enough to do this for all the `MainLayer` classes:
```
$ rg 'class .*MainLayer\(tf.keras.layers.Layer\)' src | cat
src/transformers/modeling_tf_openai.py:class TFOpenAIGPTMainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_transfo_xl.py:class TFTransfoXLMainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_xlm.py:class TFXLMMainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_xlnet.py:class TFXLNetMainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_distilbert.py:class TFDistilBertMainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_bert.py:class TFBertMainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_albert.py:class TFAlbertMainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_ctrl.py:class TFCTRLMainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_t5.py:class TFT5MainLayer(tf.keras.layers.Layer):
src/transformers/modeling_tf_gpt2.py:class TFGPT2MainLayer(tf.keras.layers.Layer):
```
... or, neater, to extract a single `TFMainLayer(tf.keras.layers.Layer)` superclass for all of them, to do this in one place. A rough sketch of that idea:
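```python
import tensorflow as tf

from transformers import PretrainedConfig


class TFMainLayer(tf.keras.layers.Layer):
    """Hypothetical shared superclass; it only factors out the per-class patch above."""

    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        # Keras deserialization hands the config back as a plain dict
        if isinstance(config, dict):
            config = PretrainedConfig.from_dict(config)
        self.config = config

    def get_config(self):
        cfg = super().get_config()
        cfg["config"] = self.config.to_dict()
        return cfg
```
 | {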
"url": "https://api.github.com/repos/huggingface/transformers/issues/3101/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3101/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3100/comments | https://api.github.com/repos/huggingface/transformers/issues/3100/events | https://github.com/huggingface/transformers/pull/3100 | 574,654,339 | MDExOlB1bGxSZXF1ZXN0MzgyOTM5Njg3 | 3,100 | Fix QA models binding for Flaubert, XLNet and XLM. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=h1) Report\n> Merging [#3100](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eec5ec807135ae61fa2266f3c7ad947cc207abda?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3100 +/- ##\n==========================================\n- Coverage 77.59% 77.59% -0.01% \n==========================================\n Files 98 98 \n Lines 16250 16250 \n==========================================\n- Hits 12610 12609 -1 \n- Misses 3640 3641 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3100/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.86% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=footer). Last update [eec5ec8...44215a4](https://codecov.io/gh/huggingface/transformers/pull/3100?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | Signed-off-by: Morgan Funtowicz <[email protected]>
Fix #2893 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3100/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3100",
"html_url": "https://github.com/huggingface/transformers/pull/3100",
"diff_url": "https://github.com/huggingface/transformers/pull/3100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3100.patch",
"merged_at": 1583517870000
} |
https://api.github.com/repos/huggingface/transformers/issues/3099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3099/comments | https://api.github.com/repos/huggingface/transformers/issues/3099/events | https://github.com/huggingface/transformers/pull/3099 | 574,625,849 | MDExOlB1bGxSZXF1ZXN0MzgyOTE2MTgz | 3,099 | Don't crash if fine-tuned model doesn't end with a number | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=h1) Report\n> Merging [#3099](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eec5ec807135ae61fa2266f3c7ad947cc207abda?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3099 +/- ##\n==========================================\n- Coverage 77.59% 77.59% -0.01% \n==========================================\n Files 98 98 \n Lines 16250 16250 \n==========================================\n- Hits 12610 12609 -1 \n- Misses 3640 3641 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.86% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=footer). Last update [eec5ec8...de4dc15](https://codecov.io/gh/huggingface/transformers/pull/3099?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | That's the same fix applied in https://github.com/huggingface/transformers/issues/2258, but for the GLUE example; a sketch of the guarded lines is below.
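Roughly like this (sketch of the guarded lines in examples/run_glue.py; the exact surrounding code may differ):

```python
# only parse a trailing step number off checkpoint-* directory names
global_step = checkpoint.split("-")[-1] if len(checkpoints) > 1 else ""
prefix = checkpoint.split("/")[-1] if checkpoint.find("checkpoint") != -1 else ""
```
 | {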
"url": "https://api.github.com/repos/huggingface/transformers/issues/3099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3099/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3099",
"html_url": "https://github.com/huggingface/transformers/pull/3099",
"diff_url": "https://github.com/huggingface/transformers/pull/3099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3099.patch",
"merged_at": 1583243988000
} |
https://api.github.com/repos/huggingface/transformers/issues/3098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3098/comments | https://api.github.com/repos/huggingface/transformers/issues/3098/events | https://github.com/huggingface/transformers/issues/3098 | 574,524,709 | MDU6SXNzdWU1NzQ1MjQ3MDk= | 3,098 | why BertModel' object has no attribute 'bias' | {
"login": "bboyxu5928",
"id": 36126405,
"node_id": "MDQ6VXNlcjM2MTI2NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/36126405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bboyxu5928",
"html_url": "https://github.com/bboyxu5928",
"followers_url": "https://api.github.com/users/bboyxu5928/followers",
"following_url": "https://api.github.com/users/bboyxu5928/following{/other_user}",
"gists_url": "https://api.github.com/users/bboyxu5928/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bboyxu5928/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bboyxu5928/subscriptions",
"organizations_url": "https://api.github.com/users/bboyxu5928/orgs",
"repos_url": "https://api.github.com/users/bboyxu5928/repos",
"events_url": "https://api.github.com/users/bboyxu5928/events{/privacy}",
"received_events_url": "https://api.github.com/users/bboyxu5928/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you try with `BertForPreTraining` instead of `BertModel` ?",
"> Could you try with `BertForPreTraining` instead of `BertModel` ?\r\n\r\n@LysandreJik the error not occurs,thank you very much, but in the previous,when I use \r\n\r\n```py\r\nmodel = BertForNextSentencePrediction.from_pretrained('bert-base-cased')\r\n```\r\nI can get seq_relationship_logits like \r\n\r\n```\r\ntensor([[ 4.6285, -4.9732]], grad_fn=<AddmmBackward>)\r\n```\r\n\r\nafter softmax,I can get probs between seq_A and seq_B,whether seq_B is a continuation of seq_A\r\nlike \r\n```\r\ntensor([[9.9993e-01, 6.7607e-05]], grad_fn=<SoftmaxBackward>)\r\n```\r\nbut when I use BertForPreTraining ,\r\n\r\n```py\r\nmodel = BertForPreTraining.from_pretrained(.....)\r\n```\r\nI get seq_relationship_logits like :\r\n\r\n```\r\ntensor([[[ -7.3790, -7.2666, -7.4841, ..., -6.1682, -5.8256, -6.2910],\r\n [ -7.9165, -8.1490, -7.9572, ..., -6.5870, -6.3568, -6.8383],\r\n [-14.1834, -13.2084, -13.7673, ..., -9.0377, -9.7575, -9.4470],\r\n ...,\r\n [-12.9208, -12.8706, -13.0834, ..., -10.2187, -8.6429, -11.6360],\r\n [-13.2808, -13.3348, -13.2491, ..., -10.7655, -8.8089, -11.0420],\r\n [-15.5444, -15.2074, -15.8938, ..., -11.9712, -12.5488, -14.7295]]],\r\n grad_fn=<AddBackward0>)\r\n```\r\n\r\nhow can I get probs between seq_A and seq_B ? thanks.\r\n\r\n\r\n",
"Hi, `BertForNextSentencePrediction` is a model that can only perform the NSP objective. `BertForPreTraining` is a model that can perform both NSP and traditional MLM. \r\n\r\nIt, therefore, outputs tensors for both those tasks, the NSP result is the second value in the output tuple of `BertForPreTraining`:\r\n\r\n```py\r\nfrom transformers import BertForNextSentencePrediction, BertForPreTraining, BertTokenizer\r\n\r\nnsp = BertForNextSentencePrediction.from_pretrained(\"bert-base-cased\")\r\nbpt = BertForPreTraining.from_pretrained(\"bert-base-cased\")\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n\r\ninputs = tokenizer.encode_plus(\"I like cats.\", \"I like dogs too.\", return_tensors=\"pt\")\r\nnsp_output = nsp(**inputs)\r\nbpt_outupt = bpt(**inputs)[1]\r\n\r\nprint(nsp_output)\r\nprint(bpt_outupt)\r\n```\r\n\r\nreturns\r\n\r\n```\r\n(tensor([[ 5.1543, -5.8147]], grad_fn=<AddmmBackward>),)\r\ntensor([[ 5.1543, -5.8147]], grad_fn=<AddmmBackward>)\r\n```\r\n",
"> Hi, `BertForNextSentencePrediction` is a model that can only perform the NSP objective. `BertForPreTraining` is a model that can perform both NSP and traditional MLM.\r\n> \r\n> It, therefore, outputs tensors for both those tasks, the NSP result is the second value in the output tuple of `BertForPreTraining`:\r\n> \r\n> ```python\r\n> from transformers import BertForNextSentencePrediction, BertForPreTraining, BertTokenizer\r\n> \r\n> nsp = BertForNextSentencePrediction.from_pretrained(\"bert-base-cased\")\r\n> bpt = BertForPreTraining.from_pretrained(\"bert-base-cased\")\r\n> tokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n> \r\n> inputs = tokenizer.encode_plus(\"I like cats.\", \"I like dogs too.\", return_tensors=\"pt\")\r\n> nsp_output = nsp(**inputs)\r\n> bpt_outupt = bpt(**inputs)[1]\r\n> \r\n> print(nsp_output)\r\n> print(bpt_outupt)\r\n> ```\r\n> \r\n> returns\r\n> \r\n> ```\r\n> (tensor([[ 5.1543, -5.8147]], grad_fn=<AddmmBackward>),)\r\n> tensor([[ 5.1543, -5.8147]], grad_fn=<AddmmBackward>)\r\n> ```\r\n\r\nOK, I got it ,thanks a lot "
] | 1,583 | 1,587 | 1,583 | NONE | null | ```py
from torch.nn.functional import softmax
from transformers import BertForNextSentencePrediction, BertTokenizer,BertConfig,BertModel
seq_A = 'I like cookies !'
seq_B = 'Do you like them ?'
'''
model = BertForNextSentencePrediction.from_pretrained('bert-base-cased')
'''
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
config = BertConfig.from_json_file('E:\\work\\pycharm\\transformers-master\\tf_model\\bert_config.json')
model = BertModel.from_pretrained('E:\\work\\pycharm\\transformers-master\\tf_model\\model.ckpt.index', from_tf=True, config=config)
encoded = tokenizer.encode_plus(seq_A, text_pair=seq_B, return_tensors='pt')
print(encoded)
seq_relationship_logits = model(**encoded)[0]
probs = softmax(seq_relationship_logits, dim=1)
print(seq_relationship_logits)
print(probs)
```

The above demo can be used for next sentence prediction (provided by BramVanroy, thank you again!).
Now I want to use my own BERT pre-trained model, or Google's model, in this task, so I followed the example in modeling_utils.py line 366:

```py
# Loading from a TF checkpoint file instead of a PyTorch model (slower)
config = BertConfig.from_json_file('./tf_model/my_tf_model_config.json')
model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config)
```
bert_config.json:
```
{
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"vocab_size": 21128
}
```
But when I run this demo, some errors happen:
```
AttributeError: 'BertModel' object has no attribute 'bias'
Traceback (most recent call last):
File "E:/work/pycharm/transformers-master/src/transformers/test.py", line 18, in <module>
model = BertModel.from_pretrained('E:\\work\\pycharm\\transformers-master\\tf_model\\model.ckpt.index', from_tf=True, config=config)
File "E:\work\pycharm\transformers-master\src\transformers\modeling_utils.py", line 485, in from_pretrained
model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
File "E:\work\pycharm\transformers-master\src\transformers\modeling_bert.py", line 106, in load_tf_weights_in_bert
pointer = getattr(pointer, "bias")
File "E:\Users\Administrator\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'BertModel' object has no attribute 'bias'
```
I spent a long time trying to fix it, but failed. Can you help me solve it? Thanks a lot!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3098/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3097/comments | https://api.github.com/repos/huggingface/transformers/issues/3097/events | https://github.com/huggingface/transformers/issues/3097 | 574,464,756 | MDU6SXNzdWU1NzQ0NjQ3NTY= | 3,097 | Some question about training BERT after change the Vocab.txt size | {
"login": "WenTingTseng",
"id": 32416416,
"node_id": "MDQ6VXNlcjMyNDE2NDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/32416416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenTingTseng",
"html_url": "https://github.com/WenTingTseng",
"followers_url": "https://api.github.com/users/WenTingTseng/followers",
"following_url": "https://api.github.com/users/WenTingTseng/following{/other_user}",
"gists_url": "https://api.github.com/users/WenTingTseng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenTingTseng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenTingTseng/subscriptions",
"organizations_url": "https://api.github.com/users/WenTingTseng/orgs",
"repos_url": "https://api.github.com/users/WenTingTseng/repos",
"events_url": "https://api.github.com/users/WenTingTseng/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenTingTseng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can't really change the vocabulary without re-training the whole model. Is there some overlap between the BERT vocabulary and your custom vocabulary? If so, you can add the 20k+ tokens using `add_tokens` (which will probably slow down things, as that's a lot of added tokens).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"you can now do this. Keras (tf-nightly version) has added a new util `keras.utils.warmstart_embedding_matrix`. Using this you can continuously train your model with changing vocabulary. https://www.tensorflow.org/api_docs/python/tf/keras/utils/warmstart_embedding_matrix\r\n"
] | 1,583 | 1,664 | 1,589 | NONE | null | The original bert-base-chinese-vocab.txt size is 21128.
My own vocab.txt has size 44900.
When I try to fine-tune a BERT model using BertForMaskedLM, I get a size-mismatch problem.
I tried to change the vocab size of `self.word_embeddings` in BertEmbeddings to 44900, like this:
`self.word_embeddings = nn.Embedding(44900, config.hidden_size, padding_idx=0) `
But it still has a problem, like this:
```
RuntimeError: Error(s) in loading state_dict for BertForMaskedLM:
size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([21128, 768]) from checkpoint, the shape in current model is torch.Size([44900, 768]).
size mismatch for cls.predictions.bias: copying a param with shape torch.Size([21128]) from checkpoint, the shape in current model is torch.Size([44900]).
size mismatch for cls.predictions.decoder.weight: copying a param with shape torch.Size([21128, 768]) from checkpoint, the shape in current model is torch.Size([44900, 768]).
```
I'm not sure how to fix it.
I have also been wondering whether I need to change the pre-trained BERT config.json file, i.e. its vocab_size, from 21128 to 44900.
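Or would keeping the original 21128-entry config and growing the embeddings after loading be the right direction? A rough sketch of what I mean (the added tokens below are just placeholders):

```
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertForMaskedLM.from_pretrained('bert-base-chinese')

tokenizer.add_tokens(['新词1', '新词2'])  # my extra vocabulary entries
model.resize_token_embeddings(len(tokenizer))  # also resizes the tied LM head
```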
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3097/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3096/comments | https://api.github.com/repos/huggingface/transformers/issues/3096/events | https://github.com/huggingface/transformers/issues/3096 | 574,417,588 | MDU6SXNzdWU1NzQ0MTc1ODg= | 3,096 | BART BartForSequenceClassification example | {
"login": "easonnie",
"id": 11016329,
"node_id": "MDQ6VXNlcjExMDE2MzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/easonnie",
"html_url": "https://github.com/easonnie",
"followers_url": "https://api.github.com/users/easonnie/followers",
"following_url": "https://api.github.com/users/easonnie/following{/other_user}",
"gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easonnie/subscriptions",
"organizations_url": "https://api.github.com/users/easonnie/orgs",
"repos_url": "https://api.github.com/users/easonnie/repos",
"events_url": "https://api.github.com/users/easonnie/events{/privacy}",
"received_events_url": "https://api.github.com/users/easonnie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting this!"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | # 🐛 Bug
I'm trying to run the code from the documentation.
https://huggingface.co/transformers/model_doc/bart.html#bartforsequenceclassification
```
from transformers import BartTokenizer, BartForSequenceClassification
import torch
tokenizer = BartTokenizer.from_pretrained('bart-large')
model = BartForSequenceClassification.from_pretrained('bart-large')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute",
add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```
output:
```
Traceback (most recent call last):
File "/ssd-playpen/yixin1/projects/cleartheuncertainty/utest/utest_transformer/utest_bart.py", line 11, in <module>
outputs = model(input_ids, labels=labels)
File "/ssd-playpen/yixin1/projects/cleartheuncertainty/ENV/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/ssd-playpen/yixin1/projects/cleartheuncertainty/ENV/lib/python3.7/site-packages/transformers/modeling_bart.py", line 1327, in forward
loss = F.cross_entropy(logits.view(-1, self.num_labels), labels.view(-1))
File "/ssd-playpen/yixin1/projects/cleartheuncertainty/ENV/lib/python3.7/site-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'BartForSequenceClassification' object has no attribute 'num_labels'
```
I guess this should be a quick fix or so.
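For what it's worth, the traceback suggests `__init__` simply never sets `self.num_labels`, so the fix might be a one-liner along these lines (a sketch, not the actual patch):

```
# in BartForSequenceClassification.__init__ (surrounding code unchanged)
self.num_labels = config.num_labels  # the attribute the loss computation expects
```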
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3096/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3095/comments | https://api.github.com/repos/huggingface/transformers/issues/3095/events | https://github.com/huggingface/transformers/issues/3095 | 574,402,707 | MDU6SXNzdWU1NzQ0MDI3MDc= | 3,095 | Getting different topk results when using past + attention mask for more than 1 sentence | {
"login": "Damiox",
"id": 599804,
"node_id": "MDQ6VXNlcjU5OTgwNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/599804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damiox",
"html_url": "https://github.com/Damiox",
"followers_url": "https://api.github.com/users/Damiox/followers",
"following_url": "https://api.github.com/users/Damiox/following{/other_user}",
"gists_url": "https://api.github.com/users/Damiox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Damiox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Damiox/subscriptions",
"organizations_url": "https://api.github.com/users/Damiox/orgs",
"repos_url": "https://api.github.com/users/Damiox/repos",
"events_url": "https://api.github.com/users/Damiox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Damiox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Damiox, \r\n\r\nFirst, you have to be careful if you do LM inference with batches of different lengths. Besides using an attention mask, you also have to change the position_ids when using GPT2 (see #3021 ). Considering batch language generation with GPT2 I highly recommend making sure that the input_ids batch is does not need padding (which is actually the case in your example above). If you do have to pad certain batch_idxs then make sure you change the input_position_ids as well and use an attention_mask, but even then it's not promised that GPT2 will generate good results. \r\n\r\nSecond, if your prefix input_ids are padded, let's say:\r\n`[ [ I like cats ], [ I like ] ] -> encode_plus -> [ [0, 1, 2], [0, 1, <PAD> ] ]` and you then append something `-> [ [ that are ], [ dogs] ]`, you essentially put in `[ [0, 1, 2, 4, 5], [0, 1, <PAD>, 3, <PAD>] ]` into the model, which is different from what would happen if you `encode_plus` everything directly (`[ [ 0, 1, 2, 4, 5], [0, 1, 3, <PAD>, <PAD> ] ]`). That's why you a) have to make sure `input_postion_ids` are correct and also b) that you sample from the last not padded token for the next word, which is not that simple (again see #3021 ). \r\nAgain, I recommend to only work with `input_ids` that are not padded or just do `batch_size=1` LM inference. \r\n\r\nThree, when appending words of lists make sure that you have a space `\" \"` between them. In your example above printing the list `[d + n for d, n in zip(docs, docs_next)]` actually gives: \r\n\r\n`['I like tosoda and ', 'Please help mewith this']`,\r\n\r\nwhich is probably not what you want. If you would change `docs_next = [\"soda and \", \"with this\"]` to `docs_next = [\" soda and \", \" with this\"] `both outputs actually produce the same results, but this is only because `docs_tensors['input_ids']` is not padded (same lengths for all batch_idxs)\r\n",
"Hey @patrickvonplaten thanks for all your help. Indeed, there was an issue in the code snippet I shared. Thanks!\r\nI see now that pads add complexity because it needs the position_ids to be accommodated, but if it's correctly implemented from my side... shouldn't it just work? I'm confused about your phrase in `it's not promised that GPT2 will generate good results`. ",
"GPT2 was never trained on padding tokens so I'm not 100% sure whether you will get good results. But it's for sure worth trying out, if you absolutely have to use batch_size > 1",
"Alright. I think it should be fine by manipulating the tensors in the \"past\" as long as I keep consistency across the 24 items in the \"past\". Right?\r\nFor instance... I see that if I generate a past for 1 input with 256 tokens, then I'm getting back a \"past\" which is a list of 24 tensors of 5 dimensions (e.g. (2, 1, 16, 256, 64)). On the other hand, if I generate a past for 2 inputs with 540 tokens, I'm getting back a \"past\" which is a list of 24 tensors of 5 dimensions too (e.g. (2, 2, 16, 540, 64)). So I think that if I wanted to exclude the last sentence from the last \"past\" I can simply manipulate the 2nd dimension in all the 24 items and drop the corresponding value from it. I guess... ? ",
"> So I think that if I wanted to exclude the last sentence from the last \"past\" I can simply manipulate the 2nd dimension in all the 24 items and drop the corresponding value from it. I guess... ?\r\n\r\nI don't really understand this sentence. If you are concerned about sampling from a padded past, you don't have to manipulate the past variable I think.\r\n\r\nLet's say we forwarded:` ['I', 'live', 'in', <PAD>, <PAD>]` then the past variable will be of dimension `(2, 5, 16, num_tokens, 64)`. If you now want to sample from \r\n`past = key, values of ['I', 'live', 'in', <PAD>, <PAD>]` and \r\n`input_id = ['London'] `\r\nthen you just have to sample from the outputted token because the operation is the same as \r\nsampling from the last token of `['I', 'live', 'in', <PAD>, <PAD>, 'London']` and the last token is not a `<PAD>` token from which output logits it can be sampled from. ",
"@patrickvonplaten I think from your above example, the past variable will be `(2, 1, 16, num_tokens, 64)` instead of `(2, 5, 16, num_tokens, 64`. Right?\r\n\r\n> So I think that if I wanted to exclude the last sentence from the last \"past\" I can simply manipulate the 2nd dimension in all the 24 items and drop the corresponding value from it. I guess... ?\r\n\r\nWhat I meant here is to manipulate the past tensors in case the tensor dimension I want to predict for the subsequent batches should exclude some sentence. I found removing information from the past tensors may be complicated. So please see my example above with another approach to use \"past\" I am experimenting now.\r\n\r\nFor now to simplify this a bit, I'm not focusing on padding the past. Thus, it'll be generated from a batch of N docs. For instance below N=3:\r\n```\r\n['I', 'live', 'in', 'NYC']\r\n['I', 'live', 'in', 'Barcelona']\r\n['I', 'live', 'in', 'Moscow']\r\n```\r\nNote: I'm going to make sure to group docs by tokens length. I'm going to get a `past` from that. Past here will be a list of 24 elements with dimensions `(2, 3, 16, 4, 64)`\r\n\r\nThen I'm planning to use that past variable along with M suffix phrases. Those suffix phrases may have different lengths and below to different sentences that were calculated above, so I'm planning to add padding here first. Also another characteristic is that M will be always equal to or greater than N. For example, M=6 (it's a coincide that num_tokens is also 6 here):\r\n```\r\n['and', 'I', 'speak', 'english', 'fluently', '<PAD>']\r\n['and', 'I', 'live', 'in', 'North', 'Manhattan']\r\n['and', 'I', 'like', 'football', '<PAD>', '<PAD>']\r\n['and', 'I', 'don\\'t', 'speak', 'catalan', '<PAD>']\r\n['and', 'I', 'take', 'the', 'bus', '<PAD>']\r\n['and', 'I', 'drink', 'vodka', 'sometimes', 'you?']\r\n```\r\nNote: I'm also building an attention mask properly as you indicated by concatenating a tensor full with 1s with length = 4 to make this work. For example:\r\n```\r\n['1', '1', '1', '1', '1', '1', '1', '1', '1', '0']\r\n['1', '1', '1', '1', '1', '1', '1', '1', '1', '1']\r\n['1', '1', '1', '1', '1', '1', '1', '1', '0', '0']\r\n['1', '1', '1', '1', '1', '1', '1', '1', '1', '0']\r\n['1', '1', '1', '1', '1', '1', '1', '1', '1', '0'] \r\n['1', '1', '1', '1', '1', '1', '1', '1', '1', '1']\r\n```\r\n\r\nTo make \"past\" fit here, I'm planning to expand the past variable. So for each tensor in the past array, expand the tensor results as if I had sent to gpt2 initially the following first batch to get the \"past\" for:\r\n```\r\n['I', 'live', 'in', 'NYC']\r\n['I', 'live', 'in', 'NYC']\r\n['I', 'live', 'in', 'NYC']\r\n['I', 'live', 'in', 'Barcelona']\r\n['I', 'live', 'in', 'Barcelona']\r\n['I', 'live', 'in', 'Moscow']\r\n```\r\n\r\n\r\nIn this case, I'll build a past with dimensions: `(2, 6, 16, 4, 64)` instead of the original dimensions: `(2, 3, 16, 4, 64)`. I found https://pytorch.org/docs/stable/torch.html#torch.repeat_interleave very useful for this...\r\n\r\nDo you think this make sense? Any warning about this? Thanks",
"@patrickvonplaten can I not manipulate the `past` variable as explained above? Do the other dimensions contain some kind of aggregated data that makes the past to be immutable?",
"@patrickvonplaten just to confirm I'm on the right track...: Can I manipulate the dimensions for the layer tensors in the past array? It's working for me with latest release (2.6.0), but just wanted to make sure I'm on the right track if I go ahead with this. So basically I want to make sure I can expand the dimension 1 of every tensor in the past array from N to M. So that I can re-use that past with different predictions that will reuse that past. Hope it makes sense my question. "
] | 1,583 | 1,585 | 1,583 | NONE | null | # ❓ Questions & Help
Hi. I'm having some issues when using the past + attention mask functionality; the results are not the ones I was expecting. I'm using latest master, as the latest release 2.5.1 fails (`RuntimeError: The size of tensor a (6) must match the size of tensor b (3) at non-singleton dimension 3` at modeling_gpt2.py:150). With master, no error is thrown at all.
My code works with a single sentence, but not with several sentences: I get different predictions, whereas I expect the same top-k results when using past as when not using it at all.
Code snippet below:
```
from transformers.tokenization_gpt2 import GPT2Tokenizer
from transformers.modeling_gpt2 import GPT2LMHeadModel
import torch
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<|endoftext|>')
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Complete phrases are: "I like to drink soda and" and "Please help me with this"
docs = ["I like to", "Please help me"]
# note: comment the above line and uncomment the following line to make it work with 1 document
#docs = ["I like to"]
docs_tensors = tokenizer.batch_encode_plus(
[d for d in docs], pad_to_max_length=True, return_tensors='pt')
docs_next = ["soda and ", "with this"]
# note: comment the above line and uncomment the following line to make it work with 1 document
#docs_next = ["soda and "]
docs_next_tensors = tokenizer.batch_encode_plus(
[d for d in docs_next], pad_to_max_length=True, return_tensors='pt')
# predicting the first part of each phrase
_, past = model(docs_tensors['input_ids'], attention_mask=docs_tensors['attention_mask'])
# predicting the rest of the phrase
attn_mask = torch.cat([docs_tensors['attention_mask'], docs_next_tensors['attention_mask']], dim=-1)
logits, _ = model(docs_next_tensors['input_ids'], attention_mask=attn_mask, past=past)
logits = logits[:, -1]
_, top_indices_results = logits.topk(50)
words = [tokenizer.decode([idx.item()]) for tir in top_indices_results for idx in tir]
print("Results with past:", words)
#####################
docs_full_tensors = tokenizer.batch_encode_plus(
[d + n for d, n in zip(docs, docs_next)], pad_to_max_length=True, return_tensors='pt')
logits, _ = model(docs_full_tensors['input_ids'], attention_mask=docs_full_tensors['attention_mask'])
logits = logits[:, -1]
_, top_indices_results = logits.topk(50)
words = [tokenizer.decode([idx.item()]) for tir in top_indices_results for idx in tir]
print("Results without past:", words)
```
Output results (please note the inconsistency between the two results; I expect them to match, as in the next test):
```
Results with past: [' I', ' the', ' a', ' t', ' s', ' k', ' it', ' c', ' my', ' other', ' to', ' b', ' d', ' we', ' make', ' then', ' o', ' m', ' have', ' you', ' do', ' all', ' l', ' some', ' so', ' can', ' i', ' j', ' p', ' that', ' be', ' get', ' he', ' take', ' st', ' this', ' also', ' n', ' ch', ' is', ' use', ' h', ' they', ' f', ' put', ' go', ' g', ' w', ' not', ' just', 'The', 'A', 'I', '"', 'This', 'In', 'It', 'B', 'As', 'S', 'We', 'M', 'P', 'C', 'There', 'If', 'T', '1', 'By', 'F', 'You', 'D', 'Image', 'An', 'When', '(', 'On', 'What', 'For', 'L', 'H', 'R', 'About', '[', 'From', 'G', 'After', 'E', 'One', 'K', 'With', 'Still', 'So', 'W', 'by', 'N', 'My', 'Please', 'How', 'O']
Results without past: [' I', ' the', ' a', ' t', ' s', ' k', ' it', ' c', ' my', ' other', ' to', ' b', ' d', ' we', ' make', ' then', ' o', ' m', ' have', ' you', ' do', ' all', ' l', ' some', ' so', ' can', ' i', ' j', ' p', ' that', ' be', ' get', ' he', ' take', ' st', ' this', ' also', ' n', ' ch', ' is', ' use', ' h', ' they', ' f', ' put', ' go', ' g', ' w', ' not', ' just', '.', ' site', ' page', '!', ' project', ',', ' website', ' article', ' thread', ':', ' story', ' one', ' and', ' message', ' post', ' issue', ' great', ' blog', ' amazing', ' thing', ' little', ' problem', '\n', '?', ' book', ' game', ' by', ' to', ' wonderful', ' in', ' awesome', ' guy', ' community', ' new', ' mod', ' information', ' web', ' beautiful', '...', ' man', ' stuff', ' work', ' place', ' video', '."', ' app', ' kind', ' piece', '!"', ' world']
```
If I uncomment the lines indicated in the code, I get the same results (but only for 1 sentence):
```
Results with past: [' I', ' the', ' a', ' t', ' s', ' k', ' it', ' c', ' my', ' other', ' to', ' b', ' d', ' we', ' make', ' then', ' o', ' m', ' have', ' you', ' do', ' all', ' l', ' some', ' so', ' can', ' i', ' j', ' p', ' that', ' be', ' get', ' he', ' take', ' st', ' this', ' also', ' n', ' ch', ' is', ' use', ' h', ' they', ' f', ' put', ' go', ' g', ' w', ' not', ' just']
Results without past: [' I', ' the', ' a', ' t', ' s', ' k', ' it', ' c', ' my', ' other', ' to', ' b', ' d', ' we', ' make', ' then', ' o', ' m', ' have', ' you', ' do', ' all', ' l', ' some', ' so', ' can', ' i', ' j', ' p', ' that', ' be', ' get', ' he', ' take', ' st', ' this', ' also', ' n', ' ch', ' is', ' use', ' h', ' they', ' f', ' put', ' go', ' g', ' w', ' not', ' just']
```
## Details
<!-- Description of your issue -->
Original Stack Overflow question that @patrickvonplaten answered initially:
https://stackoverflow.com/questions/60459292/using-past-and-attention-mask-at-the-same-time-for-gpt2/ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3095/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3094/comments | https://api.github.com/repos/huggingface/transformers/issues/3094/events | https://github.com/huggingface/transformers/pull/3094 | 574,395,518 | MDExOlB1bGxSZXF1ZXN0MzgyNzI3MjA3 | 3,094 | [Bart] dont call .forward | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@julien-c any idea why this would cause \r\n```\r\nFAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_feature_extraction \r\n```",
"I think this one might sometimes fail randomly"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3094/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3094",
"html_url": "https://github.com/huggingface/transformers/pull/3094",
"diff_url": "https://github.com/huggingface/transformers/pull/3094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3094.patch",
"merged_at": 1583266453000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3093/comments | https://api.github.com/repos/huggingface/transformers/issues/3093/events | https://github.com/huggingface/transformers/issues/3093 | 574,350,885 | MDU6SXNzdWU1NzQzNTA4ODU= | 3,093 | wrong 'label2id' and 'id2label' in config when loading from pretrained | {
"login": "daria-pylypenko",
"id": 42044325,
"node_id": "MDQ6VXNlcjQyMDQ0MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/42044325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daria-pylypenko",
"html_url": "https://github.com/daria-pylypenko",
"followers_url": "https://api.github.com/users/daria-pylypenko/followers",
"following_url": "https://api.github.com/users/daria-pylypenko/following{/other_user}",
"gists_url": "https://api.github.com/users/daria-pylypenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daria-pylypenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daria-pylypenko/subscriptions",
"organizations_url": "https://api.github.com/users/daria-pylypenko/orgs",
"repos_url": "https://api.github.com/users/daria-pylypenko/repos",
"events_url": "https://api.github.com/users/daria-pylypenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/daria-pylypenko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
```python
from transformers import BertConfig
config = BertConfig.from_pretrained('bert-base-cased', num_labels=3)
print(config.id2label)
```
2. Prints: {0: 'LABEL_0', 1: 'LABEL_1'}
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
prints {0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2'}
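As a possible workaround (a sketch; the default label names are assumed, substitute your own), the maps can be passed explicitly, since extra kwargs are forwarded onto the config:
```python
from transformers import BertConfig

num_labels = 3
config = BertConfig.from_pretrained(
    "bert-base-cased",
    num_labels=num_labels,
    id2label={i: f"LABEL_{i}" for i in range(num_labels)},
    label2id={f"LABEL_{i}": i for i in range(num_labels)},
)
```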
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Ubuntu 16.04
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3093/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3092/comments | https://api.github.com/repos/huggingface/transformers/issues/3092/events | https://github.com/huggingface/transformers/issues/3092 | 574,320,255 | MDU6SXNzdWU1NzQzMjAyNTU= | 3,092 | ċ in gpt2 | {
"login": "weiguowilliam",
"id": 31396452,
"node_id": "MDQ6VXNlcjMxMzk2NDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31396452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiguowilliam",
"html_url": "https://github.com/weiguowilliam",
"followers_url": "https://api.github.com/users/weiguowilliam/followers",
"following_url": "https://api.github.com/users/weiguowilliam/following{/other_user}",
"gists_url": "https://api.github.com/users/weiguowilliam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiguowilliam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiguowilliam/subscriptions",
"organizations_url": "https://api.github.com/users/weiguowilliam/orgs",
"repos_url": "https://api.github.com/users/weiguowilliam/repos",
"events_url": "https://api.github.com/users/weiguowilliam/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiguowilliam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"The proper way to decode a value is using the `decode` method:\r\n\r\n```py\r\nfrom transformers import GPT2Tokenizer\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n\r\ntokenizer.decode([198]) # '\\n'\r\n```\r\n\r\nSome byte indices are shifted in the GPT-2 vocabulary, especially the control characters and characters that resemble whitespace. This is an example, and you can see the method that does it in `~transformers.tokenization_gpt2.bytes_to_unicode`."
] | 1,583 | 1,583 | 1,583 | NONE | null | There's a 'ċ' in gpt2.
```
from transformers import GPT2Tokenizer
tokenizer_gpt2 = GPT2Tokenizer.from_pretrained("gpt2")
vocab = list(tokenizer_gpt2.encoder.keys())
vocab[198]
```
output: ċ
Based on some examples, I guess it means "with_break", but I can't find it in the gpt2 tokenizer documentation. Can anyone tell me the meaning? Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3092/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3091/comments | https://api.github.com/repos/huggingface/transformers/issues/3091/events | https://github.com/huggingface/transformers/issues/3091 | 574,288,594 | MDU6SXNzdWU1NzQyODg1OTQ= | 3,091 | Fast tokenizers fail when the input is just spaces | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This is now fixed on `master`:\r\n```\r\n>>> t = transformers.AutoTokenizer.from_pretrained(\"bert-base-cased\", use_fast=True)\r\n>>> t.encode_plus(\" \", add_special_tokens=False)\r\n{'input_ids': [], 'token_type_ids': [], 'attention_mask': []}\r\n```\r\nAlso, `add_special_tokens` now works exactly the same for both slow and fast tokenizers. We give it with `encode`, `tokenize`, ... and not during initialization anymore."
] | 1,583 | 1,587 | 1,587 | CONTRIBUTOR | null | Slow tokenizers:
```
>>> import transformers
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
>>> t.encode_plus(" ", add_special_tokens=False)
{'input_ids': [], 'token_type_ids': []}
```
Fast tokenizers:
```
>>> import transformers
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True, add_special_tokens=False)
>>> t.encode_plus(" ")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1894, in encode_plus
return {key: value[0] if isinstance(value[0], list) else value for key, value in batched_output.items()}
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1894, in <dictcomp>
return {key: value[0] if isinstance(value[0], list) else value for key, value in batched_output.items()}
IndexError: list index out of range
```
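Until the fix mentioned in the comment above, one defensive workaround is to special-case whitespace-only input (a sketch; it assumes an empty encoding is the desired result):
```python
text = " "
if text.strip():
    encoded = t.encode_plus(text)
else:
    # mirror what the slow tokenizer returns for whitespace-only input
    encoded = {"input_ids": [], "token_type_ids": [], "attention_mask": []}
```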
Either way, the `add_special_tokens=False` bit is critical to reproducing this: otherwise the results aren't empty and there is no failure. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3091/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3090/comments | https://api.github.com/repos/huggingface/transformers/issues/3090/events | https://github.com/huggingface/transformers/issues/3090 | 574,286,713 | MDU6SXNzdWU1NzQyODY3MTM= | 3,090 | Cuda error during evaluation - CUBLAS_STATUS_NOT_INITIALIZED | {
"login": "soni-n",
"id": 13745813,
"node_id": "MDQ6VXNlcjEzNzQ1ODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13745813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soni-n",
"html_url": "https://github.com/soni-n",
"followers_url": "https://api.github.com/users/soni-n/followers",
"following_url": "https://api.github.com/users/soni-n/following{/other_user}",
"gists_url": "https://api.github.com/users/soni-n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soni-n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soni-n/subscriptions",
"organizations_url": "https://api.github.com/users/soni-n/orgs",
"repos_url": "https://api.github.com/users/soni-n/repos",
"events_url": "https://api.github.com/users/soni-n/events{/privacy}",
"received_events_url": "https://api.github.com/users/soni-n/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tried debugging with CPU (an aside - this has an issue in itself apparently when --no_cuda flag is used --> run_language_modeling.py needs to set args.n_gpu to 0)\r\n\r\nFound the fix -> Needed to call model.resize_token_embeddings(len(tokenizer)) after adding tokens in the eval mode as well. "
] | 1,583 | 1,618 | 1,583 | NONE | null | # 🐛 Bug
## Information
Overview:
I am using the pre-trained Bert model and trying to fine-tune it on a customized dataset, which requires me to add new tokens so that the tokenizer doesn't wordpiece them (these tokens are of the form <1234> and </1234>, where 1234 can be any int converted to a string).
I was able to get through the train step, but when it comes to evaluating the perplexity I get:
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The only tweak I made was to call tokenizer.add_tokens("<my_new_token>")
before tokenizing with tokenizer.batch_encode_plus.
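As noted in the comment above, the fix turned out to be resizing the model's embeddings after adding tokens, in the eval path as well as the train path. A minimal sketch:
```python
num_added = tokenizer.add_tokens("<my_new_token>")
if num_added > 0:
    # keep the embedding matrix in sync with the enlarged vocabulary
    model.resize_token_embeddings(len(tokenizer))
```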
The task I am working on is:
* [x] my own task or dataset: (give details below)
facebook messages dataset
## To reproduce
Steps to reproduce the behavior:
1. In LineByLineTextDataset, add new tokens by calling tokenizer.add_tokens("<new_token>") for each line that is added to the lines list.
(The only other change I made was to fetch the text directly from the DB instead of using the text files.)
2. I limited the run to use only 3 instances of text line to debug
3. Run the regular examples script to train and evaluate
Error:
```
Exception has occurred: RuntimeError
Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 987, in forward
encoder_attention_mask=encoder_attention_mask,
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 790, in forward
encoder_attention_mask=encoder_extended_attention_mask,
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 407, in forward
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 368, in forward
self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 314, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 216, in forward
mixed_query_layer = self.query(hidden_states)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/data/nisoni/anaconda3/envs/trans/lib/python3.6/site-packages/torch/nn/functional.py", line 1372, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
File "/data/nisoni/transformers/transformers/examples/run_language_modeling.py", line 550, in evaluate
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/data/nisoni/transformers/transformers/examples/run_language_modeling.py", line 910, in main
result = evaluate(args, model, tokenizer, prefix=prefix)
File "/data/nisoni/transformers/transformers/examples/run_language_modeling.py", line 918, in <module>
main()
```
## Expected behavior
A regular example run producing a perplexity score, as it does without the added tokens.
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-4.4.0-171-generic-x86_64-with-debian-stretch-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: not explicitly
- Using distributed or parallel set-up in script?: not explicitly
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3090/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3090/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3089/comments | https://api.github.com/repos/huggingface/transformers/issues/3089/events | https://github.com/huggingface/transformers/pull/3089 | 574,270,057 | MDExOlB1bGxSZXF1ZXN0MzgyNjI1MzI2 | 3,089 | add models cards for camembert-base-fquad camembert-base-squad | {
"login": "fmikaelian",
"id": 39884124,
"node_id": "MDQ6VXNlcjM5ODg0MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/39884124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fmikaelian",
"html_url": "https://github.com/fmikaelian",
"followers_url": "https://api.github.com/users/fmikaelian/followers",
"following_url": "https://api.github.com/users/fmikaelian/following{/other_user}",
"gists_url": "https://api.github.com/users/fmikaelian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fmikaelian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fmikaelian/subscriptions",
"organizations_url": "https://api.github.com/users/fmikaelian/orgs",
"repos_url": "https://api.github.com/users/fmikaelian/repos",
"events_url": "https://api.github.com/users/fmikaelian/events{/privacy}",
"received_events_url": "https://api.github.com/users/fmikaelian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=h1) Report\n> Merging [#3089](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f169957d0cf17b110f27cacc1b1fb43efaa01218?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3089 +/- ##\n==========================================\n+ Coverage 77.59% 77.59% +<.01% \n==========================================\n Files 98 98 \n Lines 16250 16250 \n==========================================\n+ Hits 12609 12610 +1 \n+ Misses 3641 3640 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.02% <0%> (+0.15%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=footer). Last update [f169957...ac7be7c](https://codecov.io/gh/huggingface/transformers/pull/3089?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for sharing!"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | Following #2893
Model links:
- [`fmikaelian/camembert-base-fquad`](https://huggingface.co/fmikaelian/camembert-base-fquad)
- [`fmikaelian/camembert-base-squad`](https://huggingface.co/fmikaelian/camembert-base-squad) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3089/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3089/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3089",
"html_url": "https://github.com/huggingface/transformers/pull/3089",
"diff_url": "https://github.com/huggingface/transformers/pull/3089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3089.patch",
"merged_at": 1583186834000
} |
https://api.github.com/repos/huggingface/transformers/issues/3088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3088/comments | https://api.github.com/repos/huggingface/transformers/issues/3088/events | https://github.com/huggingface/transformers/issues/3088 | 574,261,994 | MDU6SXNzdWU1NzQyNjE5OTQ= | 3,088 | Fast tokenizers can't `encode_plus` a list of ids; slow tokenizers can | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,588 | 1,588 | CONTRIBUTOR | null | With the slow tokenizers:
```
>>> import transformers
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
>>> t.encode_plus([1000])
{'input_ids': [101, 1000, 102],
'token_type_ids': [0, 0, 0],
'attention_mask': [1, 1, 1]}
```
With the fast tokenizers:
```
>>> import transformers
>>> t = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)
>>> t.encode_plus([1000])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1889, in encode_plus
**kwargs,
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1815, in batch_encode_plus
tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0])
File "/Users/dirkg/anaconda3/envs/allennlp/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py", line 141, in encode
return self._tokenizer.encode(sequence, pair)
TypeError
```
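While the fast path raises, one workaround is to assemble the dict by hand (a sketch; it assumes single-sequence BERT-style inputs and uses only the tokenizer's own special-token ids):
```python
ids = [1000]
input_ids = [t.cls_token_id] + ids + [t.sep_token_id]
encoded = {
    "input_ids": input_ids,
    "token_type_ids": [0] * len(input_ids),
    "attention_mask": [1] * len(input_ids),
}
```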
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3088/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3088/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3087/comments | https://api.github.com/repos/huggingface/transformers/issues/3087/events | https://github.com/huggingface/transformers/issues/3087 | 574,198,533 | MDU6SXNzdWU1NzQxOTg1MzM= | 3,087 | Error when running run_tf_ner.py | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should not have any issues with this as `GradientAccumulator` is correctly imported in the __init__.py file:\r\n\r\nhttps://github.com/huggingface/transformers/blob/298bed16a841fae3608d334441ccae4d9043611f/src/transformers/__init__.py#L426-L427\r\n\r\nIs `transformers` correctly installed in your pip environment, or did you simply clone the repository?",
"Ok. Thank you. I cannot check it now because Colab PRO accounts are having problems. I will let you know ASAP.",
"I met analogous problem,but it was solved after rebooting computer. And another issue is it can not find the installed module,this issue can be solved by reinstall the module that can not be found.",
"In my case I was not using tf 2.x\n\nEl mar., 10 mar. 2020 9:50, over-shine <[email protected]> escribió:\n\n> I met analogous problem,but it was solved after rebooting computer. And\n> another issue is it can not find the installed module,this issue can be\n> solved by reinstall the module that can not be found.\n>\n> —\n> You are receiving this because you modified the open/close state.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3087?email_source=notifications&email_token=AA34BHNOLTNBPN37WKT4UELRGX5MHA5CNFSM4K73DTO2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEOKQYOY#issuecomment-596970555>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AA34BHIJAGHRKSZE5TNILB3RGX5MHANCNFSM4K73DTOQ>\n> .\n>\n"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | # 🐛 Bug
## Information
```python
!python3 /content/transformers/examples/ner/run_tf_ner.py --data_dir /content/ner_dataset \
--model_type bert \
--labels /content/labels.txt \
--model_name_or_path dccuchile/bert-base-spanish-wwm-cased \
--output_dir model_output \
--max_seq_length 256 \
--num_train_epochs 5\
--per_gpu_train_batch_size 42 \
--save_steps 2000\
--do_train \
--do_eval
Traceback (most recent call last):
File "/content/transformers/examples/ner/run_tf_ner.py", line 14, in <module>
from transformers import (
ImportError: cannot import name 'GradientAccumulator'
```
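Per the resolution noted in the comments above, the cause here was running under TF 1.x: `GradientAccumulator` is only exported when a TF 2.x install is detected. A quick guard, as a sketch:
```python
import tensorflow as tf

# transformers only exports its TF utilities (incl. GradientAccumulator) for TF 2.x
assert tf.__version__.startswith("2."), f"Need TF 2.x, found {tf.__version__}"
```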
## Transformer version: `transformers==2.5.1` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3087/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3086/comments | https://api.github.com/repos/huggingface/transformers/issues/3086/events | https://github.com/huggingface/transformers/issues/3086 | 574,189,410 | MDU6SXNzdWU1NzQxODk0MTA= | 3,086 | Disabling Eager Mode Prevents Loading Pre-Trained BERT | {
"login": "msandrewhoang03",
"id": 58231708,
"node_id": "MDQ6VXNlcjU4MjMxNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/58231708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msandrewhoang03",
"html_url": "https://github.com/msandrewhoang03",
"followers_url": "https://api.github.com/users/msandrewhoang03/followers",
"following_url": "https://api.github.com/users/msandrewhoang03/following{/other_user}",
"gists_url": "https://api.github.com/users/msandrewhoang03/gists{/gist_id}",
"starred_url": "https://api.github.com/users/msandrewhoang03/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msandrewhoang03/subscriptions",
"organizations_url": "https://api.github.com/users/msandrewhoang03/orgs",
"repos_url": "https://api.github.com/users/msandrewhoang03/repos",
"events_url": "https://api.github.com/users/msandrewhoang03/events{/privacy}",
"received_events_url": "https://api.github.com/users/msandrewhoang03/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"TFBertModel can auto use gpu? i find model(np.array(input_ids)) is slow,cost 100+ms\r\nIn [14]: %timeit model_outputs = model(np.array(input_ids))\r\n133 ms ± 3.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\r\n\r\nIn [15]: %time model_outputs = model(np.array(input_ids))\r\nCPU times: user 330 ms, sys: 14.4 ms, total: 344 ms\r\nWall time: 158 ms\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I have a similar issue. Did you manage to find a fix for this @msandrewhoang03 ?"
] | 1,583 | 1,646 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
**BERT**
Language I am using the model on (English, Chinese ...):
**English**
## To reproduce
Steps to reproduce the behavior:
1. Disable Tensorflow eager mode `tf.compat.v1.disable_v2_behavior()`
2. Create a pretrained BERT instance `model = transformers.TFBertModel.from_pretrained("bert-base-uncased")`
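The two steps above, assembled into a minimal repro script (nothing assumed beyond what the report states):
```python
import tensorflow as tf
import transformers

tf.compat.v1.disable_v2_behavior()  # step 1: disable eager mode
model = transformers.TFBertModel.from_pretrained("bert-base-uncased")  # step 2: fails
```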
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
`model` contains a pre-trained BERT model
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.0.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3086/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3085/comments | https://api.github.com/repos/huggingface/transformers/issues/3085/events | https://github.com/huggingface/transformers/pull/3085 | 574,164,479 | MDExOlB1bGxSZXF1ZXN0MzgyNTQwNjg2 | 3,085 | TF GPU CI | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=h1) Report\n> Merging [#3085](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0e56b37e805279ecb61670159fa8c71487214e0a?src=pr&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3085 +/- ##\n=========================================\n- Coverage 77.62% 76.6% -1.03% \n=========================================\n Files 98 98 \n Lines 16230 16230 \n=========================================\n- Hits 12599 12433 -166 \n- Misses 3631 3797 +166\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3085/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.02% <0%> (-0.16%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=footer). Last update [0e56b37...83f65ff](https://codecov.io/gh/huggingface/transformers/pull/3085?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3085/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3085",
"html_url": "https://github.com/huggingface/transformers/pull/3085",
"diff_url": "https://github.com/huggingface/transformers/pull/3085.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3085.patch",
"merged_at": 1583181926000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3084/comments | https://api.github.com/repos/huggingface/transformers/issues/3084/events | https://github.com/huggingface/transformers/pull/3084 | 574,164,324 | MDExOlB1bGxSZXF1ZXN0MzgyNTQwNTYy | 3,084 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | - Add example of usage
- Update metrics | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3084/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3084",
"html_url": "https://github.com/huggingface/transformers/pull/3084",
"diff_url": "https://github.com/huggingface/transformers/pull/3084.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3084.patch",
"merged_at": 1583174136000
} |
https://api.github.com/repos/huggingface/transformers/issues/3083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3083/comments | https://api.github.com/repos/huggingface/transformers/issues/3083/events | https://github.com/huggingface/transformers/issues/3083 | 574,133,802 | MDU6SXNzdWU1NzQxMzM4MDI= | 3,083 | Memory error: load 200GB file in run_language_modeling.py | {
"login": "omerarshad",
"id": 16164105,
"node_id": "MDQ6VXNlcjE2MTY0MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/16164105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omerarshad",
"html_url": "https://github.com/omerarshad",
"followers_url": "https://api.github.com/users/omerarshad/followers",
"following_url": "https://api.github.com/users/omerarshad/following{/other_user}",
"gists_url": "https://api.github.com/users/omerarshad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omerarshad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omerarshad/subscriptions",
"organizations_url": "https://api.github.com/users/omerarshad/orgs",
"repos_url": "https://api.github.com/users/omerarshad/repos",
"events_url": "https://api.github.com/users/omerarshad/events{/privacy}",
"received_events_url": "https://api.github.com/users/omerarshad/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"I agree that an on-the-fly tokenisation would be neat as an alternative to pre-processing the whole input file and saving the tensors in memory. ",
"Hi @BramVanroy , this is mentioned in the blog post bout training models from scratch, as something that could be done (https://huggingface.co/blog/how-to-train). Is it possible?\r\nThanks!",
"I am fairly new to how contributing to HuggingFace works but gave this a little thought today.\r\nAt first I thought we could maybe solve it like this:\r\n\r\nIf we consider this code:\r\n`tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))`\r\n\r\nMy first intuition was that the space needed to save the result of this code is significantly less than the space needed to store f.read(). \r\nSo if we could get the result of this code by just reading a line (or reading parts of the texts with overlaps to the previous part as big as max_len of tokens), we might solve it...\r\nHowever I ran a litte experiment and it turns out that `tokenized_text` would still take up around 133GB with an input txt file of 200GB.\r\nSo not a solution.\r\nDo you guys have any idea how to approach this differently? Because of the RandomSampler we also can't store parts of the file trivially.",
"The solution would be to work with a dataset that on every call, fetches the lines from the file that are required, rather than reading the whole file in memory. This is bound to be slower, but it is very lenient on memory. A good candidate is linecache, which does smart caching in the process.\r\n\r\n```python\r\nimport linecache\r\nfrom pathlib import Path\r\n\r\nfrom torch.utils.data import Dataset\r\n\r\n\r\nclass LazyTextDataset(Dataset):\r\n def __init__(self, fin):\r\n self.fin = fin\r\n self.num_entries = self._get_n_lines(self.fin)\r\n\r\n @staticmethod\r\n def _get_n_lines(fin):\r\n with Path(fin).resolve().open(encoding='utf-8') as fhin:\r\n for line_idx, _ in enumerate(fhin, 1):\r\n pass\r\n\r\n return line_idx\r\n\r\n def __getitem__(self, idx):\r\n # linecache starts counting from one, not zero, +1 the given index\r\n idx += 1\r\n return linecache.getline(self.fin, idx)\r\n\r\n def __len__(self):\r\n return self.num_entries\r\n```\r\n\r\nThen you can add a custom `collate_fn` to your data loader that will automatically tokenize the whole batch.",
"I thought about this but figured it would definitely be too slow for such big files. I didn't know about linecache though, cool!",
"I've been using a similar solution to @BramVanroy for a couple of weeks, though I too was not aware of `linecache`, so assume that my solution can be improved by using that tool. \r\n\r\nI implemented this because the up-front loading was taking hours and hours. I did some rough comparisons on smaller data files and found that I was getting the same iters/second using this method as the existing methods.\r\n\r\n\r\n```python\r\nclass LazyUnSupervisedTextExamples:\r\n \"\"\"\r\n Deals with file i/o for lazy retrieval of specific lines of text in file.\r\n\r\n \"\"\"\r\n def __init__(self, path):\r\n \"\"\"\r\n :args:\r\n path: str\r\n : The path to the data file to be loaded.\r\n \"\"\"\r\n self.data_path = path\r\n self.data_stream = open(self.data_path, 'r')\r\n self.offsets = [0]\r\n for line in self.data_stream:\r\n self.offsets.append(self.offsets[-1] + len(line.encode('utf-8')))\r\n self.offsets = self.offsets[1:-1]\r\n self.data_stream.seek(0)\r\n self.current_offset = 0\r\n\r\n def __len__(self):\r\n return len(self.offsets)\r\n\r\n def __getitem__(self, _id):\r\n \"\"\"\r\n :returns:\r\n str; the line of text given by _id if no errors.\r\n None if errors occur.\r\n\r\n PEP8 note: we really do want a bare exception here because an uncaught exception in here has the potential\r\n to bring down a large training run with an error in a single line of the data file.\r\n \"\"\"\r\n\r\n offset = self.offsets[_id]\r\n try:\r\n self.data_stream.seek(offset)\r\n line = self.data_stream.readline()\r\n example = line.strip()\r\n self.data_stream.seek(self.current_offset)\r\n except:\r\n example = None\r\n return example\r\n\r\n def __next__(self):\r\n line = self.data_stream.readline()\r\n self.current_offset = self.data_stream.tell()\r\n return line.strip()\r\n\r\n def close(self):\r\n self.data_stream.close()\r\n\r\n\r\nclass LazyUnSupervisedTextDataset(Dataset):\r\n \"\"\"\r\n Works with datasets of simple lines of text. Lines are loaded and tokenized\r\n lazily rather than being pulled into memory up-front. This reduces the memory\r\n footprint when using large datasets, and also remedies a problem seen when using\r\n the other Datasets (above) whereby they take too long to load all\r\n of the data and tokenize it before doing any training.\r\n\r\n The file i/o work is handled within self.examples. 
This class just indexes\r\n into that object and applies the tokenization.\r\n \"\"\"\r\n def __init__(self, tokenizer, file_path, block_size=512):\r\n \"\"\"\r\n :args:\r\n tokenizer: tokenizer.implementations.BaseTokenizer object (instantiated)\r\n : This tokenizer will be directly applied to the text data\r\n to prepare the data for passing through the model.\r\n file_path: str\r\n : Path to the data file to be used.\r\n block_size: int\r\n : The maximum length of a sequence (truancated beyond this length).\r\n :returns: None.\r\n \"\"\"\r\n self.examples = LazyUnSupervisedTextExamples(file_path)\r\n self.tokenizer = tokenizer\r\n self.max_len = block_size\r\n\r\n def __len__(self):\r\n return len(self.examples)\r\n\r\n def _text_to_tensor(self, item):\r\n \"\"\"\r\n Defines the logic for transforming a single raw text item to a tokenized\r\n tensor ready to be passed into a model.\r\n\r\n :args:\r\n item: str\r\n : The text item as a string to be passed to the tokenizer.\r\n \"\"\"\r\n return torch.tensor(self.tokenizer.encode(item, max_length=self.max_len))\r\n\r\n def _text_to_item(self, text):\r\n \"\"\"\r\n Convenience functino to encapsulate re-used logic for converting raw\r\n text to the output of __getitem__ of __next__.\r\n\r\n :returns:\r\n torch.Tensor of tokenized text if no errors.\r\n None if any errors encountered.\r\n \"\"\"\r\n try:\r\n if (text is not None):\r\n return self._text_to_tensor(text)\r\n else:\r\n return None\r\n except:\r\n return None\r\n\r\n def __getitem__(self, _id):\r\n \"\"\"\r\n :returns:\r\n torch.Tensor of tokenized text if no errors.\r\n None if any errors encountered.\r\n \"\"\"\r\n text = self.examples[_id]\r\n return self._text_to_item(text)\r\n\r\n def __next__(self):\r\n text = next(self.examples)\r\n return self._text_to_item(text)\r\n\r\n def close(self):\r\n \"\"\"\r\n Since the LazyUnSupervisedTextExamples object self.examples contains a\r\n file handle, this method provides access to its close function to safely\r\n close the open data file when finished. This should be run when the\r\n dataset object is finished with.\r\n \"\"\"\r\n self.examples.close()\r\n```\r\n\r\nThe only change I found necessary to make to the `collate_fn` was a line to filter out lines that failed to load. I'm currently tokenising one item at a time, but prefer @BramVanroy's suggestion of batch tokenisation in the `collate_fn`. \r\n\r\n```python\r\ndef collate(examples: List[torch.Tensor]):\r\n examples = list(filter(lambda ex: ex is not None, examples))\r\n if tokenizer._pad_token is None:\r\n return pad_sequence(examples, batch_first=True)\r\n return pad_sequence(examples, batch_first=True, padding_value=tokenizer.pad_token_id)\r\n```\r\n\r\n\r\nHappy to make above-mentioned sensible changes and contribute.\r\n\r\nDoes anyone have any advice about more sophisticated performance testing to shore-up my above claim that lazy loading isn't any slower per iteration?",
"I would recommend to, indeed, run the tokenisation in collate_fn. You can use `batch_encode_plus` there. Concerning your collate function: the filter function can be simplified to `filter(None, examples)` but in fact I'd go with a list comprehension right away: `[ex for ex in examples if ex is not None]`. \r\n\r\nFor timing you can use the timeit module.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,590 | 1,590 | NONE | null |
line 107 in run_language_model.py
with open(file_path, encoding="utf-8") as f:
text = f.read()
any idea how to use generators to load large files? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3083/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3083/timeline | completed | null | null |
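The batch-tokenising `collate_fn` suggested in the lazy-loading discussion above could look roughly like this. It is a minimal sketch, assuming a dataset that yields raw strings (or `None` for lines that failed to load); the checkpoint name and `max_length` are illustrative:

```python
from typing import List, Optional

import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def collate(batch: List[Optional[str]]) -> torch.Tensor:
    # Drop lines that failed to load, then tokenize the surviving batch in one call.
    texts = [text for text in batch if text]
    encoded = tokenizer.batch_encode_plus(
        texts, max_length=512, pad_to_max_length=True, return_tensors="pt"
    )
    return encoded["input_ids"]
```

Moving tokenization into the collate function keeps the `Dataset` pure file I/O, so the expensive work happens once per batch in the `DataLoader` workers rather than once per item.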
https://api.github.com/repos/huggingface/transformers/issues/3082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3082/comments | https://api.github.com/repos/huggingface/transformers/issues/3082/events | https://github.com/huggingface/transformers/pull/3082 | 574,082,363 | MDExOlB1bGxSZXF1ZXN0MzgyNDczMTQ2 | 3,082 | Summarization Examples: add Bart CNN Evaluation | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=h1) Report\n> Merging [#3082](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b74c9b106b97b0722ff8f98e77e2e2210c688b23?src=pr&el=desc) will **increase** coverage by `0.4%`.\n> The diff coverage is `87.27%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3082 +/- ##\n========================================\n+ Coverage 77.19% 77.6% +0.4% \n========================================\n Files 98 98 \n Lines 16063 16219 +156 \n========================================\n+ Hits 12400 12586 +186 \n+ Misses 3663 3633 -30\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.76% <76.66%> (+0.38%)` | :arrow_up: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3082/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.05% <88.82%> (+8.47%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=footer). Last update [b74c9b1...5656b5e](https://codecov.io/gh/huggingface/transformers/pull/3082?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"\r\nNew organization @LysandreJik ",
"I like that organisation"
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3082/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3082",
"html_url": "https://github.com/huggingface/transformers/pull/3082",
"diff_url": "https://github.com/huggingface/transformers/pull/3082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3082.patch",
"merged_at": 1583267400000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3081/comments | https://api.github.com/repos/huggingface/transformers/issues/3081/events | https://github.com/huggingface/transformers/issues/3081 | 574,075,405 | MDU6SXNzdWU1NzQwNzU0MDU= | 3,081 | Why there is no TransfoXLForSequenceClassification class? | {
"login": "acriptis",
"id": 2207706,
"node_id": "MDQ6VXNlcjIyMDc3MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2207706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acriptis",
"html_url": "https://github.com/acriptis",
"followers_url": "https://api.github.com/users/acriptis/followers",
"following_url": "https://api.github.com/users/acriptis/following{/other_user}",
"gists_url": "https://api.github.com/users/acriptis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acriptis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acriptis/subscriptions",
"organizations_url": "https://api.github.com/users/acriptis/orgs",
"repos_url": "https://api.github.com/users/acriptis/repos",
"events_url": "https://api.github.com/users/acriptis/events{/privacy}",
"received_events_url": "https://api.github.com/users/acriptis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,589 | 1,589 | NONE | null | Hello, huggingface! Thank you for the great work on the neat interfaces for the transformer family!
I'm analyzing the performance of transformers on a text classification task (like SST-2 in GLUE). I found that many architectures have a `<ModelName>ForSequenceClassification` class, but `TransfoXL` has a restricted set of auxiliary classes (https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_transfo_xl.py).
Is there a reason this useful class is missing? Is it planned to be implemented, or are there restrictions that prevent such an implementation?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3081/timeline | completed | null | null |
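Until such a class exists in the library, a sequence-classification head over Transfo-XL can be built by hand. The following is a minimal sketch, not an official class; the last-token pooling is an assumption (Transfo-XL is causal, so the last position has attended to the whole sequence):

```python
import torch
import torch.nn as nn
from transformers import TransfoXLModel, TransfoXLTokenizer

class TransfoXLForSequenceClassification(nn.Module):
    def __init__(self, num_labels: int = 2, model_name: str = "transfo-xl-wt103"):
        super().__init__()
        self.transformer = TransfoXLModel.from_pretrained(model_name)
        # Transfo-XL has no pooler, so we add a plain linear head on top.
        self.classifier = nn.Linear(self.transformer.config.d_model, num_labels)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        hidden_states = self.transformer(input_ids)[0]  # (batch, seq_len, d_model)
        pooled = hidden_states[:, -1]  # last token summarises the causal sequence
        return self.classifier(pooled)

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLForSequenceClassification(num_labels=2)
logits = model(torch.tensor([tokenizer.encode("a sample sentence")]))
```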
https://api.github.com/repos/huggingface/transformers/issues/3080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3080/comments | https://api.github.com/repos/huggingface/transformers/issues/3080/events | https://github.com/huggingface/transformers/issues/3080 | 574,072,632 | MDU6SXNzdWU1NzQwNzI2MzI= | 3,080 | Docker Hub automatic tests failing | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Should be fixed, this tests were running because I asked to build on every master's commit. \r\n\r\nNow it should only build on new tags."
] | 1,583 | 1,583 | 1,583 | CONTRIBUTOR | null | The automated docker tests are failing on HEAD and when you click details you get 404, so tough to debug | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3080/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3079/comments | https://api.github.com/repos/huggingface/transformers/issues/3079/events | https://github.com/huggingface/transformers/issues/3079 | 574,041,379 | MDU6SXNzdWU1NzQwNDEzNzk= | 3,079 | Bart CUDA not working | {
"login": "marmg",
"id": 25741926,
"node_id": "MDQ6VXNlcjI1NzQxOTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/25741926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marmg",
"html_url": "https://github.com/marmg",
"followers_url": "https://api.github.com/users/marmg/followers",
"following_url": "https://api.github.com/users/marmg/following{/other_user}",
"gists_url": "https://api.github.com/users/marmg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marmg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marmg/subscriptions",
"organizations_url": "https://api.github.com/users/marmg/orgs",
"repos_url": "https://api.github.com/users/marmg/repos",
"events_url": "https://api.github.com/users/marmg/events{/privacy}",
"received_events_url": "https://api.github.com/users/marmg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"I just merged a PR and your example works for me. Would you mind seeing if it is still broken in your system? Thanks for posting! \r\n(note that the fix won't be in v 2.5.1 if you pip installed). \r\n",
"Reopen if still broken!"
] | 1,583 | 1,583 | 1,583 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BART - bart-large
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load model
2. Tokenize text
3. Send model and tensor to cuda
4. Forward the model
```python
import torch
from transformers import BartConfig, BartTokenizer, BartForMaskedLM, BartModel
configuration = BartConfig()
tokenizer_class = BartTokenizer
model_class = BartForMaskedLM(configuration)
tokenizer = tokenizer_class.from_pretrained('bart-large')
model = model_class.from_pretrained('bart-large')
model.eval()
model.to('cuda')
tokens = tokenizer.encode("Text example to test natural language generation with bart.")
input_ids = torch.tensor([tokens])
input_ids = input_ids.to('cuda')
with torch.no_grad():
last_hidden_states = model(input_ids)[0]
print("Len tokens:", len(tokens))
print("Shape last hidden states:", last_hidden_states.shape)
```
This code raises the following error:
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-24-3bba66ed6aeb> in <module>
3
4 with torch.no_grad():
----> 5 last_hidden_states = model(input_ids)[0]
6
7 print("Len tokens:", len(tokens))
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, decoder_cached_states, lm_labels, **unused)
923 encoder_outputs=encoder_outputs,
924 decoder_attention_mask=decoder_attention_mask,
--> 925 decoder_cached_states=decoder_cached_states,
926 )
927 lm_logits = self.lm_head.forward(outputs[0])
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, encoder_outputs, decoder_attention_mask, decoder_cached_states)
842 attention_mask,
843 decoder_attn_mask,
--> 844 decoder_cached_states=decoder_cached_states,
845 )
846 # Attention and hidden_states will be [] or None if they aren't needed
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, encoder_hidden_states, encoder_padding_mask, combined_mask, decoder_cached_states, **unused)
497 decoder_cached_states=layer_state,
498 attention_mask=combined_mask,
--> 499 need_attn_weights=self.output_attentions,
500 )
501 if self.output_past:
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, x, encoder_hidden_states, encoder_attn_mask, decoder_cached_states, attention_mask, need_attn_weights)
370 decoder_cached_states=decoder_cached_states,
371 need_weights=need_attn_weights,
--> 372 attn_mask=attention_mask,
373 )
374 x = F.dropout(x, p=self.dropout, training=self.training)
~/projects/p130-nlg/.conda/envs/p130-nlg-env-kgpubart/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, query, key, value, key_padding_mask, decoder_cached_states, need_weights, static_kv, attn_mask)
627
628 if attn_mask is not None:
--> 629 attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attn_mask
630 attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
631
RuntimeError: expected device cuda:0 but got device cpu
```
But I have tested this code with other models (like GPT-2), and it works.
## Expected behavior
I would expect to get the tensor shapes printed, as happens with the other models on which I have tested this code.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0. GPU: Yes
- Tensorflow version (GPU?): No used
- Using GPU in script?: Trying to
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3079/timeline | completed | null | null |
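The fix for this issue landed in the library itself (the attention mask was created on the wrong device), but when debugging an `expected device cuda:0 but got device cpu` error, a quick device audit like the sketch below can rule out a user-side mismatch; the helper name is illustrative:

```python
import torch

def audit_devices(model: torch.nn.Module, *tensors: torch.Tensor) -> set:
    # Collect every device used by parameters and inputs; a healthy
    # single-GPU setup should yield exactly one entry, e.g. {cuda:0}.
    devices = {p.device for p in model.parameters()}
    devices.update(t.device for t in tensors)
    return devices

# e.g. print(audit_devices(model, input_ids)) just before the forward pass
```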
https://api.github.com/repos/huggingface/transformers/issues/3078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3078/comments | https://api.github.com/repos/huggingface/transformers/issues/3078/events | https://github.com/huggingface/transformers/pull/3078 | 573,996,519 | MDExOlB1bGxSZXF1ZXN0MzgyNDAyMjgw | 3,078 | correct greedy generation when doing beam search | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=h1) Report\n> Merging [#3078](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c0135194ebc5de4b1bbef98b31f9c457a0bf746a?src=pr&el=desc) will **decrease** coverage by `0.99%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3078 +/- ##\n=======================================\n- Coverage 77.6% 76.6% -1% \n=======================================\n Files 98 98 \n Lines 16221 16230 +9 \n=======================================\n- Hits 12588 12433 -155 \n- Misses 3633 3797 +164\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.02% <100%> (+0.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3078/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=footer). Last update [c013519...5b9164a](https://codecov.io/gh/huggingface/transformers/pull/3078?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The self-hosted runner tests that fail are:\r\n\r\nFAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_feature_extraction\r\nFAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_fill_mask\r\nFAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_ner\r\nFAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_sentiment_analysis\r\n\r\nand are all related to memory exhaustion (`Resource exhausted: OOM when allocating tensor ... `). \r\nThe tests are not related to the PR. Not sure what to do @julien-c \r\n",
"LGTM for merging",
"Discussed with @thomwolf as well and also agreed that generate() function is not too complex and good as it is now. I will take a closer look at the issue with beam search decoding when `do_sample=True` (PR #2317 ) in a separate PR. Good to merge for me! "
] | 1,583 | 1,583 | 1,583 | MEMBER | null | This PR changes the behavior of greedy beam search generation, as discussed and requested in #2415.
Also two assertion statements are added:
1. It is not allowed to generate multiple sequences from the same input_ids when doing greedy generation (`num_return_sequences > 1`, `do_sample=False`, `num_beams` == 1 => `AssertionError`) because it would always lead to the same output sequence for all `num_return_sequences`.
2. When doing greedy beam search generation, it is not allowed to generate more sequences than the number of beams used (`num_return_sequences` <= `num_beams`, `do_sample=False` => `AssertionError`) because this is either not possible or would also lead to duplicate output sequences.
Discussion:
- [x] the generation function becomes bigger and bigger handling more and more exceptions - might need a big refactoring at some point which modularizes it for more flexibility and more readability. Also when thinking about including the encoder-decoder models in the model.generate() function.
Also maybe the `no_beam_search_generation` fn could simply be handled by `beam_search_generation(num_beams=1)` ?
- [x] beam search when do_sample=True still does not really work (see PR #2317 ). We should discuss how exactly it should be implemented.
@thomwolf, @LysandreJik, @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3078/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3078/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3078",
"html_url": "https://github.com/huggingface/transformers/pull/3078",
"diff_url": "https://github.com/huggingface/transformers/pull/3078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3078.patch",
"merged_at": 1583168410000
} |
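The two assertions described in the PR body translate into usage roughly as follows; this is a sketch using GPT-2, and the prompt and parameter values are illustrative:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer.encode("The weather today is", return_tensors="pt")

# Greedy beam search: num_return_sequences must not exceed num_beams.
outputs = model.generate(
    input_ids, num_beams=3, num_return_sequences=3, do_sample=False
)

# Both of these would now raise an AssertionError:
# model.generate(input_ids, num_beams=1, num_return_sequences=2, do_sample=False)
# model.generate(input_ids, num_beams=2, num_return_sequences=3, do_sample=False)
```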
https://api.github.com/repos/huggingface/transformers/issues/3077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3077/comments | https://api.github.com/repos/huggingface/transformers/issues/3077/events | https://github.com/huggingface/transformers/pull/3077 | 573,609,932 | MDExOlB1bGxSZXF1ZXN0MzgyMDkwNTYz | 3,077 | fix n_gpu count when no_cuda flag is activated | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=h1) Report\n> Merging [#3077](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/298bed16a841fae3608d334441ccae4d9043611f?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3077 +/- ##\n==========================================\n+ Coverage 77.18% 77.19% +<.01% \n==========================================\n Files 98 98 \n Lines 16063 16063 \n==========================================\n+ Hits 12399 12400 +1 \n+ Misses 3664 3663 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3077/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.38% <0%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=footer). Last update [298bed1...69041c1](https://codecov.io/gh/huggingface/transformers/pull/3077?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,583 | 1,583 | 1,583 | MEMBER | null | As I understand it, `no_cuda` should prevent the use of the GPU in the `run_*` example scripts. However, `n_gpu` doesn't take it into account and counts the number of GPUs available on the machine. It then sends the model to the GPUs while the tensors are still on the CPU... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3077/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3077",
"html_url": "https://github.com/huggingface/transformers/pull/3077",
"diff_url": "https://github.com/huggingface/transformers/pull/3077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3077.patch",
"merged_at": 1583162422000
} |
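The device setup this patch implies looks roughly like the sketch below in the example scripts; `args` is the usual argparse namespace, and the snippet shows the pattern rather than the exact diff:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--no_cuda", action="store_true")
args = parser.parse_args()

# Respect --no_cuda both for the device and for the GPU count.
args.device = torch.device(
    "cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu"
)
args.n_gpu = 0 if args.no_cuda else torch.cuda.device_count()
```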
https://api.github.com/repos/huggingface/transformers/issues/3076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3076/comments | https://api.github.com/repos/huggingface/transformers/issues/3076/events | https://github.com/huggingface/transformers/issues/3076 | 573,593,160 | MDU6SXNzdWU1NzM1OTMxNjA= | 3,076 | XLNet multiple sentence modeling token type ids | {
"login": "jyhanna",
"id": 16064051,
"node_id": "MDQ6VXNlcjE2MDY0MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/16064051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jyhanna",
"html_url": "https://github.com/jyhanna",
"followers_url": "https://api.github.com/users/jyhanna/followers",
"following_url": "https://api.github.com/users/jyhanna/following{/other_user}",
"gists_url": "https://api.github.com/users/jyhanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jyhanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jyhanna/subscriptions",
"organizations_url": "https://api.github.com/users/jyhanna/orgs",
"repos_url": "https://api.github.com/users/jyhanna/repos",
"events_url": "https://api.github.com/users/jyhanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/jyhanna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,588 | 1,588 | NONE | null | XLNet was designed to handle multiple segment modeling (i.e. > 2 sentences) by using relative segment encodings. For sentence-level classification tasks with arbitrary sentence counts, what is the structure of the segment (token type) ids? I’ve found from the documentation that 2-sequence classification is supported by using `create_token_type_ids` but what about more than two segments?
If more than two segments are supported, would it be correct to infer (from examples in the documentation) that a 3-sentence input with `<cls>` after each sentence (`<sep>` token) should have the form:
0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 3, 4, 4, 4, 4, 5.
where 1, 3, 5 are classification token segment ids? Would the transformers XLNet implementation support segment ids of this form? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3076/timeline | completed | null | null |
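Because XLNet uses relative segment encodings, only whether two positions share a segment id matters, not the absolute values, so a multi-sentence input can in principle be encoded as in the following sketch. The segment layout is an assumption extrapolated from the documented two-sequence case, not an official API:

```python
import torch
from transformers import XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetModel.from_pretrained("xlnet-base-cased")

sentences = ["First sentence.", "Second one.", "And a third."]
input_ids, token_type_ids = [], []
for seg_id, sentence in enumerate(sentences):
    ids = tokenizer.encode(sentence, add_special_tokens=False)
    ids.append(tokenizer.sep_token_id)  # <sep> closes each segment
    input_ids.extend(ids)
    token_type_ids.extend([seg_id] * len(ids))

# XLNet puts <cls> at the end, in a segment of its own.
input_ids.append(tokenizer.cls_token_id)
token_type_ids.append(len(sentences))

outputs = model(
    torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids])
)
```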
https://api.github.com/repos/huggingface/transformers/issues/3075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3075/comments | https://api.github.com/repos/huggingface/transformers/issues/3075/events | https://github.com/huggingface/transformers/issues/3075 | 573,583,858 | MDU6SXNzdWU1NzM1ODM4NTg= | 3,075 | Training TFBertForSequenceClassification with custom X and Y data | {
"login": "rahulg963",
"id": 8866218,
"node_id": "MDQ6VXNlcjg4NjYyMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8866218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahulg963",
"html_url": "https://github.com/rahulg963",
"followers_url": "https://api.github.com/users/rahulg963/followers",
"following_url": "https://api.github.com/users/rahulg963/following{/other_user}",
"gists_url": "https://api.github.com/users/rahulg963/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahulg963/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahulg963/subscriptions",
"organizations_url": "https://api.github.com/users/rahulg963/orgs",
"repos_url": "https://api.github.com/users/rahulg963/repos",
"events_url": "https://api.github.com/users/rahulg963/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahulg963/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Maybe this is a little late but you could take a look in both `examples/run_tf_glue.py` and [this function](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L31-L168) from`src/transformers/data/processors/glue.py` and write a custom training script based from those.",
"To make things a little more concrete, I've written and annotated [an end-to-end example](https://gist.github.com/papapabi/124c6ac406e6bbd1f28df732e953ac6d) of how to fine-tune a `bert-base-cased` model from your `DataFrame`'s spec. Do comment if it helps you out!",
"@papapabi Thank you for your inputs. I will check this out.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,591 | 1,591 | NONE | null | I am working on a TextClassification problem, for which I am trying to traing my model on TFBertForSequenceClassification given in huggingface-transformers library.
I followed the example given on their github page, I am able to run the sample code with given sample data using tensorflow_datasets.load('glue/mrpc'). However, I am unable to find an example on how to load my own custom data and pass it in model.fit(train_dataset, epochs=2, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7).
How can I define my own X, do tokenization of my X and prepare train_dataset with my X and Y. Where X represents my input text and Y represents classification category of given X.
Sample Training dataframe :
```
text category_index
0 Assorted Print Joggers - Pack of 2 ,/ Gray Pri... 0
1 "Buckle" ( Matt ) for 35 mm Width Belt 0
2 (Gagam 07) Barcelona Football Jersey Home 17 1... 2
3 (Pack of 3 Pair) Flocklined Reusable Rubber Ha... 1
4 (Summer special Offer)Firststep new born baby ... 0
```
```
Question already asked on SO :
https://stackoverflow.com/questions/60463829/training-tfbertforsequenceclassification-with-custom-x-and-y-data
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3075/timeline | completed | null | null |
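A minimal sketch of fine-tuning on a DataFrame like the one shown in the issue; the column names are taken from that sample, attention masks are omitted for brevity, and the hyperparameters are illustrative:

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

# df is the DataFrame shown above, with "text" and "category_index" columns.
encodings = tokenizer.batch_encode_plus(
    df["text"].tolist(), max_length=128, pad_to_max_length=True
)
dataset = (
    tf.data.Dataset.from_tensor_slices(
        (encodings["input_ids"], df["category_index"].tolist())
    )
    .shuffle(1000)
    .batch(32)
)

model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=2)
```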
https://api.github.com/repos/huggingface/transformers/issues/3074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3074/comments | https://api.github.com/repos/huggingface/transformers/issues/3074/events | https://github.com/huggingface/transformers/issues/3074 | 573,558,797 | MDU6SXNzdWU1NzM1NTg3OTc= | 3,074 | Only enable some labels in BERT fine-tuned NER | {
"login": "jmamou",
"id": 19263306,
"node_id": "MDQ6VXNlcjE5MjYzMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmamou",
"html_url": "https://github.com/jmamou",
"followers_url": "https://api.github.com/users/jmamou/followers",
"following_url": "https://api.github.com/users/jmamou/following{/other_user}",
"gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmamou/subscriptions",
"organizations_url": "https://api.github.com/users/jmamou/orgs",
"repos_url": "https://api.github.com/users/jmamou/repos",
"events_url": "https://api.github.com/users/jmamou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmamou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"I'm not sure I understand your question. The O tag is here in order to identify a token which is not an entity in the IOB1 tagging scheme. If you don't have such a tag, every token will have to be classified as an entity, which does not make sense?\r\n\r\nIf you would like to define a custom tagging scheme and train a model to predict on that tagging scheme, you would have to create a dataset for that and train your model on that dataset.",
"Thanks @LysandreJik \r\nLet me provide some additional details. Given a sentence, I use some external resources to find what are the candidates for tagging. For the candidates, I need to classify between 2 different labels (binary classification). That's the reason why I wrote that I am not interested in predicting the O tag, since I use external resources for it. \r\nI have data for train/test. ",
"Hey @jmamou , we might be able to help. Mind sending me an email at clement [at] huggingface [dot] co?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,590 | 1,590 | CONTRIBUTOR | null | # ❓ Questions & Help
Only enable some labels in BERT fine-tuned NER
## Details
I would like to restrict the predictions of a fine-tuned BERT NER model to only some labels.
For example, I already know which tokens are the entities, and I would like to train a model that classifies those entities - but I am not interested in training on or predicting the O tag.
What would be the best way to do it?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3074/timeline | completed | null | null |
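One way to act on the advice above without retraining is to keep the full tag set at training time but restrict predictions to a whitelist of label indices at inference; a minimal sketch follows, where the function name and the `allowed` indices are illustrative:

```python
import torch

def predict_restricted(logits: torch.Tensor, allowed: list) -> torch.Tensor:
    # logits: (batch, seq_len, num_labels) from a token-classification model.
    # Push every non-whitelisted label to -inf so argmax can only pick
    # one of the allowed indices (e.g. the two entity labels).
    mask = torch.full_like(logits, float("-inf"))
    mask[..., allowed] = 0.0
    return (logits + mask).argmax(dim=-1)
```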
https://api.github.com/repos/huggingface/transformers/issues/3073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3073/comments | https://api.github.com/repos/huggingface/transformers/issues/3073/events | https://github.com/huggingface/transformers/issues/3073 | 573,558,193 | MDU6SXNzdWU1NzM1NTgxOTM= | 3,073 | Finetuned BERT model does not seem to predict right labels/work properly? | {
"login": "PieterDujardin",
"id": 48496355,
"node_id": "MDQ6VXNlcjQ4NDk2MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/48496355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PieterDujardin",
"html_url": "https://github.com/PieterDujardin",
"followers_url": "https://api.github.com/users/PieterDujardin/followers",
"following_url": "https://api.github.com/users/PieterDujardin/following{/other_user}",
"gists_url": "https://api.github.com/users/PieterDujardin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PieterDujardin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PieterDujardin/subscriptions",
"organizations_url": "https://api.github.com/users/PieterDujardin/orgs",
"repos_url": "https://api.github.com/users/PieterDujardin/repos",
"events_url": "https://api.github.com/users/PieterDujardin/events{/privacy}",
"received_events_url": "https://api.github.com/users/PieterDujardin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please post your code using [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks). Don't post screenshots.",
"FYI: \r\nphoto of **format input data**; https://i.stack.imgur.com/t472b.png\r\nphoto of **tag2name** ; https://i.stack.imgur.com/RO7dp.png\r\n\r\n\r\nAssuming the data goes in in the right format, here is the model initialization and evaluation loop.\r\n```\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased-finetuned-conll03-english\")\r\nmodel = BertForTokenClassification.from_pretrained('bert-base-cased-finetuned-conll03-english')\r\n\r\n\r\n#eval LOOP\r\n\r\nmodel.eval();\r\neval_loss, eval_accuracy = 0, 0\r\nnb_eval_steps, nb_eval_examples = 0, 0\r\ny_true = []\r\ny_pred = []\r\nvaldataset = []\r\n\r\nprint(\"***** Running evaluation *****\")\r\nprint(\" Num examples ={}\".format(len(val_inputs)))\r\nprint(\" Batch size = {}\".format(batch_num))\r\n\r\nfor step, batch in enumerate(valid_dataloader):\r\n batch = tuple(t.to(device) for t in batch) # set every example of batch to device\r\n input_ids, input_mask, label_ids = batch #same as we did in training loop but only 1 epoch now\r\n \r\n \r\n with torch.no_grad(): #means we don't care about gradients and updating tensors \r\n outputs = model(input_ids, token_type_ids=None,\r\n attention_mask=input_mask)\r\n # For eval mode, the first result of outputs is logits (for training mode this was loss)\r\n logits = outputs[0] # In context of deep learning the logits layer means the layer that feeds in to softmax (or other such normalization).\r\n \r\n # Get NER predict result\r\n logits = torch.argmax(F.log_softmax(logits,dim=2),dim=2)#feed logits into softmax and take the prediction that is maximal \r\n logits = logits.detach().cpu().numpy()\r\n \r\n if step==1:\r\n print(logits[0][0:15])\r\n print(logits[1][0:15])\r\n print(logits[3][0:15])\r\n print(logits[4][0:15])\r\n print(logits[5][0:15])\r\n\r\n print(label_ids[0][0:15])\r\n print(label_ids[1][0:15])\r\n print(label_ids[2][0:15])\r\n print(label_ids[3][0:15])\r\n \r\n \r\n # Get NER true result\r\n label_ids = label_ids.to('cpu').numpy()\r\n \r\n \r\n # Only predict the real word, mark=0, will not calculate\r\n input_mask = input_mask.to('cpu').numpy()\r\n \r\n # Compare the valuable predict result\r\n for i,mask in enumerate(input_mask):\r\n # Real one\r\n temp_1 = []\r\n # Predicted one\r\n temp_2 = []\r\n \r\n valtemp = []\r\n\r\n for j, m in enumerate(mask):\r\n # Mark=0, meaning its a pad word, dont compare\r\n if m:\r\n if tag2name[label_ids[i][j]] != \"X\" and tag2name[label_ids[i][j]] != \"[CLS]\" and tag2name[label_ids[i][j]] != \"[SEP]\" : # Exclude the X label\r\n temp_1.append(tag2name[label_ids[i][j]])\r\n temp_2.append(tag2name[logits[i][j]])\r\n \r\n if tag2name[label_ids[i][j]] != \"[CLS]\" and tag2name[label_ids[i][j]] != \"[SEP]\" :\r\n valtemp.append(input_ids[i][j].item())\r\n \r\n else:\r\n break\r\n \r\n #here are the two lists that contain true and pred labels. \r\n y_true.append(temp_1)\r\n y_pred.append(temp_2)\r\n\r\n \r\n valdataset.append(valtemp)\r\n\r\ntokenized_text_con = [tokenizer.decode(val) for val in valdataset]\r\n\r\n\r\n\r\n ```\r\nprint output: https://i.stack.imgur.com/qS62L.png\r\n\r\n",
"Hi! From my experience using the community-contributed `dbmdz/bert-large-cased-finetuned-conll03-english` (which is the same checkpoint as) `bert-large-cased-finetuned-conll03-english`, using the `bert-base-cased` tokenizer instead of the tokenizer loaded from that checkpoint works better.\r\n\r\nYou can see an example of this in the [usage](https://huggingface.co/transformers/usage.html#named-entity-recognition), let me know if it helps.\r\n\r\nI suspect the difference between the tokenizers is due to a lowercasing of all inputs. I'm looking into it now.\r\n\r\nPS: the file `bert-large-cased-finetuned-conll03-english` is deprecated in favor of the aforementionned `dbmdz/bert-large-cased-finetuned-conll03-english` as they are duplicates. @julien-c is currently deleting it from the S3, please use the `dbmdz` file/folder.",
"Also cc'ing @stefan-it for information :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,583 | 1,588 | 1,588 | NONE | null | # ❓ Questions & Help
I am trying out a finetuned BERT model for token classification (--> https://huggingface.co/bert-base-cased-finetuned-conll03-english), but when I observe the model output (i.e. the logits after applying the softmax) and compare it with the true label_ids, they are totally uncorrelated (see pictures).
https://i.stack.imgur.com/gVyMn.png
https://i.stack.imgur.com/qS62L.png
## Details
I assume that the finetuned model (bert-base-cased-finetuned-conll03-english) is correctly pretrained, but I don't understand why its predictions are off. I think one issue is that the pretrained model uses a different labelling scheme than the one I made myself during data prep (so that the tag2name dict is different), but I don't know how I can find out which label-index map the model uses for its predictions. Even so, the model does not consistently make the same mistakes; it outputs things quite randomly.
Any idea what the issue could be?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3073/timeline | completed | null | null |
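The label-index map the poster is looking for is stored on the checkpoint's config, so a mismatch with a hand-built `tag2name` dict can be checked directly; a minimal sketch using the non-deprecated checkpoint mentioned in the comments:

```python
from transformers import BertForTokenClassification

model = BertForTokenClassification.from_pretrained(
    "dbmdz/bert-large-cased-finetuned-conll03-english"
)
# Maps each output index to its tag, e.g. {0: 'O', 1: 'B-MISC', ...};
# predictions should be decoded through this instead of a custom dict.
print(model.config.id2label)
```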
https://api.github.com/repos/huggingface/transformers/issues/3072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3072/comments | https://api.github.com/repos/huggingface/transformers/issues/3072/events | https://github.com/huggingface/transformers/issues/3072 | 573,532,646 | MDU6SXNzdWU1NzM1MzI2NDY= | 3,072 | Chinese BERT model can be used represented by words instead of character | {
"login": "WenTingTseng",
"id": 32416416,
"node_id": "MDQ6VXNlcjMyNDE2NDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/32416416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenTingTseng",
"html_url": "https://github.com/WenTingTseng",
"followers_url": "https://api.github.com/users/WenTingTseng/followers",
"following_url": "https://api.github.com/users/WenTingTseng/following{/other_user}",
"gists_url": "https://api.github.com/users/WenTingTseng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenTingTseng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenTingTseng/subscriptions",
"organizations_url": "https://api.github.com/users/WenTingTseng/orgs",
"repos_url": "https://api.github.com/users/WenTingTseng/repos",
"events_url": "https://api.github.com/users/WenTingTseng/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenTingTseng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am not sure which Chinese BERT you are referring to, but the original multilingual BERT has trained Chinese on the character-level. From their [README](https://github.com/google-research/bert/blob/master/multilingual.md):\r\n\r\n> Because Chinese (and Japanese Kanji and Korean Hanja) does not have whitespace characters, we add spaces around every character in the CJK Unicode range before applying WordPiece. This means that Chinese is effectively character-tokenized. Note that the CJK Unicode block only includes Chinese-origin characters and does not include Hangul Korean or Katakana/Hiragana Japanese, which are tokenized with whitespace+WordPiece like all other languages.",
"OK, Thanks for your help \r\nBut I mean I want Chinese BERT for word-level not for character-level.",
"Yes, I understand that, and as I said the default multilingual BERT does **not** support that. You'll have to find another implementation, perhaps https://arxiv.org/abs/1906.08101"
] | 1,583 | 1,583 | 1,583 | NONE | null | # ❓ Questions & Help
I want to ask whether the Chinese BERT model can be used with word-level representations instead of character-level ones.
When I visualize BERT attention for Chinese, I can only see attention from character to character.
I want to see attention from words to words. Can I change this?
Thanks a lot for your help | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3072/timeline | completed | null | null |
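The character-level behaviour described in the comments above is easy to verify with the dedicated Chinese checkpoint; a quick sketch:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
# CJK characters are split one per token, so attention can only be
# inspected character-to-character with this vocabulary.
print(tokenizer.tokenize("我喜欢自然语言处理"))
```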
https://api.github.com/repos/huggingface/transformers/issues/3071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3071/comments | https://api.github.com/repos/huggingface/transformers/issues/3071/events | https://github.com/huggingface/transformers/issues/3071 | 573,520,345 | MDU6SXNzdWU1NzM1MjAzNDU= | 3,071 | Predict the next word in sentence context from the list of possible words in Russian | {
"login": "svk-man",
"id": 17108861,
"node_id": "MDQ6VXNlcjE3MTA4ODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17108861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svk-man",
"html_url": "https://github.com/svk-man",
"followers_url": "https://api.github.com/users/svk-man/followers",
"following_url": "https://api.github.com/users/svk-man/following{/other_user}",
"gists_url": "https://api.github.com/users/svk-man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svk-man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svk-man/subscriptions",
"organizations_url": "https://api.github.com/users/svk-man/orgs",
"repos_url": "https://api.github.com/users/svk-man/repos",
"events_url": "https://api.github.com/users/svk-man/events{/privacy}",
"received_events_url": "https://api.github.com/users/svk-man/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Example\r\nMy sentence: Мой кот ... (My cat ...)\r\n3 word is ест (eat)\r\nList of possible words: ест (eat), поглощает (absorb), глотает (swallow), кушают (eats) etc.\r\nI need to determine the probabilities of each word from the given list in the context of the phrase and make the most correct sentence.\r\nOutput: Мой кот ест (My cat eat).",
"This is a very general question. Please use [Stack Overflow](https://stackoverflow.com/) for this.\r\n\r\nNote that you'll need to use a model that is pretrained on Russian."
] | 1,583 | 1,583 | 1,583 | NONE | null | Hello from Russia. I have the task of predicting the next word from a list of possible words in Russian. How can I do this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3071/timeline | completed | null | null |
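One way to approach the task asked about in issue 3071 above is to score each candidate with a masked LM; a sketch, where the checkpoint name (`DeepPavlov/rubert-base-cased`) and the example phrase are assumptions for illustration:
```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

# Assumed Russian checkpoint; any masked LM pretrained on Russian would do.
tokenizer = BertTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
model = BertForMaskedLM.from_pretrained("DeepPavlov/rubert-base-cased")
model.eval()

phrase = f"Мой кот {tokenizer.mask_token}"  # "My cat [MASK]", example from the comments
candidates = ["ест", "поглощает", "глотает"]

input_ids = torch.tensor([tokenizer.encode(phrase, add_special_tokens=True)])
mask_index = (input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()

with torch.no_grad():
    logits = model(input_ids)[0]  # (1, seq_len, vocab_size)
probs = logits[0, mask_index].softmax(dim=-1)

for word in candidates:
    word_ids = tokenizer.encode(word, add_special_tokens=False)
    # This simple scoring only handles single-token candidates; a multi-token
    # candidate would need one [MASK] per sub-token.
    if len(word_ids) == 1:
        print(word, probs[word_ids[0]].item())
```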
https://api.github.com/repos/huggingface/transformers/issues/3070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3070/comments | https://api.github.com/repos/huggingface/transformers/issues/3070/events | https://github.com/huggingface/transformers/issues/3070 | 573,488,746 | MDU6SXNzdWU1NzM0ODg3NDY= | 3,070 | load_tf_weights_in_bert : 'BertModel' object has no attribute 'bias' | {
"login": "luyi6111",
"id": 51707126,
"node_id": "MDQ6VXNlcjUxNzA3MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/51707126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luyi6111",
"html_url": "https://github.com/luyi6111",
"followers_url": "https://api.github.com/users/luyi6111/followers",
"following_url": "https://api.github.com/users/luyi6111/following{/other_user}",
"gists_url": "https://api.github.com/users/luyi6111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luyi6111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luyi6111/subscriptions",
"organizations_url": "https://api.github.com/users/luyi6111/orgs",
"repos_url": "https://api.github.com/users/luyi6111/repos",
"events_url": "https://api.github.com/users/luyi6111/events{/privacy}",
"received_events_url": "https://api.github.com/users/luyi6111/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"change \r\n`BertModel.form_pretrained`\r\nto\r\n`BertForPreTraining.from_pretrained`\r\nit seems to work",
"Glad you could get it to work! Indeed, `BertForPreTraining` should be used to convert from official BERT models."
] | 1,583 | 1,583 | 1,583 | NONE | null | ```
AttributeError Traceback (most recent call last)
<ipython-input-14-0d66155b396d> in <module>
12
13 K.clear_session()
---> 14 model = create_model()
15 optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5)
16 model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['acc', 'mae'])
<ipython-input-13-4f429fe61419> in create_model()
5 config = BertConfig.from_pretrained(BERT_PATH + 'bert_config.json')
6 config.output_hidden_states = False
----> 7 bert_model = BertModel.from_pretrained(BERT_PATH + 'bert_model.ckpt.index', from_tf=True, config=config)
8 # if config.output_hidden_states = True, obtain hidden states via bert_model(...)[-1]
9 embedding = bert_model(input_id, attention_mask=input_mask, token_type_ids=input_atn)[0]
~/anaconda3/envs/fasterai/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
482 if resolved_archive_file.endswith(".index"):
483 # Load from a TensorFlow 1.X checkpoint - provided by original authors
--> 484 model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
485 else:
486 # Load from our TensorFlow 2.0 checkpoints
~/anaconda3/envs/fasterai/lib/python3.7/site-packages/transformers/modeling_bert.py in load_tf_weights_in_bert(model, config, tf_checkpoint_path)
103 pointer = getattr(pointer, "weight")
104 elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
--> 105 pointer = getattr(pointer, "bias")
106 elif scope_names[0] == "output_weights":
107 pointer = getattr(pointer, "weight")
~/anaconda3/envs/fasterai/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
574 return modules[name]
575 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 576 type(self).__name__, name))
577
578 def __setattr__(self, name, value):
AttributeError: 'BertModel' object has no attribute 'bias'
```
Related libs & versions:
transformers 2.5.1
tensorflow 2.1.0
Environment:
NVIDIA-SMI 440.59 Driver Version: 440.59 CUDA Version: 10.2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3070/timeline | completed | null | null |
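A sketch of the fix agreed on in the comments of issue 3070 above: the official TF 1.x checkpoint contains the pre-training heads (the `cls/...` scope), so it should be loaded into `BertForPreTraining`; the `BERT_PATH` value is a placeholder carried over from the traceback.
```python
from transformers import BertConfig, BertForPreTraining

BERT_PATH = "/path/to/uncased_L-12_H-768_A-12/"  # placeholder, as in the issue

config = BertConfig.from_pretrained(BERT_PATH + "bert_config.json")
# BertForPreTraining has the MLM/NSP heads that the TF checkpoint stores under
# the "cls" scope, so load_tf_weights_in_bert can resolve every variable.
model = BertForPreTraining.from_pretrained(
    BERT_PATH + "bert_model.ckpt.index", from_tf=True, config=config
)
bert_encoder = model.bert  # the bare encoder, if that is all you need
```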
https://api.github.com/repos/huggingface/transformers/issues/3069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3069/comments | https://api.github.com/repos/huggingface/transformers/issues/3069/events | https://github.com/huggingface/transformers/issues/3069 | 573,477,072 | MDU6SXNzdWU1NzM0NzcwNzI= | 3,069 | No Causal Attention Masking in GPT-2 LM Finetuning Script | {
"login": "alvinchangw",
"id": 18749046,
"node_id": "MDQ6VXNlcjE4NzQ5MDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/18749046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvinchangw",
"html_url": "https://github.com/alvinchangw",
"followers_url": "https://api.github.com/users/alvinchangw/followers",
"following_url": "https://api.github.com/users/alvinchangw/following{/other_user}",
"gists_url": "https://api.github.com/users/alvinchangw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvinchangw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvinchangw/subscriptions",
"organizations_url": "https://api.github.com/users/alvinchangw/orgs",
"repos_url": "https://api.github.com/users/alvinchangw/repos",
"events_url": "https://api.github.com/users/alvinchangw/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvinchangw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @alvinchangw, \r\n\r\nGPT2 always uses causal masking no matter what kind of attention_mask you give it. \r\nIt's easy to see when you print out the computed attentions for each layer (by setting `output_attentions=True`) => see for this also #2975. \r\n\r\nIn the code this is done in this line:\r\nhttps://github.com/huggingface/transformers/blob/298bed16a841fae3608d334441ccae4d9043611f/src/transformers/modeling_gpt2.py#L146\r\n\r\nI admit it is very cryptic and probably should have better naming. Essentially what happens here is the following:\r\n`self.bias` is defined as a lower triangular mask (see torch function [here](https://pytorch.org/docs/stable/torch.html?highlight=tril#torch.tril) ). according to the sequence length (params `nd` and `ns`), we derive `b`. `b` is then a lower triangular mask of shape sequence length x sequence length. Using this mask, we substract 10^4 from all values in `w` which should be masked, which sets their attention to 0.\r\n",
"Hi @patrickvonplaten,\r\n\r\nThank you for pointing this out and for the detailed explanation! "
] | 1,583 | 1,583 | 1,583 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
running run_language_modeling.py on WikiText-2 dataset
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. attention_mask is None at each forward step of the GPT-2 model (GPT2LMHeadModel)
## Expected behavior
attention_mask should reflect causal attention masking for the LM objective when fine-tuning GPT-2, so that the output at step t attends only to inputs at previous time steps (1, ..., t-1). Otherwise, if the output at step t could rely on the input at the same time step, GPT-2 could simply learn to copy the input to the output in order to optimize the LM objective.
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-4.15.0-76-generic-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3069/timeline | completed | null | null |
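The built-in causal masking explained in the comments of issue 3069 above can be checked directly; a small sketch (model size and input are illustrative):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

input_ids = torch.tensor([tokenizer.encode("hello world !")])
with torch.no_grad():
    outputs = model(input_ids)
attentions = outputs[-1]  # tuple with one (batch, heads, seq, seq) tensor per layer

# Every attention matrix is lower triangular: position t never attends to
# positions > t, even though attention_mask was left as None.
upper = torch.triu(attentions[0][0, 0], diagonal=1)
print(torch.allclose(upper, torch.zeros_like(upper)))  # expected: True
```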
https://api.github.com/repos/huggingface/transformers/issues/3068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3068/comments | https://api.github.com/repos/huggingface/transformers/issues/3068/events | https://github.com/huggingface/transformers/issues/3068 | 573,046,241 | MDU6SXNzdWU1NzMwNDYyNDE= | 3,068 | Problem with using pretrained BertTokenizer for Korean | {
"login": "alirezamshi-zz",
"id": 43453239,
"node_id": "MDQ6VXNlcjQzNDUzMjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/43453239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alirezamshi-zz",
"html_url": "https://github.com/alirezamshi-zz",
"followers_url": "https://api.github.com/users/alirezamshi-zz/followers",
"following_url": "https://api.github.com/users/alirezamshi-zz/following{/other_user}",
"gists_url": "https://api.github.com/users/alirezamshi-zz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alirezamshi-zz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alirezamshi-zz/subscriptions",
"organizations_url": "https://api.github.com/users/alirezamshi-zz/orgs",
"repos_url": "https://api.github.com/users/alirezamshi-zz/repos",
"events_url": "https://api.github.com/users/alirezamshi-zz/events{/privacy}",
"received_events_url": "https://api.github.com/users/alirezamshi-zz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Intuitively, I would say that this might not be a bug but a limitation of the size of the vocabulary. In the cased version, all data that the tokenizer is 'trained' on is cased, meaning that there are tokens in the vocabulary that only differs by case (e.g. `Be` and `be`). As a consequence, this may cause the vocabulary to be a lot bigger, leaving less room for other tokens. \r\n\r\nThat is just my intuition and not based on any research.\r\n\r\nYou can try out KoBERT, though: https://github.com/SKTBrain/KoBERT",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | I have a corpus that contains Korean sentences. Here is the output of BertTokenizer for a token:
`tok = BertTokenizer.from_pretrained('bert-base-multilingual-uncased')`
`tok.convert_tokens_to_ids(tok.tokenize('잘해놨습니다'))`
`[44628, 14840, 97071, 97089, 97104, 13212, 79427]`
`tok = BertTokenizer.from_pretrained('bert-base-multilingual-cased')`
`tok.convert_tokens_to_ids(tok.tokenize('잘해놨습니다'))`
`[100]`
transformers version: 2.4.1
Overall, the 'cased' tokenizer produces more 'unknown' tokens than the 'uncased' one for Korean.
Is this a bug? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3068/timeline | completed | null | null |
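The comparison reported in issue 3068 above can be reproduced with a few lines; a sketch:
```python
from transformers import BertTokenizer

text = "잘해놨습니다"  # the example string from the issue

for name in ("bert-base-multilingual-uncased", "bert-base-multilingual-cased"):
    tok = BertTokenizer.from_pretrained(name)
    pieces = tok.tokenize(text)
    print(name, pieces, tok.convert_tokens_to_ids(pieces))

# The uncased pipeline's normalization decomposes Hangul syllables into jamo
# pieces that exist in its vocabulary, while the cased model keeps the whole
# syllables and falls back to a single [UNK] here.
```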
https://api.github.com/repos/huggingface/transformers/issues/3067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3067/comments | https://api.github.com/repos/huggingface/transformers/issues/3067/events | https://github.com/huggingface/transformers/issues/3067 | 573,020,947 | MDU6SXNzdWU1NzMwMjA5NDc= | 3,067 | No speed difference when doing prediction between BERT and ALBERT | {
"login": "ayrtondenner",
"id": 13112588,
"node_id": "MDQ6VXNlcjEzMTEyNTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/13112588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayrtondenner",
"html_url": "https://github.com/ayrtondenner",
"followers_url": "https://api.github.com/users/ayrtondenner/followers",
"following_url": "https://api.github.com/users/ayrtondenner/following{/other_user}",
"gists_url": "https://api.github.com/users/ayrtondenner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayrtondenner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayrtondenner/subscriptions",
"organizations_url": "https://api.github.com/users/ayrtondenner/orgs",
"repos_url": "https://api.github.com/users/ayrtondenner/repos",
"events_url": "https://api.github.com/users/ayrtondenner/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayrtondenner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, there's no reason ALBERT would be faster than BERT. They have the same number of layers, ALBERT just uses repeating layers (its `n` layers are a just a single one) instead of different layers (BERT's `n` layers are `n` different layers).",
"Hi @LysandreJik !\r\nIf ALBERT is a lot smaller than BERT (in terms of parameters), wouldn't it be faster? Taking up less memory, allowing bigger batch sizes, etc. Isn't this \"scalability\" one of the advantages of ALBERT over BERT?\r\nThanks!",
"If training speeds \"proportionality\" is similar to inferencing, then Section 4.3 and Table 2, page 7 of the latest Albert paper, [https://arxiv.org/pdf/1909.11942.pdf](https://arxiv.org/pdf/1909.11942.pdf) compares \"Speedup\" of BERT & ALBERT models. For example, Albert_xxlarge \"speed of data throughput\" is 0.3x of BERT_large the baseline, so only 30% the throughput.",
"> If training speeds \"proportionality\" is similar to inferencing, then Section 4.3 and Table 2, page 7 of the latest Albert paper, https://arxiv.org/pdf/1909.11942.pdf compares \"Speedup\" of BERT & ALBERT models.\r\n\r\nInteresting. Section 4.3 also says:\r\n\r\n>ALBERT models have higher data throughput compared to their corresponding BERT models. If we\r\nuse BERT-large as the baseline, we observe that ALBERT-large is about 1.7 times faster in iterating\r\nthrough the data while ALBERT-xxlarge is about 3 times slower because of the larger structure.\r\n\r\nJudging from this Section and from Table 2, some versions of ALBERT are indeed faster than BERT.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,590 | 1,590 | NONE | null | # ❓ Questions & Help
## Details
I'm comparing two trained models, one using BERT and the other using ALBERT. I'm using the following code to run predictions with both models: it tokenizes a list of phrases, pads them to the length of the longest tokenized phrase, and then runs a prediction:
```
def padding(phrase_list):
"""
Add padding to phrases in phrase list
"""
max_size = 0
for phrase in phrase_list:
max_size = max(max_size, len(tokenizer_phrase.encode(phrase)))
print(f"Max_size: {max_size}")
padded_list = []
for phrase in phrase_list:
phrase_encoded = tokenizer_phrase.encode_plus(phrase, max_length=max_size, pad_to_max_length=True)
padded_list.append(phrase_encoded)
return padded_list
```
```
def predict_batch_outputs(phrase_list):
"""
Receive list of phrases and return model prediction WITHOUT softmax
"""
with torch.no_grad():
phrases_padded = padding(phrase_list)
input_ids = torch.tensor([pad['input_ids'] for pad in phrases_padded])
token_type_ids = torch.tensor([pad['token_type_ids'] for pad in phrases_padded])
attention_mask = torch.tensor([pad['attention_mask'] for pad in phrases_padded])
labels = torch.tensor([[1] for i in range(0, len(input_ids))])
outputs = model_phrase(input_ids.to('cuda'), token_type_ids = token_type_ids.to('cuda'), attention_mask = attention_mask.to('cuda'), labels = labels.to('cuda'))
return outputs[1].tolist()
```
My question is: I'm not seeing any difference in speed between the two models. For instance, I have a script that reads a CSV dataset, breaks every row into a list of phrases using nltk, and then sends those phrases to a model (either BERT or ALBERT) and prints its prediction. Using the same script, with the same methods and the same dataset, and changing only which model is doing the prediction, I get the opposite result from what I expected: BERT can predict 11.89 docs per second, while ALBERT can predict 9.13 docs per second. I've done other tests and they also showed ALBERT being SLOWER than BERT.
Can someone share their experience comparing BERT and ALBERT in terms of speed? Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3067/timeline | completed | null | null |
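Speed claims like those in issue 3067 above are easiest to settle with a direct measurement on identical inputs; a minimal sketch (checkpoints, batch shape, and iteration count are illustrative):
```python
import time

import torch
from transformers import AlbertModel, BertModel

batch = torch.randint(1000, 5000, (8, 128))  # fake token ids, batch of 8

for cls_, name in ((BertModel, "bert-base-uncased"), (AlbertModel, "albert-base-v2")):
    model = cls_.from_pretrained(name)
    model.eval()
    with torch.no_grad():
        model(batch)  # warm-up
        start = time.time()
        for _ in range(10):
            model(batch)
    print(name, (time.time() - start) / 10, "seconds per batch")
```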
https://api.github.com/repos/huggingface/transformers/issues/3066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3066/comments | https://api.github.com/repos/huggingface/transformers/issues/3066/events | https://github.com/huggingface/transformers/issues/3066 | 573,001,235 | MDU6SXNzdWU1NzMwMDEyMzU= | 3,066 | Documentation and code mismatch in BertForMaskedLM forward method | {
"login": "onurgu",
"id": 56893,
"node_id": "MDQ6VXNlcjU2ODkz",
"avatar_url": "https://avatars.githubusercontent.com/u/56893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/onurgu",
"html_url": "https://github.com/onurgu",
"followers_url": "https://api.github.com/users/onurgu/followers",
"following_url": "https://api.github.com/users/onurgu/following{/other_user}",
"gists_url": "https://api.github.com/users/onurgu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/onurgu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/onurgu/subscriptions",
"organizations_url": "https://api.github.com/users/onurgu/orgs",
"repos_url": "https://api.github.com/users/onurgu/repos",
"events_url": "https://api.github.com/users/onurgu/events{/privacy}",
"received_events_url": "https://api.github.com/users/onurgu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,582 | 1,587 | 1,587 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
BertForMaskedLM
Language I am using the model on (English, Chinese ...):
Not important.
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
I am trying to obtain LM probabilities for research purposes.
## To reproduce
Steps to reproduce the behavior:
1. According to the documentation, the code below should work as expected.
However, the masked_lm_loss and ltr_lm_loss values are in fact in the 2nd and 1st positions, respectively. This is apparent if the code is inspected: the lm_labels-related code is executed after the masked_lm_labels-related code, so its loss ends up prepended first.
See https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_bert.py#L1001-L1014
and
https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_bert.py#L1014
The following code is an excerpt from my code.
```python
model = BertForMaskedLM.from_pretrained('bert-base-cased')
with torch.no_grad():
outputs = model(token_ids_in_sentence_tensor,
masked_lm_labels=token_ids_in_sentence_tensor,
lm_labels=token_ids_in_sentence_tensor,
token_type_ids=segments_tensors)
masked_lm_loss = outputs[0]
ltr_lm_loss = outputs[1]
predictions = outputs[2]
```
## Expected behavior
Explained above.
## Environment info
- `transformers` version:
- Platform: Darwin XXX-3.local 19.3.0 Darwin Kernel Version 19.3.0: Thu Jan 9 20:58:23 PST 2020; root:xnu-6153.81.5~1/RELEASE_X86_64 x86_64
- Python version: 3.7.4
- PyTorch version (GPU?): no GPU, 1.3.1
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3066/timeline | completed | null | null |
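Given the actual ordering described in issue 3066 above (the `lm_labels` loss is prepended last, so it ends up first), a corrected, self-contained version of the excerpt would look like this sketch (the token ids are dummies):
```python
import torch
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

token_ids_in_sentence_tensor = torch.tensor([[101, 7592, 2088, 102]])  # dummy ids
segments_tensors = torch.zeros_like(token_ids_in_sentence_tensor)

with torch.no_grad():
    outputs = model(
        token_ids_in_sentence_tensor,
        masked_lm_labels=token_ids_in_sentence_tensor,
        lm_labels=token_ids_in_sentence_tensor,
        token_type_ids=segments_tensors,
    )

# Contrary to the docstring, the left-to-right LM loss comes first:
ltr_lm_loss = outputs[0]
masked_lm_loss = outputs[1]
predictions = outputs[2]
```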
https://api.github.com/repos/huggingface/transformers/issues/3065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3065/comments | https://api.github.com/repos/huggingface/transformers/issues/3065/events | https://github.com/huggingface/transformers/issues/3065 | 572,936,346 | MDU6SXNzdWU1NzI5MzYzNDY= | 3,065 | XLM-RoBERTa can't add new tokens. | {
"login": "Pelellone",
"id": 6177687,
"node_id": "MDQ6VXNlcjYxNzc2ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6177687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pelellone",
"html_url": "https://github.com/Pelellone",
"followers_url": "https://api.github.com/users/Pelellone/followers",
"following_url": "https://api.github.com/users/Pelellone/following{/other_user}",
"gists_url": "https://api.github.com/users/Pelellone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pelellone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pelellone/subscriptions",
"organizations_url": "https://api.github.com/users/Pelellone/orgs",
"repos_url": "https://api.github.com/users/Pelellone/repos",
"events_url": "https://api.github.com/users/Pelellone/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pelellone/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Can you post a minimal verifiable example that we can just copy-and-paste to try? I guess you don't really try to add empty strings as tokens?",
"Hi @BramVanroy, no ofc, probably i've pasted the wrong snippet of code. \r\nIf you try to expand the vocabulary of the tokenizer with the following code: \r\n\r\n```\r\ntokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-large', do_lower_case=True)\r\ntokenizer.add_tokens(['[A1]', '[/A1]', '[A2]', '[/A2]']) \r\n```\r\n\r\nthe size of tokenizer remains the same. \r\nObviously if you try the same identical code with Bert or DistilBert (the ones i'm testing) all works fine. \r\nAll seems connected with the second condition of if-else block: \r\n\r\n```\r\nif (\r\n token != self.unk_token\r\n and self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token)\r\n and token not in to_add_tokens\r\n):\r\n```\r\nThis condition seems returns the wrong id for self.unk_token.\r\nRemoving this condition let me add the new tokens and extend the tokenizers.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
Model I am using (Bert, XLNet ...): XLM-RoBERTa
## To reproduce
Steps to reproduce the behavior:
1. tokenizer = XLMRobertaTokenizer.from_pretrained('xlm-roberta-large', do_lower_case=True)
2. tokenizer.add_tokens(['<a1>', '</a1>', '<a2>', '</a2>'])
3. tokenizer.convert_tokens_to_ids('<a1>')
It always responds with 1 as the id for the new tokens.
The problem seems connected to:
if (
    token != self.unk_token
    and self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token)
    and token not in to_add_tokens
):
in the second condition.
When it calls self.convert_tokens_to_ids(token), it always returns 1 instead of 3.
3 is the id for unk_token.
1 is the id for pad. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3065/timeline | completed | null | null |
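A sketch reproducing the report in issue 3065 above; on an affected version the vocabulary does not grow and the new token does not get a fresh id:
```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-large")

size_before = len(tokenizer)
num_added = tokenizer.add_tokens(["<a1>", "</a1>", "<a2>", "</a2>"])
print(size_before, num_added, len(tokenizer))

# On an affected version num_added is 0 and this prints the pad id again:
print(tokenizer.convert_tokens_to_ids("<a1>"))
```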
https://api.github.com/repos/huggingface/transformers/issues/3064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3064/comments | https://api.github.com/repos/huggingface/transformers/issues/3064/events | https://github.com/huggingface/transformers/issues/3064 | 572,911,248 | MDU6SXNzdWU1NzI5MTEyNDg= | 3,064 | Add LM capabilities to TFTransfoXLLMHead | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,582 | 1,584 | 1,584 | MEMBER | null | # 🚀 Feature request
It is currently not possible to generate language with the TFTransfoXLLMHeadModel because
the `lm_head` is not implemented (see https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_tf_transfo_xl.py#L745)
Doing:
```
import tensorflow as tf
from transformers import TFTransfoXLLMHeadModel
model = TFTransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model(tf.convert_to_tensor([[1, 5, 8]]))
```
currently leads to an error.
## Motivation
PyTorch has a working TransfoXLLMHeadModel; TF should as well.
## Your contribution
@LysandreJik , @thomwolf I could implement it if needed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3064/timeline | completed | null | null |
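For comparison with the failing TF call quoted in issue 3064 above, the PyTorch head does run; a minimal sketch:
```python
import torch
from transformers import TransfoXLLMHeadModel

model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
model.eval()

with torch.no_grad():
    outputs = model(torch.tensor([[1, 5, 8]]))
print(outputs[0].shape)  # (1, 3, vocab_size) scores from the adaptive softmax
```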
https://api.github.com/repos/huggingface/transformers/issues/3063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3063/comments | https://api.github.com/repos/huggingface/transformers/issues/3063/events | https://github.com/huggingface/transformers/pull/3063 | 572,795,543 | MDExOlB1bGxSZXF1ZXN0MzgxNDIzNDAy | 3,063 | Add generate() functionality to TF 2.0 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=h1) Report\n> Merging [#3063](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eec5ec807135ae61fa2266f3c7ad947cc207abda?src=pr&el=desc) will **increase** coverage by `0.22%`.\n> The diff coverage is `90.1%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3063 +/- ##\n==========================================\n+ Coverage 77.59% 77.82% +0.22% \n==========================================\n Files 98 98 \n Lines 16250 16422 +172 \n==========================================\n+ Hits 12610 12780 +170 \n- Misses 3640 3642 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.22% <100%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `96.14% <100%> (+1.47%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.07% <100%> (-0.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `88.98% <100%> (+0.6%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `99.57% <100%> (+1.74%)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `75.63% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `91.13% <20%> (-1%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.56% <89.85%> (-1.22%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/3063/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=footer). Last update [eec5ec8...b996a97](https://codecov.io/gh/huggingface/transformers/pull/3063?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> This is cool, good job on the TensorFlow implementation! Regarding the use of the `past` and `mems`, I don't think they're actually implemented in the models though?\r\n\r\nI think the tf models also have the `past` and `mems` functionality implemented (when looking into the tf modeling files the `past` and `mems` variables are used in the code.",
"Good to merge for me! \r\nChanged all `<tf_tensor>.shape` to `shape_list(<tf_tensor>)` to make function compatible with both eagermode and no eagermode after discussion with @thomwolf and @jplu .\r\n\r\nWill add additional Integration tests (so far only `tf_gpt2`) for other LMHead models and add beam_search once completed in torch version."
] | 1,582 | 1,583 | 1,583 | MEMBER | null | I added the `_generate_no_beam_search` functionality for TF 2.0. It works for the following models:
'gpt2', 'openai', 'xlnet', 'xlm', 'ctrl'. It does not work only for 'transfo-xl', because its
lm_head is not implemented yet in TF 2.0 (I added an issue here: #3064).
I also checked whether the PyTorch 'distilgpt2' and the TF 2.0 'distilgpt2' generate the same output (and added an integration test for this). I will add other integration tests in a future PR.
Setting only certain indices to values is much less straightforward in TF 2.0 than in PyTorch, which is why I added more code for the TF 2.0 version.
Would be very happy about some feedback @LysandreJik @thomwolf
EDIT: There was also a bug in TFCTRL where tf.concat used the PyTorch argument name 'dim' instead of 'axis'.
## TODO
- [x] Discuss how to change test_torch_tf_conversion.py @LysandreJik @thomwolf
## Future PR:
- [ ] Adapt all LMHead Integration Tests to greedy generate to be able to compare PT & TF
- [ ] Add generate() to TFTransfoXL (see Issue #3064 ) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3063",
"html_url": "https://github.com/huggingface/transformers/pull/3063",
"diff_url": "https://github.com/huggingface/transformers/pull/3063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3063.patch",
"merged_at": 1583246535000
} |
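A usage sketch of the functionality added by PR 3063 above (the prompt and generation parameters are illustrative, and the exact `generate()` signature may differ between versions):
```python
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")

input_ids = tf.constant([tokenizer.encode("The weather today is")])
output_ids = model.generate(input_ids, max_length=30, do_sample=False)  # greedy
print(tokenizer.decode(output_ids[0].numpy().tolist()))
```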
https://api.github.com/repos/huggingface/transformers/issues/3062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3062/comments | https://api.github.com/repos/huggingface/transformers/issues/3062/events | https://github.com/huggingface/transformers/issues/3062 | 572,619,732 | MDU6SXNzdWU1NzI2MTk3MzI= | 3,062 | Should weight distribution change more when fine-tuning transformers-based classifier? | {
"login": "marrrcin",
"id": 6958772,
"node_id": "MDQ6VXNlcjY5NTg3NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6958772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marrrcin",
"html_url": "https://github.com/marrrcin",
"followers_url": "https://api.github.com/users/marrrcin/followers",
"following_url": "https://api.github.com/users/marrrcin/following{/other_user}",
"gists_url": "https://api.github.com/users/marrrcin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marrrcin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrrcin/subscriptions",
"organizations_url": "https://api.github.com/users/marrrcin/orgs",
"repos_url": "https://api.github.com/users/marrrcin/repos",
"events_url": "https://api.github.com/users/marrrcin/events{/privacy}",
"received_events_url": "https://api.github.com/users/marrrcin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I am curious: what kind of behaviour do you see when you freeze the whole base model, and only train the classifier? Also, you may want avoid the use of the `cls` variable name because it is a reserved keyword for the class. In general, the classifier is trained quite quickly (often in two or epochs or less), so you are right in saying that in relative terms the weights of the layers that you add matter very little compared to the rest of the model.",
"When I freeze the base model, the overall learning pace is drastically slower - 5 epochs is only enough to reach fraction of the quality when base model not frozen.\r\n\r\nHistograms when base model is frozen:\r\n<img width=\"929\" alt=\"Screenshot 2020-03-02 at 13 56 24\" src=\"https://user-images.githubusercontent.com/6958772/75678405-dc09df00-5c8d-11ea-995e-853fbaa15b71.png\">\r\n",
"@BramVanroy do you have any further insights on this?",
"Something else that might cause this is that your layers are stuck in some local optimum, or that they are nullified by ReLU. What happens if you use, e.g., gelu instead of relu? But that can't explain everything (because the first linear layer also barely changes its weights. So I'm not sure. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,590 | 1,590 | CONTRIBUTOR | null | ## ❓Should weight distribution change more when fine-tuning transformers-based classifier?
This question was posted on DataScience stack exchange:
[https://datascience.stackexchange.com/questions/68641/should-weight-distribution-change-more-when-fine-tuning-transformers-based-class](https://datascience.stackexchange.com/questions/68641/should-weight-distribution-change-more-when-fine-tuning-transformers-based-class)
## Details
I'm using a pre-trained DistilBERT model with a custom classification head, which is almost the same as in the [reference implementation](https://github.com/huggingface/transformers/blob/fb560dcb075497f61880010245192e7e1fdbeca4/src/transformers/modeling_distilbert.py#L579):
```python
import torch.nn as nn

from transformers import DistilBertModel


class PretrainedTransformer(nn.Module):
    def __init__(self, target_classes):
        super().__init__()
base_model_output_shape=768
self.base_model = DistilBertModel.from_pretrained("distilbert-base-uncased")
self.classifier = nn.Sequential(
nn.Linear(base_model_output_shape, out_features=base_model_output_shape),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(base_model_output_shape, out_features=target_classes),
)
for layer in self.classifier:
if isinstance(layer, nn.Linear):
layer.weight.data.normal_(mean=0.0, std=0.02)
if layer.bias is not None:
layer.bias.data.zero_()
def forward(self, input_, y=None):
X, length, attention_mask = input_
base_output = self.base_model(X, attention_mask=attention_mask)[0]
base_model_last_layer = base_output[:, 0]
cls = self.classifier(base_model_last_layer)
return cls
```
During training, I use a linear LR warmup schedule with max LR = 5e-5 and cross-entropy loss. In general, the model is able to learn on my dataset and reach high precision/recall metrics.
**My question is:**
Should the weight distributions and biases in the classification layers change more during training? It seems like the weights almost do not change at all, even when I do not initialize them as in the code (to mean=0.0 and std=0.02). Is this an indication that something is wrong with my model, or is it just because the layers I've added are redundant and the model does not learn anything new?
Take a look at the image of the weights from TensorBoard:
<img width="1021" alt="Screenshot 2020-02-24 at 20 56 36" src="https://user-images.githubusercontent.com/6958772/75526050-4bbf6600-5a11-11ea-8d62-37407f968e06.png"> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3062/timeline | completed | null | null |
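The freezing experiment suggested in the comments of issue 3062 above takes only a few lines; a sketch against the `PretrainedTransformer` class defined in the issue body:
```python
import torch

model = PretrainedTransformer(target_classes=2)  # class from the issue body

# Freeze DistilBERT so that only the classification head is trained and its
# weight histograms have to move.
for param in model.base_model.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=5e-5
)
```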
https://api.github.com/repos/huggingface/transformers/issues/3061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3061/comments | https://api.github.com/repos/huggingface/transformers/issues/3061/events | https://github.com/huggingface/transformers/issues/3061 | 572,519,279 | MDU6SXNzdWU1NzI1MTkyNzk= | 3,061 | Bad word list for text generation | {
"login": "Peter-Devine",
"id": 49399312,
"node_id": "MDQ6VXNlcjQ5Mzk5MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/49399312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Peter-Devine",
"html_url": "https://github.com/Peter-Devine",
"followers_url": "https://api.github.com/users/Peter-Devine/followers",
"following_url": "https://api.github.com/users/Peter-Devine/following{/other_user}",
"gists_url": "https://api.github.com/users/Peter-Devine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Peter-Devine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Peter-Devine/subscriptions",
"organizations_url": "https://api.github.com/users/Peter-Devine/orgs",
"repos_url": "https://api.github.com/users/Peter-Devine/repos",
"events_url": "https://api.github.com/users/Peter-Devine/events{/privacy}",
"received_events_url": "https://api.github.com/users/Peter-Devine/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052129,
"node_id": "MDU6TGFiZWwxODM0MDUyMTI5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/High-Level%20feature",
"name": "High-Level feature",
"color": "f7c9a3",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I like this idea. Perhaps it can be added as an example, or even as an argument to the generation script."
] | 1,582 | 1,585 | 1,585 | NONE | null | # 🚀 Feature request
Add a list of words that you do not want the model to generate, for whatever reason.
## Motivation
When creating a text generation model, especially if you will serve that model publicly, it is desirable to have assurances that a model is physically incapable of outputting certain tokens.
Such tokens would include profanity of all kinds, prejudicial language, or even just out-of-domain vocabulary.
## Your contribution
I am unsure as to your coding style guide, but I will detail how I would implement this below.
Firstly, you cannot simply set the offending word's values to `-Inf` as done here https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L1114 and here https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L1131
when calculating `top_k_top_p_filtering` as it would not capture multi-token words.
As an example, I will use the common Irish exclamation "feck" (see [here ](https://en.wikipedia.org/wiki/Feck) for an off-topic discussion of the word).
'feck' is tokenized into `['fe', 'ck']` in the `gpt2-xl` tokenizer. It would be unreasonable to simply set the logit values of any mention of the token `'fe'` or `'ck'` to -Inf as that would stop the creation of purely harmless words such as 'feasible' (`['fe', 'as', 'ible']`) and 'peck' (`['pe', 'ck']`). Therefore, I suggest a list of tokens that are not allowed at any one instance, and are updated, depending on the previous tokens.
This functionality would mean that a model would be allowed to generate the `'fe'` token, but would not be able to follow it with a `'ck'` token straight after.
I would operationalize this by having:
* a `bad_words_list` which the user has passed into the generate function
* a `possible_bad_words` dict which describes all the possible bad words that you can make with the current token history and how far along they are
* a `prohibited_tokens_list` which would prevent the model from choosing those tokens at a given time.
E.g.
Let's say that we pass the following list to the .generate function:
`bad_words_list = ["feck", "arse", "booze", "up yours"]`
which would then be tokenized and converted into ids in the .generate function before generation starts:
`bad_words_list = [tokenizer.encode(x, add_special_tokens=False) for x in bad_words_list]`
Then, the following function would be run just before the model outputs are obtained from both beam and no beam generators
```
def update_possible_bad_words(previous_token, bad_words_list, possible_bad_words):
    # Start with an empty list of prohibited tokens
    prohibited_tokens_list = []
    unmatched_bad_words = []
    # We cycle through the provided list of bad words. If a bad word has only one token, we add it to our prohibited list.
    # Else, if a bad word starts with our previous token, we add it to possible_bad_words.
    # Note: lists are unhashable, so tuples of token ids are used as dict keys.
    for bad_word in bad_words_list:
        if len(bad_word) == 1:
            prohibited_tokens_list.append(bad_word[0])
        elif previous_token == bad_word[0] and tuple(bad_word) not in possible_bad_words:
            possible_bad_words[tuple(bad_word)] = 0
    # We cycle through all our possible bad words
    for bad_word in possible_bad_words:
        bad_word_index = possible_bad_words[bad_word]
        # If the previous token matches the token currently indicated by the stored bad word index, we increase this index by one.
        if previous_token == bad_word[bad_word_index]:
            new_bad_word_index = bad_word_index + 1
            # If only the last token of the bad word is left, we must stop it from being generated next.
            # We add it to the prohibited list and mark the word so it can be retired below.
            if len(bad_word) == new_bad_word_index + 1:
                prohibited_tokens_list.append(bad_word[-1])
                unmatched_bad_words.append(bad_word)
            # We set the dict value to be this new incremented index
            possible_bad_words[bad_word] = new_bad_word_index
        else:
            # The pattern of generated tokens diverged from this bad word, so we mark it as unmatched
            unmatched_bad_words.append(bad_word)
    # We retire all unmatched words, resetting (instead of deleting) those whose first token
    # equals the previous token, since they could be starting over.
    for unmatched_bad_word in unmatched_bad_words:
        if previous_token == unmatched_bad_word[0]:
            possible_bad_words[unmatched_bad_word] = 0
        else:
            del possible_bad_words[unmatched_bad_word]
    return prohibited_tokens_list
```
and I would call this function like so:
`prohibited_tokens_list = update_possible_bad_words(input_ids[-1], bad_words_list, possible_bad_words)`
here
https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L827
and here
https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L934
where `possible_bad_words` has been initialized as an empty dict directly before the generation loop.
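For illustration, here is a minimal sketch of that wiring as a greedy, batch-size-1 decoding loop. The function name and arguments are illustrative (this is not the actual `modeling_utils.py` loop), it folds the empty-dict initialization into the function, and it applies the ban directly rather than inside `top_k_top_p_filtering`:
```
import torch

def generate_greedy(model, input_ids, bad_words_list, max_length):
    possible_bad_words = {}  # state carried across decoding steps
    while input_ids.shape[1] < max_length:
        logits = model(input_ids)[0][:, -1, :]
        prohibited_tokens_list = update_possible_bad_words(
            int(input_ids[0, -1]), bad_words_list, possible_bad_words
        )
        # ban the prohibited tokens for this step only
        logits[:, prohibited_tokens_list] = -float("inf")
        next_token = torch.argmax(logits, dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids
```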
Finally, we would pass `prohibited_tokens_list` to `top_k_top_p_filtering`
https://github.com/huggingface/transformers/blob/908fa43b543cf52a3238129624f502240725a6a6/src/transformers/modeling_utils.py#L1100
and would simply perform `logits[:, prohibited_tokens_list] = filter_value` (indexing along the vocabulary dimension, since the logits are batched) before or after the top-p and top-k filtering in that function. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3061/reactions",
"total_count": 6,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3061/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3060/comments | https://api.github.com/repos/huggingface/transformers/issues/3060/events | https://github.com/huggingface/transformers/issues/3060 | 572,490,874 | MDU6SXNzdWU1NzI0OTA4NzQ= | 3,060 | How to init a subclass of BertForTokenClassification | {
"login": "haorannlp",
"id": 52477842,
"node_id": "MDQ6VXNlcjUyNDc3ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/52477842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haorannlp",
"html_url": "https://github.com/haorannlp",
"followers_url": "https://api.github.com/users/haorannlp/followers",
"following_url": "https://api.github.com/users/haorannlp/following{/other_user}",
"gists_url": "https://api.github.com/users/haorannlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haorannlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haorannlp/subscriptions",
"organizations_url": "https://api.github.com/users/haorannlp/orgs",
"repos_url": "https://api.github.com/users/haorannlp/repos",
"events_url": "https://api.github.com/users/haorannlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/haorannlp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # ❓ Questions & Help
## Details
I want to build a subclass of `BertForTokenClassification` and also want to use the weights of a pretrained model:
```
class SeqLabelClassifier(BertForTokenClassification):
    def __init__(self, pretrained_model_name, config):
        super(SeqLabelClassifier, self).__init__(config)
        self.lstm = nn.LSTM(...)

config = BertConfig()
model = SeqLabelClassifier(pretrained_model_name, config)
model = model.from_pretrained(args.pretrained_model_name, config=config)
```
But I get this error:
> File "/home/haoran/anaconda3/envs/nsd/lib/python3.8/site-packages/transformers/modeling_utils.py", line 466, in from_pretrained
> model = cls(config, *model_args, **model_kwargs)
> TypeError: __init__() missing 1 required positional argument: 'config'
How to correctly pass the args?
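One hedged way around this, as a minimal sketch (assuming, per the traceback above, that `from_pretrained` instantiates the class as `cls(config, *model_args, **model_kwargs)`): keep `config` as the only required constructor argument and let `from_pretrained` build the config and load the weights. The model name, LSTM sizes, and `num_labels` below are placeholders:
```
import torch.nn as nn
from transformers import BertForTokenClassification

class SeqLabelClassifier(BertForTokenClassification):
    def __init__(self, config):
        super().__init__(config)
        # extra head on top of the pretrained encoder
        self.lstm = nn.LSTM(config.hidden_size, config.hidden_size, batch_first=True)

# from_pretrained builds the config internally, calls cls(config),
# then loads the pretrained weights into the matching submodules
model = SeqLabelClassifier.from_pretrained("bert-base-cased", num_labels=9)
```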
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3060/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3059/comments | https://api.github.com/repos/huggingface/transformers/issues/3059/events | https://github.com/huggingface/transformers/pull/3059 | 572,480,785 | MDExOlB1bGxSZXF1ZXN0MzgxMTcwNDg2 | 3,059 | Bart-CNN | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@271344f`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `86.95%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3059 +/- ##\n=========================================\n Coverage ? 76.59% \n=========================================\n Files ? 98 \n Lines ? 16219 \n Branches ? 0 \n=========================================\n Hits ? 12423 \n Misses ? 3796 \n Partials ? 0\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.08% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3059/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.05% <86.95%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=footer). Last update [271344f...6e13b56](https://codecov.io/gh/huggingface/transformers/pull/3059?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> ## Differences with PretrainedModel.generate\r\n> these are all bc thats the way fairseq does it!\r\n> \r\n> * BeamHypotheses(early_stopping=True)\r\nI think we have that option as well\r\n> * assumptions about various token_ids being present\r\n> * force decoder to start with EOS, then predict BOS\r\nThat's weird, no?\r\n> * decoder only considers the most recently generated token bc everything else is cached.\r\n> * prevent predictions of various special tokens at inopportune moments (all the -inf stuff)\r\n> * force eos if you hit max length\r\nWe had this in our code before as well - I deleted it because I think unfinished sentences (sentences that were finished because they hit `max_length`) should not be ended with an EOS.\r\n> * max_length is about how many tokens you want to generate. Doesn't matter how many you have.\r\nThis makes sense since encoder-decoder models always start from 0 `input_ids` for the decoder model and only have `encoder_input_ids` where as the standard \"only-decoder\" models (GPT2) have `decoder_input_ids` and append their output to it\r\n\r\n"
] | 1,582 | 1,583 | 1,583 | CONTRIBUTOR | null | ## Sources
- Copy-pastes code from `generate` but shares very little with it, in an effort to simplify; there is no big abstraction yet.
- Also copy-pastes some code from fairseq.
- encoder_outputs and previous decoder attentions are cached.
## Differences with PretrainedModel.generate
These are all here because that's the way fairseq does it!
- BeamHypotheses(early_stopping=True)
- assumptions about various token_ids being present
- force decoder to start with EOS, then predict BOS
- decoder only considers the most recently generated token because everything else is cached.
- prevent predictions of various special tokens at inopportune moments (all the -inf stuff)
- force eos if you hit max length
- max_length is about how many tokens you want to generate. Doesn't matter how many you have.
- min_len parameter to prevent short summaries
- no_ngram_repetition parameter (set to 3 in Bart-CNN) to prevent repetition (a minimal sketch of this kind of blocking follows this list).
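For context on that last item, here is a minimal sketch (not the PR's actual code) of how fairseq-style no-repeat n-gram blocking can be computed for a single hypothesis, where `prev_tokens` is the list of token ids generated so far:
```
def banned_ngram_tokens(prev_tokens, n=3):
    # Tokens that would complete an n-gram already present in prev_tokens,
    # i.e. generating any of them next would repeat that n-gram.
    if len(prev_tokens) < n:
        return []
    seen = {}
    for i in range(len(prev_tokens) - n + 1):
        prefix = tuple(prev_tokens[i:i + n - 1])
        seen.setdefault(prefix, set()).add(prev_tokens[i + n - 1])
    current_prefix = tuple(prev_tokens[-(n - 1):])
    return sorted(seen.get(current_prefix, set()))
```
During beam search, these banned ids would get their logits set to -inf for that hypothesis before the next token is picked.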
## TODO
- [ ] docstrings
- [ ] Mystery: results are identical to fairseq 98.6% of the time, 1.4% of the time they differ by a few words.
- [ ] run ROUGE metrics, compare runtime to fairseq.
- [ ] Resist pressure to make big seq2seq abstraction before there are more callers
- [ ] Deeper dive on the MaskedLM.tie_weights hack; what is the right way to do it?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3059/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3059",
"html_url": "https://github.com/huggingface/transformers/pull/3059",
"diff_url": "https://github.com/huggingface/transformers/pull/3059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3059.patch",
"merged_at": 1583163354000
} |
https://api.github.com/repos/huggingface/transformers/issues/3058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3058/comments | https://api.github.com/repos/huggingface/transformers/issues/3058/events | https://github.com/huggingface/transformers/issues/3058 | 572,460,345 | MDU6SXNzdWU1NzI0NjAzNDU= | 3,058 | Fast tokenizers calculate wrong offsets when special characters are present | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"@mfuntowicz, would it make sense for me to integrate our tokenizer tests into your code, so you can see these things immediately? I'd be happy to do so.",
"Looks like this is a duplicate of #2917.",
"On second reading, this is not the same issue as #2917, though they may be related.",
"Roberta has another related issue:\r\n```\r\n>>> import transformers\r\n>>> t_fast = transformers.AutoTokenizer.from_pretrained(\"roberta-base\", use_fast=True, add_special_tokens=False)\r\n>>> sentence = \"I went to the zoo yesterday, but they had only one animal.\"\r\n>>> tokenized = t_fast.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)\r\n>>> for start, end in (t for t in tokenized['offset_mapping'] if t is not None):\r\n... print(repr(sentence[start:end]))\r\n'I'\r\n' went'\r\n' to'\r\n' the'\r\n' zoo'\r\n' yesterday'\r\n','\r\n' but'\r\n' they'\r\n' had'\r\n' only'\r\n' one'\r\n' animal'\r\n'.'\r\n```\r\n\r\nThere are two problems here. `add_special_tokens` is being ignored (#2919), but also, it adds those extra spaces at the front of the words.",
"Hi @dirkgr, \r\n\r\nThanks for your report.\r\n\r\nRegarding the integration of your tests, it definitively a good idea, if you can put @LysandreJik and myself as reviewers of the PR, we'll have a look 👍.\r\n\r\nRegarding `add_special_tokens`, the behaviour on Rust side is quite different as it's a parameter that needs to be provided a construction time, whereas Python allows at tokenisation time. We should make it clearer in the doc.\r\n\r\n```python\r\n>>> t = transformers.BertTokenizerFast.from_pretrained('bert-base-cased')\r\n>>> tokenized = t.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)\r\n>>> tokenized['input_ids']\r\n[101, 138, 117, 22607, 103, 4522, 20734, 2101, 5650, 119, 102]\r\n\r\n>>> t = transformers.BertTokenizerFast.from_pretrained('bert-base-cased', add_special_tokens=False)\r\n>>> tokenized = t.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)\r\n>>> tokenized['input_ids']\r\n>>> [138, 117, 22607, 103, 4522, 20734, 2101, 5650, 119]\r\n```\r\n\r\nFor Roberta we're aware of this extra space being included, cc'ing @n1t0 for keeping track of this.",
"Ok some more context, `GPT2TokenizerFast` has also the same behaviour, so I would extrapolate this occurs more generally on BPE model. \r\n\r\nAdding `<ModelFast>.from_pretrained(..., add_prefix_space=True)` doesn't append the space before but after the token:\r\n\r\n```python\r\n'I '\r\n'went '\r\n'to '\r\n'the '\r\n'zoo '\r\n'yesterday,'\r\n' '\r\n'but '\r\n'they '\r\n'had '\r\n'only '\r\n'one '\r\n'animal.'\r\n```",
"Hi @dirkgr!\r\n\r\nSo there are multiple different things here:\r\n - Your first example is due to the space between `naïve` and `[MASK]` not being trimmed out. We are aware of this behavior and are currently working on a fix.\r\n - The fact that `add_prefix_space=True` moves the space at the end is actually a bug too. This happens because we mess with the offsets while adding the prefix. I am working on a fix for this too.\r\n - Now, the second example you gave is actually expected behavior:\r\n```python\r\nimport transformers\r\nt_fast = transformers.AutoTokenizer.from_pretrained(\"roberta-base\", use_fast=True, add_special_tokens=False)\r\nsentence = \"I went to the zoo yesterday, but they had only one animal.\"\r\ntokenized = t_fast.encode_plus(sentence, return_offsets_mapping=True)\r\noffsets = tokenized['offset_mapping']\r\ntokens = t_fast.tokenize(sentence)\r\nfor token, (start, end) in (t for t in zip(tokens, offsets) if t[1] is not None):\r\n print(repr(token))\r\n print(repr(sentence[start:end]))\r\n print(repr(t_fast.decode(t_fast.convert_tokens_to_ids([token]))))\r\n```\r\nwill give the following output:\r\n```\r\n'I'\r\n'I'\r\n'I'\r\n'Ġwent'\r\n' went'\r\n' went'\r\n'Ġto'\r\n' to'\r\n' to'\r\n'Ġthe'\r\n' the'\r\n' the'\r\n'Ġzoo'\r\n' zoo'\r\n' zoo'\r\n'Ġyesterday'\r\n' yesterday'\r\n' yesterday'\r\n','\r\n','\r\n','\r\n'Ġbut'\r\n' but'\r\n' but'\r\n'Ġthey'\r\n' they'\r\n' they'\r\n'Ġhad'\r\n' had'\r\n' had'\r\n'Ġonly'\r\n' only'\r\n' only'\r\n'Ġone'\r\n' one'\r\n' one'\r\n'Ġanimal'\r\n' animal'\r\n' animal'\r\n'.'\r\n'.'\r\n'.'\r\n```\r\nHere you can see that the space is actually part of these tokens. That's just the way the byte-level BPE used by GPT-2 and Roberta works. The `Ġ` is actually an encoded space. Does it make sense?",
"We hit a similar issue when we add new tokens. The input ids are correct, but offsets after the new token are off.\r\n\r\nExample: https://colab.research.google.com/drive/1e2a3iyLF9NSMWZR50pnDYRhpizcsRV_6\r\n\r\n```\r\ntext = \"A [test] C\"\r\n\r\nprint(tokenizer.encode(text, add_special_tokens=True))\r\n\r\nresults = tokenizer.encode_plus(text, \r\n return_offsets_mapping=True,\r\n pad_to_max_length=False,\r\n max_length=128,\r\n return_overflowing_tokens=False,\r\n add_special_tokens=True)\r\n\r\nfor se in results['offset_mapping']:\r\n if se:\r\n print(text[se[0]:se[1]], se)\r\n```\r\n\r\n```\r\n[101, 1037, 30522, 1039, 102]\r\nA (0, 1)\r\n [test (1, 7)\r\n (8, 9)\r\n```\r\n\r\nPotentially related issue huggingface/tokenizers#143\r\n",
"I think all of the mentioned bugs on this issue should now be fixed on `master`",
"```\r\n>>> import transformers\r\n>>> t_fast = transformers.AutoTokenizer.from_pretrained(\"bert-base-cased\", use_fast=True, add_special_tokens=False)\r\n>>> sentence = \"A, naïve [MASK] AllenNLP sentence.\"\r\n>>> tokenized = t_fast.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)\r\n>>> for start, end in tokenized['offset_mapping']:\r\n... print(repr(sentence[start:end]))\r\n'A'\r\n','\r\n'naïve'\r\n'[MASK]'\r\n'Allen'\r\n'NL'\r\n'P'\r\n'sentence'\r\n'.'\r\n```\r\nand\r\n```\r\n>>> import transformers\r\n>>> t_fast = transformers.AutoTokenizer.from_pretrained(\"roberta-base\", use_fast=True, add_special_tokens=False)\r\n>>> sentence = \"I went to the zoo yesterday, but they had only one animal.\"\r\n>>> tokenized = t_fast.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)\r\n>>> for start, end in (t for t in tokenized['offset_mapping'] if t is not None):\r\n... print(repr(sentence[start:end]))\r\n'I'\r\n'went'\r\n'to'\r\n'the'\r\n'zoo'\r\n'yesterday'\r\n','\r\n'but'\r\n'they'\r\n'had'\r\n'only'\r\n'one'\r\n'animal'\r\n'.' \r\n```\r\nand the last one:\r\n```\r\nfrom transformers import BertTokenizerFast\r\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\r\ntokenizer.add_tokens(['[test]'])\r\ntext = \"A [test] C\"\r\n\r\nprint(tokenizer.encode(text, add_special_tokens=True))\r\n\r\nresults = tokenizer.encode_plus(text, \r\n return_offsets_mapping=True,\r\n pad_to_max_length=False,\r\n max_length=128,\r\n return_overflowing_tokens=False,\r\n add_special_tokens=True)\r\n\r\nfor se in results['offset_mapping']:\r\n if se:\r\n print(text[se[0]:se[1]], se)\r\n```\r\ngives\r\n```\r\n[101, 1037, 30522, 1039, 102]\r\n (0, 0)\r\nA (0, 1)\r\n[test] (2, 8)\r\nC (9, 10)\r\n (0, 0)\r\n```"
] | 1,582 | 1,587 | 1,587 | CONTRIBUTOR | null | Example:
```
>>> import transformers
>>> t_fast = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True, add_special_tokens=False)
>>> sentence = "A, naïve [MASK] AllenNLP sentence."
>>> tokenized = t_fast.encode_plus(sentence, add_special_tokens=False, return_offsets_mapping=True)
>>> for start, end in tokenized['offset_mapping']:
... print(repr(sentence[start:end]))
'A'
','
'naïve'
' [MASK'
' Alle'
'nN'
'L'
' sentenc'
'e'
```
As you can see, after the word "naïve", the offsets go off the rails. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3058/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3057/comments | https://api.github.com/repos/huggingface/transformers/issues/3057/events | https://github.com/huggingface/transformers/issues/3057 | 572,456,125 | MDU6SXNzdWU1NzI0NTYxMjU= | 3,057 | Fast tokenizers don't properly tokenize special tokens | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think it's a duplicate of #2919. Btw, I should have told you when I saw you open the PR, [I opened a bunch of issues related to AllenNLP usage](https://github.com/huggingface/transformers/issues?utf8=%E2%9C%93&q=is%3Aissue+author%3Abryant1410). I think one that was closed it's not completely solved, but not sure.",
"It's not the same issue. This one is about special tokens in text form in the middle of your string. #2919 is about `[CLS]` and `[SEP]` being added to the beginning and end. Also, #2919 has already been fixed.",
"Ohh, :+1: ",
"This is now fixed on `master`:\r\n\r\n```\r\n>>> import transformers\r\n>>> t_fast = transformers.AutoTokenizer.from_pretrained(\"roberta-base\", use_fast=True)\r\n>>> t_fast.encode_plus(\"A <mask> sentence.\")\r\n{'input_ids': [0, 83, 50264, 3645, 4, 2], 'attention_mask': [1, 1, 1, 1, 1, 1]}\r\n>>> t_fast.convert_ids_to_tokens([0, 83, 50264, 3645, 4, 2])\r\n['<s>', 'ĠA', '<mask>', 'Ġsentence', '.', '</s>']\r\n```"
] | 1,582 | 1,587 | 1,587 | CONTRIBUTOR | null | Slow tokenizers:
```
>>> import transformers
>>> t_slow = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=False)
>>> t_slow.encode_plus("A <mask> sentence.")
{'input_ids': [0, 83, 50264, 3645, 4, 2],
'token_type_ids': [0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1]}
>>> t_slow.convert_ids_to_tokens([0, 83, 50264, 3645, 4, 2])
['<s>', 'ĠA', '<mask>', 'Ġsentence', '.', '</s>']
```
Fast tokenizers:
```
>>> import transformers
>>> t_fast = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
>>> t_fast.encode_plus("A <mask> sentence.")
{'input_ids': [0, 250, 1437, 50264, 3645, 4, 2],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
>>> t_fast.convert_ids_to_tokens([0, 250, 1437, 50264, 3645, 4, 2])
['<s>', 'A', 'Ġ', '<mask>', 'Ġsentence', '.', '</s>']
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3057/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3056/comments | https://api.github.com/repos/huggingface/transformers/issues/3056/events | https://github.com/huggingface/transformers/pull/3056 | 572,370,905 | MDExOlB1bGxSZXF1ZXN0MzgxMDgyNjU2 | 3,056 | (Gross WIP) Bart-CNN | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,583 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3056/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3056",
"html_url": "https://github.com/huggingface/transformers/pull/3056",
"diff_url": "https://github.com/huggingface/transformers/pull/3056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3056.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3055/comments | https://api.github.com/repos/huggingface/transformers/issues/3055/events | https://github.com/huggingface/transformers/pull/3055 | 572,345,835 | MDExOlB1bGxSZXF1ZXN0MzgxMDYyMTg3 | 3,055 | Pipeline doc | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmm, why do we want to be able to spawn a `NerPipeline` or a `QuestionAnsweringPipeline` without a model or a tokenizer?\r\n\r\nIsn't this what `pipeline(\"ner\")` or `pipeline(\"question-answering\")` is for?",
"I think the question is rather why can't we spawn a `NerPipeline` or a `QuestionAnsweringPipeline` even though we have defined defaults for them?\r\n\r\nWhat I see in the `pipeline` factory is the ability to simply specify a task and get the appropriate pipeline. I don't see a strong reason for the task-specific pipelines to not be able to load a default, but I may be missing part of the picture.\r\n\r\nIf you think this is unnecessary I can revert the code - we'll just need to make sure that the doc explains what the `pipeline` factory is for and how it handles defaults compared to task-specific pipelines, because I was misled.",
"I'll let @mfuntowicz and @thomwolf chime in, but for me, the subclasses of Pipeline are the actual implementations – preferably well-typed – that do not expose too much magic. \r\n\r\nI don't see the point of having two public APIs that do exactly the same thing.\r\n\r\nE.g., the logic behind [get_defaults](https://github.com/huggingface/transformers/pull/3055/files#diff-1e87b75d7b313550a38be1daecd653f7R485-R504) is a duplication of what's already in `pipeline()`\r\n\r\nIn any cases, the subclasses of Pipeline dont accept a model/tokenizer of type `str`, in contradiction to the doc (it crashes), because the spawning of model/tokenizer with `from_pretrained()` is only inside the `pipeline` wrapper ",
"Ok, this makes sense. I'll revert that later today. Thanks @julien-c"
] | 1,582 | 1,583 | 1,583 | MEMBER | null | This PR adds documentation to the pipelines and slightly modifies their behavior:
- `modelcard` is no longer available when using the `pipeline` factory method. As discussed with @thomwolf and @mfuntowicz, it doesn't serve any purpose for the user.
- All task-specific pipelines can now be instantiated without any model/tokenizer, instead relying on the defaults defined for the `pipeline` factory method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3055/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3055",
"html_url": "https://github.com/huggingface/transformers/pull/3055",
"diff_url": "https://github.com/huggingface/transformers/pull/3055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3055.patch",
"merged_at": 1583176031000
} |
https://api.github.com/repos/huggingface/transformers/issues/3054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3054/comments | https://api.github.com/repos/huggingface/transformers/issues/3054/events | https://github.com/huggingface/transformers/pull/3054 | 572,300,701 | MDExOlB1bGxSZXF1ZXN0MzgxMDI2OTk5 | 3,054 | Bart: Use bool attention_mask for encoder | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I believe the `bool` operator was introduced in PyTorch 1.2.0, won't this break compatibility with PyTorch 1.0.0?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=h1) Report\n> Merging [#3054](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8bcb37bfb80d77e06001f989ad982c9961a69c31?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3054 +/- ##\n==========================================\n- Coverage 77.2% 77.18% -0.02% \n==========================================\n Files 98 98 \n Lines 16063 16063 \n==========================================\n- Hits 12401 12399 -2 \n- Misses 3662 3664 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `84.58% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3054/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <0%> (-0.33%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=footer). Last update [8bcb37b...2116c2b](https://codecov.io/gh/huggingface/transformers/pull/3054?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,583 | 1,583 | CONTRIBUTOR | null | Wasn't breaking because bool(-1e4) is True, but clearer this way. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3054/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3054",
"html_url": "https://github.com/huggingface/transformers/pull/3054",
"diff_url": "https://github.com/huggingface/transformers/pull/3054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3054.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3053/comments | https://api.github.com/repos/huggingface/transformers/issues/3053/events | https://github.com/huggingface/transformers/pull/3053 | 572,230,696 | MDExOlB1bGxSZXF1ZXN0MzgwOTcyNjQy | 3,053 | Changes to NER examples for PLT and TPU | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=h1) Report\n> Merging [#3053](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8bcb37bfb80d77e06001f989ad982c9961a69c31?src=pr&el=desc) will **decrease** coverage by `1.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3053 +/- ##\n==========================================\n- Coverage 77.2% 76.16% -1.04% \n==========================================\n Files 98 98 \n Lines 16063 16063 \n==========================================\n- Hits 12401 12234 -167 \n- Misses 3662 3829 +167\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3053/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <0%> (-0.33%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=footer). Last update [8bcb37b...9dc0964](https://codecov.io/gh/huggingface/transformers/pull/3053?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@LysandreJik This doesn't touch the main code, so should be fine to merge. "
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | * Simplify the NER example to support new features added for us by pytorch-lightning.
* Pull out all rank and backend special casing in the code base.
* Setup data so that TPU examples work using the new code base.
Testing:
* Verify that standard examples of training work.
* Confirm that new TPU code works and runs https://colab.research.google.com/drive/1dBN-wwYUngLYVt985wGs_OKPlK_ANB9D
In my example, I see a 4x speedup over a Colab GPU and multi-GPU K40, but a slowdown on loading and saving the model. So it is certainly a win for larger datasets. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3053/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3053",
"html_url": "https://github.com/huggingface/transformers/pull/3053",
"diff_url": "https://github.com/huggingface/transformers/pull/3053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3053.patch",
"merged_at": 1582839933000
} |
https://api.github.com/repos/huggingface/transformers/issues/3052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3052/comments | https://api.github.com/repos/huggingface/transformers/issues/3052/events | https://github.com/huggingface/transformers/issues/3052 | 572,210,623 | MDU6SXNzdWU1NzIyMTA2MjM= | 3,052 | Is ALBERT the right implement from paper ? | {
"login": "adamluo1995",
"id": 18718520,
"node_id": "MDQ6VXNlcjE4NzE4NTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/18718520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adamluo1995",
"html_url": "https://github.com/adamluo1995",
"followers_url": "https://api.github.com/users/adamluo1995/followers",
"following_url": "https://api.github.com/users/adamluo1995/following{/other_user}",
"gists_url": "https://api.github.com/users/adamluo1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adamluo1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamluo1995/subscriptions",
"organizations_url": "https://api.github.com/users/adamluo1995/orgs",
"repos_url": "https://api.github.com/users/adamluo1995/repos",
"events_url": "https://api.github.com/users/adamluo1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/adamluo1995/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I read the code carefully, and it do share."
] | 1,582 | 1,582 | 1,582 | NONE | null | I read ALBERT paper and code form Google: github.com/google-research/ALBERT. One of the main contribute of ALBert is cross-layer parameter sharing and i can see it on Google's code. But i can't see sharing on this code. Every Layers(or Blocks) are new object and their parameter will be different after fine-tuning.
Is the implementation wrong, or do I have a misunderstanding about parameter sharing? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3052/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3051/comments | https://api.github.com/repos/huggingface/transformers/issues/3051/events | https://github.com/huggingface/transformers/pull/3051 | 572,180,506 | MDExOlB1bGxSZXF1ZXN0MzgwOTMxNjMx | 3,051 | Adding Docker images for transformers + notebooks | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=h1) Report\n> Merging [#3051](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3051 +/- ##\n======================================\n Coverage 76.1% 76.1% \n======================================\n Files 98 98 \n Lines 15946 15946 \n======================================\n Hits 12136 12136 \n Misses 3810 3810\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=footer). Last update [53ce385...ff701d9](https://codecov.io/gh/huggingface/transformers/pull/3051?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"On the Docker files, do we really need conda? (I try to stay away from conda as much as possible)",
"Well, conda brings Intel MKL with just `conda install mkl, mkl-devel` which improves PyTorch and TF by a significant factor. \r\n\r\nDepending on what level of performances we want to provide: \r\n\r\n- I totally remove MKL and install some Open BLAS/LAPACK libraries\r\n- I'll build MKL in the images and include into PATH",
"MKL is also available on PyPi. Have a looks-y [here](https://software.intel.com/en-us/articles/installing-the-intel-distribution-for-python-and-intel-performance-libraries-with-pip-and) to check whether everything you need is there. This might also be of interest: https://software.intel.com/en-us/distribution-for-python/choose-download/linux\r\n"
] | 1,582 | 1,583 | 1,583 | MEMBER | null | Docker images are as follow:
- transformers-cpu (PyTorch + TF)
- transformers-gpu (PyTorch + TF)
- transformers-pytorch-cpu
- transformers-pytorch-gpu
- transformers-tensorflow-cpu
- transformers-tensorflow-gpu
Images are tagged according to the version of the library they bring and always use the latest version of DL frameworks.
Notebooks introduce:
- How to use tokenizers
- Overall transformers overview
- How to use pipelines
Currently the notebooks are added to the repo; let's discuss internally whether that's a good idea. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3051/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3051/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3051",
"html_url": "https://github.com/huggingface/transformers/pull/3051",
"diff_url": "https://github.com/huggingface/transformers/pull/3051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3051.patch",
"merged_at": 1583340357000
} |
https://api.github.com/repos/huggingface/transformers/issues/3050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3050/comments | https://api.github.com/repos/huggingface/transformers/issues/3050/events | https://github.com/huggingface/transformers/issues/3050 | 572,135,710 | MDU6SXNzdWU1NzIxMzU3MTA= | 3,050 | Should be able to turn off logging | {
"login": "shamoons",
"id": 1019677,
"node_id": "MDQ6VXNlcjEwMTk2Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1019677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamoons",
"html_url": "https://github.com/shamoons",
"followers_url": "https://api.github.com/users/shamoons/followers",
"following_url": "https://api.github.com/users/shamoons/following{/other_user}",
"gists_url": "https://api.github.com/users/shamoons/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamoons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamoons/subscriptions",
"organizations_url": "https://api.github.com/users/shamoons/orgs",
"repos_url": "https://api.github.com/users/shamoons/repos",
"events_url": "https://api.github.com/users/shamoons/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamoons/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Any progress on this? Has anyone found a way to disable the logging this?\r\n\r\n\r\nThe issue appears to be tqdm. A work-around is to disable it before importing transformers:\r\n```\r\nimport tqdm\r\n\r\ndef nop(it, *a, **k):\r\n return it\r\ntqdm.tqdm = nop\r\nimport transformers\r\n```",
"I agree that it's a valid requirement, we'll look into it",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Wish this would stay open.",
"Logger could be really annoying when it comes to applications. Should have some way to turn it off.",
"Here you go:\r\n\r\n```\r\n# To control logging level for various modules used in the application:\r\nimport logging\r\nimport re\r\ndef set_global_logging_level(level=logging.ERROR, prefices=[\"\"]):\r\n \"\"\"\r\n Override logging levels of different modules based on their name as a prefix.\r\n It needs to be invoked after the modules have been loaded so that their loggers have been initialized.\r\n\r\n Args:\r\n - level: desired level. e.g. logging.INFO. Optional. Default is logging.ERROR\r\n - prefices: list of one or more str prefices to match (e.g. [\"transformers\", \"torch\"]). Optional.\r\n Default is `[\"\"]` to match all active loggers.\r\n The match is a case-sensitive `module_name.startswith(prefix)`\r\n \"\"\"\r\n prefix_re = re.compile(fr'^(?:{ \"|\".join(prefices) })')\r\n for name in logging.root.manager.loggerDict:\r\n if re.match(prefix_re, name):\r\n logging.getLogger(name).setLevel(level)\r\n```\r\nUsage:\r\n\r\n1. override all module-specific loggers to a desired level (except whatever got logged during modules importing)\r\n```\r\nimport everything, you, need\r\nimport logging\r\nset_global_logging_level(logging.ERROR)\r\n```\r\n\r\n2. In case of transformers you most likely need to call it as:\r\n```\r\nimport transformers, torch, ...\r\nimport logging\r\nset_global_logging_level(logging.ERROR, [\"transformers\", \"nlp\", \"torch\", \"tensorflow\", \"tensorboard\", \"wandb\"])\r\n```\r\nadd/remove modules as needed.\r\n\r\n\r\nTo disable logging globally - place at the beginning of the script\r\n```\r\nimport logging\r\nlogging.disable(logging.INFO) # disable INFO and DEBUG logging everywhere\r\n# or \r\n# logging.disable(logging.WARNING) # disable WARNING, INFO and DEBUG logging everywhere\r\n```\r\n\r\nIf desired, `set_global_logging_level` could be expanded to be a scope manager too.",
"Will that kill tqdm? I want to keep tqdm!",
"> Will that kill tqdm? I want to keep tqdm!\r\n\r\n```\r\nset_global_logging_level(logging.ERROR, [\"transformers\", \"nlp\", \"torch\", \"tensorflow\", \"tensorboard\", \"wandb\"])\r\n\r\nfrom tqdm import tqdm\r\nfor i in tqdm(range(10000)): x = i**i\r\n```\r\nworks just fine\r\n\r\nand so does disable all:\r\n```\r\nset_global_logging_level(logging.ERROR])\r\n\r\nfrom tqdm import tqdm\r\nfor i in tqdm(range(10000)): x = i**i\r\n```\r\n\r\nor in the case of \"total logging silence\" setting:\r\n```\r\nimport logging\r\nlogging.disable(logging.INFO) # disable INFO and DEBUG logging everywhere\r\n\r\nfrom tqdm import tqdm\r\nfor i in tqdm(range(10000)): x = i**i\r\n```\r\n\r\nworks too. \r\n\r\nI don't think it uses `logging`.\r\n",
"PR with the proposed code, plus adding the ability to do that during `pytest` https://github.com/huggingface/transformers/pull/6816",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This was resolved by https://github.com/huggingface/transformers/pull/6434"
] | 1,582 | 1,604 | 1,604 | NONE | null | # 🚀 Feature request
When doing a simple pipeline, I want to suppress:
```
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 230/230 [00:00<00:00, 136kB/s]
convert squad examples to features: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 241.08it/s]
add example index and unique id: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 7037.42it/s]
```
## Motivation
Noisy download and conversion progress bars clutter application output; there should be a simple way to silence them.
## Your contribution
My code is pretty straightforward:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

args = parse_args()  # user-defined argument parsing
with open(args.text_path, "r") as f:
    context = f.read()
# print(context)
tokenizer = AutoTokenizer.from_pretrained(args.model)
model = AutoModelForQuestionAnswering.from_pretrained(args.model)
# note: as written, the objects above are unused; the pipeline loads its own
# model/tokenizer from the names below (they could be passed in instead)
qa = pipeline('question-answering',
              model='distilbert-base-uncased-distilled-squad', tokenizer='bert-base-cased')
response = qa(context=context,
              question=args.question)
print(response['answer'])
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3050/reactions",
"total_count": 15,
"+1": 15,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3050/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3049/comments | https://api.github.com/repos/huggingface/transformers/issues/3049/events | https://github.com/huggingface/transformers/issues/3049 | 572,087,617 | MDU6SXNzdWU1NzIwODc2MTc= | 3,049 | Regarding attention received by the distilbert model | {
"login": "divyag11",
"id": 39218807,
"node_id": "MDQ6VXNlcjM5MjE4ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/39218807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyag11",
"html_url": "https://github.com/divyag11",
"followers_url": "https://api.github.com/users/divyag11/followers",
"following_url": "https://api.github.com/users/divyag11/following{/other_user}",
"gists_url": "https://api.github.com/users/divyag11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyag11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyag11/subscriptions",
"organizations_url": "https://api.github.com/users/divyag11/orgs",
"repos_url": "https://api.github.com/users/divyag11/repos",
"events_url": "https://api.github.com/users/divyag11/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyag11/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | When exporting the TFDistilBert model, I receive the attention from all 6 layers, each with 12 heads. I just want to take an attention of dimension equal to the sequence length. Which layer's and which head's attention is the most effective to take in order to get the best attention scores with respect to my sentence? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3049/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3048/comments | https://api.github.com/repos/huggingface/transformers/issues/3048/events | https://github.com/huggingface/transformers/issues/3048 | 572,071,925 | MDU6SXNzdWU1NzIwNzE5MjU= | 3,048 | Set specific hidden_size for ClassificationHead | {
"login": "GuillaumeDesforges",
"id": 1882000,
"node_id": "MDQ6VXNlcjE4ODIwMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1882000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillaumeDesforges",
"html_url": "https://github.com/GuillaumeDesforges",
"followers_url": "https://api.github.com/users/GuillaumeDesforges/followers",
"following_url": "https://api.github.com/users/GuillaumeDesforges/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillaumeDesforges/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuillaumeDesforges/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillaumeDesforges/subscriptions",
"organizations_url": "https://api.github.com/users/GuillaumeDesforges/orgs",
"repos_url": "https://api.github.com/users/GuillaumeDesforges/repos",
"events_url": "https://api.github.com/users/GuillaumeDesforges/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuillaumeDesforges/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # 🚀 Feature request
Add a config term `head_hidden_size` to the model configurations, to be used for the head of models such as `RobertaForSequenceClassification`.
## Motivation
The HuggingFace transformers library provides a very accessible API and PyTorch models that can be used "plug and play" for various tasks such as classification.
In many cases, varying the hidden size of the last layer (the one that outputs the logits) is one of the first things we tweak to improve performance on such a task.
Currently, the dense layer uses the `hidden_size` config parameter, which is the same as the one used in the transformer (BERT). One cannot change the hidden size of the last layer without changing the hidden size of the entire transformer model behind it.
This means we have to code a new PyTorch module in order to do something as simple as that (a sketch of what such a head could look like is given below).
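For illustration, a minimal sketch of a head honoring the proposed attribute (note that `head_hidden_size` is hypothetical here, falling back to `hidden_size` when unset):
```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Sentence-level classification head with its own hidden size."""

    def __init__(self, config):
        super().__init__()
        # `head_hidden_size` is the proposed, hypothetical config attribute
        head_hidden_size = getattr(config, "head_hidden_size", config.hidden_size)
        self.dense = nn.Linear(config.hidden_size, head_hidden_size)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(head_hidden_size, config.num_labels)

    def forward(self, features):
        x = features[:, 0, :]  # representation of the first (<s>/[CLS]) token
        x = self.dropout(x)
        x = torch.tanh(self.dense(x))
        x = self.dropout(x)
        return self.out_proj(x)
```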
## Your contribution
I could PR this should the change be welcomed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3048/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3047/comments | https://api.github.com/repos/huggingface/transformers/issues/3047/events | https://github.com/huggingface/transformers/pull/3047 | 572,070,438 | MDExOlB1bGxSZXF1ZXN0MzgwODM5MDI3 | 3,047 | [WIP] deleted special tokens as attributes from model config | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not feasible for bart at the moment, sadly.",
"As mentioned in #3011, I think #3011 is the way to go."
] | 1,582 | 1,583 | 1,583 | MEMBER | null | Given our conversation in #3011, I thought about deleting all model special config attributes to make sure that no data is duplicated between a model and its tokenizer.
There are two instances where config special attributes were used (e.g. `self.config.pad_token_id`):
1. the generate() function. But the generate function also takes all those tokens as arguments, so it should not rely on `self.config.pad_token_id` at all. This one is trivial to fix (see the sketch after this list).
2. the bart model. It seems like the `pad_token_id` is actually an integral part of the bart model itself, so to me it seems very hard to disentangle the `pad_token_id` from the bart model.
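As a minimal, hypothetical sketch of the fix for point 1: resolve the ids from explicit arguments only, keeping the EOS-as-PAD fallback that generate() applies when no pad token is defined. (This is a sketch, not the actual implementation.)
```python
def resolve_pad_token_id(pad_token_id=None, eos_token_id=None):
    """Prefer explicitly passed ids; never read them from the model config."""
    if pad_token_id is None and eos_token_id is not None:
        pad_token_id = eos_token_id  # pad with EOS when no pad token exists
    return pad_token_id
```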
I see three options:
1) Leave the code as it is and keep the default attributes `self.config.pad_token_id`, `self.config.bos_token_id`, `self.config.eos_token_id = None` in the PretrainedConfig class.
2) Remove `self.config.pad_token_id`, ... from the PretrainedConfig class, make the generate function independent of those variables, but add those variables to the BartConfig only.
3) Make the models completely independent of all special tokens. This would probably mean that the bart model class needs quite a lot of changes.
I tend towards option 2) or 3). I like the idea of separating token ids from internal model behavior completely, but I cannot really estimate whether this is feasible for Bart (@sshleifer you probably have a better opinion on this).
What do you think? @LysandreJik , @julien-c , @thomwolf , @sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3047/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3047",
"html_url": "https://github.com/huggingface/transformers/pull/3047",
"diff_url": "https://github.com/huggingface/transformers/pull/3047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3047.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3046/comments | https://api.github.com/repos/huggingface/transformers/issues/3046/events | https://github.com/huggingface/transformers/pull/3046 | 572,000,809 | MDExOlB1bGxSZXF1ZXN0MzgwNzgxNzE2 | 3,046 | [WIP] add generate tests to more models | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=h1) Report\n> Merging [#3046](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/298bed16a841fae3608d334441ccae4d9043611f?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3046 +/- ##\n==========================================\n+ Coverage 77.18% 77.19% +<.01% \n==========================================\n Files 98 98 \n Lines 16063 16063 \n==========================================\n+ Hits 12399 12400 +1 \n+ Misses 3664 3663 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3046/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.38% <0%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=footer). Last update [298bed1...e295138](https://codecov.io/gh/huggingface/transformers/pull/3046?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,583 | 1,583 | MEMBER | null | - [ ] add prepare_input_for_generation() to all bert masked lm models
- [ ] add slow integration tests to check results (also include camembert for this one) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3046/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3046",
"html_url": "https://github.com/huggingface/transformers/pull/3046",
"diff_url": "https://github.com/huggingface/transformers/pull/3046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3046.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3045/comments | https://api.github.com/repos/huggingface/transformers/issues/3045/events | https://github.com/huggingface/transformers/issues/3045 | 571,865,975 | MDU6SXNzdWU1NzE4NjU5NzU= | 3,045 | [docs] Provide a barebones GPT-2 colab notebook | {
"login": "aolko",
"id": 581458,
"node_id": "MDQ6VXNlcjU4MTQ1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/581458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aolko",
"html_url": "https://github.com/aolko",
"followers_url": "https://api.github.com/users/aolko/followers",
"following_url": "https://api.github.com/users/aolko/following{/other_user}",
"gists_url": "https://api.github.com/users/aolko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aolko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aolko/subscriptions",
"organizations_url": "https://api.github.com/users/aolko/orgs",
"repos_url": "https://api.github.com/users/aolko/repos",
"events_url": "https://api.github.com/users/aolko/events{/privacy}",
"received_events_url": "https://api.github.com/users/aolko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Do you mean just for inference? Or fine-tuning too, like in the one you linked?",
"Yes, fine-tuning as well.",
"The notebook described in #2676 is a good example of something that could work; however the current implementation is not very user friendly, which was the design goal of the `gpt-2-simple` notebook. (my text generating package which extends `transformers` will have it as a feature)",
"> The notebook described in #2676 is a good example of something that could work; however the current implementation is not very user friendly, which was the design goal of the `gpt-2-simple` notebook. (my text generating package which extends `transformers` will have it as a feature)\r\n\r\n@minimaxir your provided notebook has external dependencies (`examples/run_lm_finetuning.py`), which is a no-no for this case, all the source has to be laid out in the notebook's code blocks, **just like in gpt-2-simple**'s.",
"Agreed. The issue is that there is no functional training interface in the library itself, which is why I'm creating one that extends it (as it's a narrow use case).",
"@minimaxir so perhaps you can make a notebook that **fully** satisfies this issue in this case?",
"so, guys, can you give me an approx ETA for this issue? Kinda need that fix now",
"> so, guys, can you give me an approx ETA for this issue? Kinda need that fix now\r\n\r\nI don't think there are currently specific plans to create a GPT-2 notebook. If you have a look at all the pull requests (https://github.com/huggingface/transformers/pulls) you can see that the team is hard at work on a range of different features and fixes. One of those is ready-to-go docker images with notebooks (https://github.com/huggingface/transformers/pull/3051) but as far as I can see GPT-2 doesn't have a special place there.\r\n\r\nYou can always try to create this yourself or ask specific questions on [Stack Overflow](https://stackoverflow.com/).\r\n\r\nThat being said, you can have a look at https://github.com/huggingface/transformers/pull/3063 which is currently implementing generation for GPT-2 and others in Tensorflow.",
"> which is currently implementing generation for GPT-2 and others in Tensorflow.\r\n\r\nthat actually sucks, since i'm targeting pytorch",
"If you *really* need to generate text in PyTorch on short notice, you can finetune the GPT-2 model using gpt-2-simple, and run the TF -> PyTorch conversion scripts in transformers, then you can load that and generate it from it.",
"The #3063 that Bram mentioned targets TensorFlow because it's already implemented in [PyTorch](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.generate).",
"> If you _really_ need to generate text in PyTorch on short notice, you can finetune the GPT-2 model using gpt-2-simple, and run the TF -> PyTorch conversion scripts in transformers, then you can load that and generate it from it.\r\n\r\nexcept finetuning should be done later (if at all), as for right now it's either distilgpt2 or gpt-2-large, pretrained.",
"So essentially there's nothing so far. Even after ~~7~~ ~~14~~ ~~21~~ ~~28~~ ~~35~~ 42+ days, issue is still hot 🔥 ",
"I'm interested on this",
"upd: despite not providing any feedback in this issue they've sneakily added **at least** [**something**](https://huggingface.co/transformers/notebooks.html)",
"> upd: despite not providing any feedback in this issue they've sneakily added **at least** [**something**](https://huggingface.co/transformers/notebooks.html)\r\n\r\nPlease be aware that this is a large open-source repository that is maintained by a company that has many other concerns, too. However, being open-source, collaboration is encouraged. Because of the huge interest in NLP and specifically this library, it is incredibly hard to monitor all new issues while also fixing bugs and taking care of all other responsibilities, i.e. a day-to-day job.\r\n\r\nBumping this topic by complaining does not help anyone, but the team is very open to receiving and reviewing PRs, so feel free to add your contributions to make the library better. Alternatively, you can encourage others to help you out by sharing this issue on other platforms. I have marked the issue as a \"Good first issue', encouraging others to give it a go.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,598 | 1,598 | NONE | null | Please provide a barebones "pick up and go" GPT-2 colab notebook for text generation, [just like gpt-2-simple does](https://colab.research.google.com/drive/1VLG8e7YSEwypxU-noRNhsv5dW4NfTGce).
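For reference, a minimal "pick up and go" generation snippet with the current `transformers` API could look like the sketch below (the checkpoint and sampling parameters are illustrative choices, not fixed recommendations):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")

input_ids = tokenizer.encode("The meaning of life is", return_tensors="pt")
# sample a continuation; top-k/top-p values are common defaults, not tuned
output = model.generate(input_ids, max_length=50, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
``` | {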
"url": "https://api.github.com/repos/huggingface/transformers/issues/3045/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3045/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3044/comments | https://api.github.com/repos/huggingface/transformers/issues/3044/events | https://github.com/huggingface/transformers/issues/3044 | 571,865,360 | MDU6SXNzdWU1NzE4NjUzNjA= | 3,044 | How can I use this result? | {
"login": "bboyxu5928",
"id": 36126405,
"node_id": "MDQ6VXNlcjM2MTI2NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/36126405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bboyxu5928",
"html_url": "https://github.com/bboyxu5928",
"followers_url": "https://api.github.com/users/bboyxu5928/followers",
"following_url": "https://api.github.com/users/bboyxu5928/following{/other_user}",
"gists_url": "https://api.github.com/users/bboyxu5928/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bboyxu5928/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bboyxu5928/subscriptions",
"organizations_url": "https://api.github.com/users/bboyxu5928/orgs",
"repos_url": "https://api.github.com/users/bboyxu5928/repos",
"events_url": "https://api.github.com/users/bboyxu5928/events{/privacy}",
"received_events_url": "https://api.github.com/users/bboyxu5928/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"That answer links to an old version of the codebase. Here is an example using the current-day implementation. The documentation can be improved for this particular task, though, since as currently written the given example doesn't seem to do any NSP.\r\n\r\n```python\r\nfrom torch.nn.functional import softmax\r\nfrom transformers import BertForNextSentencePrediction, BertTokenizer\r\n\r\n\r\nseq_A = 'I like cookies !'\r\nseq_B = 'Do you like them ?'\r\n\r\n# load pretrained model and a pretrained tokenizer\r\nmodel = BertForNextSentencePrediction.from_pretrained('bert-base-cased')\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\n\r\n# encode the two sequences. Particularly, make clear that they must be \r\n# encoded as \"one\" input to the model by using 'seq_B' as the 'text_pair'\r\nencoded = tokenizer.encode_plus(seq_A, text_pair=seq_B, return_tensors='pt')\r\nprint(encoded)\r\n# {'input_ids': tensor([[ 101, 146, 1176, 18621, 106, 102, 2091, 1128, 1176, 1172, 136, 102]]),\r\n# 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]]),\r\n# 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}\r\n# NOTE how the token_type_ids are 0 for all tokens in seq_A and 1 for seq_B, \r\n# this way the model knows which token belongs to which sequence\r\n\r\n# a model's output is a tuple, we only need the output tensor containing\r\n# the relationships which is the first item in the tuple\r\nseq_relationship_logits = model(**encoded)[0]\r\n\r\n# we still need softmax to convert the logits into probabilities\r\n# index 0: sequence B is a continuation of sequence A\r\n# index 1: sequence B is a random sequence\r\nprobs = softmax(seq_relationship_logits, dim=1)\r\n\r\nprint(seq_relationship_logits)\r\nprint(probs)\r\n# tensor([[9.9993e-01, 6.7607e-05]], grad_fn=<SoftmaxBackward>)\r\n# very high value for index 0: high probability of seq_B being a continuation of seq_A\r\n# which is what we expect!\r\n```",
"Got it,thanks a lot!"
] | 1,582 | 1,582 | 1,582 | NONE | null | I want to call the next sentence prediction function on new data,
and this webpage explains it:
https://stackoverflow.com/questions/55111360/using-bert-for-next-sentence-prediction
```python
# snippet quoted from an older version of the library (see the answer above);
# argument names and order follow that old API
input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]])
input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]])
token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 0]])
config = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768,
    num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072)
model = BertForNextSentencePrediction(config)
seq_relationship_logits = model(input_ids, token_type_ids, input_mask)
```
but when I run this demo I get this result (`seq_relationship_logits`):
```
(tensor([[-0.0728, 0.1863],
        [ 0.3190, -0.1528]], grad_fn=<AddmmBackward>),)
```
how do I use it as the prediction result (i.e. if sentence B follows sentence A the predicted label is 0, else the label is 1)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3044/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3043/comments | https://api.github.com/repos/huggingface/transformers/issues/3043/events | https://github.com/huggingface/transformers/issues/3043 | 571,854,599 | MDU6SXNzdWU1NzE4NTQ1OTk= | 3,043 | Issue with Makefile | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert ...):
Language I am using the model on (English):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQuAD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. pip install -e ".[testing]"
2. pip install -r examples/requirements.txt
3. make test-examples
Running step 3 throws a few errors and warnings, including an `AttributeError: 'TestCaseFunction' object has no attribute 'get_marker'`. This wasn't happening a few days ago. What should I do?
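(For context: `Node.get_marker` was removed from pytest's API in the pytest 4 line. So, assuming the failure comes from a test helper or plugin that still calls the old API, one possible workaround is pinning an older pytest, e.g. `pip install "pytest<4"`.) | {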
"url": "https://api.github.com/repos/huggingface/transformers/issues/3043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3043/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3042/comments | https://api.github.com/repos/huggingface/transformers/issues/3042/events | https://github.com/huggingface/transformers/pull/3042 | 571,840,793 | MDExOlB1bGxSZXF1ZXN0MzgwNjUxODg2 | 3,042 | Fix spelling of strictly in error messages | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=h1) Report\n> Merging [#3042](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b370cc7e99c5b8c7436154d4694c33b461ea0f08?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3042 +/- ##\n==========================================\n- Coverage 77.28% 77.27% -0.01% \n==========================================\n Files 98 98 \n Lines 16038 16038 \n==========================================\n- Hits 12395 12394 -1 \n- Misses 3643 3644 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <100%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=footer). Last update [b370cc7...5bdaba9](https://codecov.io/gh/huggingface/transformers/pull/3042?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,583 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3042/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3042",
"html_url": "https://github.com/huggingface/transformers/pull/3042",
"diff_url": "https://github.com/huggingface/transformers/pull/3042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3042.patch",
"merged_at": 1582816956000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3041/comments | https://api.github.com/repos/huggingface/transformers/issues/3041/events | https://github.com/huggingface/transformers/pull/3041 | 571,815,803 | MDExOlB1bGxSZXF1ZXN0MzgwNjMyNzk5 | 3,041 | Fix batch_encode_plus | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=h1) Report\n> Merging [#3041](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b370cc7e99c5b8c7436154d4694c33b461ea0f08?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3041 +/- ##\n==========================================\n+ Coverage 77.28% 77.29% +<.01% \n==========================================\n Files 98 98 \n Lines 16038 16037 -1 \n==========================================\n Hits 12395 12395 \n+ Misses 3643 3642 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3041/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.7% <100%> (+0.13%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=footer). Last update [b370cc7...59d9a21](https://codecov.io/gh/huggingface/transformers/pull/3041?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | Fix #3037 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3041/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3041",
"html_url": "https://github.com/huggingface/transformers/pull/3041",
"diff_url": "https://github.com/huggingface/transformers/pull/3041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3041.patch",
"merged_at": 1582815407000
} |
https://api.github.com/repos/huggingface/transformers/issues/3040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3040/comments | https://api.github.com/repos/huggingface/transformers/issues/3040/events | https://github.com/huggingface/transformers/issues/3040 | 571,797,031 | MDU6SXNzdWU1NzE3OTcwMzE= | 3,040 | Knowledge distillation from internal representation GPT2 | {
"login": "snaik2016",
"id": 18183245,
"node_id": "MDQ6VXNlcjE4MTgzMjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/18183245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snaik2016",
"html_url": "https://github.com/snaik2016",
"followers_url": "https://api.github.com/users/snaik2016/followers",
"following_url": "https://api.github.com/users/snaik2016/following{/other_user}",
"gists_url": "https://api.github.com/users/snaik2016/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snaik2016/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snaik2016/subscriptions",
"organizations_url": "https://api.github.com/users/snaik2016/orgs",
"repos_url": "https://api.github.com/users/snaik2016/repos",
"events_url": "https://api.github.com/users/snaik2016/events{/privacy}",
"received_events_url": "https://api.github.com/users/snaik2016/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The attention outputs are indeed softmax probabilities and should all be > 0. Could you post a code snippet that reproduces negative attention outputs? Or a code snippet of what exactly you are doing with the attention outputs? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # ❓ Questions & Help
I am trying to implement the above paper ("Knowledge Distillation from Internal Representations") for the GPT-2 model. The attention outputs are softmax probabilities, as seen in modeling_gpt2.py (lines 152-162). If the KLD loss is computed on these values from teacher and student, I get a negative KLD value, indicating that the inputs are not probability distributions.
The attention output has dimensions (bs, nh, sl, sl) with nh = 12; I am just flattening the output and computing the KLD loss.
Is my understanding correct?
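For what it's worth, a minimal sketch of how the KLD between teacher and student attention maps is usually computed (shapes here are illustrative; the key detail is that `torch.nn.functional.kl_div` expects its first argument as log-probabilities, and feeding it raw probabilities is a common cause of spurious negative values):
```python
import torch
import torch.nn.functional as F

bs, nh, sl = 2, 12, 16  # hypothetical batch size, number of heads, sequence length
teacher_att = torch.softmax(torch.randn(bs, nh, sl, sl), dim=-1)
student_att = torch.softmax(torch.randn(bs, nh, sl, sl), dim=-1)

# flatten so that each row is one attention distribution over sl positions
t = teacher_att.reshape(-1, sl)
s = student_att.reshape(-1, sl)

# the first argument must be log-probabilities; the target stays as probabilities
kld = F.kl_div(s.clamp_min(1e-12).log(), t, reduction="batchmean")
print(kld.item())  # >= 0 up to numerical noise
``` | {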
"url": "https://api.github.com/repos/huggingface/transformers/issues/3040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3040/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3039/comments | https://api.github.com/repos/huggingface/transformers/issues/3039/events | https://github.com/huggingface/transformers/issues/3039 | 571,747,868 | MDU6SXNzdWU1NzE3NDc4Njg= | 3,039 | Paragraph re-ranking using MS MARCO dataset | {
"login": "Sharathmk99",
"id": 3970340,
"node_id": "MDQ6VXNlcjM5NzAzNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3970340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sharathmk99",
"html_url": "https://github.com/Sharathmk99",
"followers_url": "https://api.github.com/users/Sharathmk99/followers",
"following_url": "https://api.github.com/users/Sharathmk99/following{/other_user}",
"gists_url": "https://api.github.com/users/Sharathmk99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sharathmk99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sharathmk99/subscriptions",
"organizations_url": "https://api.github.com/users/Sharathmk99/orgs",
"repos_url": "https://api.github.com/users/Sharathmk99/repos",
"events_url": "https://api.github.com/users/Sharathmk99/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sharathmk99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | How can I use the transformers package to train on the MS MARCO dataset, adding my own domain data? Or can I take a pretrained model and add my own domain data?
SO link: https://stackoverflow.com/questions/60424723/paragraph-re-ranking-using-ms-marco-dataset | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3039/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3038/comments | https://api.github.com/repos/huggingface/transformers/issues/3038/events | https://github.com/huggingface/transformers/issues/3038 | 571,743,299 | MDU6SXNzdWU1NzE3NDMyOTk= | 3,038 | AttributeError: 'Model2Model' object has no attribute 'prepare_model_kwargs' in 2.5.1 | {
"login": "andr-ec",
"id": 16169185,
"node_id": "MDQ6VXNlcjE2MTY5MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16169185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andr-ec",
"html_url": "https://github.com/andr-ec",
"followers_url": "https://api.github.com/users/andr-ec/followers",
"following_url": "https://api.github.com/users/andr-ec/following{/other_user}",
"gists_url": "https://api.github.com/users/andr-ec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andr-ec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andr-ec/subscriptions",
"organizations_url": "https://api.github.com/users/andr-ec/orgs",
"repos_url": "https://api.github.com/users/andr-ec/repos",
"events_url": "https://api.github.com/users/andr-ec/events{/privacy}",
"received_events_url": "https://api.github.com/users/andr-ec/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"Looks like `PreTrainedEncoderDecoder.prepare_model_kwargs()` was removed in #2745\r\nIs there a reason for that? either `prepare_model_kwargs` should be added again or the line: \r\n`kwargs_encoder, kwargs_decoder = self.prepare_model_kwargs(**kwargs)`\r\nshould be removed from the forward call. I'd be happy to submit a PR with that guidance",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Model2Model
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Based on the quick start guide for Model2Model:
If I create a model using any of the following, I get an exception during the forward call:
```
model = Model2Model.from_pretrained('bert-base-uncased')
model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased','bert-base-uncased')
decoder_config = BertConfig.from_pretrained('bert-base-uncased', is_decoder=True)
model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased', 'bert-base-uncased', decoder_config=decoder_config)
```
```
model(torch.tensor([[10,20,300,4,500,600]]).cuda(), torch.tensor([[400,500]]).cuda(), decoder_lm_labels=torch.tensor([[400,500]]).cuda())[0]
```
This started happening in 2.5.1;
2.5.0 didn't throw the error.
stacktrace:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-77add3526cdd> in <module>()
----> 1 model(torch.tensor([[10,20,300,4,500,600]]).cuda(), torch.tensor([[400,500]]).cuda(), decoder_lm_labels=torch.tensor([[400,500]]).cuda())[0]
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_encoder_decoder.py in forward(self, encoder_input_ids, decoder_input_ids, **kwargs)
221 kwargs: (`optional`) Remaining dictionary of keyword arguments.
222 """
--> 223 kwargs_encoder, kwargs_decoder = self.prepare_model_kwargs(**kwargs)
224
225 # Encode if needed (training, first prediction pass)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
574 return modules[name]
575 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 576 type(self).__name__, name))
577
578 def __setattr__(self, name, value):
AttributeError: 'Model2Model' object has no attribute 'prepare_model_kwargs'
```
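For context, the missing helper used to split the forward `**kwargs` into encoder- and decoder-specific groups by prefix. A rough reconstruction (based on the pre-#2745 behavior; treat it as a sketch, not the exact removed code):
```python
@staticmethod
def prepare_model_kwargs(**kwargs):
    """Split **kwargs into encoder and decoder kwargs by their prefix."""
    kwargs_common = {
        k: v for k, v in kwargs.items()
        if not k.startswith("encoder_") and not k.startswith("decoder_")
    }
    kwargs_encoder = dict(kwargs_common)
    kwargs_decoder = dict(kwargs_common)
    kwargs_encoder.update(
        {k[len("encoder_"):]: v for k, v in kwargs.items() if k.startswith("encoder_")}
    )
    kwargs_decoder.update(
        {k[len("decoder_"):]: v for k, v in kwargs.items() if k.startswith("decoder_")}
    )
    return kwargs_encoder, kwargs_decoder
```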
## Expected behavior
No error should occur.
## Environment info
- `transformers` version: 2.5.1
- Platform: google colab
- Python version: 3.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3038/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3037/comments | https://api.github.com/repos/huggingface/transformers/issues/3037/events | https://github.com/huggingface/transformers/issues/3037 | 571,734,995 | MDU6SXNzdWU1NzE3MzQ5OTU= | 3,037 | Wrong logic in `batch_encode_plus()` | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | # 🐛 Bug
I'm trying to use `batch_encode_plus()` with already tokenized IDs as input.
So I have a `list of list of int` as input. For example:
```python
x = [[0, 83, 313, 11, 10551, 2278, 16, 2183, 1958, 804, 7, 916, 11, 13933, 982, 4, 286, 68, 5046, 6, 37, 40, 3627, 231, 2697, 9, 1958, 11, 41, 36027, 27894, 1001, 21466, 424, 2233, 4, 2], [0, 20, 291, 212, 13989, 191, 3772, 42, 983, 4, 815, 34, 1714, 8617, 187, 63, 17692, 11, 8008, 4, 993, 864, 549, 1492, 2624, 5391, 9686, 8, 12291, 240, 7, 464, 4, 2]]
```
When I call `batch_encode_plus()` with this input, I receive an `AssertionError`:
```
tokenizer.batch_encode_plus(x)
# >>> line 1130, in batch_encode_plus
# assert len(ids_or_pair_ids) == 2
```
---
After reviewing the code, it seems there is an error in the logic, here:
https://github.com/huggingface/transformers/blob/b370cc7e99c5b8c7436154d4694c33b461ea0f08/src/transformers/tokenization_utils.py#L1128-L1137
Since I gave only tokenized IDs and not a tuple of tokenized IDs (i.e. not a pair), the logic should bring me to the `else` clause. But instead, I enter the `if` clause, leading to the assertion error.
**It's currently not possible to use `batch_encode_plus()` with already tokenized text / IDs** if the input is not a pair.
---
I think this line:
https://github.com/huggingface/transformers/blob/b370cc7e99c5b8c7436154d4694c33b461ea0f08/src/transformers/tokenization_utils.py#L1129
should be changed to this:
```python
if isinstance(ids_or_pair_ids, (list, tuple)) and len(ids_or_pair_ids) == 2:
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3037/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3036/comments | https://api.github.com/repos/huggingface/transformers/issues/3036/events | https://github.com/huggingface/transformers/issues/3036 | 571,606,904 | MDU6SXNzdWU1NzE2MDY5MDQ= | 3,036 | Cannot use `transformers.GradientAccumulator` with `tf.function` | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"`transformers`'s implementation of the GradientAccumulator has been inspired by the OpenNMT implementation. Can you try their (updated) version and report the results? You can find it here:\r\n\r\nhttps://github.com/OpenNMT/OpenNMT-tf/blob/0311e473d9788b0363ca663cafbdfaf7777e53f9/opennmt/optimizers/utils.py#L64\r\n\r\nYou can also see how they use it in their training script, e.g.\r\n\r\nhttps://github.com/OpenNMT/OpenNMT-tf/blob/0311e473d9788b0363ca663cafbdfaf7777e53f9/opennmt/optimizers/utils.py#L64\r\n\r\nDisclaimer: I'm a PyTorch guy, so I can't help with TF. It just seems like something to test.",
"@jarednielsen Can you try moving the call to `gradient_accumulator(step_grads)` directly inside `batch_step()` and avoid returning the gradients from the function?\r\n\r\nThe implementation over at OpenNMT-tf was indeed updated because the previous one (and included in `transformers`) used the `device_map` property which is removed in recent TensorFlow versions.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,589 | 1,589 | CONTRIBUTOR | null | # 🐛 Bug
I'm using gradient accumulation to get larger batch sizes when pretraining BERT, and GradientAccumulator works wonderfully. Thanks for including that!
However, tf.AutoGraph fails when I wrap the gradient accumulation step in a tf.function. I have to write code like this:
```python
@tf.function
def batch_step(batch):
    step_grads = ...  # compute gradients for one micro-batch
    return step_grads

@tf.function
def allreduce():
    grads = [hvd.allreduce(grad) for grad in gradient_accumulator.gradients]
    # `model` is the Keras model being trained (defined elsewhere)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    gradient_accumulator.reset()

def step():
    for i in range(gradient_accumulation_steps):
        step_grads = batch_step(dataset[i])
        gradient_accumulator(step_grads)
    allreduce()
```
This code works. However, it is very inefficient because TensorFlow stores the GradientAccumulator on CPU rather than GPU. My PCI throughput is around 9 GiB/s (on 8-GPU single-node training) when using gradient accumulation, as opposed to kilobytes when everything is wrapped in a tf.function. GPU utilization also drops because the bottleneck is the transfer of gradients between GPU and CPU. I also suspect that it's causing a memory leak somewhere, but that's another story :) Being able to wrap the GradientAccumulator in a tf.function (or somehow pin it to the GPU; `tf.device()` doesn't seem to be working) would be wonderful. Any tips?
Sample PCI throughput; ideally RX (GPU-to-CPU) would be close to 0.
<img width="682" alt="Screen Shot 2020-02-26 at 11 21 47 AM" src="https://user-images.githubusercontent.com/4564897/75380114-5223d580-588b-11ea-9594-e85755ac0db0.png">
## Information
Model I am using (Bert, XLNet ...): TFAlbert
Language I am using the model on (English, Chinese ...): English
## Environment info
- `transformers` version: 2.5.0
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): None
- Tensorflow version (GPU?): 2.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, but it also applies to single-node
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3036/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3035/comments | https://api.github.com/repos/huggingface/transformers/issues/3035/events | https://github.com/huggingface/transformers/pull/3035 | 571,553,976 | MDExOlB1bGxSZXF1ZXN0MzgwNDIwNDk3 | 3,035 | Force pad_token_id to be set before padding for standard tokenizer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=h1) Report\n> Merging [#3035](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/298bed16a841fae3608d334441ccae4d9043611f?src=pr&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3035 +/- ##\n==========================================\n- Coverage 77.18% 76.16% -1.03% \n==========================================\n Files 98 98 \n Lines 16063 16065 +2 \n==========================================\n- Hits 12399 12236 -163 \n- Misses 3664 3829 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.72% <100%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=footer). Last update [298bed1...2879683](https://codecov.io/gh/huggingface/transformers/pull/3035?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Is good to merge for me after checking adapted tests in tokenization_utils.py @LysandreJik "
] | 1,582 | 1,583 | 1,583 | MEMBER | null | I think batch_encode_plus with a proper padding strategy should not be allowed if the pad_token_id is not set. I don't think it helps the user to get a Python list of lists with None values that can't be converted to a torch.Tensor anyway.
As a remedy, I think it is fine either to add a new pad_token or to set it to an existing special token.
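For illustration, a minimal sketch of that remedy (GPT-2 chosen as an example of a tokenizer that ships without a pad token; `pad_to_max_length` is the kwarg name at the time of writing and may be renamed in later versions):
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})  # option 1: add a brand-new pad token
# tokenizer.pad_token = tokenizer.eos_token           # option 2: reuse an existing special token
batch = tokenizer.batch_encode_plus(["hello world", "hi"], pad_to_max_length=True)
# note: after adding a new token, a model using this tokenizer needs
# model.resize_token_embeddings(len(tokenizer))
```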
This behavior is already enforced for FastTokenizer, so the PR should also make it easier to transition from Tokenizer to FastTokenizer.
I will fix the tests and add a new one if you guys agree.
@mfuntowicz @thomwolf @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3035/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3035",
"html_url": "https://github.com/huggingface/transformers/pull/3035",
"diff_url": "https://github.com/huggingface/transformers/pull/3035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3035.patch",
"merged_at": 1583164436000
} |
https://api.github.com/repos/huggingface/transformers/issues/3034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3034/comments | https://api.github.com/repos/huggingface/transformers/issues/3034/events | https://github.com/huggingface/transformers/pull/3034 | 571,528,030 | MDExOlB1bGxSZXF1ZXN0MzgwMzk4OTgy | 3,034 | fix several typos in Distil* readme | {
"login": "awalker88",
"id": 18567203,
"node_id": "MDQ6VXNlcjE4NTY3MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/18567203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awalker88",
"html_url": "https://github.com/awalker88",
"followers_url": "https://api.github.com/users/awalker88/followers",
"following_url": "https://api.github.com/users/awalker88/following{/other_user}",
"gists_url": "https://api.github.com/users/awalker88/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awalker88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awalker88/subscriptions",
"organizations_url": "https://api.github.com/users/awalker88/orgs",
"repos_url": "https://api.github.com/users/awalker88/repos",
"events_url": "https://api.github.com/users/awalker88/events{/privacy}",
"received_events_url": "https://api.github.com/users/awalker88/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=h1) Report\n> Merging [#3034](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9df74b8bc42eedc496f7148b9370728054ca3b6a?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3034 +/- ##\n==========================================\n+ Coverage 77.27% 77.29% +0.01% \n==========================================\n Files 98 98 \n Lines 16037 16037 \n==========================================\n+ Hits 12393 12395 +2 \n+ Misses 3644 3642 -2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.54% <0%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=footer). Last update [9df74b8...1482c03](https://codecov.io/gh/huggingface/transformers/pull/3034?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | Hi, feel free to reject if you think these changes aren't necessary. I just saw a couple of typos while reading the documentation and wanted to help 😄. It's a bit hard to tell what changed in the first one: it was 'superseeds' to 'supersedes'. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3034/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3034",
"html_url": "https://github.com/huggingface/transformers/pull/3034",
"diff_url": "https://github.com/huggingface/transformers/pull/3034.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3034.patch",
"merged_at": 1582738795000
} |
https://api.github.com/repos/huggingface/transformers/issues/3033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3033/comments | https://api.github.com/repos/huggingface/transformers/issues/3033/events | https://github.com/huggingface/transformers/pull/3033 | 571,510,165 | MDExOlB1bGxSZXF1ZXN0MzgwMzg0MTMy | 3,033 | Fix attn mask gpt2 when using past | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=h1) Report\n> Merging [#3033](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bb7c46852051f7d031dd4be0240c9c9db82f6ed9?src=pr&el=desc) will **decrease** coverage by `1.03%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3033 +/- ##\n==========================================\n- Coverage 77.26% 76.23% -1.04% \n==========================================\n Files 98 98 \n Lines 16047 16048 +1 \n==========================================\n- Hits 12399 12234 -165 \n- Misses 3648 3814 +166\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.16% <100%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3033/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <0%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=footer). Last update [bb7c468...0909d8e](https://codecov.io/gh/huggingface/transformers/pull/3033?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | - fixed issue #3031
- updated the docstring for GPT2
- added two tests for GPT2 that I think are important:
- check that when using past as an input to speed up decoding, the results are equivalent to not using past.
- check that when using past and attn_mask as inputs to speed up decoding, the results are equivalent to not using past, where an input_ids slice was corrupted and then masked out by the attn_mask. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3033",
"html_url": "https://github.com/huggingface/transformers/pull/3033",
"diff_url": "https://github.com/huggingface/transformers/pull/3033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3033.patch",
"merged_at": 1582736678000
} |
https://api.github.com/repos/huggingface/transformers/issues/3032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3032/comments | https://api.github.com/repos/huggingface/transformers/issues/3032/events | https://github.com/huggingface/transformers/issues/3032 | 571,461,730 | MDU6SXNzdWU1NzE0NjE3MzA= | 3,032 | Loading custom weights for BERT in pytorch | {
"login": "LivC193",
"id": 38222294,
"node_id": "MDQ6VXNlcjM4MjIyMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/38222294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LivC193",
"html_url": "https://github.com/LivC193",
"followers_url": "https://api.github.com/users/LivC193/followers",
"following_url": "https://api.github.com/users/LivC193/following{/other_user}",
"gists_url": "https://api.github.com/users/LivC193/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LivC193/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LivC193/subscriptions",
"organizations_url": "https://api.github.com/users/LivC193/orgs",
"repos_url": "https://api.github.com/users/LivC193/repos",
"events_url": "https://api.github.com/users/LivC193/events{/privacy}",
"received_events_url": "https://api.github.com/users/LivC193/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1802861720,
"node_id": "MDU6TGFiZWwxODAyODYxNzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20CLI",
"name": "Core: CLI",
"color": "FF6426",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Yes. Have a look at the documentation for [`from_pretrained`](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained), particularly the `pretrained_model_name_or_path ` argument.",
"Yes sorry I looked after I asked and found everything I needed it "
] | 1,582 | 1,582 | 1,582 | NONE | null | Is it possible to load custom pre-trained weights for BERT, other than the ones you provide? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3032/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3031/comments | https://api.github.com/repos/huggingface/transformers/issues/3031/events | https://github.com/huggingface/transformers/issues/3031 | 571,430,501 | MDU6SXNzdWU1NzE0MzA1MDE= | 3,031 | Forward pass with GPT2 using both past and attention_mask as an input leads to dimension error | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"FYI - I created a stackoverflow question about this here: https://stackoverflow.com/questions/60459292/using-past-and-attention-mask-at-the-same-time-for-gpt2",
"@Damiox - answered you stackoverflow question :-)"
] | 1,582 | 1,583 | 1,582 | MEMBER | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce, run the following code:
```python
from transformers import GPT2LMHeadModel
import torch
model = GPT2LMHeadModel.from_pretrained('gpt2')
input_ids = torch.tensor([8, 8, 0, 50256, 50256]).unsqueeze(0)
attn_mask = torch.tensor([1, 1, 1, 0, 0]).unsqueeze(0)
# first step: there is no past, so get it from the model, append the new token id to the inputs, and extend the attn_mask
logits_output, past = model(input_ids, attention_mask=attn_mask)
next_token = torch.argmax(logits_output[:, -1, :]).unsqueeze(0)
input_ids = torch.cat([input_ids, next_token.unsqueeze(-1)], dim=-1)
attn_mask = torch.cat([attn_mask, torch.ones((attn_mask.shape[0], 1)).long()], dim=1)
# now we have a past, so we can use it to speed up decoding
model_inputs = model.prepare_inputs_for_generation(input_ids=input_ids, past=past)
logits_output, past = model(**model_inputs, attention_mask=attn_mask) # this leads to an error which it should not
```
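For context, my reading of the 2.5.x source (treat the exact shapes as an assumption): `prepare_inputs_for_generation` keeps only the newest token once `past` is supplied, which is what puts the inputs and the attention mask out of step:
```python
print(model_inputs["input_ids"].shape)  # torch.Size([1, 1]) - only the newest token is kept
print(attn_mask.shape)                  # torch.Size([1, 6]) - the mask still covers every position
```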
## Expected behavior
No error should be thrown, and the forward pass producing `logits_output` and `past` should work.
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-5.3.0-40-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3031/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3030/comments | https://api.github.com/repos/huggingface/transformers/issues/3030/events | https://github.com/huggingface/transformers/issues/3030 | 571,428,060 | MDU6SXNzdWU1NzE0MjgwNjA= | 3,030 | run_tf_glue with AdamW optimizer and distributed training | {
"login": "BOUALILILila",
"id": 15789415,
"node_id": "MDQ6VXNlcjE1Nzg5NDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/15789415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BOUALILILila",
"html_url": "https://github.com/BOUALILILila",
"followers_url": "https://api.github.com/users/BOUALILILila/followers",
"following_url": "https://api.github.com/users/BOUALILILila/following{/other_user}",
"gists_url": "https://api.github.com/users/BOUALILILila/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BOUALILILila/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BOUALILILila/subscriptions",
"organizations_url": "https://api.github.com/users/BOUALILILila/orgs",
"repos_url": "https://api.github.com/users/BOUALILILila/repos",
"events_url": "https://api.github.com/users/BOUALILILila/events{/privacy}",
"received_events_url": "https://api.github.com/users/BOUALILILila/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # ❓ Questions & Help
Can you please update the run_tf_glue example (TensorFlow version) so that it includes the Adam optimizer with warm-up and weight decay? It would also be great to show how to run the code on multiple GPUs and on TPU.
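Until the example is updated, here is a rough sketch of the kind of setup I mean, using plain Keras only. The schedule class below is hypothetical (written for this issue, not taken from the library), and for true decoupled weight decay you would swap in `AdamWeightDecay` from `transformers` if your version exports it:
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

class WarmupLinearDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
    """Linear warm-up followed by linear decay to zero (illustrative)."""
    def __init__(self, peak_lr, warmup_steps, total_steps):
        super().__init__()
        self.peak_lr = peak_lr
        self.warmup_steps = warmup_steps
        self.total_steps = total_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        warmup = self.peak_lr * step / self.warmup_steps
        decay = self.peak_lr * (self.total_steps - step) / (self.total_steps - self.warmup_steps)
        return tf.maximum(tf.minimum(warmup, decay), 0.0)

strategy = tf.distribute.MirroredStrategy()  # multi-GPU; swap in TPUStrategy for TPU
with strategy.scope():
    model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")
    optimizer = tf.keras.optimizers.Adam(
        learning_rate=WarmupLinearDecay(peak_lr=3e-5, warmup_steps=100, total_steps=1000)
    )
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    model.compile(optimizer=optimizer, loss=loss)
# model.fit(train_dataset, epochs=3)  # train_dataset: a tf.data.Dataset of (features, labels)
```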
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3030/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3030/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3029/comments | https://api.github.com/repos/huggingface/transformers/issues/3029/events | https://github.com/huggingface/transformers/issues/3029 | 571,290,984 | MDU6SXNzdWU1NzEyOTA5ODQ= | 3,029 | How to add data to pretrained model. | {
"login": "ynebula",
"id": 22788865,
"node_id": "MDQ6VXNlcjIyNzg4ODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/22788865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ynebula",
"html_url": "https://github.com/ynebula",
"followers_url": "https://api.github.com/users/ynebula/followers",
"following_url": "https://api.github.com/users/ynebula/following{/other_user}",
"gists_url": "https://api.github.com/users/ynebula/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ynebula/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ynebula/subscriptions",
"organizations_url": "https://api.github.com/users/ynebula/orgs",
"repos_url": "https://api.github.com/users/ynebula/repos",
"events_url": "https://api.github.com/users/ynebula/events{/privacy}",
"received_events_url": "https://api.github.com/users/ynebula/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Have a look at the language modeling example:\r\n\r\nhttps://huggingface.co/transformers/examples.html#language-model-training\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py",
"Thank you for your answering.\r\n\r\nI did that^^",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,589 | 1,589 | NONE | null | I want to add wiki data to pretrained model.
i.e., kor wiki data is added to XLMRoberta Model weight.
I think that files in /transformers/templates/adding_a_new_model do
Is it right?
please let me know way to add data.
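(For future readers: as the reply above notes, this is done with the language modeling example, not the new-model template. A hedged sketch of the command; the flag names match the script at the time, but whether `xlm-roberta` is an accepted `--model_type` may depend on your version, and the file names here are placeholders:)
```bash
python examples/run_language_modeling.py \
    --model_type xlm-roberta \
    --model_name_or_path xlm-roberta-base \
    --train_data_file ko_wiki.txt \
    --mlm \
    --do_train \
    --output_dir xlmr-ko-continued/
```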
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3029/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3028/comments | https://api.github.com/repos/huggingface/transformers/issues/3028/events | https://github.com/huggingface/transformers/issues/3028 | 571,239,136 | MDU6SXNzdWU1NzEyMzkxMzY= | 3,028 | AttributeError: 'Tensor' object has no attribute 'size' | {
"login": "sainimohit23",
"id": 26195811,
"node_id": "MDQ6VXNlcjI2MTk1ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/26195811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sainimohit23",
"html_url": "https://github.com/sainimohit23",
"followers_url": "https://api.github.com/users/sainimohit23/followers",
"following_url": "https://api.github.com/users/sainimohit23/following{/other_user}",
"gists_url": "https://api.github.com/users/sainimohit23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sainimohit23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sainimohit23/subscriptions",
"organizations_url": "https://api.github.com/users/sainimohit23/orgs",
"repos_url": "https://api.github.com/users/sainimohit23/repos",
"events_url": "https://api.github.com/users/sainimohit23/events{/privacy}",
"received_events_url": "https://api.github.com/users/sainimohit23/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"You are using TF. Did you mean to use `TFDistilBertModel` instead of `DistilBertModel`?",
"@BramVanroy got it. Thanks.",
"I wonder if it would be possible to know in advance whether a model supports TF or pyTorch, or both... and throw an exception accordingly? "
] | 1,582 | 1,641 | 1,582 | NONE | null | This is the model that I have defined:
```python
input_layer = keras.layers.Input(shape = (attention_mask.shape[1],), dtype='int64')
bert = DistilBertModel.from_pretrained("distilbert-base-cased")(input_layer)
bert = bert[0][:,0,:]
# bert = keras.layers.Dense(units=10, activation='relu')(bert)
classifier = keras.layers.Dense(units=1, activation='sigmoid')(bert)
model = keras.models.Model(inputs=input_layer, outputs=classifier)
model.summary()
```
This is the error I am getting.
```
AttributeError Traceback (most recent call last)
<ipython-input-12-6d7e88036056> in <module>()
1 input_layer = keras.layers.Input(shape = (attention_mask.shape[1],), dtype='int64')
----> 2 bert = DistilBertModel.from_pretrained("distilbert-base-cased")(input_layer)
3 bert = bert[0][:,0,:]
4 # bert = keras.layers.Dense(units=10, activation='relu')(bert)
5 classifier = keras.layers.Dense(units=1, activation='sigmoid')(bert)
1 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds)
449 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
450 elif input_ids is not None:
--> 451 input_shape = input_ids.size()
452 elif inputs_embeds is not None:
453 input_shape = inputs_embeds.size()[:-1]
AttributeError: 'Tensor' object has no attribute 'size'
```
The same code works fine when distilbert is replaced with bert. What should I do in this case? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3028/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3027/comments | https://api.github.com/repos/huggingface/transformers/issues/3027/events | https://github.com/huggingface/transformers/issues/3027 | 571,104,664 | MDU6SXNzdWU1NzExMDQ2NjQ= | 3,027 | This class and module cannot be found | {
"login": "bboyxu5928",
"id": 36126405,
"node_id": "MDQ6VXNlcjM2MTI2NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/36126405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bboyxu5928",
"html_url": "https://github.com/bboyxu5928",
"followers_url": "https://api.github.com/users/bboyxu5928/followers",
"following_url": "https://api.github.com/users/bboyxu5928/following{/other_user}",
"gists_url": "https://api.github.com/users/bboyxu5928/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bboyxu5928/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bboyxu5928/subscriptions",
"organizations_url": "https://api.github.com/users/bboyxu5928/orgs",
"repos_url": "https://api.github.com/users/bboyxu5928/repos",
"events_url": "https://api.github.com/users/bboyxu5928/events{/privacy}",
"received_events_url": "https://api.github.com/users/bboyxu5928/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1843765959,
"node_id": "MDU6TGFiZWwxODQzNzY1OTU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Installation",
"name": "Installation",
"color": "bfdadc",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"You can do `pip install tokenizers` or [install from source](https://github.com/huggingface/tokenizers).",
"> You can do `pip install tokenizers` or [install from source](https://github.com/huggingface/tokenizers).\r\n\r\nOK, I've got it ,thank you",
"> pip install tokenizers\r\n\r\n```bash\r\nRequirement already satisfied: tokenizers in ./anaconda3/envs/dnabert/lib/python3.6/site-packages (0.8.1rc2)\r\n```\r\n\r\nI did it,but it also has the error",
"in my another computer, the env displays its version is 0.5.0, so why give me 0.8.1rc2? sometimes i want to say, the doc is hard to understand, when understanded, the errors occurs one by one because of some small tips, is it my fault? maybe, but why not write it straightly?"
] | 1,582 | 1,618 | 1,582 | NONE | null | I want to use BertForNextSentencePrediction in ' modeling_bert.py' line 1020 ,but when I run Examples in line 1065, error happens
File "E:\work\pycharm\transformers-master\src\transformers\tokenization_bert.py", line 24, in <module>
from tokenizers import BertWordPieceTokenizer
ImportError: No module named 'tokenizers'
Where can I find this `tokenizers` module?
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3027/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3027/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3026/comments | https://api.github.com/repos/huggingface/transformers/issues/3026/events | https://github.com/huggingface/transformers/issues/3026 | 571,094,325 | MDU6SXNzdWU1NzEwOTQzMjU= | 3,026 | language_modeling.py doesn't continue from last global step | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
Hey, the checkpoint's suffix is the last optimization step rather than the last global step (I'm working with gradient accumulation steps).
## Information
The problem arises when using:
* [x] the official example scripts: language_modeling.py
The task I am working on is:
* [x] an official GLUE/SQUaD task: language_modeling.py
## To reproduce
Steps to reproduce the behavior:
1. run a model on language_modeling.py script with an accumulation step > 0
2. save a checkpoint after x > 0 steps and exit
3. try to continue training; it will resume from the last optimization step rather than the global step
```
roberta-base-openai-detector, roberta-large-openai-detector). Assuming 'tmlm_roberta_output/checkpoint-480' is a path, a model identifier, or url to a directory containing tokenizer files.
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - Didn't find file tmlm_roberta_output/checkpoint-480/added_tokens.json. We won't load it.
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file tmlm_roberta_output/checkpoint-480/vocab.json
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file tmlm_roberta_output/checkpoint-480/merges.txt
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file None
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file tmlm_roberta_output/checkpoint-480/special_tokens_map.json
02/26/2020 07:39:49 - INFO - transformers.tokenization_utils - loading file tmlm_roberta_output/checkpoint-480/tokenizer_config.json
02/26/2020 07:39:49 - INFO - transformers.modeling_utils - loading weights file tmlm_roberta_output/checkpoint-480/pytorch_model.bin
init tud head
02/26/2020 07:40:08 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=512, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='/specific/netapp5_2/gamir/advml19/yuvalk/project/transformers/examples/lm_data/wiki.test.raw.time_filter.normalized', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=64, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='tmlm_roberta_output/checkpoint-480', model_type='roberta', n_gpu=4, no_cuda=False, num_train_epochs=1.0, output_dir='tmlm_roberta_output', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=1, save_steps=80, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=True, tokenizer_name=None, train_data_file='/specific/netapp5_2/gamir/advml19/yuvalk/project/transformers/examples/lm_data/wiki.train.raw.time_filter.normalized', warmup_steps=0, weight_decay=0.0)
02/26/2020 07:40:08 - INFO - __main__ - Loading features from cached file /specific/netapp5_2/gamir/advml19/yuvalk/project/transformers/examples/lm_data/roberta_cached_lm_510_wiki.train.raw.time_filter.normalized
02/26/2020 07:40:16 - INFO - __main__ - ***** Running training *****
02/26/2020 07:40:16 - INFO - __main__ - Num examples = 163046
02/26/2020 07:40:16 - INFO - __main__ - Num Epochs = 1
02/26/2020 07:40:16 - INFO - __main__ - Instantaneous batch size per GPU = 1
02/26/2020 07:40:16 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 256
02/26/2020 07:40:16 - INFO - __main__ - Gradient Accumulation steps = 64
02/26/2020 07:40:16 - INFO - __main__ - Total optimization steps = 636
02/26/2020 07:40:16 - INFO - __main__ - Continuing training from checkpoint, will skip to saved global_step
02/26/2020 07:40:16 - INFO - __main__ - Continuing training from epoch 0
02/26/2020 07:40:16 - INFO - __main__ - Continuing training from global step 480
02/26/2020 07:40:16 - INFO - __main__ - Will skip the first 480 steps in the first epoch
```
## Expected behavior
I expect it to resume from the last global step, i.e. optimization steps * gradient accumulation steps. Note that optimization steps == checkpoint suffix.
## I made the following changes and it seems to work ok:
Former code:
```python
global_step = int(checkpoint_suffix)
epochs_trained = global_step // (len(train_dataloader) // args.gradient_accumulation_steps)
steps_trained_in_current_epoch = global_step % (len(train_dataloader) // args.gradient_accumulation_steps)
```
New code:
```python
global_step = int(checkpoint_suffix) * args.gradient_accumulation_steps
epochs_trained = global_step // len(train_dataloader)
steps_trained_in_current_epoch = global_step % len(train_dataloader)
```
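A quick worked example with the numbers from the log above, to show what the change computes:
```python
checkpoint_suffix = 480        # optimizer steps encoded in "checkpoint-480"
accumulation_steps = 64        # args.gradient_accumulation_steps from the log
global_step = checkpoint_suffix * accumulation_steps  # 30720 micro-batches already consumed
# epochs_trained and steps_trained_in_current_epoch then follow from len(train_dataloader)
```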
- `transformers` version: latest
- Platform:
- Python version: 3.7
- PyTorch version (GPU?): GPU
- Using GPU in script?: yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3026/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3025/comments | https://api.github.com/repos/huggingface/transformers/issues/3025/events | https://github.com/huggingface/transformers/issues/3025 | 571,030,989 | MDU6SXNzdWU1NzEwMzA5ODk= | 3,025 | Why do I run example/run_ner.py with no output | {
"login": "Vvegetables",
"id": 9248572,
"node_id": "MDQ6VXNlcjkyNDg1NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9248572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vvegetables",
"html_url": "https://github.com/Vvegetables",
"followers_url": "https://api.github.com/users/Vvegetables/followers",
"following_url": "https://api.github.com/users/Vvegetables/following{/other_user}",
"gists_url": "https://api.github.com/users/Vvegetables/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vvegetables/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vvegetables/subscriptions",
"organizations_url": "https://api.github.com/users/Vvegetables/orgs",
"repos_url": "https://api.github.com/users/Vvegetables/repos",
"events_url": "https://api.github.com/users/Vvegetables/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vvegetables/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"Please don't post a screenshot. Copy-and-paste your code or output instead.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@Vvegetables , does this problem still exist?",
"it's fine,thanks\r\n\r\n\r\n发自我的iPhone\r\n\r\n------------------ Original ------------------\r\nFrom: Stefan Schweter <[email protected]>\r\nDate: 周日,4月 26,2020 20:55\r\nTo: huggingface/transformers <[email protected]>\r\nCc: Vvegetables <[email protected]>, Mention <[email protected]>\r\nSubject: Re: [huggingface/transformers] Why do I run example/run_ner.py with no output (#3025)\r\n\r\n\r\n\r\n\r\n\r\n \r\n@Vvegetables , does this problem still exist?\r\n \r\n—\r\nYou are receiving this because you were mentioned.\r\nReply to this email directly, view it on GitHub, or unsubscribe.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,593 | 1,593 | NONE | null | # ❓ Questions & Help
## Details
### This is the log from my run. Any ideas what's wrong? Please help me

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3025/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3024/comments | https://api.github.com/repos/huggingface/transformers/issues/3024/events | https://github.com/huggingface/transformers/pull/3024 | 570,991,180 | MDExOlB1bGxSZXF1ZXN0Mzc5OTQyMDMw | 3,024 | Fix (non-slow) tests on GPU (torch) | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=h1) Report\n> Merging [#3024](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bb7c46852051f7d031dd4be0240c9c9db82f6ed9?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3024 +/- ##\n=======================================\n Coverage 77.26% 77.26% \n=======================================\n Files 98 98 \n Lines 16047 16047 \n=======================================\n Hits 12399 12399 \n Misses 3648 3648\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `84.58% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=footer). Last update [bb7c468...9c14bc7](https://codecov.io/gh/huggingface/transformers/pull/3024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | Fixes ~29 failing tests
Also cf. a previous commit from December: https://github.com/huggingface/transformers/pull/2055/commits/61978c1dd3f340a545e74537c3dae41a4514e867
(not sure why T5 and the common methods were not failing back then) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3024/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3024",
"html_url": "https://github.com/huggingface/transformers/pull/3024",
"diff_url": "https://github.com/huggingface/transformers/pull/3024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3024.patch",
"merged_at": 1582736365000
} |
https://api.github.com/repos/huggingface/transformers/issues/3023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3023/comments | https://api.github.com/repos/huggingface/transformers/issues/3023/events | https://github.com/huggingface/transformers/issues/3023 | 570,987,485 | MDU6SXNzdWU1NzA5ODc0ODU= | 3,023 | BART : host `bart-large-cnn` | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 1843738573,
"node_id": "MDU6TGFiZWwxODQzNzM4NTcz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Encoder-Decoder",
"name": "Core: Encoder-Decoder",
"color": "ef536d",
"default": false,
"description": ""
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"IIRC @sshleifer is working on getting summarization working with that model."
] | 1,582 | 1,583 | 1,583 | CONTRIBUTOR | null | # 🌟 New model addition
The pretrained BART model `bart-large` is currently provided, as well as the fine-tuned model `bart-large-mnli`.
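If the requested checkpoint were hosted as well, usage would presumably follow the standard pattern. A minimal sketch, assuming a recent transformers release; the `facebook/bart-large-cnn` id and the generation settings are illustrative assumptions relative to this issue (fine-tuned BART checkpoints were later published under the `facebook/` prefix):

```
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')

article = "..."  # placeholder: any article text to summarize
# encode the article, truncating to BART's 1024-token context window
input_ids = tokenizer.encode(article, return_tensors='pt', max_length=1024, truncation=True)
# beam search with a length cap; settings here are illustrative, not tuned
summary_ids = model.generate(input_ids, num_beams=4, max_length=60, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```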
**So, how about [`bart-large-cnn`](https://github.com/pytorch/fairseq/tree/master/examples/bart#pre-trained-models)?** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3023/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3022/comments | https://api.github.com/repos/huggingface/transformers/issues/3022/events | https://github.com/huggingface/transformers/pull/3022 | 570,896,940 | MDExOlB1bGxSZXF1ZXN0Mzc5ODU2Njc4 | 3,022 | Make format consistent with that of PreTrainedTokenizer | {
"login": "ranamihir",
"id": 8270471,
"node_id": "MDQ6VXNlcjgyNzA0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8270471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranamihir",
"html_url": "https://github.com/ranamihir",
"followers_url": "https://api.github.com/users/ranamihir/followers",
"following_url": "https://api.github.com/users/ranamihir/following{/other_user}",
"gists_url": "https://api.github.com/users/ranamihir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranamihir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranamihir/subscriptions",
"organizations_url": "https://api.github.com/users/ranamihir/orgs",
"repos_url": "https://api.github.com/users/ranamihir/repos",
"events_url": "https://api.github.com/users/ranamihir/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranamihir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@c913eb9`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3022 +/- ##\n=========================================\n Coverage ? 77.26% \n=========================================\n Files ? 98 \n Lines ? 16040 \n Branches ? 0 \n=========================================\n Hits ? 12393 \n Misses ? 3647 \n Partials ? 0\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.57% <100%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=footer). Last update [c913eb9...34be93e](https://codecov.io/gh/huggingface/transformers/pull/3022?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3022/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3022",
"html_url": "https://github.com/huggingface/transformers/pull/3022",
"diff_url": "https://github.com/huggingface/transformers/pull/3022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3022.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3021/comments | https://api.github.com/repos/huggingface/transformers/issues/3021/events | https://github.com/huggingface/transformers/issues/3021 | 570,865,148 | MDU6SXNzdWU1NzA4NjUxNDg= | 3,021 | Can GPT2LMHeadModel do batch inference with variable sentence lengths? | {
"login": "schizism",
"id": 3358940,
"node_id": "MDQ6VXNlcjMzNTg5NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3358940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schizism",
"html_url": "https://github.com/schizism",
"followers_url": "https://api.github.com/users/schizism/followers",
"following_url": "https://api.github.com/users/schizism/following{/other_user}",
"gists_url": "https://api.github.com/users/schizism/gists{/gist_id}",
"starred_url": "https://api.github.com/users/schizism/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schizism/subscriptions",
"organizations_url": "https://api.github.com/users/schizism/orgs",
"repos_url": "https://api.github.com/users/schizism/repos",
"events_url": "https://api.github.com/users/schizism/events{/privacy}",
"received_events_url": "https://api.github.com/users/schizism/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"It seems possible to by-pass this issue by setting appropriate `attention_mask` so that no tokens will attend the positions that are supposed to be paddings, this way you can use whatever token as padding. I'm working on this issue too, will try to follow up if it works out.",
"I tried a rough version, basically adding attention mask to the padding positions and keep updating this mask as generation grows. One thing worth noting is that in the first step instead of extract the -1-th positions output for each sample, we need to keep track of the real prompt ending position, otherwise sometimes the output from padding positions will be extracted and produce random results.\r\n\r\nCode snippet:\r\n\r\n```\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n\r\nprompt_text = [\r\n 'in this paper we',\r\n 'we are trying to',\r\n 'The purpose of this workshop is to check whether we can', ]\r\nbatch_size = len(prompt_text)\r\nmax_length = 30\r\neos_token_id = tokenizer.eos_token_id\r\n\r\nmodel = model.cuda()\r\n\r\ntoken_ids = [tokenizer.encode(s, add_special_tokens=False) for s in prompt_text]\r\nprompt_lengths = [len(s) for s in token_ids]\r\nmax_prompt_len = max(prompt_lengths)\r\n\r\n# use 0 as padding id, shouldn't matter\r\npadded_tokens = [tok_ids + [0] * (max_prompt_len - len(tok_ids)) for tok_ids in token_ids]\r\ninput_ids = torch.LongTensor(padded_tokens).cuda()\r\nattn_mask = torch.zeros(input_ids.shape).long().cuda()\r\nfor ix, tok_ids in enumerate(token_ids):\r\n attn_mask[ix][:len(tok_ids)] = 1\r\n\r\nunfinished_sents = input_ids.new(batch_size).fill_(1)\r\npast = None\r\ncur_len = input_ids.shape[1]\r\n\r\ndef post_processing(input_ids, attn_mask):\r\n \"\"\"Remove padding tokens in the middle of the sequence.\"\"\"\r\n input_ids_proc = []\r\n for ix, seq in enumerate(input_ids):\r\n input_ids_proc.append([tok_id for tok_id, mask in zip(seq, attn_mask[ix]) if mask != 0])\r\n return input_ids_proc\r\n\r\n\r\ninput_lengths_index = torch.tensor([x - 1 for x in prompt_lengths]).cuda()\r\ninput_lengths_index = input_lengths_index.view(-1, 1).repeat(1, 50257).unsqueeze(1)\r\n\r\nwhile cur_len < max_length:\r\n model_inputs = model.prepare_inputs_for_generation(input_ids, past=past, attention_mask=attn_mask)\r\n outputs = model(**model_inputs)\r\n if cur_len == max_prompt_len:\r\n # at first step we can't directly extract the -1-th position's\r\n # prediction for next word, since for some samples the -1-th\r\n # token is PAD. Instead we keep track of the real prompt ending.\r\n next_token_logits = outputs[0].gather(1, input_lengths_index).squeeze(1)\r\n else:\r\n next_token_logits = outputs[0][:, -1, :]\r\n past = outputs[1]\r\n next_token = torch.argmax(next_token_logits, dim=-1)\r\n tokens_to_add = next_token * unfinished_sents + 0 * (1 - unfinished_sents)\r\n input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)\r\n attn_mask = torch.cat([attn_mask, torch.ones((batch_size, 1)).long().cuda()], dim=1)\r\n\r\n unfinished_sents.mul_(tokens_to_add.ne(eos_token_id).long())\r\n cur_len += 1\r\n\r\n if unfinished_sents.max() == 0:\r\n break\r\n\r\ninput_ids = post_processing(input_ids, attn_mask)\r\nfor item in input_ids:\r\n print(tokenizer.decode(item))\r\n```\r\n\r\nAlso a minor change to `src/transformers/modeling_gpt2.py`:\r\n\r\nline 422: `attention_mask = attention_mask.view(-1, input_shape[-1])`\r\n\r\nchange to `attention_mask = attention_mask.view(input_shape[0], -1)`\r\n\r\n(not sure if this change will break other things)\r\n\r\nOutput:\r\n\r\n`in this paper we have a very good idea of how to use the data to make predictions about the future. 
We`\r\n`we are trying to get the best possible deal for the best price. We are not going to be able to offer`\r\n`The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.`\r\n\r\n\r\n\r\n",
"@schizism Concerning LM inference on batches of different lengths is actually a problem we are currently looking at. Ideally, you should be able to simple put your input_ids (and an attention_mask) to model.generate() to make it work. \r\n\r\n@XinyuHua thanks for your great contribution to make LM inference work on batches having different lengths. Also it seems like you found a bug, when using the `past` and `attention_mask` variables as an input in GPT2. That's great! I will open a new issue for that and take a look :-) \r\n\r\nBelow, I am adding a simplified code snippet using simpler tokenization functions.\r\nIn this code, no `past` variable is used related to the bug found by @XinyuHua.\r\n\r\n```\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\nimport torch\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')\r\n# IMPORTANT: Note that setting the <PAD> token like this itn the constructor gives the\r\n# pad_token the pad_token_id = 50256, which normally belongs to <BOS> token_ids in GPT2\r\n# This is a very ugly way that works at the moment of setting the pad_token_id to the <BOS> token that is already included in the vocab size. This will be updated in the coming weeks! # noqa: E501\r\n\r\nprompt_text = [\r\n 'in this paper we',\r\n 'we are trying to',\r\n 'The purpose of this workshop is to check whether we can']\r\n\r\n# encode plus batch handles multiple batches and automatically creates attention_masks\r\nseq_len = 11\r\nencodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=seq_len, pad_to_max_length=True)\r\n\r\n# ideally we should be able to just input the following two variables to the function model.generate() ... => to be implemented soon! 
# noqa: E501\r\ninput_ids = torch.tensor(encodings_dict['input_ids'])\r\nattn_mask = torch.tensor(encodings_dict['attention_mask'])\r\n\r\nnum_tokens_to_produce = 20\r\npad_token_id = tokenizer.pad_token_id\r\neos_token_id = tokenizer.eos_token_id\r\neos_not_in_sents = torch.ones(input_ids.shape[0]).long()\r\n\r\n# we need to get the token ids of the last non-padded value\r\nlast_non_masked_idx = torch.sum(attn_mask, dim=1) - 1\r\nstart_idx = inp_idx = (last_non_masked_idx).view(-1, 1).repeat(1, tokenizer.vocab_size).unsqueeze(1)\r\npast = None\r\n\r\n# get correct position ids\r\nposition_ids = torch.tensor([list(range(seq_len)) for i in range(input_ids.shape[0])])\r\nfor i, position_ids_slice in enumerate(position_ids):\r\n position_ids_slice[last_non_masked_idx[i]:] = position_ids_slice[last_non_masked_idx[i]]\r\n\r\nfor step in range(num_tokens_to_produce):\r\n outputs = model(input_ids, attention_mask=attn_mask, position_ids=position_ids)\r\n\r\n # in the first decoding step, we want to use the 'real' last position for each sentence\r\n if step == 0:\r\n next_token_logits = outputs[0].gather(1, start_idx).squeeze(1)\r\n else:\r\n next_token_logits = outputs[0][:, -1, :]\r\n\r\n next_tokens = torch.argmax(next_token_logits, dim=-1)\r\n\r\n # this updates which sentences have not seen an <EOS> token so far\r\n # if one <EOS> token was seen the sentence is finished\r\n eos_not_in_sents.mul_(next_tokens.ne(eos_token_id).long())\r\n\r\n # either append a padding token here if <EOS> has been seen or append next token\r\n tokens_to_add = next_tokens * (eos_not_in_sents) + pad_token_id * (1 - eos_not_in_sents)\r\n\r\n # Update input_ids, attn_mask and position_ids\r\n input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)\r\n attn_mask = torch.cat([attn_mask, torch.ones((attn_mask.shape[0], 1)).long()], dim=1)\r\n position_ids = torch.cat([position_ids, (position_ids[:, -1] + 1).unsqueeze(-1)], dim=1)\r\n\r\n[print(tokenizer.decode(output, skip_special_tokens=True)) for output in input_ids]\r\n```\r\n",
"Thanks for this much cleaned version @patrickvonplaten! Just one quick issue, I forgot to modify the position ids for each sample, so the padding will add up to the position ids and future tokens will get wrong position ids. This might cause issues when the prompt lengths in a batch are very different.",
"Fixed the issue #3033 regarding the attention mask with your proposed solution @XinyuHua - thanks! ",
"> Thanks for this much cleaned version @patrickvonplaten! Just one quick issue, I forgot to modify the position ids for each sample, so the padding will add up to the position ids and future tokens will get wrong position ids. This might cause issues when the prompt lengths in a batch are very different.\r\n\r\nadded the correct position ids. Feel free to review and comment! ",
"Thank you @XinyuHua @patrickvonplaten! These are very helpful!",
"@patrickvonplaten It looks like `tokens_to_add` in your script is unused, should that be used in place of `next_tokens` in the line `input_ids = torch.cat([input_ids, next_tokens.unsqueeze(-1)], dim=-1)`?",
"Uups! Yeah definitely - thanks a lot for pointing this out. Edited the script :-) ",
"Hi, padding still seems to be an issue with LMHeads in case of just perplexity calculation (and not generation). I am trying to run examples/run_language_modelling.py and having a hard time using GPT2LMHeadModel and same is the case with transformer-XL. I am running it in just evaluation mode (by setting --do_eval).\r\n\r\nThat example code uses training.py and data/data_collator.py, which throws the following error while batching sentences: \r\n\"ValueError: You are attempting to pad samples but the tokenizer you are using (TransfoXLTokenizer) does not have one.\"\r\n\r\nAny idea where I could be going wrong?\r\nThanks",
"@bajajahsaas Are you using the `--line_by_line` flag? Can you post the exact command you're running?",
"@julien-c I just ran into the exact same issue and I am indeed using the `--line_by_line` flag. The exact command I'm using:\r\n```\r\npython run_language_modeling.py \\\r\n --output_dir='/content/drive/My Drive/finetuned_models/run1' \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --save_total_limit=5 \\\r\n --num_train_epochs=1.0 \\\r\n --overwrite_output_dir \\\r\n --do_train \\\r\n --evaluate_during_training \\\r\n --logging_steps=1000 \\\r\n --save_steps=1000 \\\r\n --train_data_file=/content/train.txt \\\r\n --line_by_line \\\r\n --do_eval \\\r\n --eval_data_file=/content/valid.txt \\\r\n --per_gpu_train_batch_size=2 \\\r\n --per_gpu_eval_batch_size=2 \\\r\n```\r\nIf I take the `--line_by_line` flag out, the command executes fine.",
"HI @julien-c, thanks for checking this. I am using `--line_by_line` and my exact command is as below:\r\n\r\n`python run_lm.py --model_type gpt2 --model_name_or_path gpt2 --do_eval --eval_data_file ../../data/wikitext-103/valid.txt --line_by_line --output_dir logslm `\r\n\r\nI am just running inference on wikitext-103 dataset, and both xlnet and transformer-xl are throwing this error. However, since the error is caused by: https://github.com/huggingface/transformers/blob/4e817ff41885063e08bb3bcd63e5adfd835b9911/src/transformers/data/data_collator.py#L106\r\nI tried a simple workaround using: `tokenizer.pad_token = \"<pad>\"`. I am not sure if this is a correct fix and even perplexity scores are not matching on standard datasets. Note: I am not doing any training, just perplexity calculation.",
"Yes GPT2 is not compatible with the LineByLineDataset, because it doesn't have a padding token out of the box.\r\n\r\nFeel free to propose an update to the error's wording if you think of a clearer way to express that.",
"Sure, thanks for looking into this. Moreover, how shall we use this example code (run_language_modelling.py) for such models? I tried removing `--line_by_line` for wikitext-103 dataset, but that screws up the data processing in my opinion.",
"This is not a real fix, more of a hack, but if you change the code in `transformers.data.data_collator.DataCollatorForLanguageModelling._tensorize_batch`\r\nfrom:\r\n```\r\nif self.tokenizer._pad_token is None:\r\n raise ValueError(...)\r\n```\r\nto:\r\n```\r\nif self.tokenizer._pad_token is None:\r\n return pad_sequence(examples, batch_first=True)\r\n```\r\nThe language modelling script will run fine with the --line_by_line. In practice, it means it does padding with zeros, which is the default value for padding_value.\r\n\r\nThis \"error\" was introduced a week ago with the commit to master dd9d483d03962fea127f59661f3ae6156e7a91d2 by @julien-c that refactored the LM train script. I was using the LM script with the same data before that and it was working.\r\n\r\nI am not sure how \"wrong\" this is, but I'm using a dataset of relatively short texts (up to 400 words each, often shorter), and I'm getting decent results. I get a bunch of \"!\" (the token 0) at the end of the generation sometimes, but other than that, it looks good.\r\n\r\nI tried an alternative of separating the short texts with <|endoftext|> tokens, and training without the --line_by_line option, but the results I get in generation are qualitatively much worse.",
"Hi @jorgemcgomes, thanks for checking. However, check this [issue](https://github.com/huggingface/transformers/issues/586), it seems tokenizers have 0 index pointed to some vocab token.",
"How about using left side padding for GPT-2, and use attention mask to avoid attending to those padded words? Of course, position_ids shall be set properly to avoid impacting position embeddings. This approach could work with past state since padding word will not be in the middle after appending generated word.\r\n\r\n",
"> How about using left side padding for GPT-2, and use attention mask to avoid attending to those padded words? Of course, position_ids shall be set properly to avoid impacting position embeddings. This approach could work with past state since padding word will not be in the middle after appending generated word.\r\n\r\n@tianleiwu this worked for me! Saved me HOURS in compute time, thank you!\r\n\r\n```python\r\ntokenizer.padding_side = \"left\"\r\nencoded_prompt_dict = tokenizer.batch_encode_plus(input, return_tensors=\"pt\", pad_to_max_length=True)\r\nencoded_prompt = encoded_prompt_dict['input_ids'].to(args.device)\r\nencoded_mask = encoded_prompt_dict['attention_mask'].to(args.device)\r\n```",
"> @schizism Concerning LM inference on batches of different lengths is actually a problem we are currently looking at. Ideally, you should be able to simple put your input_ids (and an attention_mask) to model.generate() to make it work.\r\n> \r\n> @XinyuHua thanks for your great contribution to make LM inference work on batches having different lengths. Also it seems like you found a bug, when using the `past` and `attention_mask` variables as an input in GPT2. That's great! I will open a new issue for that and take a look :-)\r\n> \r\n> Below, I am adding a simplified code snippet using simpler tokenization functions.\r\n> In this code, no `past` variable is used related to the bug found by @XinyuHua.\r\n> \r\n> ```\r\n> from transformers import GPT2LMHeadModel, GPT2Tokenizer\r\n> import torch\r\n> \r\n> model = GPT2LMHeadModel.from_pretrained('gpt2')\r\n> tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')\r\n> # IMPORTANT: Note that setting the <PAD> token like this itn the constructor gives the\r\n> # pad_token the pad_token_id = 50256, which normally belongs to <BOS> token_ids in GPT2\r\n> # This is a very ugly way that works at the moment of setting the pad_token_id to the <BOS> token that is already included in the vocab size. This will be updated in the coming weeks! # noqa: E501\r\n> \r\n> prompt_text = [\r\n> 'in this paper we',\r\n> 'we are trying to',\r\n> 'The purpose of this workshop is to check whether we can']\r\n> \r\n> # encode plus batch handles multiple batches and automatically creates attention_masks\r\n> seq_len = 11\r\n> encodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=seq_len, pad_to_max_length=True)\r\n> \r\n> # ideally we should be able to just input the following two variables to the function model.generate() ... => to be implemented soon! 
# noqa: E501\r\n> input_ids = torch.tensor(encodings_dict['input_ids'])\r\n> attn_mask = torch.tensor(encodings_dict['attention_mask'])\r\n> \r\n> num_tokens_to_produce = 20\r\n> pad_token_id = tokenizer.pad_token_id\r\n> eos_token_id = tokenizer.eos_token_id\r\n> eos_not_in_sents = torch.ones(input_ids.shape[0]).long()\r\n> \r\n> # we need to get the token ids of the last non-padded value\r\n> last_non_masked_idx = torch.sum(attn_mask, dim=1) - 1\r\n> start_idx = inp_idx = (last_non_masked_idx).view(-1, 1).repeat(1, tokenizer.vocab_size).unsqueeze(1)\r\n> past = None\r\n> \r\n> # get correct position ids\r\n> position_ids = torch.tensor([list(range(seq_len)) for i in range(input_ids.shape[0])])\r\n> for i, position_ids_slice in enumerate(position_ids):\r\n> position_ids_slice[last_non_masked_idx[i]:] = position_ids_slice[last_non_masked_idx[i]]\r\n> \r\n> for step in range(num_tokens_to_produce):\r\n> outputs = model(input_ids, attention_mask=attn_mask, position_ids=position_ids)\r\n> \r\n> # in the first decoding step, we want to use the 'real' last position for each sentence\r\n> if step == 0:\r\n> next_token_logits = outputs[0].gather(1, start_idx).squeeze(1)\r\n> else:\r\n> next_token_logits = outputs[0][:, -1, :]\r\n> \r\n> next_tokens = torch.argmax(next_token_logits, dim=-1)\r\n> \r\n> # this updates which sentences have not seen an <EOS> token so far\r\n> # if one <EOS> token was seen the sentence is finished\r\n> eos_not_in_sents.mul_(next_tokens.ne(eos_token_id).long())\r\n> \r\n> # either append a padding token here if <EOS> has been seen or append next token\r\n> tokens_to_add = next_tokens * (eos_not_in_sents) + pad_token_id * (1 - eos_not_in_sents)\r\n> \r\n> # Update input_ids, attn_mask and position_ids\r\n> input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)\r\n> attn_mask = torch.cat([attn_mask, torch.ones((attn_mask.shape[0], 1)).long()], dim=1)\r\n> position_ids = torch.cat([position_ids, (position_ids[:, -1] + 1).unsqueeze(-1)], dim=1)\r\n> \r\n> [print(tokenizer.decode(output, skip_special_tokens=True)) for output in input_ids]\r\n> ```\r\n\r\n@patrickvonplaten Thanks for sharing this, I wonder if inputting `input_ids` and `attn_mask` to `model.generate` is possible now? is this feature available now?\r\nI've tried it and I think there should be some concerns regarding positional_embedding since I don't get meaningful result. \r\n\r\nOn the other hand when I try setting `tokenizer.padding_side = \"left\"` as suggested/tried by @tianleiwu @AADeLucia, I get the same output for different hyper parameters like k_sampling, p_sampling, length, ...\r\n@AADeLucia @tianleiwu have you been successful on this? did you take any action regarding position_ids?\r\n\r\nWould appreciate any pointer.\r\n",
"@fabrahman I realize I have huggingface version 2.8 installed, which was not working with `generate()`. I used the left-side padding with p-sampling and it worked for me (i.e. the outputs were reasonable for the settings and I was not getting the same issues as when I did not use left-side padding). I took no action regarding position_ids and I only provided the attention mask. Maybe the newest version of huggingface implemented `generate()` correctly?\r\n\r\nWhat do you mean you get the same output? Can you post your code?",
"@AADeLucia thanks for you quick reply. When you say it was not working with `generate()`, does that mean you got errors when passing `encoded_prompt ` and 'encoded_mask` to generate function?\r\n\r\nActually, I resolved same outputs with different decoding issue, but now I get similar outputs if I sample 5 times `(num_return_sequences=5)`. That is the returning sequences are the same:\r\nThis is the code I am trying as an example:\r\n\r\n```\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\nimport torch\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')\r\nprompt_text = [\r\n 'in this paper we',\r\n 'we are trying to',\r\n 'The purpose of this workshop is to check whether we can']\r\n\r\n# encode plus batch handles multiple batches and automatically creates attention_masks\r\nseq_len = 11\r\ntokenizer.padding_side = \"left\"\r\nencodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=seq_len, pad_to_max_length=True)\r\n\r\ninput_ids = torch.tensor(encodings_dict['input_ids'])\r\nattn_mask = torch.tensor(encodings_dict['attention_mask'])\r\n\r\noutputs = model.generate(input_ids, attention_mask=attn_mask, do_sample=True, max_length=40, top_k=10, num_return_sequences=5)\r\noutputs = [tokenizer.decode(output, skip_special_tokens=True) for output in outputs]\r\noutputs = [text[:text.find(\".\")+1] for text in outputs if \".\" in text]\r\noutputs\r\n```\r\nand here is the output results:\r\n```\r\n['in this paper we present a new approach to the problem of the \"unconscious\" and the \"conscious\" in the study of the unconscious.',\r\n 'in this paper we present a new approach to the problem of the \"unconscious\" and the \"conscious\" in the study of the unconscious.',\r\n 'in this paper we present a new approach to the problem of the \"unconscious\" and the \"conscious\" in the study of the unconscious.',\r\n 'in this paper we present a new approach to the problem of the \"unconscious\" and the \"conscious\" in the study of the unconscious.',\r\n 'in this paper we present a new approach to the problem of the \"unconscious\" and the \"conscious\" in the study of the unconscious.',\r\n 'we are trying to get a new version of the game to work on the PC.',\r\n 'we are trying to get a new version of the game to work on the PC.',\r\n 'we are trying to get a new version of the game to work on the PC.',\r\n 'we are trying to get a new version of the game to work on the PC.',\r\n 'we are trying to get a new version of the game to work on the PC.',\r\n 'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.',\r\n 'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.',\r\n 'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.',\r\n 'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.',\r\n 'The purpose of this workshop is to check whether we can make a difference in the lives of people who are struggling with mental illness.']\r\n```",
"@faiazrahman By \"not working\" I mean I would pass in padded prompts and masks and the model would generate as if the mask was not there. So the padded prompts were like\r\n```\r\n<|startoftext|>Hello there<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>\r\n```\r\n(I padded with `<|endoftext|>` but it shouldn't matter as long as the attention mask is working)\r\nAnd then the output would see the multiple `<|endoftext|>` padding tokens and start generating `<|startoftext|>` instead of continuing from the prompts!\r\n\r\nHmm I only generated 1 sequence for each input. But I just tried to generate multiple outputs as a test. I run into the same repetition issue as you with top-k but not with top-p. ",
"I believe Alexandra meant to tag @fabrahman :) ",
"> @faiazrahman By \"not working\" I mean I would pass in padded prompts and masks and the model would generate as if the mask was not there. So the padded prompts were like\r\n> \r\n> ```\r\n> <|startoftext|>Hello there<|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>\r\n> ```\r\n> \r\n> (I padded with `<|endoftext|>` but it shouldn't matter as long as the attention mask is working)\r\n> And then the output would see the multiple `<|endoftext|>` padding tokens and start generating `<|startoftext|>` instead of continuing from the prompts!\r\n> \r\n> Hmm I only generated 1 sequence for each input. But I just tried to generate multiple outputs as a test. I run into the same repetition issue as you with top-k but not with top-p.\r\n\r\n@AADeLucia I actually found the issue. It is because I am passing both `top_p=0` and `top_k=10`. When I removed top_p in case of topk_sampling the problem resolved. I updated my code snippet. \r\nBTW my transformer version is `2.11.0` in case you wanted to try. \r\n\r\n@patrickvonplaten Would you please confirm if [this](https://github.com/huggingface/transformers/issues/3021#issuecomment-669511291) is the right approach and doesn't crash anything?",
"@fabrahman, \r\n\r\nI did not use generate() method but batch inference works for me like the following way:\r\n(1) Get input_ids and attention_mask from tokenizer.batch_encode_plus directly. The padding strategy does not matter.\r\n```\r\n position_ids = (attention_mask.long().cumsum(-1) - 1)\r\n position_ids.masked_fill_(position_ids < 0, 0)\r\n past = None\r\n```\r\n(2) Use model to do inference and get outputs including past. For example, we can construct new inputs like:\r\n* update past tensor from the outputs\r\n* input_ids is the generated tokens with shape (batch_size, 1)\r\n``` \r\n position_ids = (position_ids[:,-1] + 1).reshape(batch_size,1)\r\n attention_mask = torch.cat([attention_mask, torch.ones([self.batch_size, 1]).type_as(attention_mask)], 1).to(device)\r\n```\r\nLoop this step until exit condition is satisfied.\r\n\r\nI have a [notebook](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2_with_OnnxRuntime_on_CPU.ipynb) shows example of batch generation",
"Sorry for the wrong tag! And @fabrahman , glad you found the bug!",
"For GPT2LMHeadModel, I think we can do this:\r\n```python\r\ndef prepare_inputs_for_generation(self, input_ids, past=None, **kwargs):\r\n # only last token for inputs_ids if past is defined in kwargs\r\n if past:\r\n input_ids = input_ids[:, -1].unsqueeze(-1)\r\n\r\n attention_mask = kwargs.get(\"attention_mask\", None)\r\n if attention_mask is not None:\r\n position_ids = (attention_mask.long().cumsum(-1) - 1)\r\n position_ids.masked_fill_(attention_mask==0, 0) # can be filled with anything >= 0\r\n if past:\r\n position_ids = position_ids[:, -1].unsqueeze(-1)\r\n else:\r\n position_ids = None\r\n return {\r\n \"input_ids\": input_ids,\r\n \"past_key_values\": past,\r\n \"use_cache\": kwargs.get(\"use_cache\"),\r\n \"position_ids\": position_ids,\r\n \"attention_mask\": attention_mask, # I forgot to add this line and it took me hours debugging.\r\n }\r\n```\r\nhere:\r\nhttps://github.com/huggingface/transformers/blob/4bd7be9a4268221d2a0000c7e8033aaeb365c03b/src/transformers/modeling_gpt2.py#L665-L674\r\n\r\nSo we don't need to care about position ids in `generate()`, since it calls`prepare_inputs_for_generation`.\r\nhttps://github.com/huggingface/transformers/blob/4bd7be9a4268221d2a0000c7e8033aaeb365c03b/src/transformers/generation_utils.py#L534-L536\r\n\r\nAnd in `examples/text-generation/run_generation.py`,\r\nuse `tokenizer.padding_side = \"left\"` to avoid this:\r\n> ```python\r\n> for step in range(num_tokens_to_produce):\r\n> outputs = model(input_ids, attention_mask=attn_mask, position_ids=position_ids)\r\n> \r\n> # in the first decoding step, we want to use the 'real' last position for each sentence\r\n> if step == 0:\r\n> next_token_logits = outputs[0].gather(1, start_idx).squeeze(1)\r\n> else:\r\n> next_token_logits = outputs[0][:, -1, :]\r\n> \r\n> next_tokens = torch.argmax(next_token_logits, dim=-1)\r\n> ```\r\n\r\nand use `tokenizer.batch_encode_plus` to get attention_mask and pass to `generate()`.\r\n\r\n@patrickvonplaten What do you think? I see you are working on this. 😃 \r\n\r\n",
"@cccntu have you tested this changes? do they work?\r\n\r\nIf it works, I think would be very useful to many folks out there (i.e., including me 😊). If so, maybe just send a pull request.",
"@andreamad8 I haven't tried it.😅 Maybe I will try it next week, idk. Feel free to try it yourself, and let me know the results! 😃 "
] | 1,582 | 1,678 | 1,582 | NONE | null | Given that the GPT2 tokenizer does not have an internal pad_token_id, how do I pad sentences and do batch inference using GPT2LMHeadModel?
Specifically, my code is as follows:
```
import torch
from torch.nn.utils.rnn import pad_sequence
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

prompt_text = [
    'in this paper we',
    'we are trying to',
    'The purpose of this workshop is to check whether we can', ]
tokens = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(x, add_prefix_space=True)) for x in prompt_text]
# pad the shorter prompts with eos_token_id so the batch is rectangular
inputs = pad_sequence([torch.LongTensor(x) for x in tokens], batch_first=True, padding_value=tokenizer.eos_token_id)
# no attention_mask is passed, so the model also attends to the padding positions
outputs, past = model(input_ids=inputs, attention_mask=None)
```
This returns irrelevant predictions, since GPT2 treats the eos tokens as real input and starts a new sentence at each padded position in the batch.
Can anyone please share sample code that uses GPT2LMHeadModel to do batch inference with variable sentence lengths?
Thanks!
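
For anyone landing here: below is a minimal sketch of the workaround this thread converges on (left-side padding plus an attention mask), written against a recent transformers release where `generate()` accepts `attention_mask` — at the time this issue was filed it did not, which is what the manual decoding loops quoted in this issue's comments work around. The `gpt2` checkpoint, the reuse of eos as the pad token, and the generation settings are illustrative assumptions, not the only valid choices.

```
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token; reusing eos is a common workaround
tokenizer.padding_side = 'left'            # left padding keeps the last position of every row a real token

model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

prompt_text = [
    'in this paper we',
    'we are trying to',
    'The purpose of this workshop is to check whether we can', ]

# padding=True pads to the longest prompt and returns the matching attention mask
enc = tokenizer(prompt_text, return_tensors='pt', padding=True)

with torch.no_grad():
    out = model.generate(
        input_ids=enc['input_ids'],
        attention_mask=enc['attention_mask'],  # masks the pad positions so they are never attended to
        max_length=40,
        pad_token_id=tokenizer.pad_token_id,
    )
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```

With left padding, the next-token logits always come from the final (real) position of each row, which avoids the per-sample gather over prompt lengths that the manual implementations in the comments have to do.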
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3021/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3020/comments | https://api.github.com/repos/huggingface/transformers/issues/3020/events | https://github.com/huggingface/transformers/pull/3020 | 570,835,585 | MDExOlB1bGxSZXF1ZXN0Mzc5ODAzNzk1 | 3,020 | [ci] Run all tests on (self-hosted) GPU | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=h1) Report\n> Merging [#3020](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e8ce63ff2163259276fc0a4a2f35b836fe9f4aa0?src=pr&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3020 +/- ##\n==========================================\n- Coverage 77.25% 77.21% -0.04% \n==========================================\n Files 98 98 \n Lines 16040 16040 \n==========================================\n- Hits 12392 12386 -6 \n- Misses 3648 3654 +6\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.71% <0%> (-0.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.38% <0%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=footer). Last update [e8ce63f...2a48145](https://codecov.io/gh/huggingface/transformers/pull/3020?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"**Update:** \r\nThe remaining failing tests:\r\n\r\n- [ ] Test_doc_samples: multilingual.rst and modeling_flaubert.py (@LysandreJik)\r\n- [ ] test_modeling_auto.py::AutoModelTest::test_model_for_pretraining_from_pretrained (@thomwolf https://github.com/huggingface/transformers/commit/0e31e06a75b0022171056e51c8b5d53078ac5170)\r\n- [x] test_modeling_roberta.py::RobertaModelIntegrationTest::test_inference_masked_lm (@julien-c https://github.com/huggingface/transformers/commit/9d0603148bc34255fad0cad73ce438ecd7306322)\r\n- [ ] test_tokenization_roberta.py::RobertaTokenizationTest::test_sequence_builders (@LysandreJik https://github.com/huggingface/transformers/commit/634a3172d869e2ff772b2e0813169641ca9e6cc5)\r\n- [ ] test_tokenization_xlm_roberta.py:: XLMRobertaTokenizationIntegrationTest::test_tokenization_base_hard_symbols (@patrickvonplaten https://github.com/huggingface/transformers/commit/c913eb9c3894b4031dc059d22b42e38a5fcef989)",
"\r\n> * test_tokenization_xlm_roberta.py::XLMRobertaTokenizationIntegrationTest::test_tokenization_base_hard_symbols (@patrickvonplaten [c913eb9](https://github.com/huggingface/transformers/commit/c913eb9c3894b4031dc059d22b42e38a5fcef989))\r\n\r\nI think this test currently fails because there is a problem with the xlm_roberta_tokenizer . In general so far there are no tests at all for the xlm_roberta_tokenizer . I can try to add those (will prob need a bit help from @mfuntowicz and @LysandreJik) \r\n",
"Update on my failing unit (integration) test on RoBERTa: the max absolute diff between expected and actual output is `0.004` whereas it used to be under `1e-3` (both CPU and cuda) – should I dive in to why this changed, or should i just lazily bump up to tolerance?\r\n\r\n(The same integration test without the maskedLM head still passes with `1e-5` abs diff)",
"Could this be due to the bias? https://github.com/huggingface/transformers/pull/2958\r\n\r\nIt could have been computed twice when creating the integration test.",
"I'm not sure how to exactly get the expected logits from the fairseq **LM** Models. It's very easy to get the last embeddings for verification of the no **LM** model via:\r\n\r\n```\r\nimport torch\r\nroberta = torch.hub.load('pytorch/fairseq', 'roberta.large')\r\nroberta.eval() # disable dropout (or leave in train mode to finetune)\r\nlast_layer_embeddings = roberta.extract_features(input_ids) # shape [1, seq_len, 1024]\r\n```\r\n\r\nThe integration test that were implemented for Roberta LMHeadModels did they correspond to the original weights? \r\n",
"@patrickvonplaten https://github.com/huggingface/transformers/blob/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11/tests/test_modeling_roberta.py#L324 is the test (I think)",
"Ok found a resolution strategy with @LysandreJik, will push fix soon.",
"Getting closer:\r\n> ===== 6 failed, 939 passed, 32 skipped, 84 warnings in 1508.72s (0:25:08) ======\r\n"
] | 1,582 | 1,583 | 1,582 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3020/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3020/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3020",
"html_url": "https://github.com/huggingface/transformers/pull/3020",
"diff_url": "https://github.com/huggingface/transformers/pull/3020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3020.patch",
"merged_at": 1582942269000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3019/comments | https://api.github.com/repos/huggingface/transformers/issues/3019/events | https://github.com/huggingface/transformers/pull/3019 | 570,823,600 | MDExOlB1bGxSZXF1ZXN0Mzc5NzkzNTk1 | 3,019 | Delete Model2Model | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=h1) Report\n> Merging [#3019](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e8ce63ff2163259276fc0a4a2f35b836fe9f4aa0?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3019 +/- ##\n==========================================\n+ Coverage 77.25% 77.27% +0.01% \n==========================================\n Files 98 98 \n Lines 16040 16030 -10 \n==========================================\n- Hits 12392 12387 -5 \n+ Misses 3648 3643 -5\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/3019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `21.05% <ø> (-4.33%)` | :arrow_down: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=footer). Last update [e8ce63f...835a807](https://codecov.io/gh/huggingface/transformers/pull/3019?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, why is the Model2Model deleted? I am using it and it works great for me. ",
"It was not super well documented or tested and we didn't want to maintain it. If you're interested in sending a PR with working quickstart code and tests (could start by reverting this) we would definitely be happy to add it back! Sorry!"
] | 1,582 | 1,583 | 1,582 | CONTRIBUTOR | null | - the quickstart code doesn't work
- the tests don't test a forward pass
If you need it, run `git checkout e8ce63ff`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3019/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3019",
"html_url": "https://github.com/huggingface/transformers/pull/3019",
"diff_url": "https://github.com/huggingface/transformers/pull/3019.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3019.patch",
"merged_at": 1582734988000
} |
https://api.github.com/repos/huggingface/transformers/issues/3018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3018/comments | https://api.github.com/repos/huggingface/transformers/issues/3018/events | https://github.com/huggingface/transformers/pull/3018 | 570,803,786 | MDExOlB1bGxSZXF1ZXN0Mzc5Nzc3MDYz | 3,018 | [WIP] Updates to simplify PLT example and use new features | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can't seem to find who is pushing the commits, but can whoever is doing that please add commit messages? This is really messy and impossible to go over as currently written.",
"Sorry, @BramVanroy . I didn't realize people were reviewing this branch. This is a WIP We are still trying to work this out with new pytorch-lightning / TPU changes. (Closing for now and will switch to a new review branch when it is working). ",
"@srush No worries! Wasn't really reviewing, I was just curious which changes were being made - but then I saw those commit messages and I didn't know what to expect. 😄 "
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | Some changes to the PLT (pytorch-lightning) example to support our workflow. Preliminary support for TPUs.
Still testing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3018/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3018",
"html_url": "https://github.com/huggingface/transformers/pull/3018",
"diff_url": "https://github.com/huggingface/transformers/pull/3018.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3018.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3017/comments | https://api.github.com/repos/huggingface/transformers/issues/3017/events | https://github.com/huggingface/transformers/pull/3017 | 570,802,805 | MDExOlB1bGxSZXF1ZXN0Mzc5Nzc2MjUz | 3,017 | [WIP] Update to use new pytorch-lightning features | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | Updates to utilize a bunch of new features that the PLT (pytorch-lightning) team wrote to support us. Also adds support for TPUs. Still in the testing phase. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3017/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3017",
"html_url": "https://github.com/huggingface/transformers/pull/3017",
"diff_url": "https://github.com/huggingface/transformers/pull/3017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3017.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3016/comments | https://api.github.com/repos/huggingface/transformers/issues/3016/events | https://github.com/huggingface/transformers/issues/3016 | 570,791,014 | MDU6SXNzdWU1NzA3OTEwMTQ= | 3,016 | Use my own pretrained BERT model | {
"login": "rabbitwayne",
"id": 5334805,
"node_id": "MDQ6VXNlcjUzMzQ4MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5334805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabbitwayne",
"html_url": "https://github.com/rabbitwayne",
"followers_url": "https://api.github.com/users/rabbitwayne/followers",
"following_url": "https://api.github.com/users/rabbitwayne/following{/other_user}",
"gists_url": "https://api.github.com/users/rabbitwayne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabbitwayne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabbitwayne/subscriptions",
"organizations_url": "https://api.github.com/users/rabbitwayne/orgs",
"repos_url": "https://api.github.com/users/rabbitwayne/repos",
"events_url": "https://api.github.com/users/rabbitwayne/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabbitwayne/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Hi, where did you obtain this model? Is it using the official google-research implementation or using huggingface/transformers?",
"@LysandreJik Thank you for your reply! I have pretrained BERT base using my own pytorch code and generated checkpoint for BERT base. Then I want to do squad finetuning using my own pretrained BERT base model but in huggingface/transformers framework. I saw that huggingface/transformers by default download a pretrained BERT base model from amazon aws. How can I change this to use my own checkpoint? Thanks a lot!",
"Are you able to load your model using `model = BertModel.from_pretrained(\"model_dir\")`? The architecture of the model would need to be the same, and have the same layer names so that the torch state dict may be loaded onto one of our architectures.",
"Thank you! Let me try that.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | How can I use my own pretrained BERT model for SQuAD fine-tuning? How do I do this in the code? Can anyone provide instructions? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3016/timeline | completed | null | null |
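A minimal sketch of the loading step suggested in the thread above, assuming the custom checkpoint lives in a hypothetical directory `my_bert_checkpoint` containing `config.json`, `pytorch_model.bin` (with layer names matching the library's `BertModel`), and `vocab.txt`; the directory name and the example sentence are illustrative, not from the thread.

```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizer

# Hypothetical directory produced by the custom pre-training run; it must
# contain config.json, pytorch_model.bin, and vocab.txt.
model_dir = "my_bert_checkpoint"

tokenizer = BertTokenizer.from_pretrained(model_dir)
# The QA head (qa_outputs) is not part of a plain BERT checkpoint, so it is
# freshly initialized here and learned during SQuAD fine-tuning.
model = BertForQuestionAnswering.from_pretrained(model_dir)
model.eval()

# Sanity check: one dummy forward pass on a question/context pair.
input_ids = torch.tensor(
    [tokenizer.encode("Who wrote BERT?", "BERT was written by Google researchers.")]
)
with torch.no_grad():
    start_logits, end_logits = model(input_ids)[:2]
print(start_logits.shape, end_logits.shape)
```

If this runs, the same directory can be passed to the SQuAD example script in place of a hub model name (e.g., `run_squad.py --model_name_or_path my_bert_checkpoint ...`).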
https://api.github.com/repos/huggingface/transformers/issues/3015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3015/comments | https://api.github.com/repos/huggingface/transformers/issues/3015/events | https://github.com/huggingface/transformers/issues/3015 | 570,752,675 | MDU6SXNzdWU1NzA3NTI2NzU= | 3,015 | Latest version of transformers available via conda? | {
"login": "rmartinshort",
"id": 8951647,
"node_id": "MDQ6VXNlcjg5NTE2NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8951647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rmartinshort",
"html_url": "https://github.com/rmartinshort",
"followers_url": "https://api.github.com/users/rmartinshort/followers",
"following_url": "https://api.github.com/users/rmartinshort/following{/other_user}",
"gists_url": "https://api.github.com/users/rmartinshort/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rmartinshort/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rmartinshort/subscriptions",
"organizations_url": "https://api.github.com/users/rmartinshort/orgs",
"repos_url": "https://api.github.com/users/rmartinshort/repos",
"events_url": "https://api.github.com/users/rmartinshort/events{/privacy}",
"received_events_url": "https://api.github.com/users/rmartinshort/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843765959,
"node_id": "MDU6TGFiZWwxODQzNzY1OTU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Installation",
"name": "Installation",
"color": "bfdadc",
"default": false,
"description": ""
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"It would be nice if it had it's own channel like PyTorch has, as put a distribution of the packages (transformers, tokenizers, etc.) there.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any update on this request?\r\nThe latest version in conda forge is 2.1.1. \r\nThanks."
] | 1,582 | 1,591 | 1,591 | NONE | null | # 🚀 Feature request
I notice that the version of transformers available via conda-forge is quite outdated (v2.1.1). Could you make later versions available there too? Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3015/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3015/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3014/comments | https://api.github.com/repos/huggingface/transformers/issues/3014/events | https://github.com/huggingface/transformers/pull/3014 | 570,732,468 | MDExOlB1bGxSZXF1ZXN0Mzc5NzE4Mjcy | 3,014 | Add integration tests for xlm roberta modelling and xlm roberta tokenizer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=h1) Report\n> Merging [#3014](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e693cd1e877aa191d3317faed33e87d1558c9406?src=pr&el=desc) will **decrease** coverage by `1.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3014 +/- ##\n==========================================\n- Coverage 77.25% 76.22% -1.04% \n==========================================\n Files 98 98 \n Lines 16040 16040 \n==========================================\n- Hits 12392 12226 -166 \n- Misses 3648 3814 +166\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.22% <0%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=footer). Last update [e693cd1...fc5fe85](https://codecov.io/gh/huggingface/transformers/pull/3014?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | It's quite easy to get real numbers for the XLM-R model from [fairseq](https://github.com/pytorch/fairseq/tree/master/examples/xlmr), so I added integration tests for `xlm_roberta_modeling.py` and `xlm_roberta_tokenization.py`
Since `XLMRobertaModel` is the same model as `RobertaModel`, I think integration tests for `xlm_roberta_modeling.py` are enough.
Regarding `XLMRobertaTokenizer`, there were no tests so far, so this file should definitely get more "fast" tests. I would need some help on those (@mfuntowicz ?).
Regarding the results of the integration tests:
The tests for `xlm_roberta_modeling.py` all pass.
One of the two tests (called hard_token_symbols) for `xlm_roberta_tokenization.py` fails. @LysandreJik @mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3014/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3014",
"html_url": "https://github.com/huggingface/transformers/pull/3014",
"diff_url": "https://github.com/huggingface/transformers/pull/3014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3014.patch",
"merged_at": 1582667486000
} |
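The record above describes integration tests that compare `XLMRobertaModel` outputs against the fairseq reference implementation; the sketch below shows the shape of such a test. `EXPECTED_SLICE` is a placeholder (a real test would copy the numbers from fairseq's output for the same sentence), and the tolerance value is an assumption.

```python
import torch
from transformers import XLMRobertaModel, XLMRobertaTokenizer

# Placeholder only: a real integration test would paste in the values
# produced by the fairseq XLM-R model for the exact same input sentence.
EXPECTED_SLICE = torch.zeros(1, 3, 3)

def check_xlm_roberta_base() -> None:
    tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
    model = XLMRobertaModel.from_pretrained("xlm-roberta-base")
    model.eval()

    input_ids = torch.tensor([tokenizer.encode("Hello world!")])
    with torch.no_grad():
        last_hidden_state = model(input_ids)[0]

    # xlm-roberta-base uses a hidden size of 768.
    assert last_hidden_state.shape == (1, input_ids.shape[1], 768)
    # Compare a small slice of the output against the reference numbers.
    assert torch.allclose(last_hidden_state[:, :3, :3], EXPECTED_SLICE, atol=1e-4)
```

With real reference values in place of the placeholder, a failing `allclose` would flag a divergence between the two implementations.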