url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/4720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4720/comments | https://api.github.com/repos/huggingface/transformers/issues/4720/events | https://github.com/huggingface/transformers/pull/4720 | 629,111,523 | MDExOlB1bGxSZXF1ZXN0NDI2NTEyNjU3 | 4,720 | [Reformer] Improved memory if input is shorter than chunk length | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=h1) Report\n> Merging [#4720](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **decrease** coverage by `0.56%`.\n> The diff coverage is `96.92%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4720 +/- ##\n==========================================\n- Coverage 77.14% 76.57% -0.57% \n==========================================\n Files 128 128 \n Lines 21073 21089 +16 \n==========================================\n- Hits 16256 16149 -107 \n- Misses 4817 4940 +123 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.94% <ø> (ø)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.21% <96.92%> (+0.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `31.73% <0.00%> (-40.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=footer). Last update [47a551d...6fe9553](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | MEMBER | null | This PR improves memory and speed of Reformer for language generation.
Reformer is based on chunked self-attention, which means that an input whose length is not a multiple of the chunk length has to be padded up to a multiple of the chunk length. That is not needed, though, when the input length is shorter than the chunk length (which happens in language generation); in that case normal self-attention should be applied to save memory.
The code is updated for both LSH and local self-attention, and a test is added.
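In pseudocode, the decision this PR describes looks roughly like the following (a hedged sketch with illustrative names, not the actual `modeling_reformer.py` code):

```python
import torch

def pad_to_chunk_multiple(hidden_states: torch.Tensor, chunk_len: int) -> torch.Tensor:
    # Sketch only: when the whole input fits into one chunk (e.g. during incremental
    # generation), skip padding entirely and fall back to standard self-attention.
    seq_len = hidden_states.shape[1]
    if seq_len <= chunk_len:
        return hidden_states  # no padding needed
    padded_len = -(-seq_len // chunk_len) * chunk_len  # ceil to a multiple of chunk_len
    return torch.nn.functional.pad(hidden_states, (0, 0, 0, padded_len - seq_len))

x = torch.randn(1, 5, 8)
print(pad_to_chunk_multiple(x, chunk_len=64).shape)  # (1, 5, 8): unchanged, full attention
print(pad_to_chunk_multiple(x, chunk_len=4).shape)   # (1, 8, 8): padded up for chunking
```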
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4720/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4720",
"html_url": "https://github.com/huggingface/transformers/pull/4720",
"diff_url": "https://github.com/huggingface/transformers/pull/4720.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4720.patch",
"merged_at": 1591132120000
} |
https://api.github.com/repos/huggingface/transformers/issues/4719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4719/comments | https://api.github.com/repos/huggingface/transformers/issues/4719/events | https://github.com/huggingface/transformers/issues/4719 | 629,103,653 | MDU6SXNzdWU2MjkxMDM2NTM= | 4,719 | About `do_basic_tokenize` behavior in BertTokenizer | {
"login": "GuillemGSubies",
"id": 37592763,
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillemGSubies",
"html_url": "https://github.com/GuillemGSubies",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"there is a `BasicTokenizer` in previous `pytorch_transformers` package:\r\n\r\nThis must be what \"basic tokenize\" does"
] | 1,591 | 1,657 | 1,596 | CONTRIBUTOR | null | # ❓ Questions & Help
In the [docs](https://huggingface.co/transformers/model_doc/bert.html?highlight=basic%20tokenization#berttokenizer) it just says that it performs a basic tokenization before WordPiece. But what actually is a "basic tokenization"? I would really appreciate a little more information on that.
Diving into the code, I found out that the basic tokenization removes control characters from the text. I did not expect that behavior from what I read in the docs. That gave us problems because some characters like `` weren't being tokenized and we didn't know why.
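For reference, here is a small snippet that surfaces the behavior (a hedged sketch assuming the standard `bert-base-uncased` checkpoint; the control character is illustrative):

```python
from transformers import BertTokenizer

with_basic = BertTokenizer.from_pretrained("bert-base-uncased", do_basic_tokenize=True)
without_basic = BertTokenizer.from_pretrained("bert-base-uncased", do_basic_tokenize=False)

text = "hello\x07world"  # \x07 (BEL) is a control character
print(with_basic.tokenize(text))     # basic tokenization silently strips \x07 first
print(without_basic.tokenize(text))  # WordPiece alone sees the raw string
```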
More generally, I think transformers has a problem with its docs. Most of the time when something does not work for me, I don't bother to look at them and instead dive directly into the source code to understand what's happening. I am used to scikit-learn and maybe I am a bit biased, but I really think this kind of thing can be a barrier for new people wanting to use transformers.
If there is something I can do to help, I am happy to send a PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4719/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4718/comments | https://api.github.com/repos/huggingface/transformers/issues/4718/events | https://github.com/huggingface/transformers/pull/4718 | 629,021,810 | MDExOlB1bGxSZXF1ZXN0NDI2NDQyNTMw | 4,718 | Replace pad_token with -100 for LM loss calculation | {
"login": "setu4993",
"id": 1833708,
"node_id": "MDQ6VXNlcjE4MzM3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1833708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/setu4993",
"html_url": "https://github.com/setu4993",
"followers_url": "https://api.github.com/users/setu4993/followers",
"following_url": "https://api.github.com/users/setu4993/following{/other_user}",
"gists_url": "https://api.github.com/users/setu4993/gists{/gist_id}",
"starred_url": "https://api.github.com/users/setu4993/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/setu4993/subscriptions",
"organizations_url": "https://api.github.com/users/setu4993/orgs",
"repos_url": "https://api.github.com/users/setu4993/repos",
"events_url": "https://api.github.com/users/setu4993/events{/privacy}",
"received_events_url": "https://api.github.com/users/setu4993/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=h1) Report\n> Merging [#4718](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **decrease** coverage by `0.64%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4718 +/- ##\n==========================================\n- Coverage 77.10% 76.45% -0.65% \n==========================================\n Files 128 128 \n Lines 21723 21725 +2 \n==========================================\n- Hits 16749 16610 -139 \n- Misses 4974 5115 +141 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.55% <100.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.56% <0.00%> (-2.58%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.08% <0.00%> (-1.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.49% <0.00%> (-0.78%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.80% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=footer). Last update [e80d6c6...6cceabd](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Closed the PR by mistake... Re-opening it.",
"Just a quick question for @mfuntowicz, is `copy.deepcopy` a performant way to clone a tensor? (given that this is called at each training step)",
"I saw `deepcopy` being used elsewhere in the code, so added it as is. Just looking at the new tensor [documentation](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.new_tensor), and they recommend using `.clone().detach()`. Happy to change it to that.",
"Bumping this since I haven't seen any activity in a few days.",
"Yes `.clone().detach()` sounds good.",
"LGTM but let's let @LysandreJik have a last check and merge this",
"Thanks @julien-c!",
"Hey @LysandreJik, can you please review when you have a chance? Thanks!",
"Thanks @setu4993!"
] | 1,591 | 1,593 | 1,593 | CONTRIBUTOR | null | The docs for both GPT and [GPT2](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2lmheadmodel) specify that labels that are not -100 will be used for the calculation of the loss. So, the padding for the labels should be `-100`, not `tokenizer.pad_token_id`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4718/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4718",
"html_url": "https://github.com/huggingface/transformers/pull/4718",
"diff_url": "https://github.com/huggingface/transformers/pull/4718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4718.patch",
"merged_at": 1593015290000
} |
https://api.github.com/repos/huggingface/transformers/issues/4717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4717/comments | https://api.github.com/repos/huggingface/transformers/issues/4717/events | https://github.com/huggingface/transformers/pull/4717 | 629,010,238 | MDExOlB1bGxSZXF1ZXN0NDI2NDMzODgz | 4,717 | Override get_vocab for fast tokenizer. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=h1) Report\n> Merging [#4717](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76779363160a598f130433209a77f8a747351b61&el=desc) will **decrease** coverage by `1.23%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4717 +/- ##\n==========================================\n- Coverage 77.38% 76.14% -1.24% \n==========================================\n Files 128 128 \n Lines 21071 21073 +2 \n==========================================\n- Hits 16305 16046 -259 \n- Misses 4766 5027 +261 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.59% <50.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.70% <0.00%> (-74.83%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `76.92% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <0.00%> (-6.35%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.77% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=footer). Last update [7677936...8217ee0](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4717/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4717",
"html_url": "https://github.com/huggingface/transformers/pull/4717",
"diff_url": "https://github.com/huggingface/transformers/pull/4717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4717.patch",
"merged_at": 1591088547000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4716/comments | https://api.github.com/repos/huggingface/transformers/issues/4716/events | https://github.com/huggingface/transformers/issues/4716 | 628,994,410 | MDU6SXNzdWU2Mjg5OTQ0MTA= | 4,716 | Can I save a word embedding from BERT and used again later for computational purpose(as BERT takes much more time). Just like in Glove. If yes then How? and is this good idea to do so? | {
"login": "shubhamk16",
"id": 46631180,
"node_id": "MDQ6VXNlcjQ2NjMxMTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/46631180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shubhamk16",
"html_url": "https://github.com/shubhamk16",
"followers_url": "https://api.github.com/users/shubhamk16/followers",
"following_url": "https://api.github.com/users/shubhamk16/following{/other_user}",
"gists_url": "https://api.github.com/users/shubhamk16/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shubhamk16/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shubhamk16/subscriptions",
"organizations_url": "https://api.github.com/users/shubhamk16/orgs",
"repos_url": "https://api.github.com/users/shubhamk16/repos",
"events_url": "https://api.github.com/users/shubhamk16/events{/privacy}",
"received_events_url": "https://api.github.com/users/shubhamk16/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As per my understanding, you can! \r\n\r\nIf you read the BERT paper by Devlin et.al., you can see that two suggested ways to extract said word embeddings would be to concatenate the last four hidden layers (9 to 12), generating a 4*768=3072 sized embedding for each token. Alternatively you can also sum or average out the last 4 layers to generate vectors of size 768.\r\n\r\nYou can also instead prefer storing sentence embeddings, then the CLS token serves as averaged representation of the sentence embedding (not a very good representation though, unless the model was fine-tuned well on the data or the LM was pretrained on similar data, and even then the sentence embeddings through that CLS token could be substandard). Although, averaging out the second to last hidden layer (i.e. averaging out token embeddings for the sentence at the 11th hidden layer) seems to generate decent sentence embeddings (of size 768, the same as one token, since its averaged and not concatenated). \r\n\r\nHope this helps!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,597 | 1,597 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4716/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4715/comments | https://api.github.com/repos/huggingface/transformers/issues/4715/events | https://github.com/huggingface/transformers/issues/4715 | 628,935,530 | MDU6SXNzdWU2Mjg5MzU1MzA= | 4,715 | how to make a multi-task deep neural network baseline using huggingface transformers? | {
"login": "mobassir94",
"id": 24439592,
"node_id": "MDQ6VXNlcjI0NDM5NTky",
"avatar_url": "https://avatars.githubusercontent.com/u/24439592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mobassir94",
"html_url": "https://github.com/mobassir94",
"followers_url": "https://api.github.com/users/mobassir94/followers",
"following_url": "https://api.github.com/users/mobassir94/following{/other_user}",
"gists_url": "https://api.github.com/users/mobassir94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mobassir94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mobassir94/subscriptions",
"organizations_url": "https://api.github.com/users/mobassir94/orgs",
"repos_url": "https://api.github.com/users/mobassir94/repos",
"events_url": "https://api.github.com/users/mobassir94/events{/privacy}",
"received_events_url": "https://api.github.com/users/mobassir94/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,596 | 1,596 | NONE | null | I was trying to build a [multi-task deep neural network][1] using [xlm roberta large model][2] for a multilingual classification problem. my training dataset contains 4 columns :
1. ID
2. comment_text (according to id number,each users english comment is stored in this column. example comment : "you are a loser")
3. toxic (this column contains 1/0,0 means not toxic,1 means toxic)
4. personal_attack(this column also contains 0/1,,0 means the comment is not a personal attack type comment and 1 means opposite)
Here is my model code:
```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
import tensorflow as tf

def build_model(transformer, max_len=512):
    input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
    sequence_output = transformer(input_word_ids)[0]
    cls_token = sequence_output[:, 0, :]
    out = Dense(1, activation='sigmoid', name='y_train')(cls_token)
    out1 = Dense(1, activation='sigmoid', name='y_aux')(cls_token)

    model = Model(inputs=input_word_ids, outputs=[out, out1])
    model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])

    return model
```
Here is the code for the train and test datasets (`x_train`, `x_test`, `BATCH_SIZE`, and `AUTO` are defined elsewhere):
```python
train_dataset = (
    tf.data.Dataset
    .from_tensor_slices((x_train,
                         {'y_train': train.toxic.values,
                          'y_aux': train.identity_attack.values}
                         ))
    .repeat()
    .shuffle(2048)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

test_dataset = (
    tf.data.Dataset
    .from_tensor_slices(x_test)
    .batch(BATCH_SIZE)
)
```
Then, for training the model, I used this code:
```python
EPOCHS = 3
n_steps = x_train.shape[0] // BATCH_SIZE
train_history = model.fit(
    train_dataset,
    steps_per_epoch=n_steps,
    epochs=EPOCHS
)
```
I don't wish to perform validation, so only `train_dataset` was given to `model.fit()`. After 3 epochs I get performance like this:
```
Epoch 3/3
1658/1658 [==============================] - 887s 535ms/step - loss: 0.0591 - y_train_loss: 0.0175 - y_aux_loss: 0.0416 - y_train_accuracy: 0.9940 - y_aux_accuracy: 0.9821
```
Now, my test set has 1 column:
1. comments (this column contains non-English comments; remember that the train set only had English comments, while all test set comments are non-English)

So I expect my model to predict, for each test set comment, whether it is toxic or not. As you can see from the 3rd-epoch results, I am tracking y_train_accuracy: 0.9940 and y_aux_accuracy: 0.9821, but now I only want the model to predict y_test (toxic / not toxic). For that I tried:
```python
sub['toxic'] = model.predict(test_dataset, verbose=1)
```
`sub` is a dataframe that contains all the IDs of the test set, and using **test_dataset** I was trying to predict every test set comment, but I get this error:
```
499/499 [==============================] - 126s 253ms/step
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-1dc84858379e> in <module>
----> 1 sub['toxic'] = model.predict(test_dataset, verbose=1)
2 sub.to_csv('submission.csv', index=False)
/opt/conda/lib/python3.7/site-packages/pandas/core/frame.py in __setitem__(self, key, value)
2936 else:
2937 # set column
-> 2938 self._set_item(key, value)
2939
2940 def _setitem_slice(self, key, value):
/opt/conda/lib/python3.7/site-packages/pandas/core/frame.py in _set_item(self, key, value)
2998
2999 self._ensure_valid_index(value)
-> 3000 value = self._sanitize_column(key, value)
3001 NDFrame._set_item(self, key, value)
3002
/opt/conda/lib/python3.7/site-packages/pandas/core/frame.py in _sanitize_column(self, key, value, broadcast)
3634
3635 # turn me into an ndarray
-> 3636 value = sanitize_index(value, self.index, copy=False)
3637 if not isinstance(value, (np.ndarray, Index)):
3638 if isinstance(value, list) and len(value) > 0:
/opt/conda/lib/python3.7/site-packages/pandas/core/internals/construction.py in sanitize_index(data, index, copy)
609
610 if len(data) != len(index):
--> 611 raise ValueError("Length of values does not match length of index")
612
613 if isinstance(data, ABCIndexClass) and not copy:
ValueError: Length of values does not match length of index
```
Now I have 4 questions:
1. Is my implementation correct?
2. Why am I getting that error? If I treat this problem as a simple multilingual classification task (compute 1 loss for a single y_true), I get no error at all, so where is the trouble? (See the sketch after the reference links below.)
3. How can I solve the issue?
4. As this is my first time doing multi-task learning with huggingface transformers, what are your suggestions for updating my model so that it generalizes better?
[1]: https://arxiv.org/abs/1706.05098
[2]: https://huggingface.co/jplu/tf-xlm-roberta-large
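Regarding questions 2 and 3, a hedged sketch of a likely fix: with two model outputs, Keras `model.predict` returns a list of two arrays, so its length is 2 rather than the number of test rows, which is exactly the length mismatch pandas reports. Selecting the toxic head and flattening it lines up with `sub`:

```python
preds = model.predict(test_dataset, verbose=1)  # list: [y_train predictions, y_aux predictions]
sub['toxic'] = preds[0].ravel()                 # flatten (n, 1) -> (n,) before assignment
sub.to_csv('submission.csv', index=False)
```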
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4715/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4714/comments | https://api.github.com/repos/huggingface/transformers/issues/4714/events | https://github.com/huggingface/transformers/issues/4714 | 628,881,518 | MDU6SXNzdWU2Mjg4ODE1MTg= | 4,714 | ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' | {
"login": "iBrushC",
"id": 58674899,
"node_id": "MDQ6VXNlcjU4Njc0ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/58674899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iBrushC",
"html_url": "https://github.com/iBrushC",
"followers_url": "https://api.github.com/users/iBrushC/followers",
"following_url": "https://api.github.com/users/iBrushC/following{/other_user}",
"gists_url": "https://api.github.com/users/iBrushC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iBrushC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iBrushC/subscriptions",
"organizations_url": "https://api.github.com/users/iBrushC/orgs",
"repos_url": "https://api.github.com/users/iBrushC/repos",
"events_url": "https://api.github.com/users/iBrushC/events{/privacy}",
"received_events_url": "https://api.github.com/users/iBrushC/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This might help: #3444",
"`MODEL_WITH_LM_HEAD_MAPPING` is a mapping containing all the *Pytorch* models that have an LM head. Since you don't have PyTorch installed, you can't import them.\r\n\r\nWith TensorFlow you're probably looking for `TF_MODEL_WITH_LM_HEAD_MAPPING`.\r\n\r\nPlease note that the `run_language_modeling.py` script is currently PyTorch only. A TensorFlow version will be available in the future."
] | 1,591 | 1,591 | 1,591 | NONE | null | ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' from 'transformers' (C:\Users\<username>\anaconda3\envs\tensorflow\lib\site-packages\transformers\__init__.py)
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
[x] the official example scripts: (give details below)
a required name cannot be imported
The tasks I am working on is:
[x] my own task or dataset: (give details below)
I am trying to fine-tune GPT-2 to work with recipes
## To reproduce
Steps to reproduce the behavior:
1. create conda virtualenv
2. install latest version of Tensorflow GPU
3. install transformers library
4. navigate to the 'language-modeling' folder
5. run 'python run_language_modeling.py' in conda virtualenv
LOG:
```
python run_language_modeling.py
2020-06-01 23:53:43.603759: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' from 'transformers' (C:\Users\<name>\anaconda3\envs\tensorflow\lib\site-packages\transformers\__init__.py)
```
## Expected behavior
Without required arguments, the script should fail with a usage error rather than an ImportError; given a dataset, it would fine-tune an instance of GPT-2 and save it to the output directory.
## Environment info
- `transformers` version: 2.10.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU): 2.1.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
It would be nice to figure out what the issue is. I am aware that there are 3 questions similar to this, but they all use torch while I am using tensorflow. All of the solutions I have tried (reinstalling tensorflow, reinstalling transformers, updating packages, reformatting code, etc.) have not worked. Thanks.
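A quick check reflecting the answer in the comments above (assuming transformers 2.10 in a TensorFlow-only environment):

```python
from transformers import TF_MODEL_WITH_LM_HEAD_MAPPING  # TF mapping: imports fine without torch

try:
    from transformers import MODEL_WITH_LM_HEAD_MAPPING  # PyTorch mapping: requires torch
except ImportError as err:
    print("PyTorch mapping unavailable:", err)
```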
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4714/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4713/comments | https://api.github.com/repos/huggingface/transformers/issues/4713/events | https://github.com/huggingface/transformers/issues/4713 | 628,796,389 | MDU6SXNzdWU2Mjg3OTYzODk= | 4,713 | Is there any need to fine-tune the already pre-trained GPT-2 models? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"\"Finetuning\" a GPT-2 model is generally used for generation, to make it output text more similar to a given dataset.\r\n\r\nIf you are using it for other tasks (e.g. binary predictions), you may not have to finetune it.",
"Hello,\r\n\r\nThank you for your reply.\r\nSo say if I am trying to solve multiple choice questions with GPT2DoubleHeadsModel, should I just use the pre-trained model without fine-tuning?\r\n\r\nThanks, ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,596 | 1,596 | NONE | null | Hello,
If I am using the pre-trained GPT-2 for my research, should I still fine-tune the already pre-trained models with my dataset? I am a bit confused by the term "pre-trained".
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4713/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4712/comments | https://api.github.com/repos/huggingface/transformers/issues/4712/events | https://github.com/huggingface/transformers/issues/4712 | 628,775,192 | MDU6SXNzdWU2Mjg3NzUxOTI= | 4,712 | Converting model to pytorch | {
"login": "rlpatrao",
"id": 18295519,
"node_id": "MDQ6VXNlcjE4Mjk1NTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/18295519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlpatrao",
"html_url": "https://github.com/rlpatrao",
"followers_url": "https://api.github.com/users/rlpatrao/followers",
"following_url": "https://api.github.com/users/rlpatrao/following{/other_user}",
"gists_url": "https://api.github.com/users/rlpatrao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlpatrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlpatrao/subscriptions",
"organizations_url": "https://api.github.com/users/rlpatrao/orgs",
"repos_url": "https://api.github.com/users/rlpatrao/repos",
"events_url": "https://api.github.com/users/rlpatrao/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlpatrao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
Folks, I am trying to convert the BioBERT model to PyTorch. Here is what I have done so far:
**1. For the vocab:** I am trying to convert the vocab using the solution from #69:
```tokenizer = BartTokenizer.from_pretrained('/content/biobert_v1.1_pubmed/vocab.txt')```
I get:
`OSError: Model name '/content/biobert_v1.1_pubmed' was not found in tokenizers model name list (bart-large, bart-large-mnli, bart-large-cnn, bart-large-xsum). We assumed '/content/biobert_v1.1_pubmed' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.`
I don't have a vocab.json, so how do I convert the vocab for the tokenizer?
**2. For the model:** As the out-of-the-box `pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch` did not work, I customized it per #2 by adding:
```
excluded = ['BERTAdam','_power','global_step']
init_vars = list(filter(lambda x:all([True if e not in x[0] else False for e in excluded]),init_vars))
```
With this, the model 'seems' to convert fine. But when I load it using:
`model = BartForConditionalGeneration.from_pretrained('path/to/model/biobert_v1.1_pubmed_pytorch.model')`
I still get:
`UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte`
Can you please help me understand what is going on here?
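For reference, a hedged note: BioBERT is a BERT checkpoint with a WordPiece `vocab.txt`, so the BERT classes are the natural fit; the BART classes expect the `vocab.json`/`merges.txt` byte-level BPE files mentioned in the error above. A minimal sketch, reusing the path from this report:

```python
from transformers import BertTokenizer, BertModel

# BertTokenizer loads from a directory containing vocab.txt:
tokenizer = BertTokenizer.from_pretrained("/content/biobert_v1.1_pubmed")
# And, once the TF checkpoint is converted, the weights load with a BERT class:
model = BertModel.from_pretrained("/content/biobert_v1.1_pubmed")
```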
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4712/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4711/comments | https://api.github.com/repos/huggingface/transformers/issues/4711/events | https://github.com/huggingface/transformers/pull/4711 | 628,665,381 | MDExOlB1bGxSZXF1ZXN0NDI2MTY1MjAy | 4,711 | Make docstring match args | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=h1) Report\n> Merging [#4711](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6449c494d0f40f0b70442f3da9e61f042ff807a8&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4711 +/- ##\n==========================================\n- Coverage 77.32% 77.14% -0.19% \n==========================================\n Files 128 128 \n Lines 21071 21071 \n==========================================\n- Hits 16294 16256 -38 \n- Misses 4777 4815 +38 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.79% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.11% <ø> (-14.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <ø> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `89.46% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.94% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+1.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=footer). Last update [6449c49...e7d6cb9](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I can work on this if there is no one on it. Quick question though: what about the models that have both `lm_labels` *and* `masked_lm_labels`? encode_decoder is one of them for instance, don't know if there are more.",
"Yes, that's the case for [`BertForMaskedLM` for example](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L850). I don't really know the best way to handle this.\r\n\r\nAs with this update we're trying to have the *exact* same API for all models so that the training/inference code is model agnostic, I'd say that we should look for the most natural on a case-by-case basis.\r\n\r\nFor example with the `BertForMaskedLM` example, I believe the `labels` should be the `masked_lm_labels`, as BERT should be used for MLM rather than CLM. ",
"> the\r\n\r\nAs far as I know, `BertForMaskedLM` does not really use `lm_labels` at the moment. I think it was added to support a causal `Bert` in an encoder-decoder setting so that the decoder can be trained with a causal mask with the language model objective. Since the encoder-decoder framework is not really released yet, I think we can also add a new `BertWithLMHead` class so that each class only has one `labels` argument. It would be a breaking change in terms of the class name for people that already implemented Bert2Bert models, but I think it's worth it for consistency. What do you think? \r\n\r\n@sgugger - In the encoder-decoder model I added both `lm_labels` and `masked_lm_labels` because `Bert` has both `lm_labels` and `masked_lm_labels`. Normally, encoder-decoder models are trained with a CLM objective so not sure if we even need `masked_lm_lables` for the encoder-decoder model wrapper. ",
"@patrickvonplaten good for me"
] | 1,591 | 1,591 | 1,591 | COLLABORATOR | null | When replying to #4698, I realized some language model docstrings are using arguments that are not present in the function signature. This PR addresses that (for all the ones I found at least).
The alternative would be to change the argument names in the function signatures (if it makes the various model APIs more consistent). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4711/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4711",
"html_url": "https://github.com/huggingface/transformers/pull/4711",
"diff_url": "https://github.com/huggingface/transformers/pull/4711.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4711.patch",
"merged_at": 1591039372000
} |
https://api.github.com/repos/huggingface/transformers/issues/4710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4710/comments | https://api.github.com/repos/huggingface/transformers/issues/4710/events | https://github.com/huggingface/transformers/pull/4710 | 628,651,719 | MDExOlB1bGxSZXF1ZXN0NDI2MTU0MDA2 | 4,710 | Specify PyTorch versions | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=h1) Report\n> Merging [#4710](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6449c494d0f40f0b70442f3da9e61f042ff807a8&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4710 +/- ##\n==========================================\n+ Coverage 77.32% 77.38% +0.05% \n==========================================\n Files 128 128 \n Lines 21071 21071 \n==========================================\n+ Hits 16294 16305 +11 \n+ Misses 4777 4766 -11 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (+1.64%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=footer). Last update [6449c49...6081929](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,591 | 1,591 | 1,591 | MEMBER | null | Specify that the examples require a different PyTorch version than the base library. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4710/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4710",
"html_url": "https://github.com/huggingface/transformers/pull/4710",
"diff_url": "https://github.com/huggingface/transformers/pull/4710.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4710.patch",
"merged_at": 1591086569000
} |
https://api.github.com/repos/huggingface/transformers/issues/4709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4709/comments | https://api.github.com/repos/huggingface/transformers/issues/4709/events | https://github.com/huggingface/transformers/issues/4709 | 628,647,110 | MDU6SXNzdWU2Mjg2NDcxMTA= | 4,709 | Wrong argument passed during TFRobertaClassificationHead initialization | {
"login": "harkous",
"id": 5602332,
"node_id": "MDQ6VXNlcjU2MDIzMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5602332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harkous",
"html_url": "https://github.com/harkous",
"followers_url": "https://api.github.com/users/harkous/followers",
"following_url": "https://api.github.com/users/harkous/following{/other_user}",
"gists_url": "https://api.github.com/users/harkous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harkous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harkous/subscriptions",
"organizations_url": "https://api.github.com/users/harkous/orgs",
"repos_url": "https://api.github.com/users/harkous/repos",
"events_url": "https://api.github.com/users/harkous/events{/privacy}",
"received_events_url": "https://api.github.com/users/harkous/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello,\r\n\r\nThe way the TF models behave has been recently updated. Can you please retry with the `master` version?",
"Hi Julien,\r\n\r\nThanks. It's the same issue in the `master` currently:\r\n\r\nhttps://github.com/huggingface/transformers/blob/9f5d5a531d769d07403f59661884e254f8420afe/src/transformers/modeling_tf_roberta.py#L320-L324\r\n\r\n`config` is still passed as the first parameter to the `__init__` of parent class `tf.keras.layers.Layer` while the latter expects `trainable` as the first parameter. I fixed it on my side by simply removing that `config` parameter from the `super().__init__(` call. But I wasn't sure if this affects other parts of the repo. Otherwise, I would have submitted a PR.\r\n",
"Ok, thanks for the feedback. Indeed, the `config` parameter is important. I will take some time to review this. Sorry for the inconvenience.",
"You are totally right, `config` here is useless and the same appears in other models . Do you mind to do a PR? And I will help you to fix all this :)",
"Thanks for making the checks. I submitted a PR: https://github.com/huggingface/transformers/pull/4884"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
## Information
There is an issue preventing a RoBERTa classification model from being serialized. It stems from passing `config` as the first positional argument to `tf.keras.layers.Layer`, whose [expected first positional argument](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) is `trainable`:
https://github.com/huggingface/transformers/blob/d6a677b14bcfd56b22fafeb212a27c6068886e07/src/transformers/modeling_tf_roberta.py#L327-L331
This is the root cause behind issue #3664 (about serialization).
A related fix for GPT2: #2738.
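For clarity, a minimal sketch of the fix (the sublayer setup below is an assumption mirroring the usual RoBERTa head, not the library's exact code): `config` should not be forwarded positionally to the parent `Layer`.
```python
import tensorflow as tf


class TFRobertaClassificationHead(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        # Forward only **kwargs: tf.keras.layers.Layer's first positional
        # parameter is `trainable`, so passing `config` there silently
        # misconfigures the layer and breaks serialization.
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(config.hidden_size, activation="tanh", name="dense")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
        self.out_proj = tf.keras.layers.Dense(config.num_labels, name="out_proj")
```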
Model I am using (Bert, XLNet ...):
RoBERTa
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the code below:
```python
from transformers import TFRobertaForSequenceClassification
base_model = TFRobertaForSequenceClassification.from_pretrained("roberta-base")
print(base_model.classifier.trainable)
```
## Expected behavior
The output is:
`True`
The current output is
```
RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"type_vocab_size": 1,
"vocab_size": 50265
}
```
## Environment info
- `transformers` version: 2.10.0
- Platform: Colab
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4709/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4708/comments | https://api.github.com/repos/huggingface/transformers/issues/4708/events | https://github.com/huggingface/transformers/issues/4708 | 628,638,631 | MDU6SXNzdWU2Mjg2Mzg2MzE= | 4,708 | Is the separation token absolutely necessary if I use GPT2DoubleHeadsModel with token_type_ids? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,591 | 1,596 | 1,596 | NONE | null | Hello,
I am trying to use GPT2DoubleHeadsModel to process multiple-choice questions.
For pre-processing, I didn't add any special separator token between the multiple-choice question and the multiple-choice option. Instead, I generated token_type_ids, which denote 0 for the question portion of the text and 1 for the multiple-choice option. Then I tried to have GPT2DoubleHeadsModel predict the correct answer by doing:
```python
gpt2DoubleHeadsModel(input_ids=input_ids, token_type_ids=token_type_ids)
```
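For illustration, a minimal sketch of the encoding scheme described above (the example strings are hypothetical; the tokenizer is the stock GPT-2 one):
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# 0s mark the question tokens, 1s mark the option tokens; no separator in between.
question_ids = tokenizer.encode("What is the capital of France?")
option_ids = tokenizer.encode(" Paris") + [tokenizer.eos_token_id]

input_ids = question_ids + option_ids
token_type_ids = [0] * len(question_ids) + [1] * len(option_ids)
```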
Is this practice acceptable? Or do I absolutely need to insert a special separator token between the question text and the multiple-choice option text?
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4708/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4707/comments | https://api.github.com/repos/huggingface/transformers/issues/4707/events | https://github.com/huggingface/transformers/issues/4707 | 628,638,123 | MDU6SXNzdWU2Mjg2MzgxMjM= | 4,707 | How tokenizers work | {
"login": "Aktsvigun",
"id": 36672861,
"node_id": "MDQ6VXNlcjM2NjcyODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aktsvigun",
"html_url": "https://github.com/Aktsvigun",
"followers_url": "https://api.github.com/users/Aktsvigun/followers",
"following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}",
"gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions",
"organizations_url": "https://api.github.com/users/Aktsvigun/orgs",
"repos_url": "https://api.github.com/users/Aktsvigun/repos",
"events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aktsvigun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"pinging @n1t0, lead on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) :)",
"Thank you! Closing it here, moved link: [https://github.com/huggingface/tokenizers/issues/290](url)"
] | 1,591 | 1,591 | 1,591 | CONTRIBUTOR | null | Good afternoon.
If possible, I would like to ask a few questions that I have not been able to solve for several days.
1) What is the _merges.txt_ file used by `ByteLevelBPETokenizer` for? Does it store the config for the tokenizer? I dug into it but did not manage to understand its purpose.
2) Is it possible to get offsets for the text when using the model-specific tokenizers? For instance, I am using ElectraTokenizer and I would like it to return the offsets for my texts; do you have any templates showing how to do it?
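A minimal sketch of the kind of thing I am hoping for, assuming a tokenizers-backed ("fast") tokenizer can be used here:
```python
from transformers import ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
encoding = tokenizer.encode_plus("Good afternoon.", return_offsets_mapping=True)
print(encoding["offset_mapping"])  # (start, end) character spans, one per token
```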
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4707/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4706/comments | https://api.github.com/repos/huggingface/transformers/issues/4706/events | https://github.com/huggingface/transformers/issues/4706 | 628,620,903 | MDU6SXNzdWU2Mjg2MjA5MDM= | 4,706 | When using the Hugging Face Transformer, if I set my pad_token to be a different token then the default, do I need to train my model on that new pad_token as well? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @h56cho, if you add new tokens then yes, you'll need to train the model. If you just want `eos` and `pad` token then its a good idea to use the model defaults. `eos` and `pad` tokens are already available in GPT2 model ",
"Hi,\r\n\r\nThank you for your reply.\r\nSo what I am getting is that if I add any new \"special token\" onto the existing pre-trained tokenizer, I will need to re-train the pre-trained Transformer to make it learn that new special token.\r\n\r\n But what if I just add extra non-special tokens? for example, a word \"paradox\" is not included in the existing GPT-2 tokenizer, so say I add the word \"paradox\" to the existing set of GPT-2 vocabulary. If I didn't make any changes to the special tokens in the GPT-2 tokenizer, do I still need to train the pre-trained GPT-2 because I added a new word to a set of vocabulary?\r\n\r\nThanks, "
] | 1,591 | 1,591 | 1,591 | NONE | null | Hello,
To use the GPT2DoubleHeadsModel, I used ``<eos>`` as the last token of my text sequence, which my model will use to make predictions for multiple-choice questions.
I also set ``<pad>`` as the padding token in my tokenizer, which is different from the default.
When using the Hugging Face pre-trained GPT2DoubleHeadsModel, do I need to further train the already pre-trained Transformer because the two tokens I mentioned above are new?
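For context, a minimal sketch of the pattern I followed when registering the new tokens (the token strings are my own choices, not GPT-2 defaults):
```python
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# Register the new special tokens, then resize the embedding matrix so rows
# exist for them; the new rows start out randomly initialized.
tokenizer.add_special_tokens({"eos_token": "<eos>", "pad_token": "<pad>"})
model.resize_token_embeddings(len(tokenizer))
```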
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4706/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4705/comments | https://api.github.com/repos/huggingface/transformers/issues/4705/events | https://github.com/huggingface/transformers/issues/4705 | 628,602,225 | MDU6SXNzdWU2Mjg2MDIyMjU= | 4,705 | Is transformers 2.11.0 compatible with tokenizers 0.8.0(-dev*)? | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,591 | 1,592 | 1,592 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4705/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4704/comments | https://api.github.com/repos/huggingface/transformers/issues/4704/events | https://github.com/huggingface/transformers/issues/4704 | 628,538,364 | MDU6SXNzdWU2Mjg1MzgzNjQ= | 4,704 | Tensorflow XLMRoberta Multi-Class Problem | {
"login": "kjans123",
"id": 35381053,
"node_id": "MDQ6VXNlcjM1MzgxMDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/35381053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kjans123",
"html_url": "https://github.com/kjans123",
"followers_url": "https://api.github.com/users/kjans123/followers",
"following_url": "https://api.github.com/users/kjans123/following{/other_user}",
"gists_url": "https://api.github.com/users/kjans123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kjans123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kjans123/subscriptions",
"organizations_url": "https://api.github.com/users/kjans123/orgs",
"repos_url": "https://api.github.com/users/kjans123/repos",
"events_url": "https://api.github.com/users/kjans123/events{/privacy}",
"received_events_url": "https://api.github.com/users/kjans123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! If you're using 8 labels, you'll need to tell the model that it needs to do 8-way classification. You can do so by specifying the `num_labels` argument during instantiation:\r\n\r\n```py\r\nmodel = TFXLMRobertaForSequenceClassification.from_pretrained(\r\n \"jplu/tf-xlm-roberta-base\", \r\n num_labels=8\r\n)\r\n```\r\n\r\nLet me know if this fixes your issue.",
"Yes! That fixed the issue. I apologize for my oversight of that argument. Thank you very much for your time.",
"Sure, my pleasure!"
] | 1,591 | 1,591 | 1,591 | NONE | null | ## Details
I am attempting to fine-tune an XLMRoberta sequence classification model. I have an array of text snippets from physicians, labelled 1-8 with various diagnostic indications. I've created a TensorFlow dataset object with the
```
convert_raw_to_xlmroberta_tfdataset()
```
function seen here (https://stackoverflow.com/questions/62095316/tensorflow-xlmroberta-multi-class)
I then create the model:
```
from transformers import TFXLMRobertaForSequenceClassification
import tensorflow as tf
learning_rate = 2e-5
number_of_epochs = 2
model = TFXLMRobertaForSequenceClassification.from_pretrained("jplu/tf-xlm-roberta-base")
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, epsilon=1e-08)
loss = tf.keras.losses.SparseCategoricalCrossentropy()
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
```
but consistently get this error:
```
ValueError: Shapes (None, 1, 8) and (None, 2) are incompatible
```
(full trace at the above SO link).
I've tried both sparse categorical cross-entropy and plain categorical cross-entropy. I've used one-hot encoded labels and "normal" integer labels. Is it even possible to do multi-class classification with TFXLMRoberta? It only started to work when I fed in a binary dummy set of labels.
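As the replies below confirm, the `(None, 2)` in the error comes from the model's default two-label head, so the number of classes has to be passed at load time:
```python
from transformers import TFXLMRobertaForSequenceClassification

model = TFXLMRobertaForSequenceClassification.from_pretrained(
    "jplu/tf-xlm-roberta-base",
    num_labels=8,  # size the classification head for 8 classes
)
```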
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4704/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4703/comments | https://api.github.com/repos/huggingface/transformers/issues/4703/events | https://github.com/huggingface/transformers/issues/4703 | 628,489,602 | MDU6SXNzdWU2Mjg0ODk2MDI= | 4,703 | Fused_norm_layer_cuda | {
"login": "Sagar1094",
"id": 54572031,
"node_id": "MDQ6VXNlcjU0NTcyMDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/54572031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sagar1094",
"html_url": "https://github.com/Sagar1094",
"followers_url": "https://api.github.com/users/Sagar1094/followers",
"following_url": "https://api.github.com/users/Sagar1094/following{/other_user}",
"gists_url": "https://api.github.com/users/Sagar1094/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sagar1094/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sagar1094/subscriptions",
"organizations_url": "https://api.github.com/users/Sagar1094/orgs",
"repos_url": "https://api.github.com/users/Sagar1094/repos",
"events_url": "https://api.github.com/users/Sagar1094/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sagar1094/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! What's your model? Do you have the configuration file?\r\n\r\nWhat is the `run_pretraining.py`? This doesn't seem to be one of our scripts.",
"Hi, extremely sorry for not providing adequate information.\r\n\r\nI used google-research/bert for creating the model. The bert repo contains the run_pretraining.py, before that I created the vocab.txt file from my own data and after that I used create_pretraining_data.py and yes I do have the config.json file as well.\r\n\r\nI am trying to convert tf_checkpoint file to pytorch_model.bin where I am encountering the issue of memory error on Kaggle and Colab and fused_norm_layer_cuda error on linux server without GPU. Just want to know if it is possible to convert from tf_checkpoint to pytorch without GPU or any way I can reduce the model size to load weights on kaggle or colab without running out of memory",
"Do you mind sharing the configuration file?\r\n\r\nSo you pre-trained a model using google-research/bert, and now you're trying to convert it to one of our models, is that correct?",
"{\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 384,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"max_position_embeddings\": 256,\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 6,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 2800000\r\n}\r\nThis is the config.json file.\r\n\r\nNot exactly, I have used bert repo to create a model which is in tensorflow and have files model.ckpt-100000.data-00000-of-00001 which seems to be in tensorflow but I need model_pytorch.bin so converting from tf_checkpoint file to pytorch using this amazing library. Trying to use the code written in convert_bert_original_tf_checkpoint_to_pytorch.py",
"Right, you're using the correct script!\r\n\r\nPlease note, that having a 2_800_000 vocabulary size is absolutely huge!\r\n\r\nUnfortunately it's hard to get around memory errors during conversion. The script should work whether you have a GPU available or not. Can you show me the command you used to convert the model on your linux server?",
"I have little data and many unique terms which are written in English but are addresses from all over India due to which I have to keep the vocabulary size this high and due to which I compromised with the dimensions of the model and even the embedding size as well.\r\n\r\nHere is the code snippet:- \r\n\r\nimport torch\r\n\r\nfrom pytorch_transformers.modeling_bert import BertConfig, BertForPreTraining, load_tf_weights_in_bert\r\n\r\n\r\ntf_checkpoint_path=\"./distlang/\"\r\nbert_config_file = \"./config.json\"\r\npytorch_dump_path=\"./distlangpytorch\"\r\n\r\nconfig = BertConfig.from_json_file(bert_config_file)\r\nprint(\"Building PyTorch model from configuration: {}\".format(str(config)))\r\nmodel = BertForPreTraining(config)\r\n\r\n# Load weights from tf checkpoint\r\nload_tf_weights_in_bert(model, config, tf_checkpoint_path)\r\n\r\n# Save pytorch-model\r\nprint(\"Save PyTorch model to {}\".format(pytorch_dump_path))\r\ntorch.save(model.state_dict(), pytorch_dump_path)\r\n\r\nIts, throwing error while instantiation of the model model = BertForPreTraining(config) \r\n\r\nI have apex installed but without cuda_ext as my server is not having GPU, so it won't install 😅",
"Hmm I can't reproduce on my setup. Two things:\r\n\r\n- Can you update your `transformers` library? It seems that you're using `pytorch-transformers` which is starting to be quite old now. We've patched quite a few bugs since then! > `pip install -U transformers` and update your imports to `from transformers import`\r\n- Do you mind pasting the stack trace?",
"If I use thrasformers library will it work fine?\r\nAnd yes here the stack trace:-\r\nBuilding PyTorch model from configuration: {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 384,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 256,\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 6,\r\n \"num_labels\": 2,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"pruned_heads\": {},\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 2800000\r\n}\r\n\r\nTraceback (most recent call last):\r\n File \"tftopytorch.py\", line 18, in <module>\r\n model = BertForPreTraining(config)\r\n File \"/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py\", line 761, in __init__\r\n self.bert = BertModel(config)\r\n File \"/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py\", line 651, in __init__\r\n self.embeddings = BertEmbeddings(config)\r\n File \"/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py\", line 240, in __init__\r\n self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)\r\n File \"/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/apex/normalization/fused_layer_norm.py\", line 133, in __init__\r\n fused_layer_norm_cuda = importlib.import_module(\"fused_layer_norm_cuda\")\r\n File \"/usr/lib/python3.6/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 953, in _find_and_load_unlocked\r\nModuleNotFoundError: No module named 'fused_layer_norm_cuda'",
"Yes, upgrading the library should solve your problems. Since [this commit](https://github.com/huggingface/transformers/commit/98dd19b96b351f481e1268ab6c7b035bb21d106e) we're not using apex for the LayerNorm anymore, so having PyTorch installed on a recent version alongside `transformers` on a recent version should solve your issue!",
"Turns out the problem is solved. Thanks a lot man for saving the day. Amazing library as well 🤗 really appreciate your quick response and understanding my problem and solving it.",
"My pleasure :hugs: ",
"Hey the above problem is solved but I am running into this problem while writing the file to the directory, to my understanding I have to provide the path where I have to save the pyorch model and I am providing a path. Am I missing out something here? \r\n\r\nHere's the snippet of the error the code is same as mentioned above just used transformers instead of pytorch_transformer\r\n\r\nSave PyTorch model to ./distlangpytorch/\r\nTraceback (most recent call last):\r\n File \"tftopytorch.py\", line 25, in <module>\r\n torch.save(model.state_dict(), pytorch_dump_path)\r\n File \"/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/torch/serialization.py\", line 369, in save\r\n with _open_file_like(f, 'wb') as opened_file:\r\n File \"/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/torch/serialization.py\", line 234, in _open_file_like\r\n return _open_file(name_or_buffer, mode)\r\n File \"/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/torch/serialization.py\", line 215, in __init__\r\n super(_open_file, self).__init__(open(name, mode))\r\nIsADirectoryError: [Errno 21] Is a directory: './distlangpytorch/'",
"We usually recommend saving using our method `from_pretrained`, as it saves the configuration as well as the model state dict.\r\n\r\nCan you try using the following?\r\n\r\n```py\r\nmodel = load_tf_weights_in_bert(model, config, tf_checkpoint_path)\r\nmodel.save_pretrained(pytorch_dump_path)\r\n```",
"It worked fine, the model is saved now and the size of the model is 1/3rd the original size which was of the tf_checkpoint. Is the model quantized as well, is it something implemented in the library? Also thanks a ton have been struggling since evening and once posted the issue here got the resolution immediately. 🤗",
"That's great! The model is not quantized, having a 2/3 reduction is surprising. It's possible that your checkpoint originally had optimizer states which do take quite a big amount of memory. We don't save that in our checkpoints.\r\n\r\nCool, glad I could help!",
"Hey, its not related to the issue can you please help me with post training quantization. I only am able to see tutorials of dynamic quantization which indeed increases the size of my model. I am not able to find any decent tutorials on post training(static) quantization. Tried torch.quantization.quantize but the method asks for two positional arguments fn_run and fn_args and I am not sure how to define them or create a function to pass to these arguments",
"So, it turns out I might have figured it out. The model size is 4.3 GB my vocab size is 28_000_00 and hidden layer size is 384. So the vocab part of model turn out to be 4X2800000X384 = 4.3GB. What I don't understand is why the model size only have vocabulary part and no part of weights strange. And is there any free resource where I can train this model for classification you might be aware of 🙈, On colab and kaggle the kernel restarts because they only have 12,16 GB RAM, the model needs a bit more 🙈"
] | 1,591 | 1,591 | 1,591 | NONE | null | Hi,
I have created a model on a Kaggle TPU using run_pretraining.py, and it gives me a TF checkpoint file. First of all, the model is 12 GB in size, due to which it throws a memory error while loading weights into it on Kaggle and Colab.
Is there a way to reduce the size?
Secondly, I tried performing the operation on a Linux server without a GPU, so it throws the error no module "fused_layer_norm_cuda", which is expected, but I want to know if there is a way to convert the model from TF to PyTorch without a GPU, or whether there is any parameter in BertForPreTraining that can instantiate the model without a GPU. Please help. 🤗 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4703/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4702/comments | https://api.github.com/repos/huggingface/transformers/issues/4702/events | https://github.com/huggingface/transformers/issues/4702 | 628,228,585 | MDU6SXNzdWU2MjgyMjg1ODU= | 4,702 | Why DataCollatorForLanguageModeling do not make attention_mask feature? | {
"login": "makailove123",
"id": 10958339,
"node_id": "MDQ6VXNlcjEwOTU4MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/10958339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makailove123",
"html_url": "https://github.com/makailove123",
"followers_url": "https://api.github.com/users/makailove123/followers",
"following_url": "https://api.github.com/users/makailove123/following{/other_user}",
"gists_url": "https://api.github.com/users/makailove123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makailove123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makailove123/subscriptions",
"organizations_url": "https://api.github.com/users/makailove123/orgs",
"repos_url": "https://api.github.com/users/makailove123/repos",
"events_url": "https://api.github.com/users/makailove123/events{/privacy}",
"received_events_url": "https://api.github.com/users/makailove123/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@makailove123 Have you figured out why?"
] | 1,590 | 1,622 | 1,596 | NONE | null | # ❓ Questions & Help
## Details
As we know, after padding a batch of input_ids, we should mask the padded positions so that the model does not attend to them. But I found that the collate function of DataCollatorForLanguageModeling does not produce an attention_mask feature.
Is this expected? Or do you think attention_mask is not necessary?
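For reference, a minimal sketch of the workaround I have in mind (pad id 0 below is a hypothetical value; in practice it would be `tokenizer.pad_token_id`):
```python
import torch

input_ids = torch.tensor([[101, 2009, 102, 0, 0]])  # hypothetical padded batch, pad id 0
attention_mask = (input_ids != 0).long()            # 1 for real tokens, 0 for padding
```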
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4702/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4701/comments | https://api.github.com/repos/huggingface/transformers/issues/4701/events | https://github.com/huggingface/transformers/issues/4701 | 628,110,456 | MDU6SXNzdWU2MjgxMTA0NTY= | 4,701 | Why we need the init_weight function in BERT pretrained model | {
"login": "allanj",
"id": 3351187,
"node_id": "MDQ6VXNlcjMzNTExODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3351187?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allanj",
"html_url": "https://github.com/allanj",
"followers_url": "https://api.github.com/users/allanj/followers",
"following_url": "https://api.github.com/users/allanj/following{/other_user}",
"gists_url": "https://api.github.com/users/allanj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allanj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allanj/subscriptions",
"organizations_url": "https://api.github.com/users/allanj/orgs",
"repos_url": "https://api.github.com/users/allanj/repos",
"events_url": "https://api.github.com/users/allanj/events{/privacy}",
"received_events_url": "https://api.github.com/users/allanj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Have a look at the code for [`.from_pretrained()`](https://github.com/huggingface/transformers/blob/a9aa7456ac824c9027385b149f405e4f5649273f/src/transformers/modeling_utils.py#L490). What actually happens is something like this:\r\n\r\n- find the correct base model class to initialise\r\n- initialise that class with pseudo-random initialisation (by using the `_init_weights` function that you mention)\r\n- find the file with the pretrained weights\r\n- overwrite the weights of the model that we just created with the pretrained weightswhere applicable\r\n\r\nThis ensure that layers were not pretrained (e.g. in some cases the final classification layer) _do_ get initialised in `_init_weights` but don't get overridden.",
"Great. Thanks. I also read through the code and that really clears my confusion. ",
"Good. If the answer was sufficient on Stack Overflow as well, please close that too. ",
"\r\n\r\n\r\n\r\n> Have a look at the code for [`.from_pretrained()`](https://github.com/huggingface/transformers/blob/a9aa7456ac824c9027385b149f405e4f5649273f/src/transformers/modeling_utils.py#L490). What actually happens is something like this:\r\n> \r\n> * find the correct base model class to initialise\r\n> * initialise that class with pseudo-random initialisation (by using the `_init_weights` function that you mention)\r\n> * find the file with the pretrained weights\r\n> * overwrite the weights of the model that we just created with the pretrained weightswhere applicable\r\n> \r\n> This ensure that layers were not pretrained (e.g. in some cases the final classification layer) _do_ get initialised in `_init_weights` but don't get overridden.\r\n\r\nwhen we construct BertForSequenceClassification from pre-trained model, didn't we overwrite the loaded weights with random initialisation?",
"@sunersheng No, the random initialization happens [first](https://github.com/huggingface/transformers/blob/a9aa7456ac824c9027385b149f405e4f5649273f/src/transformers/modeling_utils.py#L659) and then the existing weights are loaded [into it](https://github.com/huggingface/transformers/blob/a9aa7456ac824c9027385b149f405e4f5649273f/src/transformers/modeling_utils.py#L732)."
] | 1,590 | 1,637 | 1,591 | CONTRIBUTOR | null | # ❓ Questions & Help
I have already tried asking this question on SO; you can find the link [here](https://stackoverflow.com/questions/62040309/why-we-need-the-init-weight-function-in-bert-pretrained-model-in-huggingface-tra/62053791#62053791).
## Details
In the code of the Hugging Face Transformers library, many of the fine-tuning models have the function `init_weights`.
For example ([here](https://github.com/huggingface/transformers/blob/a9aa7456ac/src/transformers/modeling_bert.py#L1073-L1082)), there is an `init_weights` call at the end. Even though we use `from_pretrained`, it will still call the constructor, which calls the `init_weights` function.
```python
class BertForSequenceClassification(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
self.init_weights()
```
As far as I know, it will call the following [code](https://github.com/huggingface/transformers/blob/a9aa7456ac/src/transformers/modeling_bert.py#L520-L530):
```python
def _init_weights(self, module):
""" Initialize the weights """
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
elif isinstance(module, BertLayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
```
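To make sure I am reading the flow right, here is my simplified sketch of the order of operations (pseudocode, not the actual library code; the checkpoint path is hypothetical):
```python
import torch
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig()
# 1) construct the model: _init_weights gives every module (pseudo-)random weights
model = BertForSequenceClassification(config)
# 2) load the pretrained weights from disk
state_dict = torch.load("pytorch_model.bin")
# 3) overwrite the matching weights; modules absent from the checkpoint
#    (e.g. a fresh classification head) keep their random initialization
model.load_state_dict(state_dict, strict=False)
```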
My question is: **if we are loading a pre-trained language model, why do we need to initialize the weights for every module?**
I guess I must be misunderstanding something here.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4701/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4700/comments | https://api.github.com/repos/huggingface/transformers/issues/4700/events | https://github.com/huggingface/transformers/pull/4700 | 628,098,124 | MDExOlB1bGxSZXF1ZXN0NDI1NzA5ODg4 | 4,700 | Add community notebook for T5 sentiment span extraction | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4700?src=pr&el=h1) Report\n> Merging [#4700](https://codecov.io/gh/huggingface/transformers/pull/4700?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0866669e751bef636fa693b704a28c1fea9a17f3&el=desc) will **decrease** coverage by `1.41%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4700?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4700 +/- ##\n==========================================\n- Coverage 77.14% 75.72% -1.42% \n==========================================\n Files 128 128 \n Lines 21070 21070 \n==========================================\n- Hits 16255 15956 -299 \n- Misses 4815 5114 +299 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4700?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.70% <0.00%> (-74.83%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `76.92% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <0.00%> (-6.35%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.77% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.17% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4700?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4700?src=pr&el=footer). Last update [0866669...34c9a46](https://codecov.io/gh/huggingface/transformers/pull/4700?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Nice webinar, and cool notebook :)\r\n\r\n@patrickvonplaten, do you want to take a look?",
"Thanks @LysandreJik ! :smile:",
"Awesome notebook @enzoampil! \r\n\r\nLGTM for merge!\r\n\r\nWhich dataset do you use exactly to fine-tune T5 here? ",
"Thanks @patrickvonplaten ! 😄 \r\n\r\nFor the dataset, I got it from an ongoing Kaggle competition called [Tweet Sentiment Extraction](https://www.kaggle.com/c/tweet-sentiment-extraction/data).\r\n\r\n**The objective is to extract the span from a tweet that indicates its sentiment**\r\n\r\nExample input:\r\n```\r\nsentiment: negative\r\ntweet: How did we just get paid and still be broke as hell?! No shopping spree for me today\r\n```\r\n\r\nExample output:\r\n```\r\nbroke as hell?!\r\n```\r\n",
"I was thinking about contributing this to the `nlp` library, but I'm not sure if Kaggle has policies regarding uploading their datasets to other public sources ...",
"I see! Yeah no worries - I don't think we currently handle dataset processing from on-going kaggle competition links. "
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | This is an example notebook that aims to increase the coverage of T5 fine-tuning examples to address #4426 .
This notebook presents a high-level overview of T5, its significance for the future of NLP in practice, and a thoroughly commented tutorial on how to fine-tune T5 for sentiment span extraction using an extractive Q&A format.
I recently presented this in a webinar published on [youtube](https://www.youtube.com/watch?v=4LYw_UIdd4A). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4700/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4700/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4700",
"html_url": "https://github.com/huggingface/transformers/pull/4700",
"diff_url": "https://github.com/huggingface/transformers/pull/4700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4700.patch",
"merged_at": 1591084793000
} |
https://api.github.com/repos/huggingface/transformers/issues/4699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4699/comments | https://api.github.com/repos/huggingface/transformers/issues/4699/events | https://github.com/huggingface/transformers/issues/4699 | 628,058,488 | MDU6SXNzdWU2MjgwNTg0ODg= | 4,699 | NER example doesn’t work with tensorflow | {
"login": "chuckabees",
"id": 5777689,
"node_id": "MDQ6VXNlcjU3Nzc2ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5777689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chuckabees",
"html_url": "https://github.com/chuckabees",
"followers_url": "https://api.github.com/users/chuckabees/followers",
"following_url": "https://api.github.com/users/chuckabees/following{/other_user}",
"gists_url": "https://api.github.com/users/chuckabees/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chuckabees/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chuckabees/subscriptions",
"organizations_url": "https://api.github.com/users/chuckabees/orgs",
"repos_url": "https://api.github.com/users/chuckabees/repos",
"events_url": "https://api.github.com/users/chuckabees/events{/privacy}",
"received_events_url": "https://api.github.com/users/chuckabees/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Also, the example was missing this parameter in the python3 run_tf_ner.py command to work. Would be good to update the doc:\r\n\r\n`--logging_dir ./my-model/`",
"@chuckabees I got the same issue, did you solve this problem?",
"@YuqiShen Unfortunately no. I even wrote to the code's author but no response :(",
"I got the same problem. This issue helped me https://github.com/huggingface/transformers/issues/4631#issuecomment-636063607\r\n\r\nI added `--mode token-classification` param in the shell command and now it works fine :)",
"@donuzium thanks but when I try, I get this error now:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_tf_ner.py\", line 281, in <module>\r\n main()\r\n File \"run_tf_ner.py\", line 135, in main\r\n cache_dir=model_args.cache_dir,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py\", line 203, in from_pretrained\r\n config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py\", line 252, in get_config_dict\r\n raise EnvironmentError(msg)\r\nOSError: Can't load config for 'token-classification'. Make sure that:\r\n\r\n- 'token-classification' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'token-classification' is the correct path to a directory containing a config.json file\r\n```\r\n\r\nGoing to https://huggingface.co/models, I don't see 'token-classification' there.",
"@chuckabees it's under `tags`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Closing as solved."
] | 1,590 | 1,598 | 1,598 | NONE | null | I’m working through the PyTorch token classification example here using the TensorFlow version, run_tf_ner.py: https://github.com/huggingface/transformers/tree/master/examples/token-classification
The PyTorch version using run_ner.py works. I believe the difference is in https://github.com/huggingface/transformers/blob/master/examples/token-classification/utils_ner.py, starting at line 139, where it has different logic for TensorFlow. Narrowing it down to utils_ner.py line 149:
`pad_token_label_id: int = -1`
The PyTorch version uses -100. This seems to be the only labeling difference that I can tell. I tried changing this line to -100 as well, but the TensorFlow code doesn’t seem to accept -1 or -100.
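For context, a sketch of why the two frameworks differ here (my own illustration, not the example's code): PyTorch's CrossEntropyLoss ignores the label value -100 natively, while in TensorFlow the padded label positions have to be masked out explicitly before the sparse cross-entropy is applied.
```python
import tensorflow as tf

labels = tf.constant([[5, 3, -1, -1]])   # -1 marks padded label positions
logits = tf.random.uniform((1, 4, 25))   # (batch, seq_len, num_labels)

active = tf.not_equal(labels, -1)        # keep only real-token positions
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss = loss_fn(tf.boolean_mask(labels, active), tf.boolean_mask(logits, active))
```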
Does anyone know how to get this example to work using the TensorFlow version? Thanks!
I created a Colab for the example here: https://colab.research.google.com/drive/10GrYSYx5sUVMXplgUS79fIV73bFtoiHX?usp=sharing
This is the error I keep getting:
```
05/31/2020 00:07:45 - INFO - utils_ner - tokens: [CLS] dar ##aus en ##t ##wick ##elt ##e sic ##h im ro ##ko ##ko die sit ##te des gem ##ein ##sam ##en wei ##nen ##s im theater , das die stand ##es ##gren ##zen inner ##hal ##b des pub ##lik ##ums uber ##bruck ##en sol ##lt ##e . [SEP]
05/31/2020 00:07:45 - INFO - utils_ner - input_ids: 101 18243 20559 4372 2102 7184 20042 2063 14387 2232 10047 20996 3683 3683 3280 4133 2618 4078 17070 12377 21559 2368 11417 10224 2015 10047 4258 1010 8695 3280 3233 2229 13565 10431 5110 8865 2497 4078 9047 18393 18163 19169 28985 2368 14017 7096 2063 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
05/31/2020 00:07:45 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
05/31/2020 00:07:45 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
05/31/2020 00:07:45 - INFO - utils_ner - label_ids: -1 24 -1 24 -1 -1 -1 -1 24 -1 24 6 -1 -1 24 24 -1 24 24 -1 -1 -1 24 -1 -1 24 24 24 24 24 24 -1 -1 -1 24 -1 -1 24 24 -1 -1 24 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
05/31/2020 00:07:54 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
05/31/2020 00:07:54 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
05/31/2020 00:07:54 - INFO - transformers.trainer_tf - Created an/a adam optimizer
05/31/2020 00:07:54 - INFO - transformers.trainer_tf - ***** Running training *****
05/31/2020 00:07:54 - INFO - transformers.trainer_tf - Num examples = 24000
05/31/2020 00:07:54 - INFO - transformers.trainer_tf - Num Epochs = 3
05/31/2020 00:07:54 - INFO - transformers.trainer_tf - Total optimization steps = 750
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:360: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
renamed to `run`
05/31/2020 00:07:54 - WARNING - tensorflow - From /usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:360: StrategyBase.experimental_run_v2 (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
renamed to `run`
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
05/31/2020 00:08:02 - WARNING - tensorflow - From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/indexed_slices.py:434: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2020-05-31 00:08:52.364556: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at sparse_xent_op.cc:90 : Invalid argument: Received a label value of -1 which is outside the valid range of [0, 25). Label values: -1 24 -1 -1 -1 -1 24 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 24 24 2 -1 -1 -1 24 0 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 -1 -1 -1 24 24 -1 -1 24 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 24 -1 24 -1 24 -1 -1 24 -1 24 24 6 24 -1 24 -1 24 -1 24 -1 -1 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0 24 3 -1 24 24 24 24 -1 -1 1 24 -1 -1 -1 -1 9 21 -1 -1 24 0 -1 -1 -1 24 24 24 24 -1 -1 24 24 -1 -1 24 24 -1 -1 -1 3 15 15 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 24 24 -1 -1 24 -1 24 24 -1 -1 24 -1 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 6 24 24 24 6 18 18 24 24 6 24 -1 -1 24 24 24 -1 -1 -1 24 24 9 21 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 -1 24 -1 24 -1 24 -1 24 -1 24 -1 24 24 24 -1 -1 24 24 24 -1 24 -1 24 -1 24 -1 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 -1 -1 -1 24 -1 -1 -1 24 -1 9 24 9 21 24 24 -1 24 24 0 -1 -1 24 -1 -1 24 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 24 -1 24 -1 24 24 -1 -1 -1 24 -1 24 24 -1 24 -1 24 -1 24 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 -1 -1 24 -1 0 -1 24 2 -1 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 -1 -1 -1 -1 24 24 3 -1 -1 -1 -1 -1 -1 24 -1 24 -1 -1 24 24 -1 -1 24 24 24 -1 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 24 24 -1 24 -1 24 -1 24 -1 24 24 -1 24 -1 -1 24 -1 24 24 24 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 24 -1 -1 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 24 24 -1 24 24 24 -1 -1 24 -1 -1 -1 24 -1 24 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 24 -1 -1 -1 24 -1 -1 24 -1 24 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 24 24 -1 24 -1 24 24 24 24 -1 9 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 24 -1 24 24 -1 24 -1 24 -1 24 -1 24 24 24 24 -1 -1 24 -1 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 24 -1 -1 24 -1 24 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 24 -1 24 24 -1 24 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 6 18 24 -1 -1 24 -1 -1 24 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 
-1 -1 -1 -1 24 8 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 24 24 -1 -1 -1 8 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 -1 24 -1 24 24 -1 -1 -1 24 -1 -1 24 24 24 24 8 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 6 -1 -1 18 18 -1 -1 -1 18 18 18 -1 18 18 -1 -1 -1 24 24 -1 -1 -1 -1 24 24 0 -1 24 -1 -1 -1 24 -1 -1 24 -1 -1 -1 24 24 -1 -1 24 24 -1 24 24 0 -1 24 -1 -1 24 24 -1 24 5 -1 -1 -1 -1 -1 -1 24 0 12 12 24 -1 -1 -1 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 24 24 24 24 -1 -1 -1 -1 -1 24 -1 -1 -1 24 -1 -1 24 24 -1 24 24 24 24 -1 -1 24 24 -1 -1 -1 -1 0 -1 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 24 24 -1 24 -1 24 -1 6 -1 -1 24 24 1 -1 24 -1 -1 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 -1 24 3 -1 -1 24 24 24 24 24 24 -1 24 -1 24 24 24 -1 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 24 24 -1 24 24 24 -1 -1 -1 24 24 -1 24 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 -1 -1 -1 -1 -1 24 24 24 -1 -1 -1 -1 24 -1 24 -1 -1 24 24 3 15 15 -1 -1 -1 -1 -1 24 0 12 12 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 0 -1 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 0 -1 -1 24 -1 24 -1 24 -1 24 -1 -1 24 24 -1 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 -1 9 21 -1 24 9 -1 21 24 9 21 24 24 -1 -1 -1 -1 24 -1 -1 -1 -1 24 -1 -1 24 24 -1 24 -1 24 24 -1 24 24 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 24 -1 24 -1 24 -1 24 24 1 -1 -1 24 24 24 -1 24 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 -1 -1 -1 -1 
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 9 -1 24 -1 -1 -1 -1 24 -1 24 24 -1 24 24 9 24 24 24 24 -1 -1 -1 -1 24 24 -1 -1 -1 24 0 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 9 -1 24 24 -1 24 24 -1 24 -1 -1 -1 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 24 24 24 -1 24 -1 -1 24 24 -1 24 -1 -1 -1 24 24 6 -1 -1 24 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 24 -1 24 24 -1 -1 -1 -1 -1 24 24 -1 -1 -1 -1 24 -1 -1 24 24 -1 24 24 -1 24 24 -1 -1 -1 24 -1 -1 -1 -1 24 -1 24 -1 24 24 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 24 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 24 -1 24 -1 -1 24 24 24 -1 -1 -1 -1 -1 24 24 24 -1 -1 -1 24 -1 -1 24 -1 -1 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
Traceback (most recent call last):
File "run_tf_ner.py", line 281, in <module>
main()
File "run_tf_ner.py", line 213, in main
trainer.train()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 277, in train
for training_loss in self._training_steps():
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 321, in _training_steps
for i, loss in enumerate(self._accumulate_next_gradients()):
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 354, in _accumulate_next_gradients
yield _accumulate_next()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 708, in _call
return function_lib.defun(fn_with_cond)(*canon_args, **canon_kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2420, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
self.captured_inputs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 598, in call
ctx=ctx)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of -1 which is outside the valid range of [0, 25). Label values: -1 24 -1 -1 -1 -1 24 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 24 24 2 -1 -1 -1 24 0 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 -1 -1 -1 24 24 -1 -1 24 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 24 -1 24 -1 24 -1 -1 24 -1 24 24 6 24 -1 24 -1 24 -1 24 -1 -1 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0 24 3 -1 24 24 24 24 -1 -1 1 24 -1 -1 -1 -1 9 21 -1 -1 24 0 -1 -1 -1 24 24 24 24 -1 -1 24 24 -1 -1 24 24 -1 -1 -1 3 15 15 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 24 24 -1 -1 24 -1 24 24 -1 -1 24 -1 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 6 24 24 24 6 18 18 24 24 6 24 -1 -1 24 24 24 -1 -1 -1 24 24 9 21 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 -1 24 -1 24 -1 24 -1 24 -1 24 -1 24 24 24 -1 -1 24 24 24 -1 24 -1 24 -1 24 -1 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 -1 -1 -1 24 -1 -1 -1 24 -1 9 24 9 21 24 24 -1 24 24 0 -1 -1 24 -1 -1 24 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 24 -1 24 -1 24 24 -1 -1 -1 24 -1 24 24 -1 24 -1 24 -1 24 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 -1 -1 24 -1 0 -1 24 2 -1 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 -1 -1 -1 -1 24 24 3 -1 -1 -1 -1 -1 -1 24 -1 24 -1 -1 24 24 -1 -1 24 24 24 -1 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 24 24 -1 24 -1 24 -1 24 -1 24 24 -1 24 -1 -1 24 -1 24 24 24 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 24 -1 -1 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 24 24 -1 24 24 24 -1 -1 24 -1 -1 -1 24 -1 24 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 24 -1 -1 -1 24 -1 -1 24 -1 24 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 24 24 -1 24 -1 24 24 24 24 -1 9 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 24 -1 24 24 -1 24 -1 24 -1 24 -1 24 24 24 24 -1 -1 24 -1 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 24 -1 -1 24 -1 24 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 24 -1 24 24 -1 24 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 6 18 24 -1 -1 24 -1 -1 24 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 8 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 24 24 -1 -1 -1 8 
-1 -1 -1 -1 -1 -1 -1 24 24 -1 24 -1 24 -1 24 24 -1 -1 -1 24 -1 -1 24 24 24 24 8 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 6 -1 -1 18 18 -1 -1 -1 18 18 18 -1 18 18 -1 -1 -1 24 24 -1 -1 -1 -1 24 24 0 -1 24 -1 -1 -1 24 -1 -1 24 -1 -1 -1 24 24 -1 -1 24 24 -1 24 24 0 -1 24 -1 -1 24 24 -1 24 5 -1 -1 -1 -1 -1 -1 24 0 12 12 24 -1 -1 -1 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 24 24 24 24 -1 -1 -1 -1 -1 24 -1 -1 -1 24 -1 -1 24 24 -1 24 24 24 24 -1 -1 24 24 -1 -1 -1 -1 0 -1 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 24 24 -1 24 -1 24 -1 6 -1 -1 24 24 1 -1 24 -1 -1 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 -1 24 3 -1 -1 24 24 24 24 24 24 -1 24 -1 24 24 24 -1 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 24 24 -1 24 24 24 -1 -1 -1 24 24 -1 24 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 -1 -1 -1 -1 -1 24 24 24 -1 -1 -1 -1 24 -1 24 -1 -1 24 24 3 15 15 -1 -1 -1 -1 -1 24 0 12 12 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 0 -1 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 24 24 24 -1 0 -1 -1 24 -1 24 -1 24 -1 24 -1 -1 24 24 -1 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 -1 9 21 -1 24 9 -1 21 24 9 21 24 24 -1 -1 -1 -1 24 -1 -1 -1 -1 24 -1 -1 24 24 -1 24 -1 24 24 -1 24 24 -1 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 24 24 -1 24 -1 24 -1 24 24 1 -1 -1 24 24 24 -1 24 -1 -1 24 24 -1 -1 -1 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 9 -1 24 -1 -1 -1 -1 24 -1 24 24 -1 24 24 9 24 24 24 24 -1 -1 -1 -1 24 24 -1 -1 -1 24 0 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 9 -1 24 24 -1 24 24 -1 24 -1 -1 -1 -1 24 -1 -1 -1 24 -1 -1 -1 -1 -1 24 24 24 -1 24 -1 -1 24 24 -1 24 -1 -1 -1 24 24 6 -1 -1 24 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 24 -1 24 24 -1 -1 -1 -1 -1 24 24 -1 -1 -1 -1 24 -1 -1 24 24 -1 24 24 -1 24 24 -1 -1 -1 24 -1 -1 -1 -1 24 -1 24 -1 24 24 -1 -1 24 24 -1 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 24 -1 -1 24 24 -1 -1 -1 -1 24 -1 -1 24 -1 -1 -1 -1 -1 -1 24 -1 24 -1 -1 24 24 24 -1 -1 -1 -1 -1 24 24 24 -1 -1 -1 24 -1 -1 24 -1 -1 24 -1 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
[[{{node cond/else/_1/StatefulPartitionedCall/sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]] [Op:__inference_fn_with_cond_30948]
Function call stack:
fn_with_cond
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4699/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4699/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4698/comments | https://api.github.com/repos/huggingface/transformers/issues/4698/events | https://github.com/huggingface/transformers/issues/4698 | 628,015,442 | MDU6SXNzdWU2MjgwMTU0NDI= | 4,698 | Transformer-XL: Input and labels for Language Modeling | {
"login": "RafaelWO",
"id": 38643099,
"node_id": "MDQ6VXNlcjM4NjQzMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafaelWO",
"html_url": "https://github.com/RafaelWO",
"followers_url": "https://api.github.com/users/RafaelWO/followers",
"following_url": "https://api.github.com/users/RafaelWO/following{/other_user}",
"gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions",
"organizations_url": "https://api.github.com/users/RafaelWO/orgs",
"repos_url": "https://api.github.com/users/RafaelWO/repos",
"events_url": "https://api.github.com/users/RafaelWO/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafaelWO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there!\r\n\r\n> I currently pass this data into the model like this:\r\n> \r\n> ```\r\n> output = model(input_ids=data, labels=target, mems=mems)\r\n> ```\r\n> \r\n> Is this correct?\r\n\r\nNo, this is not correct, because the labels are shifted inside the model (as the documentation suggests). This happens [here](https://github.com/huggingface/transformers/blob/ec8717d5d8f6edc2c595ff6954ffaa2078dcc97d/src/transformers/modeling_transfo_xl_utilities.py#L104) so in your example, the target vector will become\r\n```\r\ntensor([[3,4,5,6,7,8,9]])\r\n```\r\nto be matched with the predictions corresponding to\r\n```\r\ntensor([[1,2,3,4,5,6,7]])\r\n```\r\nso you'll try to predict the token that is two steps ahead of the current one.\r\n\r\nI am guessing that `lm_labels` is a typo for `labels`, and that you should either:\r\n- pass `labels = input_ids` as suggested by the doc string (in this case you will not compute any loss for the last prediction, but that's probably okay)\r\n- add something at the beginning of your target tensor (anything can work since it will be removed by the shift) : `target = tensor([[42,2,3,4,5,6,7,8,9]])`\r\n\r\nI'm still learning the library, so tagging @TevenLeScao (since he worked on the issue/PR you mentioned) to make sure I'm not saying something wrong (also, do we want to update `LMOrderedIterator` from tokenization_transfo_xl.py to return target tensors that can be used as labels?)",
"Ah yes that does sound like a typo from another model's convention! You do have to pass `data` twice, once to `input_ids` and once to `labels` (in your case, `[1, ... , 8]` for both). The model will then attempt to predict `[2, ... , 8]` from `[1, ... , 7]`). I am not sure adding something at the beginning of the target tensor would work as that would probably cause size mismatches later down the line.\r\n\r\nPassing twice is the default way to do this in `transformers`; before the aforementioned PR, `TransfoXL` did not shift labels internally and you had to shift the labels yourself. The PR changed it to be consistent with the library and the documentation, where you have to pass the same data twice. I believe #4711 fixed the typo, you should be all set ! I'll also answer on StackOverflow in case someone finds that question there.",
"Thanks @sgugger and @TevenLeScao for your help!\r\n\r\n@TevenLeScao \r\n> before the aforementioned PR, TransfoXL did not shift labels internally and you had to shift the labels yourself\r\n\r\nSo this means that in the versions before the fix my method with shifting the labels beforehand was correct? Because I'm currently using `transformers 2.6`.",
"Yes, it was changed in 2.9.0. You should probably consider updating ;)",
"> The model will then attempt to predict `[2, ... , 8]` from `[1, ... , 7]`).\r\n\r\nNote that if you are using the state, the memory returned is computed on the whole `[1, ... , 8]`, so you should use `[9,10,... , 16]` as your next batch.",
"Thanks guys!\r\n\r\nSorry for asking this here, but maybe one of you can help me with my workaround in issue #3554 ? That would help me a lot!",
"Hello again, sorry for bothering again, but I have update my code from version 2.6 to 2.11 as @TevenLeScao has suggested. Now I experience a drop in my model's performance, but I don't know why. I use the same code as before except passing `data` in twice as suggested.\r\n\r\nI know that this can have several other reasons but I just want to know if there where other breaking changes to `TransformerXLLMHeadModel` or to the generation process?\r\n\r\nI skipped through the changelog in the releases but could not find anything.\r\n\r\nThanks in advance!",
"Sorry, by a drop in model performance you mean the loss is worse right? I've noticed discrepancies between CMU code performance (better) and ours in the past, so maybe a bug was introduced between 2.6 and 2.11 (never used 2.6 myself). I'm comparing the two.",
"Well mainly I saw differences during text generation with `model.generate()` The sequences tend to be shorter and end more often with an <eos> in 2.11, where before in 2.6 they were just cut of at some point.\r\n\r\nBut I can't guarantee that there are no mistakes from my side.",
"Could it be that this is also related to #4826 ?",
"FYI: The issue regarding worse model performance on the newer version of `transformers` is solved. There were some errors on my side.\r\n\r\nNevertheless, I hope that the fix in the PR linked above will improve the generated texts, since I also experience low quality output despite proper finetuning."
] | 1,590 | 1,593 | 1,591 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
I'm trying to fine-tune the pretrained Transformer-XL model `transfo-xl-wt103` for a language modeling task. Therefore, I use the model class `TransfoXLLMHeadModel`.
To iterate over my dataset I use the `LMOrderedIterator` from the file [tokenization_transfo_xl.py](https://github.com/huggingface/transformers/blob/5e737018e1fcb22c8b76052058279552a8d6c806/src/transformers/tokenization_transfo_xl.py#L467) which yields a tensor with the `data` and its `target` for each batch (and the sequence length).
**My question**:
Let's assume the following data with `batch_size = 1` and `bptt = 8`:
```
data = tensor([[1,2,3,4,5,6,7,8]])
target = tensor([[2,3,4,5,6,7,8,9]])
mems  # from the previous output
```
I currently pass this data into the model like this:
```
output = model(input_ids=data, labels=target, mems=mems)
```
Is this correct?
I am wondering because the documentation says for the `labels` parameter:
> labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
Labels for language modeling.
Note that the labels **are shifted** inside the model, i.e. you can set ``lm_labels = input_ids``
So what is it about the parameter `lm_labels`? I only see `labels` defined in the `forward` method.
And when the labels "are shifted" inside the model, does this mean I have to pass `data` in twice (for `input_ids` and `labels`) because `labels` is shifted inside? But how does the model then know the next token to predict (in the case above: `9`)?
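For reference, the usage described in the comments above boils down to the following on recent versions (2.9+). This is a minimal sketch, with the checkpoint name taken from this question:
```python
import torch
from transformers import TransfoXLLMHeadModel

model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
data = torch.tensor([[1, 2, 3, 4, 5, 6, 7, 8]])
# Pass the same tensor twice: the shift happens inside the model, so the
# loss corresponds to predicting tokens [2..8] from [1..7].
output = model(input_ids=data, labels=data)
```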
I also read through [this bug](https://github.com/huggingface/transformers/issues/3711) and the fix in [this pull request](https://github.com/huggingface/transformers/pull/3716) but I don't quite understand how to treat the model now (before vs. after fix). Maybe someone could explain me both versions.
Thanks in advance for some help!
**A link to original question on Stack Overflow**: https://stackoverflow.com/q/62069350/9478384
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4698/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4697/comments | https://api.github.com/repos/huggingface/transformers/issues/4697/events | https://github.com/huggingface/transformers/issues/4697 | 628,014,496 | MDU6SXNzdWU2MjgwMTQ0OTY= | 4,697 | SpanBert always predicts the same token | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | I tried to use this implementation - https://huggingface.co/SpanBERT/spanbert-base-cased - but as a prediction I always get the exact same output no matter where I put [MASK] in the sentence. Here is the code. Am I doing something wrong?
```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("SpanBERT/spanbert-base-cased")
model = AutoModel.from_pretrained("SpanBERT/spanbert-base-cased")
model.eval()
model.to('cuda')

text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)
print("tokenized_text", tokenized_text)

themask = 2
tokenized_text[themask] = '[MASK]'
indexes = tokenizer.convert_tokens_to_ids(tokenized_text)
indexes_tensor = torch.tensor([indexes])
indexes_tensor = indexes_tensor.to('cuda')

with torch.no_grad():
    outputs = model(indexes_tensor)
    predictions0 = outputs[0]

the_index = torch.argmax(predictions0[0, themask]).item()
theresult = tokenizer.convert_ids_to_tokens([the_index])[0]
print("theresult", theresult)
print("the_index", the_index)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4697/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4696/comments | https://api.github.com/repos/huggingface/transformers/issues/4696/events | https://github.com/huggingface/transformers/issues/4696 | 627,997,204 | MDU6SXNzdWU2Mjc5OTcyMDQ= | 4,696 | Loading config file bug | {
"login": "abrhaleitela",
"id": 43967278,
"node_id": "MDQ6VXNlcjQzOTY3Mjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/43967278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abrhaleitela",
"html_url": "https://github.com/abrhaleitela",
"followers_url": "https://api.github.com/users/abrhaleitela/followers",
"following_url": "https://api.github.com/users/abrhaleitela/following{/other_user}",
"gists_url": "https://api.github.com/users/abrhaleitela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abrhaleitela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abrhaleitela/subscriptions",
"organizations_url": "https://api.github.com/users/abrhaleitela/orgs",
"repos_url": "https://api.github.com/users/abrhaleitela/repos",
"events_url": "https://api.github.com/users/abrhaleitela/events{/privacy}",
"received_events_url": "https://api.github.com/users/abrhaleitela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! The `d_head` is actually computed in the configuration: `self.d_head = d_model // n_head`. \r\n\r\nIt would probably be better to handle `d_head` directly, but currently the `d_model` and `d_head` are linked to each other. \r\n\r\nThis would crash in your case as `d_model // n_head != d_head`",
"@LysandreJik Thanks a lot. Yes, I am setting d_head directly before loading my model. But it would be nice to see the model load its configuration from the given config file. Just my opinion though :)",
"You're right! Raising an error if the `d_head` is wrong in #4747 "
] | 1,590 | 1,591 | 1,591 | NONE | null | Hi guys.
I have released a new (XLNet-based) transformer model for the low-resource language Tigrinya
[(TigXLNet)](https://github.com/abryeemessi/Transferring-Monolingual-Model-to-Low-Resource-Language) and found a bug when loading a pre-trained config file:
My config file looks like:
https://s3.amazonaws.com/models.huggingface.co/bert/abryee/TigXLNet/config.json
```
config = AutoConfig.from_pretrained("abryee/TigXLNet")
print(config.d_head)  # prints 48 even though d_head in the given config file is 64
```
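A sketch of the workaround mentioned in the comments above: override the derived value after loading, since the configuration recomputes `d_head = d_model // n_head`.
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("abryee/TigXLNet")
config.d_head = 64  # value taken from the published config.json
```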
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4696/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4695/comments | https://api.github.com/repos/huggingface/transformers/issues/4695/events | https://github.com/huggingface/transformers/issues/4695 | 627,996,918 | MDU6SXNzdWU2Mjc5OTY5MTg= | 4,695 | Please add the functionality to save tokenizer model for run_language_modeling.py | {
"login": "vincentwen1995",
"id": 29601049,
"node_id": "MDQ6VXNlcjI5NjAxMDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/29601049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vincentwen1995",
"html_url": "https://github.com/vincentwen1995",
"followers_url": "https://api.github.com/users/vincentwen1995/followers",
"following_url": "https://api.github.com/users/vincentwen1995/following{/other_user}",
"gists_url": "https://api.github.com/users/vincentwen1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vincentwen1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vincentwen1995/subscriptions",
"organizations_url": "https://api.github.com/users/vincentwen1995/orgs",
"repos_url": "https://api.github.com/users/vincentwen1995/repos",
"events_url": "https://api.github.com/users/vincentwen1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/vincentwen1995/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This would be good indeed (cc @julien-c). In the meantime I think you can specify \r\n```\r\n--tokenizer_name=$TOKENIZER_NAME_OR_PATH\r\n```\r\n so that it always loads the initial tokenizer (which does not change during training).",
"> This would be good indeed (cc @julien-c). In the meantime I think you can specify\r\n> \r\n> ```\r\n> --tokenizer_name=$TOKENIZER_NAME_OR_PATH\r\n> ```\r\n> \r\n> so that it always loads the initial tokenizer (which does not change during training).\r\n\r\nThank you!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm trying to figure out how to use a custom made or create a tokenizer using this script and having significant difficulty. There does not seem to be any documentation on how to do this. I attempted to follow [this ](https://huggingface.co/blog/how-to-train) example but it makes no mention of how the tokenizer gets loaded / used. I get errors like:\r\n\r\n```\r\nOSError: Can't load config for 'EsperBERTo-small'. Make sure that:\r\n\r\n- 'EsperBERTo-small' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'EsperBERTo-small' is the correct path to a directory containing a config.json file\r\n```",
"Hi @EdwardRaff ,\r\nI'm facing the same issue while trying to train BERT model from scratch on my own dateset. Did you figure out how to solve it?\r\n```\r\n\r\nOSError: Can't load config for './models/BuckBERTer-small/'. Make sure that:\r\n\r\n- './models/BuckBERTer-small/' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or './models/BuckBERTer-small/' is the correct path to a directory containing a config.json file\r\n```",
"@EdwardRaff @FerchichiNourchene `run_language_modeling.py` doesn't work if a tokenizer is specified, but it does not contain the model configuration files. [This](https://stackoverflow.com/a/64795300/3950710) workaround worked for me.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@vincentwen1995, did you manage to get the tokenizers somehow? One year later and it seems like the `Trainer` is still not saving `tokenizer_config.json` in the checkpoint folders. \r\nWhere is it even saved?"
] | 1,590 | 1,663 | 1,619 | NONE | null | # 🚀 Feature request
Please add the feature to save the tokenizer model during training to the checkpoint folders.
## Motivation
When I tried out the script for [fine-tuning with language modeling](transformers/examples/language_modeling/run_language_modeling.py), I realized that the checkpoints generated during training do not allow resuming training, because the corresponding tokenizer model is not saved under the checkpoint folders (i.e. the files tokenizer_config.json, special_tokens_map.json and vocab.txt are missing). Checking the [script](https://github.com/huggingface/transformers/blob/0866669e751bef636fa693b704a28c1fea9a17f3/examples/language-modeling/run_language_modeling.py#L250), I noticed that the tokenizer model is only saved after the training process.
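A minimal sketch of the requested behavior; `save_checkpoint` and its arguments are illustrative names, not from the script:
```python
def save_checkpoint(model, tokenizer, checkpoint_dir):
    # Hypothetical helper for the checkpointing step: persist both model and
    # tokenizer so training can resume from the checkpoint folder alone.
    model.save_pretrained(checkpoint_dir)
    tokenizer.save_pretrained(checkpoint_dir)  # writes tokenizer_config.json,
                                               # special_tokens_map.json, vocab.txt
```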
## Your contribution
Haven't looked into the code in detail, so it might be best to have someone familiar with the Trainer class integrate this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4695/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4694/comments | https://api.github.com/repos/huggingface/transformers/issues/4694/events | https://github.com/huggingface/transformers/issues/4694 | 627,995,420 | MDU6SXNzdWU2Mjc5OTU0MjA= | 4,694 | Adding Neutral Score | {
"login": "MohammadReza-Babaee",
"id": 66214697,
"node_id": "MDQ6VXNlcjY2MjE0Njk3",
"avatar_url": "https://avatars.githubusercontent.com/u/66214697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MohammadReza-Babaee",
"html_url": "https://github.com/MohammadReza-Babaee",
"followers_url": "https://api.github.com/users/MohammadReza-Babaee/followers",
"following_url": "https://api.github.com/users/MohammadReza-Babaee/following{/other_user}",
"gists_url": "https://api.github.com/users/MohammadReza-Babaee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MohammadReza-Babaee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohammadReza-Babaee/subscriptions",
"organizations_url": "https://api.github.com/users/MohammadReza-Babaee/orgs",
"repos_url": "https://api.github.com/users/MohammadReza-Babaee/repos",
"events_url": "https://api.github.com/users/MohammadReza-Babaee/events{/privacy}",
"received_events_url": "https://api.github.com/users/MohammadReza-Babaee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hey all, any chance anyone else is working around this? I think a neutral label or a standard sentiment score would be great for such an extensive model. Neutral statements are not caught with this adjustment:\r\n\r\nclassifier('I do not know the answer.')\r\nOut[16]: [{'label': 'NEGATIVE', 'score': 0.9995205402374268}]\r\n\r\nclassifier('This is meant to be a very neutral statement.')\r\nOut[17]: [{'label': 'NEGATIVE', 'score': 0.987031102180481}]\r\n\r\nclassifier('The last president of US is Donald Trump.')\r\nOut[18]: [{'label': 'POSITIVE', 'score': 0.9963828325271606}]\r\n\r\nclassifier('There is going to be an election in two months.')\r\nOut[19]: [{'label': 'NEGATIVE', 'score': 0.9604763984680176}]\r\n\r\nJust raising this thread again to see if there is a common interest...\r\nCheers!"
] | 1,590 | 1,600 | 1,596 | NONE | null | # 🚀 Feature request
After some experimentation and comparison to VADER, we came to the consensus that the pretrained BERT-based Hugging Face transformer performs well beyond the other lexicons, but VADER is also good in a social-media context, and it provides a "neutral" label which turns out to be useful in some contexts.
I was wondering whether it is possible to adapt the transformer sentiment analysis so that it can also calculate a **"neutral" score**?
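One simple approach (an assumption on my part, not an existing pipeline option) is to treat low-confidence POSITIVE/NEGATIVE predictions as neutral by thresholding the score:
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

def classify_with_neutral(text, threshold=0.9):
    # Crude heuristic: below the (hypothetical) confidence threshold,
    # fall back to a NEUTRAL label.
    result = classifier(text)[0]
    if result["score"] < threshold:
        return {"label": "NEUTRAL", "score": result["score"]}
    return result
```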
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4694/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4694/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4693/comments | https://api.github.com/repos/huggingface/transformers/issues/4693/events | https://github.com/huggingface/transformers/issues/4693 | 627,886,995 | MDU6SXNzdWU2Mjc4ODY5OTU= | 4,693 | TypeError: cannot create 'BPE' instances | {
"login": "tobimichigan",
"id": 5084987,
"node_id": "MDQ6VXNlcjUwODQ5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5084987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tobimichigan",
"html_url": "https://github.com/tobimichigan",
"followers_url": "https://api.github.com/users/tobimichigan/followers",
"following_url": "https://api.github.com/users/tobimichigan/following{/other_user}",
"gists_url": "https://api.github.com/users/tobimichigan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tobimichigan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tobimichigan/subscriptions",
"organizations_url": "https://api.github.com/users/tobimichigan/orgs",
"repos_url": "https://api.github.com/users/tobimichigan/repos",
"events_url": "https://api.github.com/users/tobimichigan/events{/privacy}",
"received_events_url": "https://api.github.com/users/tobimichigan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
01-training-tokenizers.ipynb
```
# For the user's convenience `tokenizers` provides some very high-level classes encapsulating
# the overall pipeline for various well-known tokenization algorithm.
# Everything described below can be replaced by the ByteLevelBPETokenizer class.
from tokenizers import Tokenizer
from tokenizers.decoders import ByteLevel as ByteLevelDecoder
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase, NFKC, Sequence
from tokenizers.pre_tokenizers import ByteLevel
# First we create an empty Byte-Pair Encoding model (i.e. not trained model)
tokenizer = Tokenizer(BPE())
# Then we enable lower-casing and unicode-normalization
# The Sequence normalizer allows us to combine multiple Normalizer that will be
# executed in order.
tokenizer.normalizer = Sequence([
NFKC(),
Lowercase()
])
# Our tokenizer also needs a pre-tokenizer responsible for converting the input to a ByteLevel representation.
tokenizer.pre_tokenizer = ByteLevel()
# And finally, let's plug a decoder so we can recover from a tokenized input to the original one
tokenizer.decoder = ByteLevelDecoder()
```
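A note for anyone hitting the `TypeError` below: `Tokenizer(BPE())` only works on sufficiently recent `tokenizers` releases; on older versions the model classes exposed static constructors instead. A heavily hedged sketch of the workaround reported for those versions (upgrading the `tokenizers` package is the other option):
```python
# Assumption: the installed `tokenizers` version predates direct
# instantiation of model classes, so use the static constructor instead.
tokenizer = Tokenizer(BPE.empty())
```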
## Expected behavior
Flawless Run
## Environment info
- `transformers` version: 2.10.0
- Platform: Linux-5.4.0-31-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Specific recurring error:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-f099004b011b> in <module>
     10
     11 # First we create an empty Byte-Pair Encoding model (i.e. not trained model)
---> 12 tokenizer = Tokenizer(BPE())
     13
     14 # Then we enable lower-casing and unicode-normalization

TypeError: cannot create 'BPE' instances
```
NB: I have gone through this issue: https://github.com/huggingface/transformers/issues/3787,
but it doesn't solve the problem either.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4693/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4692/comments | https://api.github.com/repos/huggingface/transformers/issues/4692/events | https://github.com/huggingface/transformers/issues/4692 | 627,867,790 | MDU6SXNzdWU2Mjc4Njc3OTA= | 4,692 | Gradient overflow issue when I try to train gpt2 with run_language_modeling in fp16 with O2. Any idea why that may happen? | {
"login": "Nkonstan",
"id": 35643708,
"node_id": "MDQ6VXNlcjM1NjQzNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/35643708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nkonstan",
"html_url": "https://github.com/Nkonstan",
"followers_url": "https://api.github.com/users/Nkonstan/followers",
"following_url": "https://api.github.com/users/Nkonstan/following{/other_user}",
"gists_url": "https://api.github.com/users/Nkonstan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nkonstan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nkonstan/subscriptions",
"organizations_url": "https://api.github.com/users/Nkonstan/orgs",
"repos_url": "https://api.github.com/users/Nkonstan/repos",
"events_url": "https://api.github.com/users/Nkonstan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nkonstan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"\r\nI get this message every 2000 steps. \r\n\"Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 262144.0\"\r\n\r\nThe train run on rtx 2070 and seems to not have other problem but that message keeps going on every 2000 steps.\r\n\r\npytorch 1.5.0\r\npython 3.7\r\ncuda 10.1\r\n",
"Do you have the same issue with opt level O1? Using O2 is discouraged. The issue may also be related to PyTorch 1.5, so if switching to O1 does not help, try a previous PyTorch version. Note that it is highly likely that this is an AMP problem, not a transformers issue. Have a look here https://github.com/NVIDIA/apex/issues/318",
"@BramVanroy Yes i guess you 're right. It seems problem of the AMP. I asked and in APEX the same question and their answer was : \r\n\r\n\"The loss scaler tries to increase the loss scaling factor after a threshold of successful steps was reached. In your case it seems that the scaling factor is being downgraded to the same value, so it should be fine.\"\r\n\r\nSo, according to that answer, is not a problem. But still i am not sure. \r\n\r\n",
"I guess that skipping one step every 2000 steps is not a problem. You can monitor the loss, and as long as it seems to decrease normally, then you should be fine. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4692/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4691/comments | https://api.github.com/repos/huggingface/transformers/issues/4691/events | https://github.com/huggingface/transformers/pull/4691 | 627,841,629 | MDExOlB1bGxSZXF1ZXN0NDI1NTMzNDE3 | 4,691 | [EncoderDecoder] Add RoBERTa as a decoder | {
"login": "ivanmontero",
"id": 26352222,
"node_id": "MDQ6VXNlcjI2MzUyMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/26352222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivanmontero",
"html_url": "https://github.com/ivanmontero",
"followers_url": "https://api.github.com/users/ivanmontero/followers",
"following_url": "https://api.github.com/users/ivanmontero/following{/other_user}",
"gists_url": "https://api.github.com/users/ivanmontero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivanmontero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivanmontero/subscriptions",
"organizations_url": "https://api.github.com/users/ivanmontero/orgs",
"repos_url": "https://api.github.com/users/ivanmontero/repos",
"events_url": "https://api.github.com/users/ivanmontero/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivanmontero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4691?src=pr&el=h1) Report\n> Merging [#4691](https://codecov.io/gh/huggingface/transformers/pull/4691?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0866669e751bef636fa693b704a28c1fea9a17f3&el=desc) will **increase** coverage by `0.19%`.\n> The diff coverage is `16.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4691?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4691 +/- ##\n==========================================\n+ Coverage 77.14% 77.34% +0.19% \n==========================================\n Files 128 128 \n Lines 21070 21087 +17 \n==========================================\n+ Hits 16255 16309 +54 \n+ Misses 4815 4778 -37 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4691?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `89.42% <16.66%> (-6.29%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.22% <0.00%> (+0.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.29% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4691/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4691?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4691?src=pr&el=footer). Last update [0866669...b0bbd24](https://codecov.io/gh/huggingface/transformers/pull/4691?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | * Add crossattention input
* Add EncoderDecoder tests for RoBERTa
Since RoBERTa is a subclass of BERT, it inherits all the crossattention mechanics in the model itself. This change allows RobertaForMaskedLM to take in encoder hidden states and language model labels to work with the EncoderDecoder framework. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4691/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4691",
"html_url": "https://github.com/huggingface/transformers/pull/4691",
"diff_url": "https://github.com/huggingface/transformers/pull/4691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4691.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4690/comments | https://api.github.com/repos/huggingface/transformers/issues/4690/events | https://github.com/huggingface/transformers/issues/4690 | 627,827,406 | MDU6SXNzdWU2Mjc4Mjc0MDY= | 4,690 | Keyword errors on tokenizer.encode_plus | {
"login": "tchang1997",
"id": 30159285,
"node_id": "MDQ6VXNlcjMwMTU5Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30159285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tchang1997",
"html_url": "https://github.com/tchang1997",
"followers_url": "https://api.github.com/users/tchang1997/followers",
"following_url": "https://api.github.com/users/tchang1997/following{/other_user}",
"gists_url": "https://api.github.com/users/tchang1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tchang1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tchang1997/subscriptions",
"organizations_url": "https://api.github.com/users/tchang1997/orgs",
"repos_url": "https://api.github.com/users/tchang1997/repos",
"events_url": "https://api.github.com/users/tchang1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/tchang1997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@tchainzzz Can you tell how you resolved the particular issue?\r\n"
] | 1,590 | 1,635 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BertTokenizer
Language I am using the model on (English, Chinese ...): English (`bert-base-uncased`)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Code:
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
result = tokenizer.encode_plus("This is an example sentence", add_special_tokens=True,
    max_length=64, pad_to_max_length=True, return_attention_masks=True, return_tensors='pt')
```
Result:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../python3.8/site-packages/transformers/tokenization_utils.py", line 786, in encode_plus
first_ids = get_input_ids(text)
File ".../python3.8/site-packages/transformers/tokenization_utils.py", line 778, in get_input_ids
return self.convert_tokens_to_ids(self.tokenize(text, **kwargs))
File ".../python3.8/site-packages/transformers/tokenization_utils.py", line 649, in tokenize
tokenized_text = split_on_tokens(added_tokens, text)
File ".../python3.8/site-packages/transformers/tokenization_utils.py", line 644, in split_on_tokens
return sum((self._tokenize(token, **kwargs) if token not \
File ".../python3.8/site-packages/transformers/tokenization_utils.py", line 644, in <genexpr>
return sum((self._tokenize(token, **kwargs) if token not \
TypeError: _tokenize() got an unexpected keyword argument 'pad_to_max_length'
```
A similar issue occurs if I remove the `pad_to_max_length` keyword; then `return_attention_masks` is the unexpected keyword.
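For context, a sketch of the same call on a recent `transformers` release (note the singular keyword `return_attention_mask`; these kwarg names reflect the newer API, so treat them as assumptions with respect to 2.1.1):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# On recent releases these kwargs are accepted by encode_plus directly.
result = tokenizer.encode_plus(
    "This is an example sentence",
    add_special_tokens=True,
    max_length=64,
    pad_to_max_length=True,
    return_attention_mask=True,
    return_tensors="pt",
)
print(result.keys())  # dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])
```

On 2.1.1 these keyword arguments did not exist yet, which is consistent with the traceback above.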
## Expected behavior
Expected: the function returns without error a dict with the attention masks, padded sequence, and some other info, as specified by the documentation.
## Environment info
- `transformers` version: 2.1.1
- Platform: WSL 2, Ubuntu 18.04.4 LTS
- Python version: 3.8
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): N/A
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4690/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4689/comments | https://api.github.com/repos/huggingface/transformers/issues/4689/events | https://github.com/huggingface/transformers/issues/4689 | 627,796,808 | MDU6SXNzdWU2Mjc3OTY4MDg= | 4,689 | Same logits value for different input | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834052574,
"node_id": "MDU6TGFiZWwxODM0MDUyNTc0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification",
"name": "Ex: Sequence Classification",
"color": "46FFCF",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Can you post a minimal dataset that we can test this with? So for instance two sentences that give the same result for you.",
"```\r\n[\r\n [\"Nah I don't think he goes to usf, he lives around here though\", 2],\r\n [\"URGENT! You have won a 1 week FREE membership in our $100,000 Prize Jackpot!\", 1]\r\n]\r\n```\r\n\r\n**The model outputs same results only after fine-tuning phase.**\r\n\r\nThis is my Dataset subclass:\r\n\r\n```\r\nclass SequenceClassificationDataset(Dataset):\r\n def __init__(self, df, tokenizer, max_length):\r\n encodings = tokenizer.batch_encode_plus(df.values[:, 0].tolist(), return_tensors=\"pt\", max_length=max_length, pad_to_max_length=True)\r\n self.input_ids = encodings.input_ids\r\n self.attention_masks = encodings.attention_mask\r\n self.y = torch.LongTensor(df.values[:,1].tolist())\r\n\r\n def __getitem__(self, index):\r\n return self.input_ids[index], self.attention_masks[index], self.y[index]\r\n \r\n def __len__(self):\r\n return self.input_ids.shape[0]\r\n```\r\n\r\nAm I passing the \"y\" value incorrectly? It's not one-hot encoded matrix, but it's vector with size (batch_size, ) where each element represents the category for that text.",
"Yes, your labels are correct.\r\n\r\nI can't reproduce your problem, though. This seems to work correctly.\r\n\r\n\r\n```python\r\nimport torch\r\nfrom torch.utils.data import Dataset, DataLoader\r\nfrom transformers import AutoTokenizer, BartForSequenceClassification\r\n\r\nimport pandas as pd\r\n\r\n\r\nclass SequenceClassificationDataset(Dataset):\r\n def __init__(self, df, tokenizer, max_length):\r\n encodings = tokenizer.batch_encode_plus(df.values[:, 0].tolist(), return_tensors=\"pt\",\r\n max_length=max_length, pad_to_max_length=True)\r\n self.input_ids = encodings.input_ids\r\n self.attention_masks = encodings.attention_mask\r\n self.y = torch.LongTensor(df.values[:, 1].tolist())\r\n\r\n def __getitem__(self, index):\r\n return self.input_ids[index], self.attention_masks[index], self.y[index]\r\n\r\n def __len__(self):\r\n return self.input_ids.shape[0]\r\n\r\n\r\ndef main():\r\n df = pd.DataFrame([\r\n [\"Nah I don't think he goes to usf, he lives around here though\", 2],\r\n [\"URGENT! You have won a 1 week FREE membership in our $100,000 Prize Jackpot!\", 1]\r\n ])\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(\"bart-large\")\r\n model = BartForSequenceClassification.from_pretrained(\"bart-large\")\r\n ds = SequenceClassificationDataset(df, tokenizer, 32)\r\n dl = DataLoader(ds)\r\n\r\n with torch.no_grad():\r\n for batch_index, (batch_input_ids, batch_attention_masks, batch_y) in enumerate(dl, start=1):\r\n batch_input_ids_cuda = batch_input_ids\r\n batch_attention_masks_cuda = batch_attention_masks\r\n batch_y_cuda = batch_y\r\n loss, logits, _ = model(batch_input_ids_cuda, attention_mask=batch_attention_masks_cuda,\r\n labels=batch_y_cuda)\r\n print(\"Input ids:\", batch_input_ids_cuda)\r\n print(\"Attention masks:\", batch_attention_masks_cuda)\r\n print(\"Loss:\", loss)\r\n print(\"Logits:\", logits)\r\n print(\"y:\", batch_y)\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\nAre you running this in a notebook? If so, try restarting the notebook. Having unexpected results like this can be a sign of cached cells.",
"Everything works fine if I don't fine-tune the model, the problem occurs after fine-tuning. Can you please check my training subroutine?",
"The training code seems okay. Even if there was a bug in the loop, you would not expect that any input gives the same output. I can't debug this unfortunately since I can't reproduce your issue. Can you share the dataset that you use for fine-tuning?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> # ❓ Questions & Help\r\n> ## Details\r\n> I use BartForSequenceClassification pre-trained model from the HuggingFace Transformers library. During training phase logits from classification head gets different values, but during validation phase all logits values are equal even for different input texts.\r\n> \r\n> I use BartTokenizer.batch_encode_plus to encode the text before feeding into the model.\r\n> \r\n> I fine-tuned the model for 1 epoch using the following code:\r\n> \r\n> ```\r\n> config = BartConfig.from_pretrained(model_name)\r\n> config.num_labels = 3\r\n> config.output_hidden_states = False\r\n> config.output_attentions = False\r\n> \r\n> transformer_model = BartForSequenceClassification.from_pretrained(model_name, config=config)\r\n> transformer_model.cuda();\r\n> optimizer = AdamW(transformer_model.parameters())\r\n> \r\n> NUM_TRAIN_EPOCHS = 1\r\n> print(\"Training:\")\r\n> for i in range(1, 1+NUM_TRAIN_EPOCHS):\r\n> transformer_model.train()\r\n> for batch_index, (batch_input_ids, batch_attention_masks, batch_y) in enumerate(train_dataloader, start=1):\r\n> batch_input_ids_cuda = batch_input_ids.to(device)\r\n> batch_attention_masks_cuda = batch_attention_masks.to(device)\r\n> batch_y_cuda = batch_y.to(device)\r\n> loss, logits, _ = transformer_model(batch_input_ids_cuda, attention_mask=batch_attention_masks_cuda, labels=batch_y_cuda)\r\n> transformer_model.zero_grad()\r\n> loss.backward()\r\n> optimizer.step()\r\n> ```\r\n> \r\n> And for validation I use the following code:\r\n> \r\n> ```\r\n> transformer_model.eval()\r\n> with torch.no_grad():\r\n> for batch_index, (batch_input_ids, batch_attention_masks, batch_y) in enumerate(test_dataloader, start=1):\r\n> batch_input_ids_cuda = batch_input_ids.to(device)\r\n> batch_attention_masks_cuda = batch_attention_masks.to(device)\r\n> batch_y_cuda = batch_y.to(device)\r\n> loss, logits, _ = transformer_model(batch_input_ids_cuda, attention_mask=batch_attention_masks_cuda, labels=batch_y_cuda)\r\n> print(\"Input ids:\", batch_input_ids_cuda)\r\n> print(\"Attention masks:\", batch_attention_masks_cuda)\r\n> print(\"Logits:\", logits)\r\n> ```\r\n> \r\n> Output of validation phase is:\r\n> \r\n> ```\r\n> Input ids: tensor([[ 0, 3655, 9, ..., 1, 1, 1],\r\n> [ 0, 31524, 347, ..., 1, 1, 1],\r\n> [ 0, 12806, 24220, ..., 1, 1, 1],\r\n> ...,\r\n> [ 0, 8518, 7432, ..., 1, 1, 1],\r\n> [ 0, 15006, 23613, ..., 1, 1, 1],\r\n> [ 0, 14729, 13178, ..., 1, 1, 1]], device='cuda:0')\r\n> Attention masks: tensor([[1, 1, 1, ..., 0, 0, 0],\r\n> [1, 1, 1, ..., 0, 0, 0],\r\n> [1, 1, 1, ..., 0, 0, 0],\r\n> ...,\r\n> [1, 1, 1, ..., 0, 0, 0],\r\n> [1, 1, 1, ..., 0, 0, 0],\r\n> [1, 1, 1, ..., 0, 0, 0]], device='cuda:0')\r\n> Logits: tensor([[-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862],\r\n> [-1.3014, -0.7394, 0.7862]], device='cuda:0')\r\n> ```\r\n> \r\n> Why all the logits have exact same value for every input?\r\n> \r\n> **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62097267/same-logits-value-for-different-input-in-huggingfaces-transformer\r\n\r\nsame problem occurs to me",
"> Yes, your labels are correct.\r\n> \r\n> I can't reproduce your problem, though. This seems to work correctly.\r\n> \r\n> ```python\r\n> import torch\r\n> from torch.utils.data import Dataset, DataLoader\r\n> from transformers import AutoTokenizer, BartForSequenceClassification\r\n> \r\n> import pandas as pd\r\n> \r\n> \r\n> class SequenceClassificationDataset(Dataset):\r\n> def __init__(self, df, tokenizer, max_length):\r\n> encodings = tokenizer.batch_encode_plus(df.values[:, 0].tolist(), return_tensors=\"pt\",\r\n> max_length=max_length, pad_to_max_length=True)\r\n> self.input_ids = encodings.input_ids\r\n> self.attention_masks = encodings.attention_mask\r\n> self.y = torch.LongTensor(df.values[:, 1].tolist())\r\n> \r\n> def __getitem__(self, index):\r\n> return self.input_ids[index], self.attention_masks[index], self.y[index]\r\n> \r\n> def __len__(self):\r\n> return self.input_ids.shape[0]\r\n> \r\n> \r\n> def main():\r\n> df = pd.DataFrame([\r\n> [\"Nah I don't think he goes to usf, he lives around here though\", 2],\r\n> [\"URGENT! You have won a 1 week FREE membership in our $100,000 Prize Jackpot!\", 1]\r\n> ])\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"bart-large\")\r\n> model = BartForSequenceClassification.from_pretrained(\"bart-large\")\r\n> ds = SequenceClassificationDataset(df, tokenizer, 32)\r\n> dl = DataLoader(ds)\r\n> \r\n> with torch.no_grad():\r\n> for batch_index, (batch_input_ids, batch_attention_masks, batch_y) in enumerate(dl, start=1):\r\n> batch_input_ids_cuda = batch_input_ids\r\n> batch_attention_masks_cuda = batch_attention_masks\r\n> batch_y_cuda = batch_y\r\n> loss, logits, _ = model(batch_input_ids_cuda, attention_mask=batch_attention_masks_cuda,\r\n> labels=batch_y_cuda)\r\n> print(\"Input ids:\", batch_input_ids_cuda)\r\n> print(\"Attention masks:\", batch_attention_masks_cuda)\r\n> print(\"Loss:\", loss)\r\n> print(\"Logits:\", logits)\r\n> print(\"y:\", batch_y)\r\n> \r\n> if __name__ == '__main__':\r\n> main()\r\n> ```\r\n> \r\n> Are you running this in a notebook? If so, try restarting the notebook. Having unexpected results like this can be a sign of cached cells.\r\n\r\nyou should set the mode of the model to 'eval'.\r\n\r\nhttps://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained\r\n\r\n`The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().`",
"@xylcbd No. The _default_ is already eval, as the documentation writes. So you do not have to explicitly set it to eval() again. But if you want to train, you need to set it to train()."
] | 1,590 | 1,631 | 1,596 | NONE | null | # ❓ Questions & Help
## Details
I use the BartForSequenceClassification pre-trained model from the HuggingFace Transformers library. During the training phase, the logits from the classification head take different values, but during the validation phase all logit values are identical, even for different input texts.
I use BartTokenizer.batch_encode_plus to encode the text before feeding into the model.
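Roughly, the encoding step looks like this (a sketch; `texts` is a placeholder name for my list of input strings, and the length/padding settings are illustrative):

```python
encodings = tokenizer.batch_encode_plus(
    texts,
    return_tensors="pt",
    max_length=128,
    pad_to_max_length=True,
)
input_ids = encodings.input_ids
attention_masks = encodings.attention_mask
```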
I fine-tuned the model for 1 epoch using the following code:
```
config = BartConfig.from_pretrained(model_name)
config.num_labels = 3
config.output_hidden_states = False
config.output_attentions = False
transformer_model = BartForSequenceClassification.from_pretrained(model_name, config=config)
transformer_model.cuda();
optimizer = AdamW(transformer_model.parameters())
NUM_TRAIN_EPOCHS = 1
print("Training:")
for i in range(1, 1+NUM_TRAIN_EPOCHS):
transformer_model.train()
for batch_index, (batch_input_ids, batch_attention_masks, batch_y) in enumerate(train_dataloader, start=1):
batch_input_ids_cuda = batch_input_ids.to(device)
batch_attention_masks_cuda = batch_attention_masks.to(device)
batch_y_cuda = batch_y.to(device)
loss, logits, _ = transformer_model(batch_input_ids_cuda, attention_mask=batch_attention_masks_cuda, labels=batch_y_cuda)
transformer_model.zero_grad()
loss.backward()
optimizer.step()
```
And for validation I use the following code:
```
transformer_model.eval()
with torch.no_grad():
for batch_index, (batch_input_ids, batch_attention_masks, batch_y) in enumerate(test_dataloader, start=1):
batch_input_ids_cuda = batch_input_ids.to(device)
batch_attention_masks_cuda = batch_attention_masks.to(device)
batch_y_cuda = batch_y.to(device)
loss, logits, _ = transformer_model(batch_input_ids_cuda, attention_mask=batch_attention_masks_cuda, labels=batch_y_cuda)
print("Input ids:", batch_input_ids_cuda)
print("Attention masks:", batch_attention_masks_cuda)
print("Logits:", logits)
```
Output of validation phase is:
```
Input ids: tensor([[ 0, 3655, 9, ..., 1, 1, 1],
[ 0, 31524, 347, ..., 1, 1, 1],
[ 0, 12806, 24220, ..., 1, 1, 1],
...,
[ 0, 8518, 7432, ..., 1, 1, 1],
[ 0, 15006, 23613, ..., 1, 1, 1],
[ 0, 14729, 13178, ..., 1, 1, 1]], device='cuda:0')
Attention masks: tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], device='cuda:0')
Logits: tensor([[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862],
[-1.3014, -0.7394, 0.7862]], device='cuda:0')
```
Why do all the logits have the exact same value for every input?
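One sanity check I can think of (a sketch; it assumes the head attribute is named `classification_head`, as in `BartForSequenceClassification`) is to verify that the head weights actually move during fine-tuning:

```python
import copy
import torch

# Snapshot the classification head before fine-tuning.
head_before = copy.deepcopy(transformer_model.classification_head.state_dict())

# ... run the training loop above ...

# After training, check whether at least one head parameter changed.
head_after = transformer_model.classification_head.state_dict()
changed = any(not torch.equal(head_before[k], head_after[k]) for k in head_before)
print("classification head updated:", changed)
```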
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62097267/same-logits-value-for-different-input-in-huggingfaces-transformer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4689/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4688/comments | https://api.github.com/repos/huggingface/transformers/issues/4688/events | https://github.com/huggingface/transformers/issues/4688 | 627,729,310 | MDU6SXNzdWU2Mjc3MjkzMTA= | 4,688 | Compressive Transformer | {
"login": "stvhuang",
"id": 15218222,
"node_id": "MDQ6VXNlcjE1MjE4MjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/15218222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stvhuang",
"html_url": "https://github.com/stvhuang",
"followers_url": "https://api.github.com/users/stvhuang/followers",
"following_url": "https://api.github.com/users/stvhuang/following{/other_user}",
"gists_url": "https://api.github.com/users/stvhuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stvhuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stvhuang/subscriptions",
"organizations_url": "https://api.github.com/users/stvhuang/orgs",
"repos_url": "https://api.github.com/users/stvhuang/repos",
"events_url": "https://api.github.com/users/stvhuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/stvhuang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Interested in model weights too but currently not available. Author does mention releasing tf code here:\r\n\r\nhttps://news.ycombinator.com/item?id=22290227\r\n\r\nRequires tf 1.15+ and deepmind/sonnet ver 1.36. Link to python script here:\r\n\r\nhttps://github.com/deepmind/sonnet/blob/cd5b5fa48e15e4d020f744968f5209949ebe750f/sonnet/python/modules/nets/transformer.py#L915\r\n\r\nHave tried running as-is but doesn't appear to have options for training on custom data as per the paper and available data sets.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,598 | 1,598 | NONE | null | # 🌟 New model addition
## Model description
<table>
<tr><th>Title</th><td>Compressive Transformers for Long-Range Sequence Modelling (ICLR '20)</td></tr>
<tr><th>arXiv</th><td><a href="https://arxiv.org/pdf/1911.05507.pdf">1911.05507</a></td></tr>
<tr><th>Blog</th><td><a href="https://deepmind.com/blog/article/A_new_model_and_dataset_for_long-range_memory">A new model and dataset for long-range memory</a></td></tr>
</table>
__Compressive Transformer__ is an attentive sequence model that __compresses past memories__ for long-range sequence learning. The idea is similar to [Transformer-XL](https://arxiv.org/pdf/1901.02860.pdf), but because the __Compressive Transformer__ compresses older memories instead of discarding them, it can leverage a longer past context than __Transformer-XL__.
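To make the idea concrete, here is a toy sketch of the compression step only (the paper studies several compression functions — pooling, 1D convolutions, etc.; the mean pooling below is one of its simplest baselines, not the full learned method):

```python
import torch

def compress_memories(old_mems: torch.Tensor, rate: int = 3) -> torch.Tensor:
    """Shrink a [seq_len, batch, hidden] block of old memories by `rate`
    using average pooling over the sequence axis."""
    usable = (old_mems.size(0) // rate) * rate
    blocks = old_mems[:usable].view(usable // rate, rate, *old_mems.shape[1:])
    return blocks.mean(dim=1)

mems = torch.randn(12, 2, 8)
print(compress_memories(mems).shape)  # torch.Size([4, 2, 8])
```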
## Open source status
- [ ] the model implementation is available
- [ ] the model weights are available
- [ ] who are the authors | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4688/reactions",
"total_count": 12,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/transformers/issues/4688/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4687/comments | https://api.github.com/repos/huggingface/transformers/issues/4687/events | https://github.com/huggingface/transformers/pull/4687 | 627,719,545 | MDExOlB1bGxSZXF1ZXN0NDI1NDU4MjY0 | 4,687 | Update HooshvareLab/bert-base-parsbert-uncased | {
"login": "m3hrdadfi",
"id": 2601833,
"node_id": "MDQ6VXNlcjI2MDE4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m3hrdadfi",
"html_url": "https://github.com/m3hrdadfi",
"followers_url": "https://api.github.com/users/m3hrdadfi/followers",
"following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}",
"gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions",
"organizations_url": "https://api.github.com/users/m3hrdadfi/orgs",
"repos_url": "https://api.github.com/users/m3hrdadfi/repos",
"events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/m3hrdadfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4687?src=pr&el=h1) Report\n> Merging [#4687](https://codecov.io/gh/huggingface/transformers/pull/4687?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0866669e751bef636fa693b704a28c1fea9a17f3&el=desc) will **decrease** coverage by `1.44%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4687?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4687 +/- ##\n==========================================\n- Coverage 77.14% 75.70% -1.45% \n==========================================\n Files 128 128 \n Lines 21070 21070 \n==========================================\n- Hits 16255 15950 -305 \n- Misses 4815 5120 +305 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4687?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.70% <0.00%> (-74.83%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `76.92% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <0.00%> (-6.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.94% <0.00%> (-2.71%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.77% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.29% <0.00%> (+0.35%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4687?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4687?src=pr&el=footer). 
Last update [0866669...d473be4](https://codecov.io/gh/huggingface/transformers/pull/4687?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | mBERT results added regarding NER datasets! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4687/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4687",
"html_url": "https://github.com/huggingface/transformers/pull/4687",
"diff_url": "https://github.com/huggingface/transformers/pull/4687.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4687.patch",
"merged_at": 1591014421000
} |
https://api.github.com/repos/huggingface/transformers/issues/4686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4686/comments | https://api.github.com/repos/huggingface/transformers/issues/4686/events | https://github.com/huggingface/transformers/pull/4686 | 627,703,213 | MDExOlB1bGxSZXF1ZXN0NDI1NDQ4OTg4 | 4,686 | [pipeline] Tokenizer should not add special tokens for text generation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4686?src=pr&el=h1) Report\n> Merging [#4686](https://codecov.io/gh/huggingface/transformers/pull/4686?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c17256447b91cf8483c856cb15e95ed30ace538&el=desc) will **increase** coverage by `0.24%`.\n> The diff coverage is `75.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4686?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4686 +/- ##\n==========================================\n+ Coverage 77.23% 77.47% +0.24% \n==========================================\n Files 128 128 \n Lines 21050 21051 +1 \n==========================================\n+ Hits 16257 16309 +52 \n+ Misses 4793 4742 -51 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4686?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.15% <75.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.17% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4686?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4686?src=pr&el=footer). Last update [9c17256...95cf209](https://codecov.io/gh/huggingface/transformers/pull/4686?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,591 | 1,591 | MEMBER | null | This PR fixes generation in pipelines for all models whose tokenizer adds special tokens to the input, *e.g.* XLNet.
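For illustration, a minimal reproduction of the failure mode (a sketch; the model id and prompt are arbitrary): XLNet's tokenizer appends `<sep>`/`<cls>` to every encoded input, so without suppressing special tokens the prompt fed to generation effectively ends in them.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="xlnet-base-cased")
# Before this fix, the encoded prompt ends in <sep> <cls>,
# which degrades the generated continuation.
print(generator("Today is a beautiful day and", max_length=40))
```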
This is good for now, but I think the `_parse_and_tokenize()` function needs a larger refactoring to allow more flexibility in the future; see also issue #4501. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4686/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4686",
"html_url": "https://github.com/huggingface/transformers/pull/4686",
"diff_url": "https://github.com/huggingface/transformers/pull/4686.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4686.patch",
"merged_at": 1591088627000
} |
https://api.github.com/repos/huggingface/transformers/issues/4685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4685/comments | https://api.github.com/repos/huggingface/transformers/issues/4685/events | https://github.com/huggingface/transformers/issues/4685 | 627,683,780 | MDU6SXNzdWU2Mjc2ODM3ODA= | 4,685 | AutoModel.from_config loads random parameter values. | {
"login": "gkutiel",
"id": 1332967,
"node_id": "MDQ6VXNlcjEzMzI5Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1332967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gkutiel",
"html_url": "https://github.com/gkutiel",
"followers_url": "https://api.github.com/users/gkutiel/followers",
"following_url": "https://api.github.com/users/gkutiel/following{/other_user}",
"gists_url": "https://api.github.com/users/gkutiel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gkutiel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gkutiel/subscriptions",
"organizations_url": "https://api.github.com/users/gkutiel/orgs",
"repos_url": "https://api.github.com/users/gkutiel/repos",
"events_url": "https://api.github.com/users/gkutiel/events{/privacy}",
"received_events_url": "https://api.github.com/users/gkutiel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"This is expected behaviour, but I understand your confusion.\r\n\r\n```python\r\nmodel_from_pretrained = AutoModel.from_pretrained(pretrained)\r\n```\r\n\r\nThis actually loads the pretrained weights. It looks up the mapping and locations of the config file and the weights, and loads both.\r\n\r\n```python\r\nmodel_from_config = AutoModel.from_config(AutoConfig.from_pretrained(pretrained))\r\n```\r\n\r\nHere, the pretrained weights are never requested. You request the pretrained _config_ (basically the pretraining settings for the architecture), and (randomly) initialise an AutoModel given that config - but the weights are never requested and, thus, never loaded. \r\n\r\nThis means that both initialised models will have the same architecture, the same config, but different weights. The former has pretrained weights, the latter is randomly initialised.\r\n\r\nI think that what you expected or wanted is actually this, which will load pretrained weights and taking into account a pretrained config (however, this is practically the same as the first option):\r\n\r\n```python\r\nmodel_from_config = AutoModel.from_pretrained(pretrained, config=AutoConfig.from_pretrained(pretrained))\r\n```\r\n\r\nHope that helps.",
"Thank you very much for the fast response. \r\nI think that the documentation is not clear enough about this difference, especially when there are pre-trained models such as `bert-base-uncased` and `bert-base-cased`, and there is the `AutoModelForPreTraining` class (that now I'm not sure what is for).\r\n\r\n\r\n",
"If I understand correctly, your confusion lies in \"well I called `.from_pretrained` so I would expect the model to have pretrained weights\". However, the distinction is that if you run .from_pretrained on Auto**Config** you are not loading weights but you are loading a pre-existing config file. Loading pre-existing weights can only be done in a **Model** by using its `from_pretrained` method. But I agree that this could be improved in the documentation. I'll reopen this, try to improve the documentation, and close the issue when it's done.\r\n\r\nThanks for bringing this to the attention!",
"Hi, we tried to make it clear in the documentation by specifying [it in the `PretrainedConfig` class](https://huggingface.co/transformers/main_classes/configuration.html#pretrainedconfig). \r\n\r\nI think we could add this note to `AutoConfig` as well, as I doubt users using `AutoConfig` read the documentation of `PretrainedConfig` as well.",
"Oh, sorry @BramVanroy I didn't see you assigned it to yourself. Do you want to add the documentation note? Maybe you have additional ideas of where it should be added?",
"I think that another place to mention this note is under the [from-config](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoModel.from_config) method.",
"You're right, it would be nice to specify it there as well!",
"> Oh, sorry @BramVanroy I didn't see you assigned it to yourself. Do you want to add the documentation note? Maybe you have additional ideas of where it should be added?\r\n\r\nOh, go ahead! You know the library better than I do so your judgement of where to add a note is better."
] | 1,590 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Model parameters are (apparently) randomly initialized when using `AutoModel.from_config`.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `git clone https://github.com/gkutiel/transformers-bug`
2. `cd transformers-bug`
3. `pipenv shell`
4. `pipenv install`
5. `python main.py`
```python
from transformers import (
AutoModel,
AutoConfig,
)
pretrained = 'bert-base-uncased'
model_from_pretrained = AutoModel.from_pretrained(pretrained)
model_from_config = AutoModel.from_config(AutoConfig.from_pretrained(pretrained))
model_from_pretrained_params = list(model_from_pretrained.parameters())
model_from_config_params = list(model_from_config.parameters())
assert len(model_from_pretrained_params) == len(model_from_config_params)
model_from_pretrained_first_param = model_from_pretrained_params[0][0][0]
model_from_config_first_param = model_from_config_params[0][0][0]
assert model_from_pretrained_first_param == model_from_config_first_param, (
f'{model_from_pretrained_first_param} != {model_from_config_first_param}'
)
```
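For reference, a minimal sketch of a cleaner comparison (variable names are illustrative; `torch.equal` sidesteps element-wise `==` on tensors):

```python
import torch
from transformers import AutoModel, AutoConfig

pretrained = 'bert-base-uncased'
model_pt = AutoModel.from_pretrained(pretrained)  # loads pretrained weights
model_cfg = AutoModel.from_config(AutoConfig.from_pretrained(pretrained))  # random init

# Compare the first parameter tensor of each model as a whole
first_pt = next(model_pt.parameters())
first_cfg = next(model_cfg.parameters())
print(torch.equal(first_pt, first_cfg))  # False here: the weights differ
```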
## Expected behavior
An assertion error should not happen.
## Environment info
- `transformers` version: 2.10.0
- Platform: MacOS
- Python version:3.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4685/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4684/comments | https://api.github.com/repos/huggingface/transformers/issues/4684/events | https://github.com/huggingface/transformers/pull/4684 | 627,681,291 | MDExOlB1bGxSZXF1ZXN0NDI1NDM1ODM2 | 4,684 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4684?src=pr&el=h1) Report\n> Merging [#4684](https://codecov.io/gh/huggingface/transformers/pull/4684?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0866669e751bef636fa693b704a28c1fea9a17f3&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4684?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4684 +/- ##\n=======================================\n Coverage 77.14% 77.15% \n=======================================\n Files 128 128 \n Lines 21070 21070 \n=======================================\n+ Hits 16255 16256 +1 \n+ Misses 4815 4814 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4684?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4684?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4684?src=pr&el=footer). Last update [0866669...e12a141](https://codecov.io/gh/huggingface/transformers/pull/4684?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4684/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4684",
"html_url": "https://github.com/huggingface/transformers/pull/4684",
"diff_url": "https://github.com/huggingface/transformers/pull/4684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4684.patch",
"merged_at": 1591004754000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4683/comments | https://api.github.com/repos/huggingface/transformers/issues/4683/events | https://github.com/huggingface/transformers/issues/4683 | 627,673,650 | MDU6SXNzdWU2Mjc2NzM2NTA= | 4,683 | when I encode [unused1], return not one token | {
"login": "jxyxiangyu",
"id": 65048993,
"node_id": "MDQ6VXNlcjY1MDQ4OTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/65048993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxyxiangyu",
"html_url": "https://github.com/jxyxiangyu",
"followers_url": "https://api.github.com/users/jxyxiangyu/followers",
"following_url": "https://api.github.com/users/jxyxiangyu/following{/other_user}",
"gists_url": "https://api.github.com/users/jxyxiangyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxyxiangyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxyxiangyu/subscriptions",
"organizations_url": "https://api.github.com/users/jxyxiangyu/orgs",
"repos_url": "https://api.github.com/users/jxyxiangyu/repos",
"events_url": "https://api.github.com/users/jxyxiangyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxyxiangyu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"I can reproduce this. cc @n1t0 @mfuntowicz Special \"unused\" tokens are not tokenised correctly. This happens in the fast tokenizers as well as the slow ones. See test case below.\r\n\r\n```python\r\nfrom transformers import BertTokenizer\r\nUSE_FAST = True\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\", use_fast=USE_FAST)\r\nprint('\"[unused1]\" in vocab?', \"[unused1]\" in tokenizer.vocab)\r\nprint('\"[unused1]\" index in vocab', tokenizer.vocab[\"[unused1]\"] if \"[unused1]\" in tokenizer.vocab else \"NA\")\r\n\r\nidxs = tokenizer.encode(\"[unused1]\", add_special_tokens=False)\r\nprint(\"indices\", idxs)\r\nrecoded = tokenizer.decode(idxs)\r\nprint(\"recoded\", recoded)\r\n```",
"Hi @jxyxiangyu, thanks for reporting this, thanks @BramVanroy to making a code to reproduce.\r\n\r\nSo far, the behavior you want to achieve needs to be done by deactivating the `do_basic_tokenize` feature on `BertTokenizer`, otherwise the input will be splitted on ponctuation chars before actually going through the wordpiece tokenizer.\r\n\r\n_I don't think we have an equivalent on the Rust implementation of Bert, let me check internally and get back to you on this point._\r\n\r\nHere a snippet of code which should achieve the desired behavior:\r\n\r\n```python\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\", do_basic_tokenize=False)\r\ntokenizer.tokenize(\"[unused1]\")\r\n\r\n>>> ['[unused1]']\r\n\r\ntokenizer.encode(\"[unused1]\", add_special_tokens=False)\r\n>>> [1]\r\n\r\ntokenizer.decode([1])\r\n>>> '[unused1]'\r\n```",
"Thanks for responding to my query. After I tried the method you gave, ‘[unused1]’ could indeed be tokenized correctly, but I want to use '[unused1]' to concatenate two words with little relation. In my opinion, may I set other words do_basic_tokenize as True, and '[unused1]' as False?",
"Hi @jxyxiangyu! Thank you @BramVanroy & @mfuntowicz for the help on this!\r\n\r\nI think in this case the easiest way to handle this, is by adding the tokens you plan to use as special tokens. After all, that's what they are. They are not added by default since only a handful of them are actually used so you need to do it manually with\r\n```python\r\ntokenizer.add_special_tokens({ \"additional_special_tokens\": [ \"[unused1]\" ] })\r\n```\r\n\r\nThen, it should work for both fast and slow tokenizers:\r\n```python\r\n>>> from transformers import AutoTokenizer\r\n\r\n>>> slow = AutoTokenizer.from_pretrained(\"bert-base-cased\", use_fast=False)\r\n>>> fast = AutoTokenizer.from_pretrained(\"bert-base-cased\", use_fast=True)\r\n\r\n>>> slow.add_special_tokens({ \"additional_special_tokens\": [ \"[unused1]\" ] })\r\n>>> fast.add_special_tokens({ \"additional_special_tokens\": [ \"[unused1]\" ] })\r\n\r\n>>> slow.encode(\"[unused1]\", add_special_tokens=False)\r\n[1]\r\n>>> fast.encode(\"[unused1]\", add_special_tokens=False)\r\n[1]\r\n```",
"Thank you very much for your response, which solved my confusion",
"Awesome! Closing the issue, do not hesitate to reopen if needed!",
"Hi @n1t0 , I had a related question. I want to re-register a token's id in the vocab so that I don't need to add a new token and expand the size of the vocab. Specifically, for example I want to register \"qazwsx\" to id 1, which means I want to replace \"[unused1]\" : 1 by \"qazwsx\" : 1. Do you know how to achieve this?\r\nAnother question is how to synchronize two tokens with the same id, to avoid expanding the size of vocab. For example, instead of replacing [unused1]\" : 1 by \"qazwsx\" : 1, I want to keep both in the new vocab.\r\n\r\nThank you so much for the help!"
] | 1,590 | 1,679 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using: `tokenizer.encode('[unused1]')`
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is: relation extraction
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Call `tokenizer.encode("[unused1]")` (a minimal repro sketch follows below).
2. It returns more than one token, whereas keras-bert returns only one token for the same input.
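A minimal sketch reproducing this (the `bert-base-cased` checkpoint is an assumption; any BERT vocab containing `[unused1]` should behave the same):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

# "[unused1]" is a single vocabulary entry (id 1), yet encode() splits it
ids = tokenizer.encode("[unused1]", add_special_tokens=False)
print(ids)                    # several ids instead of the single id 1
print(tokenizer.decode(ids))  # round-trips as split pieces
```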
## Expected behavior
Encoding `[unused1]` should return a single token id (1), since `[unused1]` is a single entry in the vocabulary.
## Environment info
- `transformers` version: latest version
- Platform:
- Python version: 3.7
- PyTorch version (GPU?): 1.1.0
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4683/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4683/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4682/comments | https://api.github.com/repos/huggingface/transformers/issues/4682/events | https://github.com/huggingface/transformers/issues/4682 | 627,671,646 | MDU6SXNzdWU2Mjc2NzE2NDY= | 4,682 | XLNet Generation appears to reference padding text in run_generation script | {
"login": "thesamuel",
"id": 6275391,
"node_id": "MDQ6VXNlcjYyNzUzOTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6275391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thesamuel",
"html_url": "https://github.com/thesamuel",
"followers_url": "https://api.github.com/users/thesamuel/followers",
"following_url": "https://api.github.com/users/thesamuel/following{/other_user}",
"gists_url": "https://api.github.com/users/thesamuel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thesamuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thesamuel/subscriptions",
"organizations_url": "https://api.github.com/users/thesamuel/orgs",
"repos_url": "https://api.github.com/users/thesamuel/repos",
"events_url": "https://api.github.com/users/thesamuel/events{/privacy}",
"received_events_url": "https://api.github.com/users/thesamuel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834059054,
"node_id": "MDU6TGFiZWwxODM0MDU5MDU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation",
"name": "Ex: Generation",
"color": "06EFF8",
"default": false,
"description": "Natural Language Generation"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @thesamuel, \r\n\r\nIdeally, the padding text should not influence the outcome, but this is more a hack to make XLNet work with short prompts, than actual science.\r\n\r\nAlso note that it is recommended now to use the TextGeneration Pipeline instead of the `run_generation` script: \r\n\r\n```\r\nfrom transformers import pipeline\r\ngenerator = pipeline(\"text-generation\", model=\"xlnet-base-cased\")\r\nprint(generator(\"We propose a \"))\r\n```\r\n**Note**: This works well for XLNet only after merging PR: #4686. So for pipelines to work, you either have to wait a bit or work on the branch of the PR: #4686. \r\n\r\nAs a default the pipeline employs sampling instead of greedy search. You might also want to play around with the generation hyperparameters here a bit for better results. To learn more about how to effectively use the many parameters for text generation, you might want to take a look at: https://huggingface.co/blog/how-to-generate\r\n"
] | 1,590 | 1,591 | 1,591 | NONE | null | When generating with XLNet in the `run_generation.py` script, the outputs seem to reference the context from the padding text. For instance, given the prompt "We propose a", XLNet generates "We propose a boy Go Ya Ya, a young Iriel Farg, to be named Rasputin."
This seems to reference the padding text:
https://github.com/huggingface/transformers/blob/0866669e751bef636fa693b704a28c1fea9a17f3/examples/text-generation/run_generation.py#L62-L71
From what I understand, this padding text should not influence the generation, since the padding ends with an end of sentence token. Is this behavior expected?
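For context, a hedged sketch of how such a padding prefix is typically prepended for XLNet (illustrative only; the filler string is a placeholder and the actual preprocessing lives in the script linked above):

```python
from transformers import XLNetLMHeadModel, XLNetTokenizer

# Placeholder filler; the real script uses a long passage ending in <eod> </s> <eos>
PADDING_TEXT = "A long filler passage goes here. <eod> </s> <eos>"
prompt = "We propose a"

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

# Short prompts get a long prefix so XLNet has enough context to attend to
input_ids = tokenizer.encode(PADDING_TEXT + prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=input_ids.shape[1] + 40, do_sample=True)
print(tokenizer.decode(output[0].tolist()[input_ids.shape[1]:]))  # strip the prefix
```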
Full command I used for reference:
```bash
python -m examples.run_generation --model_type xlnet --model_name_or_path xlnet-base-cased --prompt "We propose a"
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4682/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4681/comments | https://api.github.com/repos/huggingface/transformers/issues/4681/events | https://github.com/huggingface/transformers/pull/4681 | 627,602,575 | MDExOlB1bGxSZXF1ZXN0NDI1Mzc5NzAz | 4,681 | NER: Add new WNUT’17 example | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
},
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Looks great!",
"(I've relaunched the failing CI test that's unrelated)",
"@julien-c do you think rebasing onto latest master would fix that problem?"
] | 1,590 | 1,591 | 1,591 | COLLABORATOR | null | Hi,
this PR extends the NER example section, and adds an extra section for fine-tuning a NER model on the (difficult) WNUT’17 shared task:
> The WNUT’17 shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions.
> Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on
> them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms.
I also added my pre-processing script, which splits longer sentences into smaller ones once the maximum subtoken length is reached.
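The script itself isn't reproduced here; a minimal sketch of the splitting idea (the function name and limit are illustrative, and the actual script in this PR may differ):

```python
from transformers import AutoTokenizer

def split_long_sentences(tokens, tokenizer, max_subtokens=128):
    """Yield chunks of `tokens` whose subtoken count stays under the limit."""
    chunk, length = [], 0
    for token in tokens:
        n = len(tokenizer.tokenize(token))
        if chunk and length + n > max_subtokens:
            yield chunk
            chunk, length = [], 0
        chunk.append(token)
        length += n
    if chunk:
        yield chunk

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
for chunk in split_long_sentences("an emoji-heavy tweet goes here".split(), tokenizer, max_subtokens=4):
    print(chunk)
```
 | {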
"url": "https://api.github.com/repos/huggingface/transformers/issues/4681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4681/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4681",
"html_url": "https://github.com/huggingface/transformers/pull/4681",
"diff_url": "https://github.com/huggingface/transformers/pull/4681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4681.patch",
"merged_at": 1591312398000
} |
https://api.github.com/repos/huggingface/transformers/issues/4680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4680/comments | https://api.github.com/repos/huggingface/transformers/issues/4680/events | https://github.com/huggingface/transformers/pull/4680 | 627,585,410 | MDExOlB1bGxSZXF1ZXN0NDI1MzY1ODUy | 4,680 | [EncoderDecoder] Fix initialization and save/load bug | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4680?src=pr&el=h1) Report\n> Merging [#4680](https://codecov.io/gh/huggingface/transformers/pull/4680?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a801c7fd74f56a651ba43bfc93eba93c63e84766&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `75.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4680?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4680 +/- ##\n==========================================\n- Coverage 78.02% 78.01% -0.02% \n==========================================\n Files 124 124 \n Lines 20626 20634 +8 \n==========================================\n+ Hits 16094 16098 +4 \n- Misses 4532 4536 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4680?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4680/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <75.00%> (-3.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4680/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4680?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4680?src=pr&el=footer). Last update [a801c7f...a61742a](https://codecov.io/gh/huggingface/transformers/pull/4680?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@thomwolf @LysandreJik @sshleifer - merging for now to solve a bunch of issues. On a side-note, we have not really released the `EncoderDecoderModel` feature of `transformers` yet, or? Are we planning on doing something semi-official for this?",
">I will add an encoder decoder notebook in the next 1-2 weeks showing in-detail how Bert2Bert can be used with EncoderDecoder\r\n\r\n@patrickvonplaten \r\n\r\nI'm already working on a EncoderDecoder notebook for summarization task using kaggle news summery dataset. Hope to finish it in a week :)",
"> > I will add an encoder decoder notebook in the next 1-2 weeks showing in-detail how Bert2Bert can be used with EncoderDecoder\r\n> \r\n> @patrickvonplaten\r\n> \r\n> I'm already working on a EncoderDecoder notebook for summarization task using kaggle news summery dataset. Hope to finish it in a week :)\r\n\r\nThat's great news! Do you use a Bert2Bert implementation? ",
"Yes, I am using Bert2Bert",
"@patrickvonplaten I noticed that since this change was merged this issue happens #5826, is it possible I'm using the config api incorrectly or may be a real issue? \r\n\r\nSorry for tagging you, thanks in advance!",
"Hey @afcruzs - will answer on the issue :-) ",
"Hey, @patrickvonplaten , I wonder if there is any way to get the cross attention weights in the decoder from `EncoderDecoderModel` model. I looked at the document, it only will return the self-attention weights (`decoder_attentions `) in the decoder from `EncoderDecoderModel`? Thanks very much, it bothers me for a while.",
"Hey @kimmo1019 - could you please open a new issue about this? :-) "
] | 1,590 | 1,600 | 1,590 | MEMBER | null | This PR fixes two bugs:
- Cross attention layers were not initialized when instantiating the `EncoderDecoderModel` via `from_encoder_decoder_pretrained()`. Thanks go to https://github.com/huggingface/transformers/issues/4293 for finding this bug! A slow test is included in this PR to prevent future bugs of this kind.
- Saving / loading of pretrained models didn't work because, due to a missing `base_model_prefix` in the `EncoderDecoderModel` class, the weights were not correctly initialized from the `model_to_load` weights. Another slow test is included in this PR to prevent future bugs. I think this problem was mentioned in this issue: https://github.com/huggingface/transformers/issues/4517. I didn't rerun the code attached here: https://github.com/huggingface/transformers/issues/4517#issuecomment-636189365 but I'm quite positive that this is the bug that will be fixed in this PR. A minimal round-trip sketch follows below.
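A minimal round-trip sketch of the behaviour the new slow tests cover (checkpoint names and the save path are illustrative):

```python
from transformers import EncoderDecoderModel

# Cross-attention layers in the decoder are now initialized here as well
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

model.save_pretrained("./bert2bert")
reloaded = EncoderDecoderModel.from_pretrained("./bert2bert")  # weights now round-trip
```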
## IMPORTANT
*To everybody who has been training Bert2Bert using the EncoderDecoder framework: Training with the EncoderDecoderModel before this PR did not work because there were no cross attention layers to be trained if you initialized your `EncoderDecoderModel` using `.from_encoder_decoder_pretrained(...)`. - I'm very sorry for the wasted compute and energy! Training should work now. I will add an encoder decoder notebook in the next 1-2 weeks showing in detail how Bert2Bert can be used with `EncoderDecoder`*
This regards the Issues: #4445, #4647, #4517, #4443, #4293, #4640 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4680/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4680/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4680",
"html_url": "https://github.com/huggingface/transformers/pull/4680",
"diff_url": "https://github.com/huggingface/transformers/pull/4680.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4680.patch",
"merged_at": 1590794719000
} |
https://api.github.com/repos/huggingface/transformers/issues/4679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4679/comments | https://api.github.com/repos/huggingface/transformers/issues/4679/events | https://github.com/huggingface/transformers/issues/4679 | 627,495,658 | MDU6SXNzdWU2Mjc0OTU2NTg= | 4,679 | GPT-3 | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thanks, looks like a duplicate of #4658 ",
"Ups, haven't seen this on mobile.\r\nMy apologies"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | Paper: https://arxiv.org/pdf/2005.14165
GitHub: https://github.com/openai/gpt-3
Author: @8enmann | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4679/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4677/comments | https://api.github.com/repos/huggingface/transformers/issues/4677/events | https://github.com/huggingface/transformers/issues/4677 | 627,439,718 | MDU6SXNzdWU2Mjc0Mzk3MTg= | 4,677 | Documentation for non-nlp experts | {
"login": "falconair",
"id": 365542,
"node_id": "MDQ6VXNlcjM2NTU0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/365542?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/falconair",
"html_url": "https://github.com/falconair",
"followers_url": "https://api.github.com/users/falconair/followers",
"following_url": "https://api.github.com/users/falconair/following{/other_user}",
"gists_url": "https://api.github.com/users/falconair/gists{/gist_id}",
"starred_url": "https://api.github.com/users/falconair/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/falconair/subscriptions",
"organizations_url": "https://api.github.com/users/falconair/orgs",
"repos_url": "https://api.github.com/users/falconair/repos",
"events_url": "https://api.github.com/users/falconair/events{/privacy}",
"received_events_url": "https://api.github.com/users/falconair/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
}
] | [
"Since no one else answers this:\r\nCheck out the simpletransformers library, it's and wrapper of this one and there are some blog posts linked to examples.\r\nhttps://github.com/ThilinaRajapakse/simpletransformers\r\nThe author is very open to every idea and the work with him is pretty good.\r\nMaybe it will help you.\r\nFor most common tasks the distilroberta model gives good results and doesn't need as much computing power as longformer or t5 needs",
"Thanks for the link, very helpful. I’m also finding that if I know what I need to do and which model to use, the API docs do have well coded examples.",
"If you don't know which model to use you can even just checkout the model hub.\r\nFor example for question answering tasks just search for \"squad\" models, then you will find a lot of pretrained models for this task",
"> Since no one else answers this:\r\n\r\nI marked this question as a \"Good First Issue\". The idea is that we encourage people who are new to contributing to add their input. Particularly, you can add new examples that are very entry-level to explain the basic principles of the library.\r\n\r\n",
"@falconair Thanks for raising this issue. We think we can make things better here.\r\n\r\nIf any one is interested in helping out, let me know. In particular it would be helpful to have examples of (non-nlp) projects that do this well. We could also use some beta testers. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> @falconair Thanks for raising this issue. We think we can make things better here.\r\n> \r\n> If any one is interested in helping out, let me know. In particular it would be helpful to have examples of (non-nlp) projects that do this well. We could also use some beta testers.\r\n\r\nPing me if you need beta testers. I can test for Windows, Linux, Linux DDP. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,602 | 1,602 | NONE | null | # 🚀 Feature request
## Motivation
I want to use these models as an end-user, without having to read academic papers describing them.
I have been following deep learning in the field of computer vision, and had no idea that NLP had advanced SO much (for me, Word2vec is still state of the art).
At work I have large amounts of text: articles, transcripts from voice and chat, chatbot exchanges, etc. I would love to try out the functionality provided by the contributors here, but the documentation seems to assume one already knows which models to use.
Perhaps someone can write a “Modern NLP for technical managers” type post (or link to an existing one).
I’m excited by the immense amount of work done here, but will have to start all the way back with RNNs and work my way up to this stuff. Hoping to be able to use this stuff without that detour!
## Your contribution
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4677/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4677/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4676/comments | https://api.github.com/repos/huggingface/transformers/issues/4676/events | https://github.com/huggingface/transformers/pull/4676 | 627,428,641 | MDExOlB1bGxSZXF1ZXN0NDI1MjM2MzQ4 | 4,676 | Include `nlp` notebook for model evaluation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4676/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4676",
"html_url": "https://github.com/huggingface/transformers/pull/4676",
"diff_url": "https://github.com/huggingface/transformers/pull/4676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4676.patch",
"merged_at": 1590773937000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4675/comments | https://api.github.com/repos/huggingface/transformers/issues/4675/events | https://github.com/huggingface/transformers/issues/4675 | 627,420,440 | MDU6SXNzdWU2Mjc0MjA0NDA= | 4,675 | Gpt2 generation of text larger than 1024 | {
"login": "rautnikita77",
"id": 48254334,
"node_id": "MDQ6VXNlcjQ4MjU0MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/48254334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rautnikita77",
"html_url": "https://github.com/rautnikita77",
"followers_url": "https://api.github.com/users/rautnikita77/followers",
"following_url": "https://api.github.com/users/rautnikita77/following{/other_user}",
"gists_url": "https://api.github.com/users/rautnikita77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rautnikita77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rautnikita77/subscriptions",
"organizations_url": "https://api.github.com/users/rautnikita77/orgs",
"repos_url": "https://api.github.com/users/rautnikita77/repos",
"events_url": "https://api.github.com/users/rautnikita77/events{/privacy}",
"received_events_url": "https://api.github.com/users/rautnikita77/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834059054,
"node_id": "MDU6TGFiZWwxODM0MDU5MDU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation",
"name": "Ex: Generation",
"color": "06EFF8",
"default": false,
"description": "Natural Language Generation"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"After looking over `modeling_utils.generate` (the function used for generation by `run_generation.py`), I believe that the sliding window approach is not yet implemented.\r\n\r\nThis method was implemented for CTRL generation in their repo, so you may be able to adapt some of their code for your use case: https://github.com/salesforce/ctrl/blob/master/generation.py#L186-L189",
"The iterative approach to generation (using `past`) may work better because you can control the sliding window manually.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@patrickvonplaten @rautnikita77 @minimaxir Has anyone attempted to implement this (using cached keys and values)?"
] | 1,590 | 1,615 | 1,596 | NONE | null | # ❓ Questions & Help
## Details
I know the context window supported by GPT-2 is 1024 tokens, but I assume there's some technique they utilized to train and generate text longer than that in their results. Also, I saw many GPT-2-based repos training on text longer than 1024 tokens. But when I tried generating text longer than 1024, it throws a runtime error: "The size of tensor a (1025) must match the size of tensor b (1024) at non-singleton dimension 3". I have the following questions:
1) Shouldn't it be possible to generate longer text, since a sliding window is used?
2) Can you please explain what's necessary to generate longer text? What changes will I have to make to the `run_generation.py` code?
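One hedged sketch of a manual sliding window using the cached `past` (API as in transformers 2.x; this is an illustration of the idea, not what `run_generation.py` currently does):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

generated = tokenizer.encode("Once upon a time")
context = torch.tensor([generated])
past = None
with torch.no_grad():
    for _ in range(1100):  # more new tokens than the 1024-token window
        logits, past = model(context, past=past)
        token = torch.argmax(logits[0, -1, :])
        generated.append(token.item())
        context = token.view(1, 1)
        # Naive sliding window: once the cache fills the window,
        # drop it and re-encode the most recent tokens as fresh context.
        if past[0].shape[-2] >= 1023:
            past = None
            context = torch.tensor([generated[-512:]])
print(tokenizer.decode(generated))
```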
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4675/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4674/comments | https://api.github.com/repos/huggingface/transformers/issues/4674/events | https://github.com/huggingface/transformers/issues/4674 | 627,417,753 | MDU6SXNzdWU2Mjc0MTc3NTM= | 4,674 | KeyError in Camembert in QuestionAnsweringPipeline | {
"login": "ierezell",
"id": 30974685,
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ierezell",
"html_url": "https://github.com/ierezell",
"followers_url": "https://api.github.com/users/ierezell/followers",
"following_url": "https://api.github.com/users/ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ierezell/subscriptions",
"organizations_url": "https://api.github.com/users/ierezell/orgs",
"repos_url": "https://api.github.com/users/ierezell/repos",
"events_url": "https://api.github.com/users/ierezell/events{/privacy}",
"received_events_url": "https://api.github.com/users/ierezell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052333,
"node_id": "MDU6TGFiZWwxODM0MDUyMzMz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Question%20Answering",
"name": "Ex: Question Answering",
"color": "86FFCF",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I can't reproduce on master. Do you mind specifying the context/question?",
"Because you couldn't reproduce, I tried with the latest version (from git : 2.11.0) and the problem seems gone (tested with random articles from wikipedia). \r\n\r\nMaybe my first text had bad format or non utf-8 characters (but I remember to have tested with many differents inputs before openning an issue) or it was due to a bug fixed in 2.11\r\n\r\nSorry to have bothered you, thanks for the support ! \r\n",
"Alright, no worries! Let me know if you have an issue down the road.",
"@LysandreJik \r\nI got the exact same error using Camembert (\"illuin/camembert-large-fquad\") and the question answering pipeline.\r\n\r\nOpening a new issue.\r\n\r\nQuestions :\r\nLe loyer est-il révisé annuellement ou triennalemment ?\r\nQuel est la nature de l’indice de base ?\r\nLe bail est-il soumis à TVA ou non soumis à TVA ?\r\n\r\nContext : \r\n[context_mono.txt](https://github.com/huggingface/transformers/files/4752269/context_mono.txt)\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"qa.py\", line 164, in <module>\r\n main_file()\r\n File \"qa.py\", line 161, in main_file\r\n analayse(mode)\r\n File \"qa.py\", line 85, in analayse\r\n answer_C = nlp_camembert_gpu_f(question=question_text, context=context)\r\n File \"/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py\", line 1229, in __call__\r\n for s, e, score in zip(starts, ends, scores)\r\n File \"/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py\", line 1229, in <listcomp>\r\n for s, e, score in zip(starts, ends, scores)\r\nKeyError: 377\r\n```"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
`Camembert ("illuin/camembert-large-fquad")`
Language I am using the model on (English, Chinese ...):
`French`
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Load model and create a question answering pipeline
Steps to reproduce the behavior:
1.
```
from transformers import (QuestionAnsweringPipeline, CamembertForQuestionAnswering, CamembertModel, CamembertTokenizer)
```
2.
```
QA_model = "illuin/camembert-large-fquad"
CamTokQA = CamembertTokenizer.from_pretrained(QA_model)
CamQA = CamembertForQuestionAnswering.from_pretrained(QA_model)
```
3.
```
import torch  # needed for the CUDA check below

device_pipeline = 0 if torch.cuda.is_available() else -1
q_a_pipeline = QuestionAnsweringPipeline(model=CamQA,
tokenizer=CamTokQA,
device=device_pipeline)
```
4.
```
# `question` and `ctx` hold the question and context strings to query
res = q_a_pipeline({'question': question, 'context': ctx})
```
```
File "/mnt/Documents/Projets/BotPress/R_D/R_D_q_a/sdk/readers.py", line 15, in get_answers
res = q_a_pipeline({'question': question, 'context': ctx})
File "/home/pedro/.local/lib/python3.8/site-packages/transformers/pipelines.py", line 1213, in __call__
answers += [
File "/home/pedro/.local/lib/python3.8/site-packages/transformers/pipelines.py", line 1216, in <listcomp>
"start": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(),
KeyError: 339
```
## Expected behavior
Get an answer from the QA pipeline.
*This works on transformers version 2.8.0* (but not on later versions).
## Environment info
- `transformers` version: 2.10.0
- Platform: Linux-5.6.14-arch1-1-x86_64-with-glibc2.2.5
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.2.0-rc4 (False)
- Using GPU in script?: Yes (but same problem with cpu only)
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4674/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4673/comments | https://api.github.com/repos/huggingface/transformers/issues/4673/events | https://github.com/huggingface/transformers/issues/4673 | 627,286,164 | MDU6SXNzdWU2MjcyODYxNjQ= | 4,673 | QUESTION: How do I know what type of positional encoding to input during fine-tuning or pretrained BERT? | {
"login": "abhisheknovoic",
"id": 62595485,
"node_id": "MDQ6VXNlcjYyNTk1NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/62595485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhisheknovoic",
"html_url": "https://github.com/abhisheknovoic",
"followers_url": "https://api.github.com/users/abhisheknovoic/followers",
"following_url": "https://api.github.com/users/abhisheknovoic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhisheknovoic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhisheknovoic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhisheknovoic/subscriptions",
"organizations_url": "https://api.github.com/users/abhisheknovoic/orgs",
"repos_url": "https://api.github.com/users/abhisheknovoic/repos",
"events_url": "https://api.github.com/users/abhisheknovoic/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhisheknovoic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! If you're new to the library, I heavily recommend taking a look at the [glossary (position IDs in this case)](https://huggingface.co/transformers/glossary.html#position-ids), which explains how to use such inputs.\r\n\r\nIf you ignore the `position_ids`, then they're always automatically generated to be the same as the model's pre-training scheme. If you're fine-tuning and wish to keep the same position embeddings, then you don't need to pass them to the model.",
"THanks @LysandreJik , your second statement pretty much answered it completely! Thanks",
"@LysandreJik Hi, does that mean the position embeddings won't get updated if `position_ids` are not passed to the model (or with the default value of `None`)? Would you point me to the related lines in the code that implements this logic? Thanks!",
"This means that the position IDs will be generated on the fly, and the position embeddings will be exactly the same than during the pre-training. You can check the code [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L192-L214)."
] | 1,590 | 1,598 | 1,590 | NONE | null | Hello HuggingFace team,
I am familiarizing myself with the HuggingFace tutorials and understand the functionality of the various methods. However, I have a general question about using models like BERT.
Considering that I am doing sentiment classification and I want to fine-tune the whole BERT based on pre-trained weights, how do I know what should be the positional encoding as input during the `forward ( )` method?
I know it has a default value of None, but doesn't it mean that during fine-tuning I need to input the same value that it was originally trained on in the first place? If so, how do I know what it was trained on during its original training from scratch? Is there a documentation for that somewhere?
Following to this, if I am freezing the weights of BERT for sentiment classification, I again have the same question on what should be my positional encoding input to the `forward ( )` method.
Please clarify this. Thanks for your time!
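For concreteness, here is a minimal sketch (not part of the original question) of the behavior described in the answers below: if `position_ids` is omitted, BERT generates positions `0..seq_len-1` on the fly, which match the absolute position embeddings learned during pre-training, so passing the same range explicitly is equivalent. The example sentence is arbitrary.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

input_ids = tokenizer.encode("Great movie!", return_tensors="pt")

# Default: position_ids=None, so the model builds 0..seq_len-1 internally.
outputs_default = model(input_ids)

# Equivalent explicit form: positions 0..seq_len-1.
position_ids = torch.arange(input_ids.size(1)).unsqueeze(0)
outputs_explicit = model(input_ids, position_ids=position_ids)
```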
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4673/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4672/comments | https://api.github.com/repos/huggingface/transformers/issues/4672/events | https://github.com/huggingface/transformers/pull/4672 | 627,268,550 | MDExOlB1bGxSZXF1ZXN0NDI1MTAzMjI3 | 4,672 | [Longformer] Better handling of global attention mask vs local attention mask | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Regarding the Multiple Choice `global_attention_mask`. Discussion taken from PR: https://github.com/huggingface/transformers/pull/4645#issuecomment-635429380\r\n\r\n> @patil-suraj, we can leave it to the user, or we can just do as you suggested earlier, put global attention on the question and all choices, which should work.\r\n> \r\n> @patrickvonplaten, what do you think?\r\n\r\nRegarding the multiple choice, I think we usually have the following tensor:\r\n```\r\n[\r\n[ context, choice_a],\r\n[ context, choice_b],\r\n[ context, choice_c], \r\n...\r\n]\r\n```\r\n\r\nsee here: https://github.com/huggingface/transformers/blob/9c17256447b91cf8483c856cb15e95ed30ace538/examples/multiple-choice/utils_multiple_choice.py#L529\r\n\r\nSo I'd suggest if no `global_attention_mask` is provided by the user, we initialize the `global_attention_mask` so that all choice contexts do global attention. If the user wants a different global attention he has now the possibility to define it himself.\r\n\r\n@ibeltagy ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4672?src=pr&el=h1) Report\n> Merging [#4672](https://codecov.io/gh/huggingface/transformers/pull/4672?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c17256447b91cf8483c856cb15e95ed30ace538&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `41.37%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4672?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4672 +/- ##\n==========================================\n- Coverage 77.23% 77.17% -0.06% \n==========================================\n Files 128 128 \n Lines 21050 21060 +10 \n==========================================\n- Hits 16257 16253 -4 \n- Misses 4793 4807 +14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4672?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4672/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.52% <41.37%> (-3.53%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4672/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4672?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4672?src=pr&el=footer). Last update [9c17256...9ad5319](https://codecov.io/gh/huggingface/transformers/pull/4672?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok, checked the notebook: https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing and results are the same as before so no breaking changes. \r\n\r\nGood to merge for me!",
"> Ok, checked the notebook: https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing and results are the same as before so no breaking changes.\r\n> \r\n> Good to merge for me!\r\n\r\nThis is great !\r\n\r\nJust noticed one typo in the first line\r\n> This notebook shows how `nlp` can be leveraged `nlp` to evaluate Longformer on TriviaQA "
] | 1,590 | 1,590 | 1,590 | MEMBER | null | This PR extends Longformer's API to also take a `global_attention_mask` besides the usual `attention_mask` as an input as discussed with @thomwolf @ibeltagy.
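A minimal usage sketch of the new argument (illustrative, not taken from the PR diff; putting global attention on the `<s>` token is a typical choice, e.g. for classification):

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

input_ids = tokenizer.encode("Hello world!", return_tensors="pt")

attention_mask = torch.ones(input_ids.shape, dtype=torch.long)  # local attention everywhere
global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long)
global_attention_mask[:, 0] = 1  # global attention on the <s> token

outputs = model(
    input_ids,
    attention_mask=attention_mask,
    global_attention_mask=global_attention_mask,
)
```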
Docs are updated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4672/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4672/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4672",
"html_url": "https://github.com/huggingface/transformers/pull/4672",
"diff_url": "https://github.com/huggingface/transformers/pull/4672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4672.patch",
"merged_at": 1590767923000
} |
https://api.github.com/repos/huggingface/transformers/issues/4671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4671/comments | https://api.github.com/repos/huggingface/transformers/issues/4671/events | https://github.com/huggingface/transformers/issues/4671 | 627,246,152 | MDU6SXNzdWU2MjcyNDYxNTI= | 4,671 | get_from_cache in file_utils.py gobbles up error in making url requests | {
"login": "SinghB",
"id": 1682926,
"node_id": "MDQ6VXNlcjE2ODI5MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1682926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SinghB",
"html_url": "https://github.com/SinghB",
"followers_url": "https://api.github.com/users/SinghB/followers",
"following_url": "https://api.github.com/users/SinghB/following{/other_user}",
"gists_url": "https://api.github.com/users/SinghB/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SinghB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SinghB/subscriptions",
"organizations_url": "https://api.github.com/users/SinghB/orgs",
"repos_url": "https://api.github.com/users/SinghB/repos",
"events_url": "https://api.github.com/users/SinghB/events{/privacy}",
"received_events_url": "https://api.github.com/users/SinghB/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Ran into this today, this is a major issue behind corporate proxy/CA self signed certs. @SinghB maybe you can reopen this, I'm not sure who to `@mention`?\r\n\r\n```python\r\nget_from_cache(\r\n # ...\r\n etag = None\r\n if not local_files_only:\r\n try:\r\n response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)\r\n if response.status_code == 200:\r\n etag = response.headers.get(\"ETag\")\r\n except (EnvironmentError, requests.exceptions.Timeout):\r\n # etag is already None\r\n # THIS ALSO SWALLOWS ALL OTHER NON \"404\" ERRORS (e.g. SSL, Proxy, etc.)\r\n pass\r\n # ....\r\n```",
"Yes I want to track and solve that issue in the next couple of weeks.",
"+1 from me.\r\n\r\nCurrently I get this error message:\r\n\r\n```\r\nOSError: Can't load weights for 'bert-base-uncased'. Make sure that:\r\n\r\n- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\r\n```\r\n\r\nBut if I add a `raise` after the `# etag is already None` comment in stadelmanma's snippet, I see:\r\n\r\n```\r\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='cdn.huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased-pytorch_model.bin (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)')))\r\n```\r\n\r\nThis leads to confusing debugging... the behavior of this error shouldn't be to indicate to the user that models like `bert-based-uncased` don't exist.",
"Hi @julien-c , I'm having the same issue as @stadelmanma (I'm behind a coporate proxy as well)\r\n\r\n``` File \"/home/USER/anaconda3/envs/codebert/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 376, in from_pretrained\r\n config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/USER/anaconda3/envs/codebert/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 436, in get_config_dict\r\n raise EnvironmentError(msg)\r\nOSError: Can't load config for 'microsoft/deberta-base'. Make sure that:\r\n\r\n- 'microsoft/deberta-base' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'microsoft/deberta-base' is the correct path to a directory containing a config.json file\r\n```\r\nI even tried setting the proxy in `proxies`: \r\n```config = config_class.from_pretrained(args.config_name, proxies={'http://': '<HOST>:<PORT>'})```\r\n\r\nBut same thing happens. \r\nMaybe there's a workaround for this?\r\n\r\nMany thanks!"
] | 1,590 | 1,614 | 1,605 | NONE | null | # 🐛 Bug
The package surfaces no information about the SSL error it encounters, which makes it difficult to troubleshoot or find a workaround.
## Information
When trying to do:
`TFAutoModelWithLMHead.from_pretrained("t5-small")`
I get the error:
`TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType`
The above is the result of an SSL error encountered while trying to fetch the model; however, because the exception handling in file_utils.py swallows it, I only find out about it when I debug.
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
https://huggingface.co/transformers/usage.html#summarization
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x] my own task or dataset: (give details below)
Just getting familiar with transformers for summarization
## To reproduce
You need a machine with an expired certificate for proxy etc.
Steps to reproduce the behavior:
1. See information above
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
If there has been an issue fetching the pre-trained model from the S3 bucket etc., I should get an error to that effect.
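For illustration, a hedged sketch of how the relevant fragment of `get_from_cache` could surface certificate failures instead of silently leaving `etag = None` (adapted from the snippet quoted in the comments above; `url`, `proxies`, and `etag_timeout` are variables of the enclosing function, and the logging plus re-raise is a suggestion, not the library's actual behavior):

```python
import logging
import requests

logger = logging.getLogger(__name__)

try:
    response = requests.head(url, allow_redirects=True, proxies=proxies, timeout=etag_timeout)
    if response.status_code == 200:
        etag = response.headers.get("ETag")
except requests.exceptions.SSLError:
    # Surface certificate/proxy problems instead of swallowing them.
    logger.exception("SSL error while reaching %s", url)
    raise
except (EnvironmentError, requests.exceptions.Timeout):
    # etag stays None; the caller falls back to the local cache.
    pass
```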
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.10.0
- Platform: Windows 10
- Python version: 3.7.4
- PyTorch version (GPU?): NA
- Tensorflow version (GPU?): 2.0.0 (Yes)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4671/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4670/comments | https://api.github.com/repos/huggingface/transformers/issues/4670/events | https://github.com/huggingface/transformers/issues/4670 | 627,239,398 | MDU6SXNzdWU2MjcyMzkzOTg= | 4,670 | Conversion between tokenizers | {
"login": "imbalu007",
"id": 629329,
"node_id": "MDQ6VXNlcjYyOTMyOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/629329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imbalu007",
"html_url": "https://github.com/imbalu007",
"followers_url": "https://api.github.com/users/imbalu007/followers",
"following_url": "https://api.github.com/users/imbalu007/following{/other_user}",
"gists_url": "https://api.github.com/users/imbalu007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imbalu007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imbalu007/subscriptions",
"organizations_url": "https://api.github.com/users/imbalu007/orgs",
"repos_url": "https://api.github.com/users/imbalu007/repos",
"events_url": "https://api.github.com/users/imbalu007/events{/privacy}",
"received_events_url": "https://api.github.com/users/imbalu007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"GPT-2 and BERT have very different tokenization mechanism. What do you mean by \"convert between the tokenizers\"? What do you want to do?",
"I want to use Bert as encoder and GPT2 as decoder. Then I want to evaluate the generated text with another Bert as discriminator (like the technique [here](https://arxiv.org/pdf/1703.00955.pdf)). I want the decoder to generate text just based on the context vector (refer to the above link). So I don't think I can use EncoderDecoderModel (am I right?)"
] | 1,590 | 1,590 | 1,590 | NONE | null | I want to convert GPT-2 tokens to BERT tokens. Is there any API that can directly convert between the tokenizers? (A hedged workaround sketch follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4670/timeline | completed | null | null |
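For the conversion question in the record above: there is no direct token-to-token API, since GPT-2 (byte-level BPE) and BERT (WordPiece) use unrelated vocabularies. A common workaround, sketched here under the assumption of the standard pretrained checkpoints, is to decode back to text and re-encode; the token counts will generally differ.

```python
from transformers import BertTokenizer, GPT2Tokenizer

gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

gpt2_ids = gpt2_tokenizer.encode("Hello, how are you?")

# Round-trip through plain text, since the vocabularies are unrelated.
text = gpt2_tokenizer.decode(gpt2_ids)
bert_ids = bert_tokenizer.encode(text)
```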
https://api.github.com/repos/huggingface/transformers/issues/4669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4669/comments | https://api.github.com/repos/huggingface/transformers/issues/4669/events | https://github.com/huggingface/transformers/issues/4669 | 627,225,434 | MDU6SXNzdWU2MjcyMjU0MzQ= | 4,669 | Cannot load labels from old models | {
"login": "GuillemGSubies",
"id": 37592763,
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillemGSubies",
"html_url": "https://github.com/GuillemGSubies",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"I think I see where that could be a problem, indeed. Do you mind sharing your model configuration file so that I may take a closer look?",
"Here it is:\r\n\r\n```\r\n{\r\n \"architectures\": [\r\n \"BertForTokenClassification\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.3,\r\n \"bos_token_id\": 0,\r\n \"do_sample\": false,\r\n \"eos_token_ids\": 0,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.3,\r\n \"hidden_size\": 768,\r\n \"id2label\": {\r\n \"0\": \"I-MISC\",\r\n \"1\": \"B-MISC\",\r\n \"2\": \"O\",\r\n \"3\": \"I-LOC\",\r\n \"4\": \"I-ORG\",\r\n \"5\": \"B-LOC\",\r\n \"6\": \"B-ORG\",\r\n \"7\": \"I-PER\",\r\n \"8\": \"B-PER\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"label2id\": {\r\n \"I-MISC\": 0,\r\n \"B-MISC\": 1,\r\n \"O\": 2,\r\n \"I-LOC\": 3,\r\n \"I-ORG\": 4,\r\n \"B-LOC\": 5,\r\n \"B-ORG\": 6,\r\n \"I-PER\": 7,\r\n \"B-PER\": 8\r\n },\r\n \"layer_norm_eps\": 1e-12,\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_beams\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 9,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"pruned_heads\": {},\r\n \"repetition_penalty\": 1.0,\r\n \"temperature\": 1.0,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 28996\r\n}\r\n```\r\n",
"@GuillemGSubies Thanks for reporting, I can reproduce this issue.\r\n\r\nTo fix it on your side while we push a fix, you can just remove the `num_labels` attribute from your config.json, it's not needed anymore. Let me know if this solves your issue.",
"Thanks you very much. I will do that :heart: ",
"Should also be fixed on master by 751a1e08904fda197366e4b0033bdfb8b10d256c"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | # ❓ Questions & Help
If I load a model saved with 2.8 or older in 2.9 or newer, my model's labels are changed automatically, so all the tests in my code start to fail: instead of predicting `I-PER` it predicts `LABEL_2`.
After reading the source code I think I found what happens, but I'm not quite sure. I think it all started in #3967:
https://github.com/huggingface/transformers/blob/e7cfc1a313cc928e962bb8699868f5dcf46f11eb/src/transformers/configuration_utils.py#L123
In the code above you can see that the label dicts are modified when `num_labels` is set. I couldn't find the place in the code where that is done, but it definitely modifies two attributes of the class. I don't really think that a setter for attribute A should change attributes B and C.
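A minimal sketch of the reported side effect (the model path is a placeholder, and the exact behavior depends on the version of the setter):

```python
from transformers import BertConfig

config = BertConfig.from_pretrained("path/to/old-ner-model")
print(config.id2label)  # expected: {0: "I-MISC", 1: "B-MISC", ...}

# Assigning num_labels (which happens when an old config.json still
# contains the key) regenerates both label maps with generic names:
config.num_labels = 9
print(config.id2label)  # {0: "LABEL_0", 1: "LABEL_1", ...}
```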
Am I missing something when loading my models?
Thank you so much for reading, and for the library; we all love it <3
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4669/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4668/comments | https://api.github.com/repos/huggingface/transformers/issues/4668/events | https://github.com/huggingface/transformers/issues/4668 | 627,199,787 | MDU6SXNzdWU2MjcxOTk3ODc= | 4,668 | Colab crashes due to tcmalloc large allocation | {
"login": "karndeb",
"id": 36044259,
"node_id": "MDQ6VXNlcjM2MDQ0MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/36044259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karndeb",
"html_url": "https://github.com/karndeb",
"followers_url": "https://api.github.com/users/karndeb/followers",
"following_url": "https://api.github.com/users/karndeb/following{/other_user}",
"gists_url": "https://api.github.com/users/karndeb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karndeb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karndeb/subscriptions",
"organizations_url": "https://api.github.com/users/karndeb/orgs",
"repos_url": "https://api.github.com/users/karndeb/repos",
"events_url": "https://api.github.com/users/karndeb/events{/privacy}",
"received_events_url": "https://api.github.com/users/karndeb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! This is indeed a memory error. At which point does it crash?",
"A similar problem like this has been reported when using the Lazy version of LinebyLineTextDataset. Colab deals badly with situation where you are using 90+% of memory - it'll kick you out or throw OOM errors - which you would not get on local machines. This is unfortunate and hard to get around.\r\n\r\nIn this case, I think you are simply running out of memory. The newsroom dataset is huge (1M+ news _articles_). So that is likely the issue.",
"@LysandreJik It crashes when suddenly the ram usage increases to around 7-8 GB and the increase is also very sudden. Its like it stays at 2-3 GB usage for a minute or so and then suddenly it shoots to 8GB and crashes.\r\n@BramVanroy I tried reducing the dataset by half and running it but I am still getting the same error. So would you suggest running it on local machine?I will have to run this part on local as on my local I have a little better ram (16GB ) but then I will have to train in colab only as I dont have a GPU on my local laptop. Is there a better workaround\r\nAlso thanks guys for giving such a quick answer.",
"The sudden increase in RAM may be due to a sudden very large sentence/text which results in the whole batch having to be very large, exponentially increasing the memory usage. \r\n\r\nHow large is the dataset in terms of GB/number of lines?\r\n\r\nUnfortunately sometimes you cannot do what you would want due to practical restrictions (mostly money). If you want to train or finetune a model with a huge dataset, it is likely that you need more hardware than is available in free plans. \r\n\r\nPerhaps you can try out https://github.com/huggingface/nlp and find out if it has the dataset that you need. If not you can open an issue there and ask whether the dataset can be included. That should solve some issues since it takes into account RAM issues.",
"@BramVanroy I have tried with 26GB ram, but it still crashes, is there any minimum requirement of hardware mentioned?",
"No. I fear that this might simply not work on Colab. Line cache loads as much of the file as it can in memory and goes from there but Colab is being annoying and locks you out because it thinks you are going to throw an OOM error (but you won't on a regular system). ",
"@BramVanroy , I am using \"notebook\" platform from google AI platforms with 26GB ram, (without GPU) but after running 2% for the very 1st epoch, it says : \r\n\r\n**can't allocate memory: you tried to allocate 268435456 bytes. Error code 12 (Cannot allocate memory)** . \r\n\r\nAm I doing something wrong?",
"Just read my previous post. This is a problem about how Google deals with increasing memory usage and thinks an OOM will occur even though it won't. The problem is not with the implementation. It seems that you cannot. Use this functionality in these kinds of VMs. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Could anybody find a solution for the issue?"
] | 1,590 | 1,625 | 1,598 | NONE | null | I am pretraining a RoBERTa model on the Newsroom dataset on colab. I have trained a custom tokenizer on the text data. I am using the Text Dataset LinebyLineTextDataset as I have a single file and each line is the text of a news article. The colab crashes when I run this code
```python
%%time
from transformers import LineByLineTextDataset

dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="/content/drive/My Drive/Newsroom Dataset/newsroom-firsthalf.txt",
    block_size=128,
)
```
I tried with the full dataset and reduced it to half and have also tried it by reducing the block size.
The config is:
```python
config = RobertaConfig(
    vocab_size=52000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
```
and the error log is:
```
tcmalloc: large alloc 7267041280 bytes == 0x9d916000 @ 0x7f3ea16311e7 0x5aca9b 0x4bb106 0x5bcf53 0x50a2bf 0x50bfb4 0x507d64 0x509042 0x594931 0x549e5f 0x5513d1 0x5a9cbc 0x50a5c3 0x50cd96 0x507d64 0x516345 0x50a2bf 0x50bfb4 0x507d64 0x588d41 0x59fc4e 0x50d356 0x507d64 0x509a90 0x50a48d 0x50bfb4 0x507d64 0x509a90 0x50a48d 0x50bfb4 0x509758
```
Additional notes: the "increase RAM" prompt does not appear when Colab crashes, so I am essentially working with 12.72 GB of RAM.
Please help me.
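One possible workaround, sketched below and not taken from this thread's resolution, is to avoid materializing every encoded line up front and tokenize lazily instead. The class is illustrative only; truncation keyword arguments vary across transformers versions, and `linecache` itself still caches pages in memory.

```python
import linecache

import torch
from torch.utils.data import Dataset

class LazyLineByLineDataset(Dataset):
    """Illustrative sketch: tokenize one line at a time instead of the whole file."""

    def __init__(self, tokenizer, file_path, block_size):
        self.tokenizer = tokenizer
        self.file_path = file_path
        self.block_size = block_size
        with open(file_path, encoding="utf-8") as f:
            self.num_lines = sum(1 for _ in f)

    def __len__(self):
        return self.num_lines

    def __getitem__(self, idx):
        # linecache is 1-indexed; tokenize only the requested line.
        line = linecache.getline(self.file_path, idx + 1).strip()
        ids = self.tokenizer.encode(line, add_special_tokens=True, max_length=self.block_size)
        return torch.tensor(ids, dtype=torch.long)
```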
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4668/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4668/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4667/comments | https://api.github.com/repos/huggingface/transformers/issues/4667/events | https://github.com/huggingface/transformers/pull/4667 | 627,139,546 | MDExOlB1bGxSZXF1ZXN0NDI0OTk4NjIz | 4,667 | HooshvareLab readme parsbert-peymaner | {
"login": "m3hrdadfi",
"id": 2601833,
"node_id": "MDQ6VXNlcjI2MDE4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m3hrdadfi",
"html_url": "https://github.com/m3hrdadfi",
"followers_url": "https://api.github.com/users/m3hrdadfi/followers",
"following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}",
"gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions",
"organizations_url": "https://api.github.com/users/m3hrdadfi/orgs",
"repos_url": "https://api.github.com/users/m3hrdadfi/repos",
"events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/m3hrdadfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | Readme for HooshvareLab/bert-base-parsbert-peymaner-uncased | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4667/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4667",
"html_url": "https://github.com/huggingface/transformers/pull/4667",
"diff_url": "https://github.com/huggingface/transformers/pull/4667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4667.patch",
"merged_at": 1591014506000
} |
https://api.github.com/repos/huggingface/transformers/issues/4666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4666/comments | https://api.github.com/repos/huggingface/transformers/issues/4666/events | https://github.com/huggingface/transformers/pull/4666 | 627,135,859 | MDExOlB1bGxSZXF1ZXN0NDI0OTk1NjIx | 4,666 | HooshvareLab readme parsbert-armananer | {
"login": "m3hrdadfi",
"id": 2601833,
"node_id": "MDQ6VXNlcjI2MDE4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m3hrdadfi",
"html_url": "https://github.com/m3hrdadfi",
"followers_url": "https://api.github.com/users/m3hrdadfi/followers",
"following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}",
"gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions",
"organizations_url": "https://api.github.com/users/m3hrdadfi/orgs",
"repos_url": "https://api.github.com/users/m3hrdadfi/repos",
"events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/m3hrdadfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4666?src=pr&el=h1) Report\n> Merging [#4666](https://codecov.io/gh/huggingface/transformers/pull/4666?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b5015a2a0f4ea63035a877f5626cb0c3ce97e25d&el=desc) will **increase** coverage by `0.63%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4666?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4666 +/- ##\n==========================================\n+ Coverage 77.19% 77.83% +0.63% \n==========================================\n Files 128 128 \n Lines 21021 21021 \n==========================================\n+ Hits 16228 16362 +134 \n+ Misses 4793 4659 -134 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4666?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.17% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4666?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4666?src=pr&el=footer). Last update [b5015a2...49068a5](https://codecov.io/gh/huggingface/transformers/pull/4666?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | Readme for HooshvareLab/bert-base-parsbert-armananer-uncased | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4666/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4666",
"html_url": "https://github.com/huggingface/transformers/pull/4666",
"diff_url": "https://github.com/huggingface/transformers/pull/4666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4666.patch",
"merged_at": 1591014523000
} |
https://api.github.com/repos/huggingface/transformers/issues/4665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4665/comments | https://api.github.com/repos/huggingface/transformers/issues/4665/events | https://github.com/huggingface/transformers/pull/4665 | 627,130,523 | MDExOlB1bGxSZXF1ZXN0NDI0OTkxMzE5 | 4,665 | HooshvareLab readme parsbert-ner | {
"login": "m3hrdadfi",
"id": 2601833,
"node_id": "MDQ6VXNlcjI2MDE4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m3hrdadfi",
"html_url": "https://github.com/m3hrdadfi",
"followers_url": "https://api.github.com/users/m3hrdadfi/followers",
"following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}",
"gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions",
"organizations_url": "https://api.github.com/users/m3hrdadfi/orgs",
"repos_url": "https://api.github.com/users/m3hrdadfi/repos",
"events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/m3hrdadfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4665?src=pr&el=h1) Report\n> Merging [#4665](https://codecov.io/gh/huggingface/transformers/pull/4665?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b5015a2a0f4ea63035a877f5626cb0c3ce97e25d&el=desc) will **increase** coverage by `0.21%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4665?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4665 +/- ##\n==========================================\n+ Coverage 77.19% 77.41% +0.21% \n==========================================\n Files 128 128 \n Lines 21021 21021 \n==========================================\n+ Hits 16228 16274 +46 \n+ Misses 4793 4747 -46 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4665?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.17% <0.00%> (+0.23%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4665?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4665?src=pr&el=footer). Last update [b5015a2...e727442](https://codecov.io/gh/huggingface/transformers/pull/4665?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | Readme for HooshvareLab/bert-base-parsbert-ner-uncased | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4665/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4665",
"html_url": "https://github.com/huggingface/transformers/pull/4665",
"diff_url": "https://github.com/huggingface/transformers/pull/4665.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4665.patch",
"merged_at": 1591014550000
} |
https://api.github.com/repos/huggingface/transformers/issues/4664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4664/comments | https://api.github.com/repos/huggingface/transformers/issues/4664/events | https://github.com/huggingface/transformers/issues/4664 | 627,091,803 | MDU6SXNzdWU2MjcwOTE4MDM= | 4,664 | run_tf_ner.py TFTrainer logdir cannot be none | {
"login": "jx669",
"id": 12667589,
"node_id": "MDQ6VXNlcjEyNjY3NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/12667589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jx669",
"html_url": "https://github.com/jx669",
"followers_url": "https://api.github.com/users/jx669/followers",
"following_url": "https://api.github.com/users/jx669/following{/other_user}",
"gists_url": "https://api.github.com/users/jx669/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jx669/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jx669/subscriptions",
"organizations_url": "https://api.github.com/users/jx669/orgs",
"repos_url": "https://api.github.com/users/jx669/repos",
"events_url": "https://api.github.com/users/jx669/events{/privacy}",
"received_events_url": "https://api.github.com/users/jx669/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello,\r\n\r\nThis is because you have to specify `--logging_dir /path/to/logs` as parameter. There will be a default location for the next release of the TF Trainer.",
"Right. I later passed a parameter in run_tf_ner.py and it worked. I felt somewhere in the Trainer or run_tf_ner.py, it needs to be indicated. \r\n\r\nThanks for your prompt response!"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert
Language I am using the model on (English, Chinese ...): bert-base-multilingual-cased
The problem arises when using:
* [x] the official example scripts: (give details below)
run_tf_ner.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
germeval2014ner
* [ ] my own task or dataset: (give details below)
## To reproduce
run_tf_ner.py (original)
run.sh (see below)
Steps to reproduce the behavior:
1.
run.sh
```
export MAX_LENGTH=128
export BERT_MODEL=bert-base-multilingual-cased
export OUTPUT_DIR=germeval-model
export BATCH_SIZE=32
export NUM_EPOCHS=1
export SAVE_STEPS=750
export SEED=1
export data_dir=data
python run_tf_ner.py --data_dir ./data/ \
--labels ./data/labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_gpu_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict
```
2. error message
```
Traceback (most recent call last):
File "run_tf_ner.py", line 295, in <module>
main()
File "run_tf_ner.py", line 220, in main
compute_metrics=compute_metrics,
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 48, in __init__
self._setup_training()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 65, in _setup_training
self._create_summary_writer()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py", line 88, in _create_summary_writer
self.writer = tf.summary.create_file_writer(self.args.logging_dir)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/summary_ops_v2.py", line 377, in create_file_writer_v2
raise ValueError("logdir cannot be None")
ValueError: logdir cannot be None
```
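As the reply in the comments above states, supplying a log directory works around the crash until a default location is added; a sketch of the amended invocation (the `./logs` path is a placeholder):

```bash
python run_tf_ner.py --data_dir ./data/ \
  --labels ./data/labels.txt \
  --model_name_or_path $BERT_MODEL \
  --output_dir $OUTPUT_DIR \
  --logging_dir ./logs \
  --max_seq_length $MAX_LENGTH \
  --num_train_epochs $NUM_EPOCHS \
  --per_gpu_train_batch_size $BATCH_SIZE \
  --save_steps $SAVE_STEPS \
  --seed $SEED \
  --do_train --do_eval --do_predict
```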
## Expected behavior
I did not change the code or the data. I was trying to reproduce exactly the German NER TF 2.0 example: https://github.com/huggingface/transformers/tree/master/examples/token-classification
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
google colab gpu
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?): tensorflow gpu 2.2.0
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4664/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4663/comments | https://api.github.com/repos/huggingface/transformers/issues/4663/events | https://github.com/huggingface/transformers/issues/4663 | 626,990,507 | MDU6SXNzdWU2MjY5OTA1MDc= | 4,663 | End-to-end object detection with Transformers | {
"login": "parmarsuraj99",
"id": 9317265,
"node_id": "MDQ6VXNlcjkzMTcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parmarsuraj99",
"html_url": "https://github.com/parmarsuraj99",
"followers_url": "https://api.github.com/users/parmarsuraj99/followers",
"following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}",
"gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions",
"organizations_url": "https://api.github.com/users/parmarsuraj99/orgs",
"repos_url": "https://api.github.com/users/parmarsuraj99/repos",
"events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}",
"received_events_url": "https://api.github.com/users/parmarsuraj99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I might be wrong, but I think the focus of this library is on NLP which at most is multimodal. Also including object detection transformers may fall out of the scope of this project."
] | 1,590 | 1,591 | 1,590 | CONTRIBUTOR | null | # 🚀 Feature request
A modular Transformer Encoder-Decoder block that can be attached to ConvNets for many tasks.
## Motivation
The recent paper [End-to-end Object Detection with Transformers](https://ai.facebook.com/research/publications/end-to-end-object-detection-with-transformers) used a Transformer for object detection; the reference implementation is at [https://github.com/facebookresearch/detr](https://github.com/facebookresearch/detr).
## Your contribution
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4663/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4662/comments | https://api.github.com/repos/huggingface/transformers/issues/4662/events | https://github.com/huggingface/transformers/issues/4662 | 626,976,699 | MDU6SXNzdWU2MjY5NzY2OTk= | 4,662 | run_tf_ner.py cannot run | {
"login": "jx669",
"id": 12667589,
"node_id": "MDQ6VXNlcjEyNjY3NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/12667589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jx669",
"html_url": "https://github.com/jx669",
"followers_url": "https://api.github.com/users/jx669/followers",
"following_url": "https://api.github.com/users/jx669/following{/other_user}",
"gists_url": "https://api.github.com/users/jx669/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jx669/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jx669/subscriptions",
"organizations_url": "https://api.github.com/users/jx669/orgs",
"repos_url": "https://api.github.com/users/jx669/repos",
"events_url": "https://api.github.com/users/jx669/events{/privacy}",
"received_events_url": "https://api.github.com/users/jx669/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Delete. It may be a hardware issue. When I changed it to Colab, this problem disappeared"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): bert-base-multilingual-cased
The problem arises when using:
* [ ] the official example scripts: (give details below)
original code in _run_tf_ner.py_
original data from _germeval2014_
original data process from _preprocess.py_
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
_germeval2014_ NER
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. This is the directory structure:
```
data/
  train.txt
  dev.txt
  test.txt
  label.txt
run.sh (see below)
run_tf_ner.py (original)
utils_ner.py (original)
preprocess.py (original)
```
2. run.sh:
```bash
export MAX_LENGTH=128
export BERT_MODEL=bert-base-multilingual-cased
export OUTPUT_DIR=germeval-model
export BATCH_SIZE=32
export NUM_EPOCHS=1
export SAVE_STEPS=750
export SEED=1
export data_dir=data

python3 run_tf_ner.py \
  --data_dir . \
  --labels $data_dir/labels.txt \
  --model_name_or_path $BERT_MODEL \
  --output_dir $OUTPUT_DIR \
  --max_seq_length $MAX_LENGTH \
  --num_train_epochs $NUM_EPOCHS \
  --per_gpu_train_batch_size $BATCH_SIZE \
  --save_steps $SAVE_STEPS \
  --seed $SEED \
  --do_train \
  --do_eval \
  --do_predict
```
3. Running `sh run.sh` returns the invalid configuration error:
```
05/29/2020 11:21:48 - INFO - __main__ - n_gpu: 2, distributed training: True, 16-bits training: False
05/29/2020 11:21:48 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='germeval-model', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=False, per_gpu_train_batch_size=32, per_gpu_eval_batch_size=8, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, warmup_steps=0, logging_dir=None, logging_first_step=False, logging_steps=500, save_steps=750, save_total_limit=None, no_cuda=False, seed=1, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, optimizer_name='adam', mode='text-classification', loss_name='SparseCategoricalCrossentropy', tpu_name=None, end_lr=0, eval_steps=1000, debug=False)
05/29/2020 11:21:50 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json from cache at /home/ll/.cache/torch/transformers/45629519f3117b89d89fd9c740073d8e4c1f0a70f9842476185100a8afe715d1.65df3cef028a0c91a7b059e4c404a975ebe6843c71267b67019c0e9cfa8a88f0
05/29/2020 11:21:50 - INFO - transformers.configuration_utils - Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "B-LOC",
"1": "B-LOCderiv",
"2": "B-LOCpart",
"3": "B-ORG",
"4": "B-ORGderiv",
"5": "B-ORGpart",
"6": "B-OTH",
"7": "B-OTHderiv",
"8": "B-OTHpart",
"9": "B-PER",
"10": "B-PERderiv",
"11": "B-PERpart",
"12": "I-LOC",
"13": "I-LOCderiv",
"14": "I-LOCpart",
"15": "I-ORG",
"16": "I-ORGderiv",
"17": "I-ORGpart",
"18": "I-OTH",
"19": "I-OTHderiv",
"20": "I-OTHpart",
"21": "I-PER",
"22": "I-PERderiv",
"23": "I-PERpart",
"24": "O"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"B-LOC": 0,
"B-LOCderiv": 1,
"B-LOCpart": 2,
"B-ORG": 3,
"B-ORGderiv": 4,
"B-ORGpart": 5,
"B-OTH": 6,
"B-OTHderiv": 7,
"B-OTHpart": 8,
"B-PER": 9,
"B-PERderiv": 10,
"B-PERpart": 11,
"I-LOC": 12,
"I-LOCderiv": 13,
"I-LOCpart": 14,
"I-ORG": 15,
"I-ORGderiv": 16,
"I-ORGpart": 17,
"I-OTH": 18,
"I-OTHderiv": 19,
"I-OTHpart": 20,
"I-PER": 21,
"I-PERderiv": 22,
"I-PERpart": 23,
"O": 24
},
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"vocab_size": 119547
}
05/29/2020 11:21:51 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json from cache at /home/ll/.cache/torch/transformers/45629519f3117b89d89fd9c740073d8e4c1f0a70f9842476185100a8afe715d1.65df3cef028a0c91a7b059e4c404a975ebe6843c71267b67019c0e9cfa8a88f0
05/29/2020 11:21:51 - INFO - transformers.configuration_utils - Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"vocab_size": 119547
}
05/29/2020 11:21:52 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt from cache at /home/ll/.cache/torch/transformers/96435fa287fbf7e469185f1062386e05a075cadbf6838b74da22bf64b080bc32.99bcd55fc66f4f3360bc49ba472b940b8dcf223ea6a345deb969d607ca900729
05/29/2020 11:21:54 - INFO - transformers.modeling_tf_utils - loading weights file https://cdn.huggingface.co/bert-base-multilingual-cased-tf_model.h5 from cache at /home/ll/.cache/torch/transformers/273ed844d60ef1d5a4ea8f7857e3c3869d05d7b22296f4ae9bc56026ed40eeb7.1b4841f14bf42137fc7ecee17a46c1b2f22b417f636347e4b810bd06dd9c45ea.h5
2020-05-29 11:21:55.520823: F ./tensorflow/core/kernels/random_op_gpu.h:232] Non-OK-status: GpuLaunchKernel(FillPhiloxRandomKernelLaunch<Distribution>, num_blocks, block_size, 0, d.stream(), gen, data, size, dist) status: Internal: invalid configuration argument
Aborted (core dumped)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I was trying to run the training for Transformer-based NER extraction.
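Given the resolution in the comments above (the failure disappeared on Colab, suggesting a local GPU/driver mismatch rather than a code bug), here is a minimal diagnostic sketch one could run first; it is illustrative and not from the original issue:

```python
import tensorflow as tf

# "invalid configuration argument" from GpuLaunchKernel usually points to a
# CUDA / driver / compute-capability mismatch. Check that TF can actually
# place a random-op kernel (the op family that crashed) on the GPU:
print(tf.config.list_physical_devices("GPU"))

with tf.device("/GPU:0"):
    x = tf.random.uniform((4, 4))
    print(x)
```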
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Ubuntu 16.04.4 LTS
- Python version: Python 3.6.10 :: Anaconda, Inc.
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.2.0 GPU
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4662/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4661/comments | https://api.github.com/repos/huggingface/transformers/issues/4661/events | https://github.com/huggingface/transformers/issues/4661 | 626,957,357 | MDU6SXNzdWU2MjY5NTczNTc= | 4,661 | Write With Transformer: PPLM page is broken | {
"login": "songproducer",
"id": 597346,
"node_id": "MDQ6VXNlcjU5NzM0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/597346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songproducer",
"html_url": "https://github.com/songproducer",
"followers_url": "https://api.github.com/users/songproducer/followers",
"following_url": "https://api.github.com/users/songproducer/following{/other_user}",
"gists_url": "https://api.github.com/users/songproducer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songproducer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songproducer/subscriptions",
"organizations_url": "https://api.github.com/users/songproducer/orgs",
"repos_url": "https://api.github.com/users/songproducer/repos",
"events_url": "https://api.github.com/users/songproducer/events{/privacy}",
"received_events_url": "https://api.github.com/users/songproducer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
}
] | closed | false | null | [] | [
"I cannot reproduce your problem. Can you try again?",
"I tried again. \r\n\r\nConsole on dev tools shows:\r\n\r\n```\r\nDevTools failed to load SourceMap: Could not load content for chrome-extension://ibnejdfjmmkpcnlpebklmnkoeoihofec/dist/contentScript.js.map: HTTP error: status code 404, net::ERR_UNKNOWN_URL_SCHEME\r\nDevTools failed to load SourceMap: Could not load content for chrome-extension://nkbihfbeogaeaoehlefnkodbefgpgknn/sourcemaps/contentscript.js.map: HTTP error: status code 404, net::ERR_UNKNOWN_URL_SCHEME\r\nDevTools failed to load SourceMap: Could not load content for chrome-extension://nkbihfbeogaeaoehlefnkodbefgpgknn/sourcemaps/inpage.js.map: HTTP error: status code 404, net::ERR_UNKNOWN_URL_SCHEME\r\nDevTools failed to load SourceMap: Could not load content for chrome-extension://ibnejdfjmmkpcnlpebklmnkoeoihofec/dist/pageHook.js.map: HTTP error: status code 404, net::ERR_UNKNOWN_URL_SCHEME\r\n```",
"Those are unrelated warnings related to source maps. I can;t reproduce your problem and I tried on different browsers. Can you clear the cache/try in private mode/incognito?",
"<img width=\"2543\" alt=\"Screen Shot 2020-06-01 at 1 50 50 am\" src=\"https://user-images.githubusercontent.com/597346/83359074-6acfea00-a3aa-11ea-97e9-d9fa0ed872ea.png\">\r\nStill no luck",
"D'oh, I didn't check the PPLM page. The other versions of Write With Transformer seem to work, but you are right that it doesn't seem to work for [PPLM](https://transformer.huggingface.co/doc/pplm). When you trigger \"autocomplete\", the web page seems to hang.\r\n\r\ncc @julien-c ",
"Yes, we turned off the PPLM machine as it was costly to host. We need to add a notice to try it locally instead, and/or re-spawn a cheaper machine. Both are on our todo-list.",
"Added a notice there: https://transformer.huggingface.co/doc/pplm"
] | 1,590 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
Triggering autocomplete results in endless spinning.
This happens on iOS Safari, but also on desktop Safari, Firefox, and Chrome.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4661/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4660/comments | https://api.github.com/repos/huggingface/transformers/issues/4660/events | https://github.com/huggingface/transformers/issues/4660 | 626,956,913 | MDU6SXNzdWU2MjY5NTY5MTM= | 4,660 | Assert message error in Reformer chunking | {
"login": "erickrf",
"id": 294483,
"node_id": "MDQ6VXNlcjI5NDQ4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erickrf",
"html_url": "https://github.com/erickrf",
"followers_url": "https://api.github.com/users/erickrf/followers",
"following_url": "https://api.github.com/users/erickrf/following{/other_user}",
"gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erickrf/subscriptions",
"organizations_url": "https://api.github.com/users/erickrf/orgs",
"repos_url": "https://api.github.com/users/erickrf/repos",
"events_url": "https://api.github.com/users/erickrf/events{/privacy}",
"received_events_url": "https://api.github.com/users/erickrf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2052904485,
"node_id": "MDU6TGFiZWwyMDUyOTA0NDg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/reformer",
"name": "reformer",
"color": "5319e7",
"default": false,
"description": "Everything related to the reformer model"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
In the function `apply_chunking_to_forward`, an assertion checking the input tensor shape is trying to print the contents of the tensor itself instead of its shape:
https://github.com/huggingface/transformers/blob/b5015a2a0f4ea63035a877f5626cb0c3ce97e25d/src/transformers/modeling_utils.py#L2195
I'm pretty sure that line should be `input_tensors[0].shape[chunk_dim], chunk_size` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4660/timeline | completed | null | null |
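For reference on #4660 above, a minimal sketch of the proposed fix, with the surrounding `apply_chunking_to_forward` code paraphrased into a standalone helper; the exact message wording is an assumption:

```python
import torch

def check_chunkable(input_tensors, chunk_dim: int, chunk_size: int) -> None:
    # The buggy line formatted input_tensors[0][chunk_dim] (tensor contents)
    # into the message; the fix formats the size along chunk_dim instead.
    assert input_tensors[0].shape[chunk_dim] % chunk_size == 0, (
        "The dimension to be chunked {} has to be a multiple of the chunk "
        "size {}".format(input_tensors[0].shape[chunk_dim], chunk_size)
    )

check_chunkable([torch.zeros(2, 8, 4)], chunk_dim=1, chunk_size=4)  # passes
```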
https://api.github.com/repos/huggingface/transformers/issues/4659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4659/comments | https://api.github.com/repos/huggingface/transformers/issues/4659/events | https://github.com/huggingface/transformers/pull/4659 | 626,943,427 | MDExOlB1bGxSZXF1ZXN0NDI0ODQxODcx | 4,659 | Add support for gradient checkpointing in BERT | {
"login": "ibeltagy",
"id": 2287797,
"node_id": "MDQ6VXNlcjIyODc3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibeltagy",
"html_url": "https://github.com/ibeltagy",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions",
"organizations_url": "https://api.github.com/users/ibeltagy/orgs",
"repos_url": "https://api.github.com/users/ibeltagy/repos",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibeltagy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4659?src=pr&el=h1) Report\n> Merging [#4659](https://codecov.io/gh/huggingface/transformers/pull/4659?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f4e1f022100834bd00d4f877a883b5946c4cac37&el=desc) will **decrease** coverage by `0.34%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4659?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4659 +/- ##\n==========================================\n- Coverage 78.40% 78.06% -0.35% \n==========================================\n Files 138 138 \n Lines 23757 23766 +9 \n==========================================\n- Hits 18627 18552 -75 \n- Misses 5130 5214 +84 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4659?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.50% <44.44%> (-0.72%)` | :arrow_down: |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `34.07% <0.00%> (-22.62%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4659?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4659?src=pr&el=footer). Last update [f4e1f02...400070b](https://codecov.io/gh/huggingface/transformers/pull/4659?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> we'll look at upstreaming it in the `transformers.PretrainedModel` if everyone's on board.\r\n\r\nThanks, @LysandreJik. It would be great to make `gradient_checkpointing` available to more models. While the configuration can be upstreamed in `transformers.PretrainedConfig`, the implementation is model specific, where you need to call `torch.utils.checkpoint.checkpoint` inside the layers loop as in [here](https://github.com/huggingface/transformers/blob/bf4342743ad2f5a5e1090818ecb72f2ebc6e4f73/src/transformers/modeling_bert.py#L404).",
"I was thinking of having the implementation be model agnostic as well. I haven't really thought out the best way, but a possible way to achieve it would be with a decorator; for example, in `PretrainedModel` we could have something like:\r\n\r\n```py\r\n @staticmethod\r\n def gradient_checkpointing(layer):\r\n @functools.wraps(layer)\r\n def wrapper(*args):\r\n layer_instance = args[0]\r\n # Remove the wrapper to prevent infinite recursion on the wrapper\r\n layer_instance.forward = functools.partial(layer_instance.forward.__wrapped__, layer_instance)\r\n \r\n if args[0].config.gradient_checkpointing:\r\n return torch.utils.checkpoint.checkpoint(layer_instance, *args[1:])\r\n else:\r\n return layer(*args)\r\n return wrapper\r\n\r\n```\r\n\r\nThen we can very simply add that decorator on the layers where we want the checkpoint:\r\n\r\n```py\r\nclass BertLayer(nn.Module):\r\n\r\n ...\r\n\r\n @PreTrainedModel.gradient_checkpointing\r\n def forward(\r\n self,\r\n hidden_states,\r\n attention_mask=None,\r\n head_mask=None,\r\n encoder_hidden_states=None,\r\n encoder_attention_mask=None,\r\n ):\r\n\r\n ...\r\n```\r\n\r\nThis would require that these layers have access to the configuration so that they're aware of gradient check-pointing or not.\r\n\r\nPretty convenient, but pretty different from our coding style as well cc @thomwolf ",
"neat ",
"A model agnostic approach might be best. In my research for isolating https://github.com/minimaxir/aitextgen/issues/6 for finetuning larger GPT-2 models, it appeared that checkpointing would have to be implemented at the model level, as this PR does for BERT.",
"torch.utils.checkpoint.checkpoint works well in single GPU. But it causes OOM in multi-gpu with torch.nn.DataParallel.",
"> I was thinking of having the implementation be model agnostic as well. I haven't really thought out the best way, but a possible way to achieve it would be with a decorator; for example, in `PretrainedModel` we could have something like:\r\n> \r\n> ```python\r\n> @staticmethod\r\n> def gradient_checkpointing(layer):\r\n> @functools.wraps(layer)\r\n> def wrapper(*args):\r\n> layer_instance = args[0]\r\n> # Remove the wrapper to prevent infinite recursion on the wrapper\r\n> layer_instance.forward = functools.partial(layer_instance.forward.__wrapped__, layer_instance)\r\n> \r\n> if args[0].config.gradient_checkpointing:\r\n> return torch.utils.checkpoint.checkpoint(layer_instance, *args[1:])\r\n> else:\r\n> return layer(*args)\r\n> return wrapper\r\n> ```\r\n> \r\n> Then we can very simply add that decorator on the layers where we want the checkpoint:\r\n> \r\n> ```python\r\n> class BertLayer(nn.Module):\r\n> \r\n> ...\r\n> \r\n> @PreTrainedModel.gradient_checkpointing\r\n> def forward(\r\n> self,\r\n> hidden_states,\r\n> attention_mask=None,\r\n> head_mask=None,\r\n> encoder_hidden_states=None,\r\n> encoder_attention_mask=None,\r\n> ):\r\n> \r\n> ...\r\n> ```\r\n> \r\n> This would require that these layers have access to the configuration so that they're aware of gradient check-pointing or not.\r\n> \r\n> Pretty convenient, but pretty different from our coding style as well cc @thomwolf\r\n\r\nI like idea of having a decorator function! Would it be enough to have this wrapper only at all \"Model\" forward functions, like `BertModel.forward()`? ",
"> torch.utils.checkpoint.checkpoint works well in single GPU. But it causes OOM in multi-gpu with torch.nn.DataParallel.\r\n\r\nI haven't tried `torch.nn.DataParallel` but it works well with `torch.nn.DistributedDataParallel` on a single or multiple machines. ",
"> I like idea of having a decorator function! Would it be enough to have this wrapper only at all \"Model\" forward functions, like `BertModel.forward()`?\r\n\r\nI don't think so. Even with the decorator, it is still model-specific; the decorator just makes the syntax easier. You still need to decide where to call it because too few calls (e.g. only on `BertModel.forward`), and you won't save enough memory, too many calls (e.g. on every `.forward` function) and the backward pass will be very slow.",
"Pinging @julien-c so he can take a look.",
"> > torch.utils.checkpoint.checkpoint works well in single GPU. But it causes OOM in multi-gpu with torch.nn.DataParallel.\r\n> \r\n> I haven't tried `torch.nn.DataParallel` but it works well with `torch.nn.DistributedDataParallel` on a single or multiple machines.\r\n\r\nThanks for the advice. But I try `torch.nn.DistributedDataParallel` and meet the same problem in https://github.com/pytorch/pytorch/issues/24005. The pytorch version is 1.2.0.\r\n\r\nThe code is:\r\n```\r\nif n_gpu > 1:\r\n # model = torch.nn.DataParallel(model)\r\n torch.distributed.init_process_group(backend=\"nccl\")\r\n model = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)\r\n```\r\nBoth `find_unused_parameters=True` and `find_unused_parameters=False` get errors.\r\n\r\n\r\n\r\n",
"@ibeltagy, after some back and forth offline with @julien-c and @thomwolf, the way you implemented it is preferred as it's simpler to understand and adheres better to the library's philosophy.\r\n\r\nI think we can merge this and then in a following PR add it to all the other models. Would you like to take care of that? No worries if not, I can definitely take care of it.",
"@LysandreJik, glad this will be merged. \r\n\r\n> Would you like to take care of that? No worries if not, I can definitely take care of it.\r\n\r\nI will pass :D \r\n\r\n",
"> > > torch.utils.checkpoint.checkpoint works well in single GPU. But it causes OOM in multi-gpu with torch.nn.DataParallel.\r\n> > \r\n> > \r\n> > I haven't tried `torch.nn.DataParallel` but it works well with `torch.nn.DistributedDataParallel` on a single or multiple machines.\r\n> \r\n> Thanks for the advice. But I try `torch.nn.DistributedDataParallel` and meet the same problem in [pytorch/pytorch#24005](https://github.com/pytorch/pytorch/issues/24005). The pytorch version is 1.2.0.\r\n> \r\n> The code is:\r\n> \r\n> ```\r\n> if n_gpu > 1:\r\n> # model = torch.nn.DataParallel(model)\r\n> torch.distributed.init_process_group(backend=\"nccl\")\r\n> model = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)\r\n> ```\r\n> \r\n> Both `find_unused_parameters=True` and `find_unused_parameters=False` get errors.\r\n> \r\n> \r\n\r\nI encounter the same issue with torch 1.5.0 and latest transformers",
"@ewrfcas, @schinger, do you have a small example that reproduces the error?\r\n\r\nI don't think we can fix this issue (needs a PyTorch fix https://github.com/pytorch/pytorch/issues/24005), but I think we can work around it by removing the unused parameters mentioned in the error message. ",
"> @ewrfcas, @schinger, do you have a small example that reproduces the error?\r\n> \r\n> I don't think we can fix this issue (needs a PyTorch fix [pytorch/pytorch#24005](https://github.com/pytorch/pytorch/issues/24005)), but I think we can work around it by removing the unused parameters mentioned in the error message.\r\n\r\nsquad example training can reproduce this error: https://github.com/huggingface/transformers/tree/master/examples/question-answering\r\n\r\npython -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_squad.py \\\r\n --model_type bert \\\r\n --model_name_or_path bert-large-uncased-whole-word-masking \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --train_file SQUAD_DIR/dev-v1.1.json \\\r\n --predict_file SQUAD_DIR/dev-v1.1.json \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir ./examples/models/wwm_uncased_finetuned_squad/ \\\r\n --per_gpu_eval_batch_size=1 \\\r\n --per_gpu_train_batch_size=1 \\\r\n\r\nno matter find_unused_parameters is ture or false",
"Thanks. It would be more helpful if you provide a simpler and smaller example that I can easily run.",
"> Thanks. It would be more helpful if you provide a simpler and smaller example that I can easily run.\r\n\r\nyou can change --train_file to SQUAD_DIR/dev-v1.1.json, dev set is small for quickly run",
"> > torch.utils.checkpoint.checkpoint works well in single GPU. But it causes OOM in multi-gpu with torch.nn.DataParallel.\r\n> \r\n> I haven't tried `torch.nn.DataParallel` but it works well with `torch.nn.DistributedDataParallel` on a single or multiple machines.\r\n\r\ncould you show me a example about gradient checkpoint works successfully with `torch.nn.DistributedDataParallel` on multi-gpu?",
"> @ewrfcas, @schinger, do you have a small example that reproduces the error?\r\n> \r\n> I don't think we can fix this issue (needs a PyTorch fix [pytorch/pytorch#24005](https://github.com/pytorch/pytorch/issues/24005)), but I think we can work around it by removing the unused parameters mentioned in the error message.\r\n\r\nI have trained a base model instead of large to delay this problem.\r\nThe only differences in the code are \r\n```\r\nclass BertEncoder(nn.Module):\r\n def forward(...):\r\n ...\r\n for i, layer_module in enumerate(self.layer):\r\n ...\r\n if self.use_grad_ckpt:\r\n layer_outputs = torch.utils.checkpoint.checkpoint(layer_module, hidden_states, attention_mask, head_mask[i],\r\n encoder_hidden_states, encoder_attention_mask)\r\n else:\r\n layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i],\r\n encoder_hidden_states, encoder_attention_mask)\r\n ...\r\n ...\r\n```\r\nand \r\n```\r\ntorch.distributed.init_process_group(backend=\"nccl\")\r\nmodel = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)\r\n```\r\nOther codes are the same as normal finetuning codes.",
"Here's a small example to replicate the error\r\n```\r\nimport os\r\nimport torch\r\nfrom transformers import BertForPreTraining\r\nos.environ['MASTER_ADDR'] = 'localhost'\r\nos.environ['MASTER_PORT'] = '12355'\r\ntorch.distributed.init_process_group(backend=\"nccl\", rank=0, world_size=1)\r\n\r\nmodel = BertForPreTraining.from_pretrained('bert-base-uncased', gradient_checkpointing=True).cuda()\r\nmodel = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)\r\noutputs = model(torch.tensor([[1, 2, 3]]).cuda())\r\noutputs[0].sum().backward()\r\n```\r\n\r\nUse `find_unused_parameters=True` and you will get \r\n```\r\nRuntimeError: Expected to mark a variable ready only once. This error is caused by use of a module parameter outside the `forward` function.\r\n```\r\n\r\nUse `find_unused_parameters=False`, and things will work just fine. \r\n\r\nI couldn't replicate the other error, \r\n```\r\nRuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. \r\n```\r\n@ewrfcas, do you know how to modify the example above to reproduce it?\r\n\r\n@schinger, can you try `find_unused_parameters=False` see if it fixes your problem.",
"> Here's a small example to replicate the error\r\n> \r\n> ```\r\n> import os\r\n> import torch\r\n> from transformers import BertForPreTraining\r\n> os.environ['MASTER_ADDR'] = 'localhost'\r\n> os.environ['MASTER_PORT'] = '12355'\r\n> torch.distributed.init_process_group(backend=\"nccl\", rank=0, world_size=1)\r\n> \r\n> model = BertForPreTraining.from_pretrained('bert-base-uncased').cuda()\r\n> model = torch.nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)\r\n> outputs = model(torch.tensor([[1, 2, 3]]).cuda())\r\n> outputs[0].sum().backward()\r\n> ```\r\n> \r\n> Use `find_unused_parameters=True` and you will get\r\n> \r\n> ```\r\n> RuntimeError: Expected to mark a variable ready only once. This error is caused by use of a module parameter outside the `forward` function.\r\n> ```\r\n> \r\n> Use `find_unused_parameters=False`, and things will work just fine.\r\n> \r\n> I couldn't replicate the other error,\r\n> \r\n> ```\r\n> RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. \r\n> ```\r\n> \r\n> @ewrfcas, do you know how to modify the example above to reproduce it?\r\n> \r\n> @schinger, can you try `find_unused_parameters=False` see if it fixes your problem.\r\n\r\nI have tried this code. Although it works in the first, the second forword will be failed. You can try to repeat the loss.backward for few times.",
"@ewrfcas, I get this error with `gradient_checkpointing=True` and `gradient_checkpointing=False` (btw, `gradient_checkpointing` was set to `False` in the example above and I just updated it), so it is a problem with the example, not gradient checkpointing. Can you try to fix the example? or can you try it in a training loop that uses DDP correctly, either with pytorch-lightning or hugginface trainer?",
"> @ewrfcas, I get this error with `gradient_checkpointing=True` and `gradient_checkpointing=False` (btw, `gradient_checkpointing` was set to `False` in the example above and I just updated it), so it is a problem with the example, not gradient checkpointing. Can you try to fix the example? or can you try it in a training loop that uses DDP correctly, either with pytorch-lightning or hugginface trainer?\r\n\r\nI have solved this problem by removing the self.pooler layer in BertModel because it did not forward any thing during the training. As the error saied, all parameters should be used in loss for DistributedDataParallel with find_unused_parameters=False, and find_unused_parameters=True is incompatible with gradient_checkpointing.\r\n\r\nMaybe we need this code after the first backward\r\n```\r\n# check parameters with no grad\r\nfor n, p in model.named_parameters():\r\n if p.grad is None and p.requires_grad is True:\r\n print('No forward parameters:', n, p.shape)\r\n```",
"Nice finding, @ewrfcas. \r\n\r\n@LysandreJik, what is the best way to address this problem? do we need to fix it or can we leave it to the user to make sure all the model params are used? maybe in a separate PR we can find a way to automatically remove unused model params?\r\n\r\nAlso, aside from this issue, anything else we need to merge the PR? ",
"Right, I think this should be looked at in a separate PR. Will take a final look and merge this PR tomorrow, and then look at implementing gradient checkpointing to the rest of the models."
] | 1,590 | 1,592 | 1,592 | CONTRIBUTOR | null | This PR adds support for gradient checkpointing in `modeling_bert.py` to save memory at training time at the expense of a slower backward pass. This is particularly useful if we want to pretrain a version of BERT for sequences longer than 512. It is also useful for long-document models like Longformer.
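A minimal usage sketch of what this PR enables (the model name and dummy inputs are placeholders, a GPU is assumed, and this mirrors a snippet that appears later in the thread):

```python
import torch
from transformers import BertForPreTraining

# gradient_checkpointing is the new config flag added by this PR; extra
# kwargs to from_pretrained are forwarded to the BertConfig.
model = BertForPreTraining.from_pretrained(
    "bert-base-uncased", gradient_checkpointing=True
).cuda()

# Forward/backward as usual: activations inside each BertLayer are
# recomputed during the backward pass instead of being kept in memory.
outputs = model(torch.tensor([[1, 2, 3]]).cuda())
outputs[0].sum().backward()
```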
Stats:
```
Forward/backward - no grad checkpointing: 40.1GB memory, 25.3 seconds.
Forward/backward - with grad checkpointing: 8.2GB memory (~5x less), 33.5 seconds (~1.3x more)
Forward pass only - with/without gradient checkpointing: 4GB memory, 6.1 seconds.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4659/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4659",
"html_url": "https://github.com/huggingface/transformers/pull/4659",
"diff_url": "https://github.com/huggingface/transformers/pull/4659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4659.patch",
"merged_at": 1592837235000
} |
https://api.github.com/repos/huggingface/transformers/issues/4658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4658/comments | https://api.github.com/repos/huggingface/transformers/issues/4658/events | https://github.com/huggingface/transformers/issues/4658 | 626,925,749 | MDU6SXNzdWU2MjY5MjU3NDk= | 4,658 | Add upcoming GPT-3 model | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"My god, the paper hasn't even been up for a day...\r\n\r\nSaid being, +1",
"So who can run 175B parameters and what do I have to do for a favor?",
"The full model will be at least 350 GB (16-bit parameters). You'd need to partition it across more than (350 GB) / (16 GB) ~ **22 GPUs** just to run it! Not to mention the egress costs of making a model that size available.\r\n\r\nOf course, the paper shows **8** different-sized models, **4 of which are smaller than GPT-2**, so some of those could be practical. 🙂 \r\n\r\n",
"Is there any Colab to test at least GPT-3 XL ?",
"> Is there any Colab to test at least GPT-3 XL ?\r\n\r\nThey haven't released any code or pretrained models yet. See the issue on the official repo: https://github.com/openai/gpt-3/issues/1",
"Note that the released models may be FP16, which may require forcing FP16 for use/finetuning (and therefore hardware-limited), or casting up to FP32.",
"> Of course, the paper shows **8** different-sized models, **4 of which are smaller than GPT-2**, so some of those could be practical. slightly_smiling_face\r\n\r\nOne of the main benefits of the smaller gpt-3 models compared to their gpt-2 counterparts could be the increased context length of 2048 tokens.",
"Yeah, personally, I wouldn't be able to use the models that won't fit in a Tesla P100",
"The [GPT-3 repo](https://github.com/openai/gpt-3) is now archived (read-only) so perhaps OpenAI isn't planning on releasing anything this time around.",
"> The [GPT-3 repo](https://github.com/openai/gpt-3) is now archived (read-only) so perhaps OpenAI isn't planning on releasing anything this time around.\r\n\r\nThat is a crying shame, because my system could do-er... :(",
"Hopefully they have a better excuse than last time.",
"> Hopefully they have a better excuse than last time.\r\n\r\n@flarn2006 You mean the....ooohhhh we created something scary and have soggy diapers excuse with GPT-3?",
"@flarn2006 If they don't make excuses or drag their feet, and I finish my system build in a relatively congruent time frame, hopefully I can help...",
"A little update: OpenAI's now running their own API with GPT-3 on it. https://beta.openai.com\r\nYou can apply for access, but seems like they're aiming mostly for big companies, not researchers. Sad, way too sad.",
"But who put the \"Open\" in OpenAI then 🤔",
"I guess we will need to \"fundraise\" enough GPU-compute to run the GPT3 model. :smile: ",
"It should be possible to run lower-models on regular GPUs, like 1b model. But we don't have the model itself, and seems that OpenAI is against releasing it and would rather commercialize it :(",
"I wonder if you could hardcode the 175B model into an electronic chip(like an ASIC but more specific)",
"> I wonder if you could hardcode the 175B model into an electronic chip(like an ASIC but more specific)\r\n\r\nVery interesting as an idea. @StealthySemicolon do you have reference to other similar work done in the past?",
"> > I wonder if you could hardcode the 175B model into an electronic chip(like an ASIC but more specific)\r\n> \r\n> Very interesting as an idea. @StealthySemicolon do you have reference to other similar work done in the past?\r\n\r\nNo, just a hunch. Even if I did know how to do this, it's not like OpenAI would publicly release the model weights...",
"Guys when is this gonna be integrated!?",
"When OpenAI decides to release GPT-3 open-sourcely, but this won't happen it seems, they just want to sell access to big corporations.",
"https://bdtechtalks.com/2020/08/17/openai-gpt-3-commercial-ai/amp/\n\nHere it goes...",
"https://arxiv.org/abs/2009.07118\r\nhttps://github.com/timoschick/pet",
"> Hopefully they have a better excuse than last time.\r\n\r\nBecause Microsoft [gave us money.](https://openai.com/blog/openai-licenses-gpt-3-technology-to-microsoft/) ",
"GPT-3 is not coming out anytime soon :(",
"this thread signifies capitalism's pros and cons at the same time...😅",
"> The full model will be at least 350 GB (16-bit parameters). You'd need to partition it across more than (350 GB) / (16 GB) ~ **22 GPUs** just to run it! Not to mention the egress costs of making a model that size available.\r\n> \r\n> Of course, the paper shows **8** different-sized models, **4 of which are smaller than GPT-2**, so some of those could be practical. 🙂\r\n> \r\n> \r\n\r\n@AdamDanielKing is there a way to estimate the size of the GPT-3 XL model?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"we're still waiting.. :("
] | 1,590 | 1,685 | 1,666 | COLLABORATOR | null | # 🌟 New model addition
## Model description
The GPT-3 paper just landed on ArXiv: https://arxiv.org/abs/2005.14165.
Would be great to integrate it into Transformers, whenever models are available.
> Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
## Open source status
* [x] GitHub repository is available: [here](https://github.com/openai/gpt-3)
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4658/reactions",
"total_count": 159,
"+1": 92,
"-1": 0,
"laugh": 5,
"hooray": 16,
"confused": 1,
"heart": 20,
"rocket": 13,
"eyes": 12
} | https://api.github.com/repos/huggingface/transformers/issues/4658/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4657/comments | https://api.github.com/repos/huggingface/transformers/issues/4657/events | https://github.com/huggingface/transformers/issues/4657 | 626,814,902 | MDU6SXNzdWU2MjY4MTQ5MDI= | 4,657 | --fp causes an issue when running example scripts in distributed mode | {
"login": "CMobley7",
"id": 10121829,
"node_id": "MDQ6VXNlcjEwMTIxODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/10121829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CMobley7",
"html_url": "https://github.com/CMobley7",
"followers_url": "https://api.github.com/users/CMobley7/followers",
"following_url": "https://api.github.com/users/CMobley7/following{/other_user}",
"gists_url": "https://api.github.com/users/CMobley7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CMobley7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CMobley7/subscriptions",
"organizations_url": "https://api.github.com/users/CMobley7/orgs",
"repos_url": "https://api.github.com/users/CMobley7/repos",
"events_url": "https://api.github.com/users/CMobley7/events{/privacy}",
"received_events_url": "https://api.github.com/users/CMobley7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
}
] | closed | false | null | [] | [
"I've tried `transformers 2.10.0` under `CUDA 10.2` with `PyTorch 1.5.0` and apex compiled for that environment, as well as under `CUDA 10.1` with both PyTorch 1.5.0 and 1.4.1, as well as apex compiled for both of those. However, I get pretty much the same issue. Should I down convert to a different version of transformers?\r\n```\r\nEpoch: 0%| | 0/2 [00:00<?, ?it/s] Traceback (most recent call last):\r\nptcc_1 | File \"/ptcc/run_language_modeling.py\", line 281, in <module>\r\nptcc_1 | main()\r\nptcc_1 | File \"/ptcc/run_language_modeling.py\", line 245, in main\r\nptcc_1 | trainer.train(model_path=model_path)\r\nptcc_1 | File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 470, in train\r\nptcc_1 | tr_loss += self._training_step(model, inputs, optimizer)\r\nptcc_1 | File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 577, in _training_step\r\nptcc_1 | scaled_loss.backward()\r\nptcc_1 | File \"/usr/lib/python3.6/contextlib.py\", line 88, in __exit__\r\nptcc_1 | next(self.gen)\r\nptcc_1 | File \"/usr/local/lib/python3.6/dist-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/handle.py\", line 127, in scale_loss\r\nptcc_1 | should_skip = False if delay_overflow_check else loss_scaler.update_scale()\r\nptcc_1 | File \"/usr/local/lib/python3.6/dist-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/scaler.py\", line 200, in update_scale\r\nptcc_1 | self._has_overflow = self._overflow_buf.item()\r\nptcc_1 | RuntimeError: CUDA error: an illegal memory access was encountered\r\nptcc_1 | Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0\r\nptcc_1 | /usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:114: UserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. 
See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\r\nptcc_1 | \"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate\", UserWarning)\r\nptcc_1 | terminate called after throwing an instance of 'c10::Error'\r\nptcc_1 | what(): CUDA error: an illegal memory access was encountered (insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:771)\r\nptcc_1 | frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7f2ededfd536 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)\r\nptcc_1 | frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x7ae (0x7f2edf040fbe in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so)\r\nptcc_1 | frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f2edededabd in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)\r\nptcc_1 | frame #3: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x1d9 (0x7f2f26356d99 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #4: c10d::Reducer::~Reducer() + 0x23a (0x7f2f2634c6ea in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #5: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7f2f2632b662 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f2f25cee306 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #7: <unknown function> + 0x87130b (0x7f2f2632c30b in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #8: <unknown function> + 0x2403a0 (0x7f2f25cfb3a0 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #9: <unknown function> + 0x2415ee (0x7f2f25cfc5ee in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)\r\nptcc_1 | frame #10: /usr/bin/python3() [0x572a27]\r\nptcc_1 | frame #11: /usr/bin/python3() [0x54eef2]\r\nptcc_1 | frame #12: /usr/bin/python3() [0x588948]\r\nptcc_1 | frame #13: /usr/bin/python3() [0x5ad438]\r\nptcc_1 | frame #14: /usr/bin/python3() [0x5ad44e]\r\nptcc_1 | frame #15: /usr/bin/python3() [0x5ad44e]\r\nptcc_1 | frame #16: /usr/bin/python3() [0x56b276]\r\nptcc_1 | frame #17: PyDict_SetItemString + 0x153 (0x5709f3 in /usr/bin/python3)\r\nptcc_1 | frame #18: PyImport_Cleanup + 0x76 (0x4f2fc6 in /usr/bin/python3)\r\nptcc_1 | frame #19: Py_FinalizeEx + 0x5e (0x637e2e in /usr/bin/python3)\r\nptcc_1 | frame #20: Py_Main + 0x395 (0x638e95 in /usr/bin/python3)\r\nptcc_1 | frame #21: main + 0xe0 (0x4b0d00 in /usr/bin/python3)\r\nptcc_1 | frame #22: __libc_start_main + 0xe7 (0x7f2f2b53cb97 in /lib/x86_64-linux-gnu/libc.so.6)\r\nptcc_1 | frame #23: _start + 0x2a (0x5b250a in /usr/bin/python3)``` ",
"I've also tried 3 different machines. All ubuntu 18.04, but with different GPUs sets. 2 Tesla V100-SXM2, 2 P100-SXM2, and 2 Tesla M40, but still get the same error.",
"Can you install the repo from source and try again? There have been some issues with PyTorch upstream that Julien addressed here: https://github.com/huggingface/transformers/pull/4300. So you can try with the latest master branch.",
"@BramVanroy, that merge request appears to have been merged prior to v2.10.0 release. I've installed both `v2.10.0` and `master` from source and unfortunately get the same error above when I tried to train a model distributed using mixed precision. ",
"The one thing I can think of that you can try is specifically setting the current device for each process.\r\n\r\nCan you try cloning the library and installing in dev mode, and adding a line here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/0866669e751bef636fa693b704a28c1fea9a17f3/examples/language-modeling/run_language_modeling.py#L134-L136\r\n\r\nSo that it looks like this:\r\n\r\n```python\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n torch.cuda.set_device(training_args.device)\r\n if data_args.eval_data_file is None and training_args.do_eval:\r\n```\r\n\r\n",
"Thanks @BramVanroy , you suggestion worked. I really appreciate it.",
"Re-opening so that we can close this in a PR.",
"@BramVanroy, while your suggestion works for multiple GPUs. I get the following error when trying to use a single GPU.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/ptcc/run_language_modeling.py\", line 283, in <module>\r\n main()\r\n File \"/ptcc/run_language_modeling.py\", line 136, in main\r\n torch.cuda.set_device(training_args.device)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py\", line 243, in set_device\r\n device = _get_device_index(device)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/cuda/_utils.py\", line 34, in _get_device_index\r\n 'or an integer, but got: '.format(device))\r\nValueError: Expected a cuda device with a specified index or an integer, but got:\r\n```\r\nand \r\n```\r\nTraceback (most recent call last):\r\n File \"/ptcc/run_glue.py\", line 230, in <module>\r\n main()\r\n File \"/ptcc/run_glue.py\", line 78, in main\r\n torch.cuda.set_device(training_args.device)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py\", line 243, in set_device\r\n device = _get_device_index(device)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/cuda/_utils.py\", line 34, in _get_device_index\r\n 'or an integer, but got: '.format(device))\r\nValueError: Expected a cuda device with a specified index or an integer, but got:\r\n```",
"@CMobley7 Thanks for the update! I pushed another update to my PR, can you try that one out? When we are not using DDP (and local_rank is -1), we do not specify the GPU id to use. It's best to strictly select that main device, so now we select it by using index 0. (This will still work if you set different devices with CUDA_VISIBLE_DEVICES, it'll just select the first device available _in that environment_).",
"@BramVanroy , I can confirm that the changes made in https://github.com/huggingface/transformers/pull/4728 successfully fix the apex issues with both a single and multiple GPUs. I've tested on 3 different machines. All ubuntu 18.04, but with different GPUs sets. 2 Tesla V100-SXM2, 2 P100-SXM2, and 2 Tesla M40. Thanks for your help.",
"Thank you @CMobley7 for the extensive testing, this is very valuable. \r\n\r\nAnd thanks @BramVanroy for fixing! "
] | 1,590 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
`roberta-large`
Language I am using the model on (English, Chinese ...):
`English`
The problem arises when using:
* the official example scripts
The tasks I am working on is:
* Finetuning a LM with `run_language_modeling.py` and the SST-2 task with `run_glue.py`
* my own dataset
## To reproduce
If I run either of the following commands, I get the error included below. However, if I remove `--fp16`, everything works normally. Likewise, if I keep `--fp16` but run non-distributed, everything also works normally. So it appears there is an issue specifically with running `--fp16` in distributed mode. I haven't had an issue with this before, so I'm not sure what the problem is. Any ideas? Thanks in advance.
I installed apex in two different ways, but still get the same results.
```
# Install package required for fp16 computations
RUN git clone https://github.com/NVIDIA/apex.git \
&& cd apex \
&& python3 setup.py install --cuda_ext --cpp_ext
```
```
# Install package required for fp16 computations
RUN git clone https://github.com/NVIDIA/apex.git \
&& cd apex \
&& pip3 install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
```
python3 -m torch.distributed.launch --nproc_per_node 2 run_language_modeling.py --output_dir=/ptcc/shared/lm_roberta_20200528_164228 --model_type=roberta --do_train --train_data_file=/ptcc/data/train.txt --do_eval --eval_data_file=/ptcc/data/test.txt --evaluate_during_training --per_gpu_train_batch_size=2 --per_gpu_eval_batch_size=2 --learning_rate=5e-06 --model_name_or_path=roberta-large --mlm --max_steps=120000 --warmup_steps=10000 --save_steps=12000 --seed=42 --fp16 --logging_dir=/ptcc/shared/roberta_20200528_164228_tf_logs'
```
```
python3 -m torch.distributed.launch --nproc_per_node 2 run_glue.py --model_type roberta --task_name SST-2 --do_train --do_eval --evaluate_during_training --data_dir /ptcc/data/ --per_gpu_train_batch_size 2 --per_gpu_eval_batch_size 2 --learning_rate 1e-06 --output_dir clf_roberta_20200528_162937 --model_name_or_path /ptcc/shared/lm_roberta_20200528_113420 --num_train_epochs 2.0 --save_steps 1000 --seed 42 --fp16 --logging_dir=/ptcc/shared/roberta_20200528_162937_tf_logs
```
```
ptcc_1 | 05/28/2020 20:30:38 - INFO - transformers.trainer - Starting fine-tuning.
Epoch: 0%| | 0/2 [00:00<?, ?it/s] Traceback (most recent call last):
ptcc_1 | File "/ptcc/run_glue.py", line 228, in <module>
ptcc_1 | main()
ptcc_1 | File "/ptcc/run_glue.py", line 160, in main
ptcc_1 | model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
ptcc_1 | File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 470, in train
ptcc_1 | tr_loss += self._training_step(model, inputs, optimizer)
ptcc_1 | File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 577, in _training_step
ptcc_1 | scaled_loss.backward()
ptcc_1 | File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
ptcc_1 | next(self.gen)
ptcc_1 | File "/usr/local/lib/python3.6/dist-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/handle.py", line 127, in scale_loss
ptcc_1 | should_skip = False if delay_overflow_check else loss_scaler.update_scale()
ptcc_1 | File "/usr/local/lib/python3.6/dist-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/scaler.py", line 200, in update_scale
ptcc_1 | self._has_overflow = self._overflow_buf.item()
ptcc_1 | RuntimeError: CUDA error: an illegal memory access was encountered
ptcc_1 | /usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:114: UserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
ptcc_1 | "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
ptcc_1 | terminate called after throwing an instance of 'c10::Error'
ptcc_1 | what(): CUDA error: an illegal memory access was encountered (insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:771)
ptcc_1 | frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7f69777f6536 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
ptcc_1 | frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x7ae (0x7f6977a39fbe in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so)
ptcc_1 | frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f69777e6abd in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
ptcc_1 | frame #3: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x1d9 (0x7f69c3926ef9 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #4: c10d::Reducer::~Reducer() + 0x23a (0x7f69c391c84a in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #5: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7f69c38fb7c2 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f69c32be466 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #7: <unknown function> + 0x87146b (0x7f69c38fc46b in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #8: <unknown function> + 0x240500 (0x7f69c32cb500 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #9: <unknown function> + 0x24174e (0x7f69c32cc74e in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #10: /usr/bin/python3() [0x572a27]
ptcc_1 | frame #11: /usr/bin/python3() [0x54eef2]
ptcc_1 | frame #12: /usr/bin/python3() [0x588948]
ptcc_1 | frame #13: /usr/bin/python3() [0x5ad438]
ptcc_1 | frame #14: /usr/bin/python3() [0x5ad44e]
ptcc_1 | frame #15: /usr/bin/python3() [0x5ad44e]
ptcc_1 | frame #16: /usr/bin/python3() [0x56b276]
ptcc_1 | frame #17: PyDict_SetItemString + 0x153 (0x5709f3 in /usr/bin/python3)
ptcc_1 | frame #18: PyImport_Cleanup + 0x76 (0x4f2fc6 in /usr/bin/python3)
ptcc_1 | frame #19: Py_FinalizeEx + 0x5e (0x637e2e in /usr/bin/python3)
ptcc_1 | frame #20: Py_Main + 0x395 (0x638e95 in /usr/bin/python3)
ptcc_1 | frame #21: main + 0xe0 (0x4b0d00 in /usr/bin/python3)
ptcc_1 | frame #22: __libc_start_main + 0xe7 (0x7f69e4727b97 in /lib/x86_64-linux-gnu/libc.so.6)
ptcc_1 | frame #23: _start + 0x2a (0x5b250a in /usr/bin/python3)
```
## Environment info
- `transformers` version: 2.10.0
- Platform: Linux-5.3.0-26-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Y, 2 Tesla V100-SXM2
- Using distributed or parallel set-up in script?: Y, 2 Tesla V100-SXM2
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4657/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/4657/timeline | completed | null | null |
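For reference, a minimal sketch of the workaround that resolved #4657 above (see the thread and PR #4728): pin each process to its GPU right after argument parsing, falling back to index 0 when not running under DDP. The helper name is ours; the `set_device` calls follow the suggestions in the comments.

```python
import torch

def pin_process_to_gpu(local_rank: int) -> torch.device:
    # torch.distributed.launch assigns each process a local_rank >= 0;
    # local_rank == -1 means a single-process run, where we still pin
    # explicitly to the first visible GPU (the PR #4728 fix), since
    # torch.cuda.set_device() rejects an index-less cuda device.
    device = torch.device("cuda", local_rank if local_rank >= 0 else 0)
    torch.cuda.set_device(device)
    return device

# Intended to be called right after parse_args_into_dataclasses() and
# before model creation / amp.initialize in the example scripts.
```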
https://api.github.com/repos/huggingface/transformers/issues/4656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4656/comments | https://api.github.com/repos/huggingface/transformers/issues/4656/events | https://github.com/huggingface/transformers/pull/4656 | 626,805,005 | MDExOlB1bGxSZXF1ZXN0NDI0NzI3MDYw | 4,656 | Electra training from scratch | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4656?src=pr&el=h1) Report\n> Merging [#4656](https://codecov.io/gh/huggingface/transformers/pull/4656?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76779363160a598f130433209a77f8a747351b61&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `52.38%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4656?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4656 +/- ##\n==========================================\n- Coverage 77.38% 77.34% -0.04% \n==========================================\n Files 128 128 \n Lines 21071 21096 +25 \n==========================================\n+ Hits 16305 16316 +11 \n- Misses 4766 4780 +14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4656?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.37% <48.27%> (-0.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.42% <100.00%> (ø)` | |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.66% <100.00%> (+0.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4656/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4656?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4656?src=pr&el=footer). Last update [7677936...e50654c](https://codecov.io/gh/huggingface/transformers/pull/4656?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @Frozenwords, thanks for your comments! These were introduced in the latest commit, reverting part of that commit now.",
"It runs correctly now. If you run it, please let me know of the results!",
"Hey @LysandreJik !\r\n\r\nI'm in the process of training an electra-small model and I'm having some issues. \r\n\r\nI'm running the `run_electra_pretraining` script with the following command:\r\n<img width=\"694\" alt=\"Screenshot 2020-06-08 at 19 11 24\" src=\"https://user-images.githubusercontent.com/32191917/84064906-bf372300-a9c3-11ea-9d61-68d1c9644e2a.png\">\r\n\r\nHere are my generator and discriminator configurations:\r\n<img width=\"291\" alt=\"Screenshot 2020-06-08 at 18 35 30\" src=\"https://user-images.githubusercontent.com/32191917/84064927-c78f5e00-a9c3-11ea-9feb-fe3ead8f5afc.png\">\r\n<img width=\"290\" alt=\"Screenshot 2020-06-08 at 18 35 07\" src=\"https://user-images.githubusercontent.com/32191917/84064946-ce1dd580-a9c3-11ea-9707-8ce65712291d.png\">\r\n\r\nAfter making the fixes I suggested the script is running just fine but I suspect either an error with my configurations/tokenizer setup or a case of silent failure. Indeed, the training loss quickly goes down but then slightly increases and plateaus after only 2000 training steps (which is quite different from the training curves shown in the original electra repository).\r\n\r\n<img width=\"992\" alt=\"Screenshot 2020-06-04 at 09 58 47\" src=\"https://user-images.githubusercontent.com/32191917/84065306-79c72580-a9c4-11ea-9e65-cf6c3a5ec470.png\">\r\n\r\nThe only changes I've made to the script are the following:\r\n- Replace the `OpenWebTextDataset` by a `LineByLineTextDataset` \r\n- Set the CLS token ID to 5 in line 424 since I'm using Camembert's tokenizer\r\n\r\nI've been through the script a few times and I can't seem to find the potential issue. Maybe you've observed that behaviour in your end-to-end tests ?\r\n\r\n\r\n",
"Do you have any graph of your generator/discriminator accuracies?",
"I don't have any validation metrics at the moment since I'm using a `LineByLineTextDataset`. Indeed, `_prediction_loop` in the `Trainer` concatenates predictions per batches of sequences (line 804). However in my case sequences have variable lengths (dynamic padding is done at the batch level by the `DataCollatorForLanguageModeling`) thus a variable number of tokens are being masked resulting in a mismatch of shapes. I believe a fix would be to flatten predictions before concatenation.",
"Hey @LysandreJik, any news on the testing process ? If the model performs as expected on your end then there must be something wrong with my setup 🤔 ",
"I can't reproduce yet either unfortunately, still trying to find out what's wrong.",
"I am not sure if this is the right place to ask this question, so apologies in advance.\r\n\r\nwhy are position_ids fed twice here? \r\n1) [generator](https://github.com/huggingface/transformers/blob/e50654c03cd28e79bded1276774abe7572793a2c/examples/language-modeling/run_electra_pretraining.py#L434)\r\n2) [discriminator](https://github.com/huggingface/transformers/blob/e50654c03cd28e79bded1276774abe7572793a2c/examples/language-modeling/run_electra_pretraining.py#L457) \r\n\r\n```python\r\ngenerator_loss, generator_output = self.generator(\r\n masked_lm_inputs,\r\n attention_mask,\r\n token_type_ids,\r\n position_ids, # <--\r\n head_mask,\r\n position_ids, # <--\r\n masked_lm_labels=masked_lm_labels,\r\n )[:2]\r\n```\r\n",
"Is this planned for the next release of transformers lib?",
"@LysandreJik \r\nHello, I have a little confusion. In ELECTRA paper, word embedding, type embedding, position embedding are all shared. However in this pretraining code, it seems only word embedding shared. I'm not very sure, so is it a correct way to set the embedding?\r\nThank you.",
"> It runs correctly now. If you run it, please let me know of the results!\r\n\r\nHi. I’m not sure if we should use the call masked_lm_inputs.scatter_(-1, replace_with_mask_positions, masked_tokens) or use the mask_fill method in line 425. It seems that the current version is using the scatter call, is it ok or we should switch to the mask_fill call as suggested by @Frozenwords ? Thanks!",
"> Hey @LysandreJik !\r\n> \r\n> I'm in the process of training an electra-small model and I'm having some issues.\r\n\r\n@Frozenwords - any chance you could share the exact script/text you're running? I'd be happy to test this on my own dataset (sentences without spaces, requiring tokenization from scratch, followed by finetuning), but i'd like to make sure i'm using the right \"script\". (I haven't used HF much before, i'm a keras person :))",
"# Training a small Electra on custom data\r\n\r\nI have changed my data to have a fixed length (128), and now everything seems to work. However, I do not have anything to compare with. I am training on a small ARM device (Xavier AGX), and it will take a few days before training is done and I can benchmark the model :)\r\n\r\n## Traning with variable length tensors\r\n\r\nI think that the current version can not support dynamically padded/variable length tensors. I suspect this is what was giving me issues with earlier runs.\r\n\r\nAt least the ```mask_inputs``` function would probably have to change a bit. As I mentioned above, the fake tokens will get scattered to the padding tokens. Moreover, when choosing how many tokens will be masked in ```mask_inputs```, a single number, ```number_of_tokens_to_be_masked``` is calculated, based on the longest tensor. I think this would need to vary along the batch dimension, and the sample probabilities would also need to vary.\r\n\r\n## Data\r\n\r\nI use my own data. It is around 18.5gb raw and 34gb in precomputed tensors. I precompute and save tensors using the ```tokenizers``` library, much like the OpenWebText data loader. The data has been transliterated to ASCII characters, with the exception of special danish letters. The tensors all have length 128. The tokenizer is the word piece tokenizer. Evaluation is run on the same 1024 tensors each time.\r\n\r\n## Changes to the code\r\n\r\nAside from the custom data loader, I have only changed line 424, to use the token_id for my [CLS] token, instead of 101.\r\n\r\n\r\n## Script parameters and model configs\r\n\r\n```\r\npython3 ~/nvme/elektra/transformers/examples/language-modeling/run_electra_pretraining.py \\\r\n --output_dir ./model_electra/models_dense_128 \\\r\n --logging_dir ./model_electra/logging \\\r\n --generator_config_name generator_config.json \\\r\n --discriminator_config_name discriminator_config.json \\\r\n --tokenizer_name ./tokenizer/ \\\r\n --do_train \\\r\n --do_eval \\\r\n --evaluate_during_training \\\r\n --max_eval_steps 10 \\\r\n --danish_corpus_directory ./features_dense_128 \\\r\n --overwrite_output_dir \\\r\n --block_size 128 \\\r\n --num_tensors_per_file 65536 \\\r\n --fp16 \\\r\n --seed 31 \\\r\n --max_steps -1 \\\r\n --logging_steps 100 \\\r\n --save_steps 32768 \\\r\n --save_total_limit 20 \\\r\n --learning_rate 5e-4 \\\r\n --adam_epsilon 1e-6 \\\r\n --per_device_train_batch_size=64 \\\r\n --per_device_eval_batch_size=64 \\\r\n --num_train_epochs=1 \\\r\n --warmup_steps 10000\r\n```\r\n\r\n## Discriminator config\r\n```\r\n{\r\n \"architectures\": [\r\n \"ElectraForPreTraining\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"embedding_size\": 128,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 256,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 1024,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"electra\",\r\n \"num_attention_heads\": 4,\r\n \"num_hidden_layers\": 12,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 46997\r\n}\r\n```\r\n\r\n## Generator config \r\n\r\n```\r\n{\r\n \"architectures\": [\r\n \"ElectraForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"embedding_size\": 128,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 64,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 256,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"electra\",\r\n 
\"num_attention_heads\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 46997\r\n}\r\n```\r\n\r\n## Graphs\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"@EmilLaursen I had same requirement, i.e. to have different number of masked tokens per sample in batch, because of huge variation in my (non-padded) sentence lengths. This is how I modified the electra pretraining model:\r\n(I am training on a non-NLP application, so I won't share any metrics or losses as they won't mean much)\r\n\r\n```python\r\nclass CombinedModel(nn.Module):\r\n\r\n ...\r\n \r\n def mask_inputs_by_row(\r\n self, input_ids: torch.Tensor, tokens_to_ignore, proposal_distribution=1.0,\r\n ):\r\n input_ids = input_ids.clone()\r\n inputs_which_can_be_masked = torch.ones_like(input_ids)\r\n for token in tokens_to_ignore:\r\n inputs_which_can_be_masked -= torch.eq(input_ids, token).long()\r\n\r\n total_number_of_tokens = input_ids.shape[-1]\r\n\r\n # Identify the number of tokens to be masked, which should be: 1 < num < max_predictions per seq.\r\n # It is set to be: n_tokens * mask_probability, but is truncated if it goes beyond bounds.\r\n num_mask_per_row = (inputs_which_can_be_masked.sum(dim=1) *\r\n self.mask_probability).type(torch.long)\r\n\r\n device = inputs_which_can_be_masked.device\r\n\r\n number_of_tokens_to_be_masked = torch.max(\r\n torch.tensor(1).to(device),\r\n torch.min(\r\n torch.min(\r\n torch.tensor(self.max_predictions_per_sequence,\r\n dtype=torch.long),\r\n torch.tensor(int(total_number_of_tokens *\r\n self.mask_probability), dtype=torch.long),\r\n ).to(device),\r\n num_mask_per_row\r\n )\r\n )\r\n\r\n # The probability of each token being masked\r\n sample_prob = proposal_distribution * inputs_which_can_be_masked\r\n sample_prob /= torch.sum(sample_prob, dim=1).view(-1, 1)\r\n # At this point each row should sum to 1. \r\n # i.e. all maskable tokens treated equally (equal opportunity)\r\n \r\n masked_lm_positions = torch.full_like(sample_prob, False).type(torch.bool)\r\n # Not sure if there is a way around using a for loop\r\n for i in range(sample_prob.size(0)):\r\n masked = sample_prob[i].multinomial(\r\n number_of_tokens_to_be_masked[i])\r\n masked_lm_positions[i, masked] = True\r\n \r\n return masked_lm_positions\r\n\r\n def forward(\r\n self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, labels=None,\r\n ):\r\n # get the masked positions as well as their original values\r\n masked_lm_positions = self.mask_inputs_by_row(\r\n input_ids, [self.tokenizer.cls_token_id,\r\n self.tokenizer.sep_token_id,\r\n self.tokenizer.mask_token_id,\r\n self.tokenizer.pad_token_id],\r\n )\r\n\r\n # masked_lm_ids = masked_lm_positions * input_ids\r\n\r\n # # Of the evaluated tokens, 15% of those will keep their original tokens\r\n # replace_with_mask_positions = masked_lm_positions * (\r\n # torch.rand(masked_lm_positions.shape, device=masked_lm_positions.device) < (\r\n # 1 - self.mask_probability)\r\n # )\r\n\r\n masked_lm_inputs = input_ids.clone()\r\n # use a bool mask of positions we want to MASKed to insert MASK token ids\r\n # bool_masked_positions = (masked_lm_positions > 0)\r\n masked_lm_inputs[masked_lm_positions] = self.tokenizer.mask_token_id\r\n\r\n # create MASKED labels with real token ids in positions where the MASKed tokens\r\n # were inserted, -100 otherwise \r\n masked_lm_labels = torch.full_like(input_ids, -100)\r\n masked_lm_labels[masked_lm_positions] = input_ids[masked_lm_positions]\r\n\r\n generator_loss, generator_output = self.generator(\r\n masked_lm_inputs,\r\n attention_mask,\r\n token_type_ids,\r\n position_ids,\r\n head_mask,\r\n None, # position_ids,\r\n 
masked_lm_labels=masked_lm_labels,\r\n )[:2]\r\n\r\n # softmax the predictions\r\n fake_softmaxed = torch.softmax(generator_output, dim=-1)\r\n # At this point if we sum 3rd dim, we should get a tensor of ones, the same size as input\r\n # i.e. fake_softmaxed.sum(dim=2) == torch.ones_like(input_ids)\r\n\r\n # for each position in sentence, sample ONE token from the generator probability distribution\r\n # this is why the 3rd dim is ONE.\r\n fake_sampled = torch.zeros_like(input_ids).view(\r\n input_ids.shape[0], input_ids.shape[1], 1)\r\n\r\n # multinomial cannot be applid to 3d array. so loop over examples\r\n # what we are doing here is for each position in a sentence,\r\n # we will sample a token using Generator's learned probability \r\n # distribution.\r\n for i in range(fake_softmaxed.shape[0]):\r\n fake_sampled[i] = fake_softmaxed[i,:,:].multinomial(1)\r\n\r\n # At this point we have generator samples for ALL the positions in the sentence.\r\n # But we only need the predictions for the positions corresponding to MASKED tokens\r\n\r\n # First, align shape with the input. Get rid of 3rd dim which was created to make\r\n # sampling easier\r\n fake_sampled = fake_sampled.view(input_ids.shape[0], input_ids.shape[1])\r\n\r\n # Discriminator input is same as generator, except instead of masked tokens\r\n # we insert tokens sampled from the generator distribution.\r\n fake_tokens = input_ids.clone()\r\n fake_tokens[masked_lm_positions] = fake_sampled[masked_lm_positions]\r\n\r\n # D labels are binary labels indicating whether the \r\n discriminator_labels = (labels != fake_tokens).int()\r\n\r\n discriminator_loss, discriminator_output = self.discriminator(\r\n fake_tokens,\r\n attention_mask,\r\n token_type_ids,\r\n position_ids,\r\n head_mask,\r\n None, # position_ids,\r\n labels=discriminator_labels,\r\n )[:2]\r\n\r\n discriminator_predictions = torch.round(\r\n (torch.sign(discriminator_output) + 1.0) * 0.5)\r\n\r\n total_loss = (self.discriminator_weight * discriminator_loss) + \\\r\n (self.generator_weight * generator_loss)\r\n\r\n\r\n # For evaluation, pass tensors of masked tokens and sampled tokens\r\n masked_input_ids = input_ids[masked_lm_positions]\r\n fake_sampled_ids = fake_sampled[masked_lm_positions]\r\n\r\n return (\r\n total_loss,\r\n (generator_output, discriminator_output),\r\n (masked_input_ids, fake_sampled_ids),\r\n (discriminator_labels, discriminator_predictions),\r\n )\r\n\r\n```",
"@EmilLaursen \r\nCan you train your model on glue to see the dev accuracy ? (especially matthew correlation for CoLA task)\r\nI used another training script and found even train loss 1x may still result in very bad glue accuracy, so I am wondering if your model score good acc on GLUE. ",
"@richarddwang \r\nMy model is trained on danish text, so I suspect the glue score would be terrible. I have assembled my own danish benchmark suite, with 3 objectives: pos-tag, NER, and text classification.\r\n\r\nFor what it is worth, my model scores about 4-5% F1 lower than BERT multilingual and the Danish BERT model (uncased and same size as bert-base) on the NER task (0.845 vs 0.795). On the pos tagging task, it is about 0.5 F1 lower, (0.977 vs 0.972). I have not tried the text classification task yet.\r\n\r\nTo me, this seems comparable with what is stated in the Electra paper, i.e. glue score about 4-5 points lower on the small model compared to the base-sized models.",
"Thanks! It is very kind of you to share such detailed results.",
"@LysandreJik This would be very helpful! Is there any plan to get this merged soon?",
"@LysandreJik I and Phil have implemented and verified Electra on the Glue downstream task.\r\n\r\nhttps://github.com/lucidrains/electra-pytorch\r\n\r\nThe forward pass is based on your replica of the TF forward pass (HF 3.0.2).\r\n\r\nThe remaining code is written to closely replicate the TF reference code.\r\n\r\nThat is, data-preprocessing (including ExampleBuilder), learning rate schedule, gradient clipping, etc. might differ.\r\n\r\nI believe the two main difference between this PR and our code might be:\r\n\r\n(1) For the \"small\" setting, for the generator, there is a discrepancy of the configuration reported in the paper and used in the TF reference code.\r\n(2) Data preprocessing with example builder.\r\n\r\nFor the TF reference, after 1M updates, the Glue MRPC accuracy is ~87%. Note, there is high variance in these measurements.\r\n\r\nFor our code, after 200k updates, the Glue MRPC accuracy is ~82%, which might approach ~87% accuracy after 1M updates.",
"Hi @enijkamp , nick work !\r\nCould you share both dev and test score for every task in GLUE ?\r\nIt will help a lot, thanks !",
"Hi, can I use `electra-trainer` branch to run pre-trianing, then save the model `checkpoint`.\r\n\r\nIs there problem to use the saved `checkpoint ` in the latest master branch?\r\n\r\nThanks~",
"Hi @LysandreJik @enijkamp and all !\r\n\r\nAfter develop and debug for a long time. My implementaion of ELECTRA training and finetuning finally successfully pretrains a model from scratch and replicates the results in the paper.\r\n\r\n|Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.|\r\n|---|---|---|---|---|---|---|---|---|---|\r\n|ELECTRA-Small|54.6|89.1|83.7|80.3|88.0|79.7|87.7|60.8|78.0|\r\n|ELECTRA-Small (electra_pytorch)|57.2|87.1|82.1|80.4|88|78.9|87.9|63.1|78.08\r\n\r\n💻Code: https://github.com/richarddwang/electra_pytorch\r\n📰Post: https://discuss.huggingface.co/t/electra-training-reimplementation-and-discussion/1004\r\n🐦Tweet: https://twitter.com/_RichardWang_/status/1302545459374772224?s=20 \r\n\r\nI've listed details that are easy to be overlooked when reimplementing ELECTRA, includes a bug in the original official implementation. I would be glad if this helps 😊",
"Hey @LysandreJik and all\r\nI was wondering if you have tried to run this with pytorch 1.6 ? I am currently on a device, where i'd have to reinstall from scratch to downgrade to a lower version of pytorch. I am getting some strange result, and I am considering if it is a compatibility issue, after trying with different learning rates, tokenizers and batch sizes. It seems that my generator does not learn anything, and does not converge. I used fixed-length tensors as @EmilLaursen of size 256. My discriminator does seem to converge, however performs poorly on downstream tasks (Danish NER and POSTAG tasks), which is suspect is caused by the generator not converging.\r\n### Update (18 October)\r\nI can confirm that this behavior is **not** observed when using PyTorch 1.5.0. Perhaps it has something to do with PyTorch \"Automatic Mixed Precision\" feature released in version 1.6. If anyone else is experiencing the same issue, then i recommend you use PyTorch < 1.6.\r\n### Changes to code\r\nCustom data loader (danish corpus), and changed the [CLS] token in line 426 to the one of my vocab.\r\n### Configs\r\n#### Discriminator\r\n```json \r\n{\r\n \"architectures\": [\r\n \"ElectraForPreTraining\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"embedding_size\": 128,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 256,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 1024,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"electra\",\r\n \"num_attention_heads\": 4,\r\n \"num_hidden_layers\": 12,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 45000\r\n}\r\n```\r\n#### Generator\r\n```json\r\n{\r\n \"architectures\": [\r\n \"ElectraForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"embedding_size\": 128,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 64,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 256,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"electra\",\r\n \"num_attention_heads\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 45000\r\n}\r\n```\r\n#### Run config\r\n```shell\r\npython3 pretraining/run_electra_pretraining.py \\\r\n\t--output_dir ./models/electra/small_256 \\\r\n\t--discriminator_config_name ./pretraining/config/discriminator_config.json \\\r\n\t--generator_config_name ./pretraining/config/generator_config.json \\\r\n\t--tokenizer_name ./models/tokenizers/ \\\r\n\t--do_train \\\r\n\t--do_eval \\\r\n\t--evaluate_during_training \\\r\n\t--max_eval_steps 16 \\\r\n\t--danish_feature_directory ./data/features_dense_256 \\\r\n\t--overwrite_output_dir \\\r\n\t--block_size 256 \\\r\n\t--num_tensors_per_file 65536 \\\r\n\t--fp16 \\\r\n\t--seed 1337 \\\r\n\t--max_steps -1 \\\r\n\t--logging_steps 200 \\\r\n\t--save_steps 20000 \\\r\n\t--save_total_limit 20 \\\r\n\t--learning_rate 2.5e-4 \\\r\n\t--adam_epsilon 1e-6 \\\r\n\t--per_device_train_batch_size=64 \\\r\n\t--per_device_eval_batch_size=64 \\\r\n\t--num_train_epochs=1 \\\r\n\t--warmup_steps 10000\r\n```\r\n### Logged metrics\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"I am wondering when will \"electra training from scratch\" feature be released? ",
"This PR is unfortunately in a stale state, with no projects to work on it further in the near future. You can take a look at this discussion: https://discuss.huggingface.co/t/electra-training-reimplementation-and-discussion/1004 or at the above comment by @richarddwang for a PyTorch implementation of the ELECTRA training from scratch.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,590 | 1,651 | 1,619 | MEMBER | null | # Electra Trainer
_**Still in the testing process. Feedback welcome!**_
This PR introduces:
## New features
- A new language modeling script based on the [ELECTRA pre-training method](https://github.com/google-research/electra).
### Combined model
- Combines `ElectraForMaskedLM` and `ElectraForPreTraining` with embedding sharing + custom masking/replaced token detection
### Lazy Dataset for OpenWebText
- Tokenizes text into multiple files
- Lazy loads files into memory
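For illustration, a minimal sketch of what such a file-sharded lazy dataset could look like; the `.pt` shard layout, class name, and directory argument are assumptions made for the example, not the script's actual implementation:

```python
import glob

import torch
from torch.utils.data import IterableDataset


class LazyShardedDataset(IterableDataset):
    """Streams pre-tokenized examples shard by shard, so only one
    shard file is resident in memory at any time."""

    def __init__(self, directory: str):
        # Assumption: each shard holds a (num_tensors, block_size) LongTensor.
        self.files = sorted(glob.glob(f"{directory}/*.pt"))

    def __iter__(self):
        for path in self.files:
            shard = torch.load(path)  # load exactly one shard
            for row in shard:
                yield row  # a single (block_size,) tensor of token ids
```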
## Trainer
- Introduces `IterableDataset` handling to the trainer
- New evaluation [_to be discussed_]:
Up to now, evaluation was only possible when the trainer was provided with `preds` and `label_ids`. The `compute_metrics` function allowed the user to compute specific metrics, but not to customize which inputs to use for this function (a sketch of that fixed interface follows this list).
This revamped evaluation works with DataParallel and with TPU.
- Better logging when limiting training to a specific number of steps (both training and evaluation)
- `max_eval_steps` flag
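For context, the pre-existing hook looks roughly like the sketch below: the trainer hands `compute_metrics` a fixed `(predictions, label_ids)` pair and nothing else, which is the limitation the revamped evaluation is meant to lift. The metric itself is a made-up example:

```python
import numpy as np

from transformers import EvalPrediction


def compute_metrics(p: EvalPrediction) -> dict:
    # Only p.predictions and p.label_ids are available; there is no hook
    # for passing other inputs (e.g. discriminator labels) to the metric.
    preds = np.argmax(p.predictions, axis=-1)
    return {"accuracy": float((preds == p.label_ids).mean())}
```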
## Bugfix
- Fixes a bug in the ELECTRA model for batch sizes of 1
Left to do:
- [ ] Post wandb graphs here as they complete
- [ ] Allow all models to be used as discriminator/generator (same tokenizer, same embedding matrix)
- [ ] Better way to handle dataset building when using TPUs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4656/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4656/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4656",
"html_url": "https://github.com/huggingface/transformers/pull/4656",
"diff_url": "https://github.com/huggingface/transformers/pull/4656.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4656.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4655/comments | https://api.github.com/repos/huggingface/transformers/issues/4655/events | https://github.com/huggingface/transformers/issues/4655 | 626,706,816 | MDU6SXNzdWU2MjY3MDY4MTY= | 4,655 | Tokenization_utils doesn't work with Pytorch-Lightning on 2.10.0 version | {
"login": "sirily",
"id": 26658676,
"node_id": "MDQ6VXNlcjI2NjU4Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26658676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sirily",
"html_url": "https://github.com/sirily",
"followers_url": "https://api.github.com/users/sirily/followers",
"following_url": "https://api.github.com/users/sirily/following{/other_user}",
"gists_url": "https://api.github.com/users/sirily/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sirily/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sirily/subscriptions",
"organizations_url": "https://api.github.com/users/sirily/orgs",
"repos_url": "https://api.github.com/users/sirily/repos",
"events_url": "https://api.github.com/users/sirily/events{/privacy}",
"received_events_url": "https://api.github.com/users/sirily/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you are using `pytorch-lightning` then you won't need to transfer data on GPU or TPU manually. `lightning` takes care of that for you",
"I have copied the method, which transfer data from pytorch-lightning, so you can reproduce the error.",
"Okay, sorry I misunderstood the question.",
"It is my bad English, I guess. Do you need more explanation, or the question is clear now?",
"The reason is in latest version `batch_encode_plus` returns an instance of `BatchEncoding` and in 2.8.0 it return a `dict`. So you can just do it like this\r\nin your collate function\r\n```\r\nreturn dict(tokens), torch.tensor(labels, dtype=torch.long)\r\n```\r\n\r\nI think tokenizer should handle it itself, so tagging @mfuntowicz ",
"BatchEncoding is indeed a UserDict, if you want to access the actual dict, you can use the data attribute:\n\n```python\nbe = tokenizer.batch_encode_plus(...)\nbe.data\n```",
"Thank you! So it's not a bug, but an expected behaviour"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert with pytorch-lightning
Language I am using the model on (English, Chinese ...): English
The problem arises when using
* my own modified scripts: Pytorch Dataset with tokenizer inside
The task I am working on is:
* my own task or dataset
## To reproduce
Take a look at [this colab link](https://colab.research.google.com/drive/1SH1xRzhNwgnSn382OLCoMFi_-8CCqk5y?usp=sharing).
I've copied the method from pytorch-lightning that shows an error on transformers 2.10.0 when processing a batch. Doing the same with transformers 2.8.0 causes no error.
```
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in __getattr__(self, item)
201
202 def __getattr__(self, item: str):
--> 203 return self.data[item]
204
205 def keys(self):
KeyError: 'cuda'
```
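For reference, a sketch of the workaround suggested in the comments above: unwrap the `BatchEncoding` into a plain dict inside the collate function so downstream device-transfer code sees an ordinary mapping. The dataset format and tokenizer arguments here are assumptions for the example:

```python
import torch


def collate_fn(batch, tokenizer):
    texts, labels = zip(*batch)
    tokens = tokenizer.batch_encode_plus(
        list(texts), pad_to_max_length=True, return_tensors="pt"
    )
    # In 2.10.0, batch_encode_plus returns a BatchEncoding (a UserDict)
    # whose __getattr__ raises KeyError for unknown attributes such as
    # "cuda", tripping up code that probes batches with getattr()/hasattr().
    # dict(tokens) (or tokens.data) restores the 2.8.0 behaviour.
    return dict(tokens), torch.tensor(labels, dtype=torch.long)
```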
## Expected behavior
No error
- `transformers` version: 2.10.0
- Platform: Linux
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4655/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4654/comments | https://api.github.com/repos/huggingface/transformers/issues/4654/events | https://github.com/huggingface/transformers/pull/4654 | 626,676,148 | MDExOlB1bGxSZXF1ZXN0NDI0NjIxOTYy | 4,654 | TfElectraForSequenceClassification | {
"login": "ypapanik",
"id": 22024955,
"node_id": "MDQ6VXNlcjIyMDI0OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22024955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ypapanik",
"html_url": "https://github.com/ypapanik",
"followers_url": "https://api.github.com/users/ypapanik/followers",
"following_url": "https://api.github.com/users/ypapanik/following{/other_user}",
"gists_url": "https://api.github.com/users/ypapanik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ypapanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ypapanik/subscriptions",
"organizations_url": "https://api.github.com/users/ypapanik/orgs",
"repos_url": "https://api.github.com/users/ypapanik/repos",
"events_url": "https://api.github.com/users/ypapanik/events{/privacy}",
"received_events_url": "https://api.github.com/users/ypapanik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik this should work (assuming tf >=2.2.0) let me know if I can do anything to help.",
"Hello !\r\n\r\nThanks a lot for this PR!! Can you rebase on master plz? It will be easier to review :)",
"@jplu (I think/hope) I just did it. Let me know if everything is ok. Great work by the way, congrats to all of Huggingface team!",
"Well no, you did a merge not a rebase :smile: can you revert your merge with the following command line:\r\n\r\n```\r\ngit reset --hard e4741ef\r\ngit fetch upstream\r\ngit pull --rebase upstream master\r\ngit push --force\r\n```\r\n\r\nAlso be careful because the tests are broken.\r\n",
":blush: yes sorry, I had done all the above but with a commit before the final push. Now it should (hopefully) be ok. I saw the tests are failing, but I was able to successfully use electra for some text classification tasks of interest.",
"Awesome!! Thanks for having re-pushed ^^\r\n\r\nNow there are several things to change, the output of the `call` method should be like the PyTorch one, it means `(loss), logits, (hidden_states), (attentions)`. Can you add the following updates:\r\n\r\n1) just before the return add:\r\n```python\r\nif labels is not None:\r\n loss = self.compute_loss(labels, logits)\r\n outputs = (loss,) + outputs\r\n```\r\n\r\n2) add the `labels` parameter to the method.\r\n\r\n3) The `TFElectraForSequenceClassification` should inherit from the `TFSequenceClassificationLoss` class.",
"Sorry my bad, the signature of the `call` method should look like this:\r\n```python\r\ndef call(\r\n self,\r\n input_ids=None,\r\n attention_mask=None,\r\n token_type_ids=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n labels=None,\r\n training=False,\r\n ):\r\n```",
"I also changed the call to self.electra",
"Here are the errors raised by the tests:\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n__________________ TFElectraModelTest.test_attention_outputs ___________________\r\n[gw0] linux -- Python 3.7.7 /usr/local/bin/python\r\n\r\nself = <tests.test_modeling_tf_electra.TFElectraModelTest testMethod=test_attention_outputs>\r\n\r\n def test_attention_outputs(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n decoder_seq_length = (\r\n self.model_tester.decoder_seq_length\r\n if hasattr(self.model_tester, \"decoder_seq_length\")\r\n else self.model_tester.seq_length\r\n )\r\n encoder_seq_length = (\r\n self.model_tester.encoder_seq_length\r\n if hasattr(self.model_tester, \"encoder_seq_length\")\r\n else self.model_tester.seq_length\r\n )\r\n decoder_key_length = (\r\n self.model_tester.key_length if hasattr(self.model_tester, \"key_length\") else decoder_seq_length\r\n )\r\n encoder_key_length = (\r\n self.model_tester.key_length if hasattr(self.model_tester, \"key_length\") else encoder_seq_length\r\n )\r\n \r\n for model_class in self.all_model_classes:\r\n config.output_attentions = True\r\n config.output_hidden_states = False\r\n model = model_class(config)\r\n outputs = model(inputs_dict)\r\n attentions = [t.numpy() for t in outputs[-1]]\r\n self.assertEqual(model.config.output_attentions, True)\r\n self.assertEqual(model.config.output_hidden_states, False)\r\n> self.assertEqual(len(attentions), self.model_tester.num_hidden_layers)\r\nE AssertionError: 13 != 5\r\n\r\ntests/test_modeling_tf_common.py:324: AssertionError\r\n_________________ TFElectraModelTest.test_hidden_states_output _________________\r\n[gw0] linux -- Python 3.7.7 /usr/local/bin/python\r\n\r\nself = <tests.test_modeling_tf_electra.TFElectraModelTest testMethod=test_hidden_states_output>\r\n\r\n def test_hidden_states_output(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n for model_class in self.all_model_classes:\r\n config.output_hidden_states = True\r\n config.output_attentions = False\r\n model = model_class(config)\r\n outputs = model(inputs_dict)\r\n hidden_states = [t.numpy() for t in outputs[-1]]\r\n self.assertEqual(model.config.output_attentions, False)\r\n self.assertEqual(model.config.output_hidden_states, True)\r\n> self.assertEqual(len(hidden_states), self.model_tester.num_hidden_layers + 1)\r\nE AssertionError: 13 != 6\r\n\r\ntests/test_modeling_tf_common.py:369: AssertionError\r\n______________________ TFElectraModelTest.test_save_load _______________________\r\n[gw0] linux -- Python 3.7.7 /usr/local/bin/python\r\n\r\nself = <tests.test_modeling_tf_electra.TFElectraModelTest testMethod=test_save_load>\r\n\r\n def test_save_load(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n for model_class in self.all_model_classes:\r\n model = model_class(config)\r\n outputs = model(inputs_dict)\r\n \r\n with tempfile.TemporaryDirectory() as tmpdirname:\r\n model.save_pretrained(tmpdirname)\r\n model = model_class.from_pretrained(tmpdirname)\r\n after_outputs = model(inputs_dict)\r\n \r\n> self.assert_outputs_same(after_outputs, outputs)\r\n\r\ntests/test_modeling_tf_common.py:93: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_modeling_tf_common.py:154: in assert_outputs_same\r\n self.assertLessEqual(max_diff, 1e-5)\r\nE AssertionError: 0.24899402 not less than or equal 
to 1e-05\r\n```",
"Hello! Any news on this PR? :)",
"> Hello! Any news on this PR? :)\r\n\r\nHi, sorry I am a bit pressed at the moment, I'd be glad if you would want to take over, otherwise it might take some time for me to re-grab this.",
"Ok, no problem I will try to retake what you have done. Thanks a lot for the update.",
"Any news on this PR @ypapanik @jplu ? Can I help in some way ?",
"Sorry, no time on my side to work on this for now.",
"@maxibor The code should be working I have used it successfully to get a better acc than BERT, but some tests had failed. Meanwhile the main codebase has evolved and there should be some conflicts, easy to resolve probably. \r\nI don't have time to complete those two things now (tests failing, resolve conflicts), perhaps someone from HF should spare a few minutes? @LysandreJik ",
"Updated version right here https://github.com/huggingface/transformers/pull/6227",
"Thanks @jplu for opening the new PR, closing this."
] | 1,590 | 1,596 | 1,596 | NONE | null | This pull request adds functionality for sequence classification with Electra. The only missing bit is that I have set the activation function to "tanh" instead of the "gelu" used in the original implementation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4654/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4654",
"html_url": "https://github.com/huggingface/transformers/pull/4654",
"diff_url": "https://github.com/huggingface/transformers/pull/4654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4654.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4653/comments | https://api.github.com/repos/huggingface/transformers/issues/4653/events | https://github.com/huggingface/transformers/pull/4653 | 626,635,114 | MDExOlB1bGxSZXF1ZXN0NDI0NTkwNDIw | 4,653 | [Longformer] fix model name in examples | {
"login": "ibeltagy",
"id": 2287797,
"node_id": "MDQ6VXNlcjIyODc3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibeltagy",
"html_url": "https://github.com/ibeltagy",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions",
"organizations_url": "https://api.github.com/users/ibeltagy/orgs",
"repos_url": "https://api.github.com/users/ibeltagy/repos",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibeltagy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4653?src=pr&el=h1) Report\n> Merging [#4653](https://codecov.io/gh/huggingface/transformers/pull/4653?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b5015a2a0f4ea63035a877f5626cb0c3ce97e25d&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4653?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4653 +/- ##\n=======================================\n Coverage 77.19% 77.20% \n=======================================\n Files 128 128 \n Lines 21021 21021 \n=======================================\n+ Hits 16228 16230 +2 \n+ Misses 4793 4791 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4653?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4653/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `96.82% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4653/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4653/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4653?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4653?src=pr&el=footer). Last update [b5015a2...dfce20a](https://codecov.io/gh/huggingface/transformers/pull/4653?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | This PR fixes the model identifier of Longformer to the new standard format <organisation/model_name> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4653/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4653",
"html_url": "https://github.com/huggingface/transformers/pull/4653",
"diff_url": "https://github.com/huggingface/transformers/pull/4653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4653.patch",
"merged_at": 1590750755000
} |
https://api.github.com/repos/huggingface/transformers/issues/4652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4652/comments | https://api.github.com/repos/huggingface/transformers/issues/4652/events | https://github.com/huggingface/transformers/pull/4652 | 626,628,452 | MDExOlB1bGxSZXF1ZXN0NDI0NTg1NDE0 | 4,652 | [Community notebooks] add longformer-for-qa notebook | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4652?src=pr&el=h1) Report\n> Merging [#4652](https://codecov.io/gh/huggingface/transformers/pull/4652?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5e737018e1fcb22c8b76052058279552a8d6c806&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4652?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4652 +/- ##\n==========================================\n- Coverage 77.19% 77.19% -0.01% \n==========================================\n Files 128 128 \n Lines 21021 21021 \n==========================================\n- Hits 16228 16227 -1 \n- Misses 4793 4794 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4652?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4652/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4652/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4652/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.47% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4652?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4652?src=pr&el=footer). Last update [5e73701...88eab9f](https://codecov.io/gh/huggingface/transformers/pull/4652?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"One small comment, the model name was changed in a recent commit to `allenai/longformer-base-4096'`.\r\n",
"The notebook looks great - thanks @patil-suraj ! Apart from @ibeltagy's suggestion, I think it's great!",
"Great, thank you! I've updated the model paths."
] | 1,590 | 1,590 | 1,590 | MEMBER | null | This PR adds a community notebook to showcase how to fine-tune Longformer for the QA task.
@ibeltagy @patrickvonplaten Please provide feedback if you think this notebook can be further improved.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4652/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4652",
"html_url": "https://github.com/huggingface/transformers/pull/4652",
"diff_url": "https://github.com/huggingface/transformers/pull/4652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4652.patch",
"merged_at": 1590697643000
} |
https://api.github.com/repos/huggingface/transformers/issues/4651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4651/comments | https://api.github.com/repos/huggingface/transformers/issues/4651/events | https://github.com/huggingface/transformers/pull/4651 | 626,573,271 | MDExOlB1bGxSZXF1ZXN0NDI0NTM5OTk1 | 4,651 | Update modeling_electra for adding ability to using electra as decoder | {
"login": "blizda",
"id": 9090456,
"node_id": "MDQ6VXNlcjkwOTA0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9090456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blizda",
"html_url": "https://github.com/blizda",
"followers_url": "https://api.github.com/users/blizda/followers",
"following_url": "https://api.github.com/users/blizda/following{/other_user}",
"gists_url": "https://api.github.com/users/blizda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blizda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blizda/subscriptions",
"organizations_url": "https://api.github.com/users/blizda/orgs",
"repos_url": "https://api.github.com/users/blizda/repos",
"events_url": "https://api.github.com/users/blizda/events{/privacy}",
"received_events_url": "https://api.github.com/users/blizda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4651?src=pr&el=h1) Report\n> Merging [#4651](https://codecov.io/gh/huggingface/transformers/pull/4651?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e444648a302dc8520beec96356a4bf500944355c&el=desc) will **decrease** coverage by `0.29%`.\n> The diff coverage is `20.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4651?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4651 +/- ##\n==========================================\n- Coverage 77.42% 77.13% -0.30% \n==========================================\n Files 128 128 \n Lines 21017 21041 +24 \n==========================================\n- Hits 16273 16229 -44 \n- Misses 4744 4812 +68 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4651?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4651/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `71.11% <20.00%> (-5.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4651/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.11% <0.00%> (-14.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4651/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4651/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.94% <0.00%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4651/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4651/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4651?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4651?src=pr&el=footer). Last update [e444648...9660de8](https://codecov.io/gh/huggingface/transformers/pull/4651?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | Adapts ELECTRA for use as a decoder (code borrowed from modeling_bert) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4651/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4651/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4651",
"html_url": "https://github.com/huggingface/transformers/pull/4651",
"diff_url": "https://github.com/huggingface/transformers/pull/4651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4651.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4650/comments | https://api.github.com/repos/huggingface/transformers/issues/4650/events | https://github.com/huggingface/transformers/pull/4650 | 626,566,033 | MDExOlB1bGxSZXF1ZXN0NDI0NTM0MTg1 | 4,650 | Allow pathlib.Path to be used on save_pretrained and save_vocabulary | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4650?src=pr&el=h1) Report\n> Merging [#4650](https://codecov.io/gh/huggingface/transformers/pull/4650?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e444648a302dc8520beec96356a4bf500944355c&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `71.42%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4650?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4650 +/- ##\n=======================================\n Coverage 77.42% 77.43% \n=======================================\n Files 128 128 \n Lines 21017 21019 +2 \n=======================================\n+ Hits 16273 16276 +3 \n+ Misses 4744 4743 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4650?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.54% <71.42%> (+0.02%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4650/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4650?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4650?src=pr&el=footer). Last update [e444648...d453872](https://codecov.io/gh/huggingface/transformers/pull/4650?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,651 | 1,596 | MEMBER | null | Related to #4541
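For context, a minimal sketch of the usage this enables (illustrative only; the model name and path are placeholders, not part of the PR):

```python
from pathlib import Path

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
save_dir = Path("./my-tokenizer")  # a pathlib.Path instead of a plain str
save_dir.mkdir(parents=True, exist_ok=True)
tokenizer.save_pretrained(save_dir)  # save_pretrained/save_vocabulary now accept Path objects
```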
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4650/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4650",
"html_url": "https://github.com/huggingface/transformers/pull/4650",
"diff_url": "https://github.com/huggingface/transformers/pull/4650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4650.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4649/comments | https://api.github.com/repos/huggingface/transformers/issues/4649/events | https://github.com/huggingface/transformers/pull/4649 | 626,543,588 | MDExOlB1bGxSZXF1ZXN0NDI0NTE1MTU4 | 4,649 | Update modeling_electra for adding ability to using electra as decoder | {
"login": "blizda",
"id": 9090456,
"node_id": "MDQ6VXNlcjkwOTA0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9090456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blizda",
"html_url": "https://github.com/blizda",
"followers_url": "https://api.github.com/users/blizda/followers",
"following_url": "https://api.github.com/users/blizda/following{/other_user}",
"gists_url": "https://api.github.com/users/blizda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blizda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blizda/subscriptions",
"organizations_url": "https://api.github.com/users/blizda/orgs",
"repos_url": "https://api.github.com/users/blizda/repos",
"events_url": "https://api.github.com/users/blizda/events{/privacy}",
"received_events_url": "https://api.github.com/users/blizda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | NONE | null | I added a small change (borrowed from modeling_bert). Now, ELECTRA is able to work as a decoder. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4649/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4649",
"html_url": "https://github.com/huggingface/transformers/pull/4649",
"diff_url": "https://github.com/huggingface/transformers/pull/4649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4649.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4648/comments | https://api.github.com/repos/huggingface/transformers/issues/4648/events | https://github.com/huggingface/transformers/pull/4648 | 626,531,912 | MDExOlB1bGxSZXF1ZXN0NDI0NTA1MzYz | 4,648 | Update modeling_electra for using it as decoder | {
"login": "blizda",
"id": 9090456,
"node_id": "MDQ6VXNlcjkwOTA0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9090456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blizda",
"html_url": "https://github.com/blizda",
"followers_url": "https://api.github.com/users/blizda/followers",
"following_url": "https://api.github.com/users/blizda/following{/other_user}",
"gists_url": "https://api.github.com/users/blizda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blizda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blizda/subscriptions",
"organizations_url": "https://api.github.com/users/blizda/orgs",
"repos_url": "https://api.github.com/users/blizda/repos",
"events_url": "https://api.github.com/users/blizda/events{/privacy}",
"received_events_url": "https://api.github.com/users/blizda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | NONE | null | I added some changes (borrowed from modeling_bert). Now, ELECTRA is able to work as a decoder. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4648/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4648",
"html_url": "https://github.com/huggingface/transformers/pull/4648",
"diff_url": "https://github.com/huggingface/transformers/pull/4648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4648.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4647/comments | https://api.github.com/repos/huggingface/transformers/issues/4647/events | https://github.com/huggingface/transformers/issues/4647 | 626,524,537 | MDU6SXNzdWU2MjY1MjQ1Mzc= | 4,647 | Encode-Decode after training, generation gives the same results regardless of the input | {
"login": "Mantisus",
"id": 34358312,
"node_id": "MDQ6VXNlcjM0MzU4MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/34358312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mantisus",
"html_url": "https://github.com/Mantisus",
"followers_url": "https://api.github.com/users/Mantisus/followers",
"following_url": "https://api.github.com/users/Mantisus/following{/other_user}",
"gists_url": "https://api.github.com/users/Mantisus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mantisus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mantisus/subscriptions",
"organizations_url": "https://api.github.com/users/Mantisus/orgs",
"repos_url": "https://api.github.com/users/Mantisus/repos",
"events_url": "https://api.github.com/users/Mantisus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mantisus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I trained a bert model from pretrained models. and the output embedding are all the same regardless of the input and attention mask during prediction. But when set model.train(), the model will give different embeddings for different input. I'm quite confused to be honest. I suppose that's the same problem?",
"Hi @Mantisus,\r\nMultiple bugs were fixed in #4680 . Can you please take a look whether this error persists?",
"Hi, @patrickvonplaten \r\n\r\nYes, the latest update fixed the generation issue.\r\n\r\nBut I have suspicions that I am not training the model correctly.\r\n\r\nAs the parameters decoder_input_is and lm_labels, I supplied the same values, the text to be generated. But logic suggests that in lm_labels we should submit text shifted 1 token to the right and starting with Pad.\r\nI tried to train the model in this way, but in this case the loss drops almost immediately to almost 0 and the model does not learn.\r\n\r\nI am somewhat confused about what format the training data should be organized in. I will be glad of any advice from you\r\n\r\nHowever, when training the model decoder_input_is == lm_labels, I get pretty good results even on a small dataset (12500), but I think they can be better.",
"Hi @Mantisus, \r\n\r\nDoing `decoder_input_is = lm_labels` is correct. Let's say you want to fine-tune a Bert2Bert for summarization. Then you should do the following (untested example):\r\n\r\n```python \r\nfrom transformers import EncoderDecoder, BertTokenizerFast\r\nbert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-uncased\", \"bert-base-uncased\")\r\ntokenizer = BertTokenizerFast.from_pretrained(\"bert-base-uncased\")\r\n\r\ncontext = \"\"\" New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.\r\nA year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.\r\nOnly 18 days after that marriage, she got hitched yet again. Then, Barrientos declared \"I do\" five more times, sometimes only within two weeks of each other.\r\nIn 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her \"first and only\" marriage.\r\nBarrientos, now 39, is facing two criminal counts of \"offering a false instrument for filing in the first degree,\" referring to her false statements on the\r\n2010 marriage license application, according to court documents.\r\nProsecutors said the marriages were part of an immigration scam.\r\nOn Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.\r\nAfter leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective\r\nAnnette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.\r\nAll occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.\r\nProsecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.\r\nAny divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.\r\nThe case was referred to the Bronx District Attorney\\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\\'s\r\nInvestigation Division. Seven of the men are from so-called \"red-flagged\" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.\r\nHer eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.\r\nIf convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.\r\n\"\"\"\r\n\r\nsummary = \"'Liana Barrientos has been married 10 times, sometimes within two weeks of each other. Prosecutors say the marriages were part of an immigration scam. She is believed to still be married to four men, and at one time, she was married to eight men at once. 
Her eighth husband was deported in 2006 to his native Pakistan.\"\r\n\r\ninput_ids = tokenizer.encode(context, return_tensors=\"pt\")\r\ndecoder_input_ids = tokenizer.encode(summary, return_tensors=\"pt\")\r\n\r\nloss, *args = bert2bert(input_ids=input_ids, decoder_input_ids=decoder_input_ids, lm_labels=decoder_input_ids)\r\n```\r\n\r\nThe reason that you don't have to shift the `lm_labels` is that Bert does that automatically for you here: https://github.com/huggingface/transformers/blob/0866669e751bef636fa693b704a28c1fea9a17f3/src/transformers/modeling_bert.py#L951\r\n\r\nBTW, the summary example was just taken from: https://github.com/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb",
"The best way for us to check your code if it's a longer training setup is to provide a google colab which we can copy and tweak ourselves :-) ",
"Great, thanks for the example @patrickvonplaten \r\n\r\nIt is convenient that BERT takes care of everything.\r\n\r\nThe code that I use for training is not much different from the example that I threw above. The only thing is that since I use Google Ecolab for training, I wrapped the creation of input Tensors in generators, in order to reduce RAM consumption on large datasets.\r\n\r\nhttps://colab.research.google.com/drive/1uVP09ynQ1QUmSE2sjEysHjMfKgo4ssb7?usp=sharing",
"> Great, thanks for the example @patrickvonplaten\r\n> \r\n> It is convenient that BERT takes care of everything.\r\n> \r\n> The code that I use for training is not much different from the example that I threw above. The only thing is that since I use Google Ecolab for training, I wrapped the creation of input Tensors in generators, in order to reduce RAM consumption on large datasets.\r\n> \r\n> https://colab.research.google.com/drive/1uVP09ynQ1QUmSE2sjEysHjMfKgo4ssb7?usp=sharing\r\n\r\nI am doing something similar to Mantisus, but fine tuning on a large dataset and trying to do it in parallel. My code is actually quite similar to his google colab-- but I am trying to wrap the model in torch.nn.DataParallel so that I can up the batch size to 32 and use two GPU'S. I can get the training to work, as far as i can tell, but since the generate function is only exposed to the underlying model, when I try to run generate, I get blank tokens as output. I must be doing something wrong.\r\n\r\n```python\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\nif(multi_gpu):\r\n bert2bert_o = EncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-cased\", \"bert-base-cased\")\r\n bert2bert = torch.nn.DataParallel(bert2bert_o)\r\nelse:\r\n bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(\"bert-base-cased\", \"bert-base-cased\")\r\n\r\n# convert to GPU model\r\n#bert2bert.to(device)\r\ntorch.cuda.set_device(0)\r\nbert2bert.cuda(0)\r\n# put in training mode\r\nbert2bert.train()\r\n```\r\n\r\nthen the rest of the code essentially looks like the @Mantisus code from google colab. How do I access the generate properly, and does anybody know if the same parameters pass all the way through to the underlying model (I would assume .train() and .eval() work?\r\n\r\nHere's the training block, I've adapted it to look like the @Mantisus code-- but the other goofy thing I don't understand is how to access the right loss, since the wrapped parallel model returns a squeezed tensor, so I've been doing this and I don't know if it's right:\r\n\r\n```python\r\n loss, outputs = bert2bert(input_ids = input_ids_encode,\r\n decoder_input_ids = input_ids_decode,\r\n attention_mask = attention_mask_encode,\r\n decoder_attention_mask = attention_mask_decode,\r\n labels = labels)[:2]\r\n\r\n if(multi_gpu):\r\n loss = loss[0]\r\n```\r\n\r\nAnd finally here's the code that's been augmented to attempt to use generate by accessing the module subclass of the wrapped model, that I am not sure is working properly:\r\n\r\n```python\r\nbert2bert.eval()\r\ntest_input = tokenizer.encode([\"This is a test!\"], return_tensors='pt')\r\nwith torch.no_grad(): \r\n generated = bert2bert.module.generate(test_input, \r\n decoder_start_token_id=bert2bert.module.config.decoder.pad_token_id,\r\n do_sample=True, \r\n max_length=100, \r\n top_k=200, \r\n top_p=0.75, \r\n num_return_sequences=10)\r\n```\r\n\r\nThank you. This is all really great stuff by the way.\r\n",
"Hey @HodorTheCoder, \r\n\r\nSorry for the late reply. I have been working on the encoder-decoder framework and verified \r\nthat it works, but only on single GPU training. \r\n\r\nThis model + model card shows how to train a Bert2Bert model and how it should be used: \r\nhttps://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16\r\n\r\nRegarding your code, why do you do \r\n```python\r\nbert2bert.module.generate(...)\r\n``` \r\ninstead of just doing \r\n```python \r\nbert2bert.generate(...)\r\n```\r\n? \r\n\r\nThe encoder decoder inherits from `PretrainedModel` and thus has direct access to `generate(...)`, see here: \r\nhttps://github.com/huggingface/transformers/blob/0b6c255a95368163d2b1d37635e5ce5bdd1b9423/src/transformers/modeling_encoder_decoder.py#L29\r\n.\r\nAlso no need to wrap everything into the `torch.no_grad()` context -> `generate()` is always in `no_grad` mode.\r\n\r\nHope this helps! I will be off for the next two weeks - if it's urgent feel free to ping @sshleifer (hope it's fine to ping you here Sam ;-) )",
"Thank you so much for your work @patrickvonplaten ",
"Yes, all pings are welcome. We also have the https://discuss.huggingface.co/ if you want some hyperparameter advice!",
"@patrickvonplaten \r\n\r\nThanks for your response! I successfully trained a bert2bert EncoderDecoderModel wrapped in torch.nn.DataParallel. I could only fit a batchsize of 16 on a single Titan XP, but was able to train a batchsize of 32 using two of them. \r\n\r\nYou may well be right about the generate propagating properly, and I think when I tried that initially I wasn't training properly (I wasn't updating the loss and optimizer in between batches and there was zero conversion.)\r\n\r\nWhat I ended up doing was training, saving the modeule state dict, and then reloading on a single GPU for inference. Worked great.\r\n\r\nComing from mainly Tensorflow it took me a while to understand how to get torch to do what I wanted but the huggingface documentation has been great, and hopefully, anybody searching will come across these posts.\r\n\r\nALSO:\r\n\r\nTo anybody else who it was not immediately obvious to when converting to use a parallel model, you have to mean() the loss or it won't take the loss of both GPU's into account when calculating grads/opt. So, in my previous example, I erroneously had loss[0] which isn't right-- I changed it to the following training block that properly uses the loss. It is setup on a flag that I set as input if I want to train on one or two gpu's (multigpu). Below is an abstracted code block.\r\n\r\nFYI: I definitely get better results training on batchsize=32 as opposed to 16. Couldn't fit batchsize=64 on the GPU's, might be time to upgrade to some Titan RTX. Anybody got $5k?\r\n\r\n```python\r\ntokenizer = BertTokenizer.from_pretrained(case_selection)\r\nif(multi_gpu):\r\n bert2bert_o = EncoderDecoderModel.from_encoder_decoder_pretrained(case_selection, case_selection)\r\n bert2bert = torch.nn.DataParallel(bert2bert_o)\r\nelse:\r\n bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(case_selection, case_selection)\r\n\r\n# set up adam optimizer\r\nparam_optimizer = list(bert2bert.named_parameters())\r\nno_decay = ['bias', 'gamma', 'beta']\r\n# seperate decay\r\noptimizer_grouped_parameters = [\r\n {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],\r\n 'weight_decay_rate': 0.01},\r\n {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],\r\n 'weight_decay_rate': 0.0}\r\n]\r\n# create optimizer object\r\noptimizer = AdamW(optimizer_grouped_parameters, lr=3e-5)\r\n\r\nnum_epochs=4\r\nfor epoch in range(num_epochs):\r\n start = datetime.datetime.now()\r\n batches = batch_generator(tokenizer, input_text, target_text, batch_size=batch_size)\r\n\r\n # enumerate over the batch yield function\r\n for step, batch in enumerate(batches):\r\n\r\n batch = tuple(t.to(device) for t in batch)\r\n\r\n input_ids_encode, attention_mask_encode, input_ids_decode, attention_mask_decode, labels = batch\r\n\r\n optimizer.zero_grad()\r\n bert2bert.zero_grad()\r\n \r\n loss, outputs = bert2bert(input_ids = input_ids_encode,\r\n decoder_input_ids = input_ids_decode,\r\n attention_mask = attention_mask_encode,\r\n decoder_attention_mask = attention_mask_decode,\r\n labels = labels)[:2]\r\n\r\n if(multi_gpu):\r\n train_loss_set.append(loss.mean().item())\r\n loss.mean().backward()\r\n display_loss = loss.mean().item()\r\n\r\n else:\r\n train_loss_set.append(loss.item())\r\n loss.backward()\r\n display_loss = loss.item()\r\n\r\n optimizer.step()\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,603 | 1,601 | NONE | null | # ❓ Questions & Help
Hi, everyone. I need help with an encoder-decoder model. I'm trying to train a model to generate a title for a short text.
I'm creating a basic encoder-decoder model with BERT:
```python
from transformers import EncoderDecoderModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
```
After training on my data, generation gives the same results regardless of the input when the model is in `model.eval()` mode. If I switch the model back to train mode, different results are generated.
The code I use for training:
```python
# Assumed imports for this snippet (not shown in the original):
from keras.preprocessing.sequence import pad_sequences  # or tensorflow.keras.preprocessing.sequence
from torch.utils.data import TensorDataset, DataLoader, RandomSampler
from transformers import AdamW
from IPython.display import clear_output
import matplotlib.pyplot as plt
# train_sentences, train_gt, max_len_abstract, max_len_title and device
# are assumed to be defined elsewhere.
tokenized_texts = [tokenizer.tokenize(sent) for sent in train_sentences]
tokenized_gt = [tokenizer.tokenize(sent) for sent in train_gt]
input_ids = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_texts]
input_ids = pad_sequences(
input_ids,
maxlen=max_len_abstract,
dtype="long",
truncating="post",
padding="post"
)
attention_masks = [[float(i>0) for i in seq] for seq in input_ids]
input_ids_decode = [tokenizer.convert_tokens_to_ids(x) for x in tokenized_gt]
input_ids_decode = pad_sequences(
input_ids_decode,
maxlen=max_len_title,
dtype="long",
truncating="post",
padding="post"
)
attention_masks_encode = [[float(i>0) for i in seq] for seq in input_ids]
attention_masks_decode = [[float(i>0) for i in seq] for seq in input_ids_decode]
input_ids = torch.tensor(input_ids)
input_ids_decode = torch.tensor(input_ids_decode)
attention_masks_encode = torch.tensor(attention_masks_encode)
attention_masks_decode = torch.tensor(attention_masks_decode)
train_data = TensorDataset(input_ids, input_ids_decode, attention_masks_encode, attention_masks_decode)
train_dataloader = DataLoader(train_data, sampler=RandomSampler(train_data), batch_size=4)
model.cuda()
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5)
model.train()
train_loss_set = []
train_loss = 0
for i in range(4):
for step, batch in enumerate(train_dataloader):
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_ids_de, b_attention_masks_encode, b_attention_masks_decode = batch
optimizer.zero_grad()
model.zero_grad()
loss, outputs = model(input_ids=b_input_ids, decoder_input_ids=b_input_ids_de, lm_labels=b_input_ids_de)[:2]
train_loss_set.append(loss.item())
loss.backward()
optimizer.step()
train_loss += loss.item()
clear_output(True)
plt.plot(train_loss_set)
plt.title("Training loss")
plt.xlabel("Batch")
plt.ylabel("Loss")
plt.show()
if step != 0 and step % 20 == 0:
            torch.save(model.state_dict(), model_weight)  # model_weight: checkpoint path, assumed defined elsewhere
print(f'Epoch {i}')
```
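For reference, the generation step that exhibits the problem is not shown above; a minimal sketch of what it looks like (illustrative only, reusing `model` and `tokenizer` from the first snippet, with `device` and `max_len_title` assumed defined as above):
```python
# Illustrative generation step (not the original code):
model.eval()
test_input = tokenizer.encode("some abstract text", return_tensors="pt").to(device)
generated = model.generate(
    test_input,
    decoder_start_token_id=model.config.decoder.pad_token_id,  # required for EncoderDecoderModel
    max_length=max_len_title,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```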
Am I doing something wrong? I would be grateful for any advice. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4647/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4646/comments | https://api.github.com/repos/huggingface/transformers/issues/4646/events | https://github.com/huggingface/transformers/pull/4646 | 626,483,698 | MDExOlB1bGxSZXF1ZXN0NDI0NDY1NDg1 | 4,646 | add longformer docs | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4646?src=pr&el=h1) Report\n> Merging [#4646](https://codecov.io/gh/huggingface/transformers/pull/4646?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e444648a302dc8520beec96356a4bf500944355c&el=desc) will **decrease** coverage by `1.57%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4646?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4646 +/- ##\n==========================================\n- Coverage 77.42% 75.85% -1.58% \n==========================================\n Files 128 128 \n Lines 21017 21017 \n==========================================\n- Hits 16273 15942 -331 \n- Misses 4744 5075 +331 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4646?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4646/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.51% <0.00%> (-78.31%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4646/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `76.92% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4646/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <0.00%> (-6.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4646/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.77% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4646/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4646?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4646?src=pr&el=footer). Last update [e444648...200b97a](https://codecov.io/gh/huggingface/transformers/pull/4646?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Docs were provided in another PR - closing."
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4646/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4646",
"html_url": "https://github.com/huggingface/transformers/pull/4646",
"diff_url": "https://github.com/huggingface/transformers/pull/4646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4646.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4645/comments | https://api.github.com/repos/huggingface/transformers/issues/4645/events | https://github.com/huggingface/transformers/pull/4645 | 626,458,366 | MDExOlB1bGxSZXF1ZXN0NDI0NDQ0NjM2 | 4,645 | [Longformer] Multiple choice for longformer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As mentioned in the paper, they used global attention on all answer candidates for WikiHop, so maybe we can use global attention on all choice tokens ?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4645?src=pr&el=h1) Report\n> Merging [#4645](https://codecov.io/gh/huggingface/transformers/pull/4645?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e444648a302dc8520beec96356a4bf500944355c&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4645?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4645 +/- ##\n==========================================\n+ Coverage 77.42% 77.44% +0.01% \n==========================================\n Files 128 128 \n Lines 21017 21046 +29 \n==========================================\n+ Hits 16273 16299 +26 \n- Misses 4744 4747 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4645?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.80% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.96% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.05% <100.00%> (+0.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `78.74% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4645?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4645?src=pr&el=footer). Last update [e444648...826c1b0](https://codecov.io/gh/huggingface/transformers/pull/4645?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks like multiple choice models are used for datasets with short input like swag (question + multiple choices), and datasets with long context like Wikihop (question + multiple choices + a long context). For the second use case, as @patil-suraj mentioned, we need global attention on all tokens of the question and the choices. \r\n\r\nFor the first use case, I am not sure how to handle it. Maybe global attention everywhere, but in this case, it is equivalent to n^2 attention. Do you know of a dataset of the first use case with inputs longer than swag?",
"@ibeltagy If we have such multiple use-cases then I think it would be better if we leave this to the user. Not entirely sure though.",
"@patil-suraj, we can leave it to the user, or we can just do as you suggested earlier, put global attention on the question and all choices, which should work. \r\n\r\n@patrickvonplaten, what do you think?\r\n",
"Yeah I think ideally we would leave it to the user with a `global_attention_mask` input argument that is automatically set when `None`. We could actually have this for all forward functions...I'll think a bit about it tomorrow!",
"We had some internal discussion and decided to change / extend the API of `Longformer` slightly. \r\n\r\nWe will have two \"mask\" arguments for every `forward()` function in Longformer: `attention_mask` (as usual composed of 0's and 1's only) and a `global_attention_mask` also composed of zeros and ones (0 => local attention; 1=> global attention). If `global_attention` is not defined by the user we create if necessary `LongformerForQuestionAnswering` and maybe `LongformerForMultipleChoice`.\r\n\r\nWe will keep the inner workings the same (merge `attention_mask` with `global_attention_mask`) but make sure the user has a more intuitive API, since people always think of masks as boolean tensors in this library. \r\nIs that ok for you @ibeltagy ? \r\n\r\nI will merge this PR and open a new one with the proposed changes."
] | 1,590 | 1,590 | 1,590 | MEMBER | null | ## Description:
- This PR adds Multiple Choice for Longformer as in: #4644 (Sorry for not telling you earlier @patil-suraj). @ibeltagy
- The documentation is updated for all models using MultipleChoice, since their `input_ids` have 3 dimensions. This is done by using `{}.format()` on `INPUTS_DOCSTRING`. @LysandreJik
- Adds a couple of models that were missing from the respective `models_page`. @LysandreJik
@ibeltagy Regarding global attention - I think we should probably add global attention automatically here, since the multiple inputs are flattened across the `num_choices` dimension and should then attend to each other. Maybe always add global attention on the first token of each input along the `num_choices` dimension? A minimal sketch is shown below.
- Global attention should probably still be implemented. Waiting for @ibeltagy's answer.
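A minimal sketch of the first-token idea (illustrative only; it follows the 0/1 `global_attention_mask` convention discussed in the comments above, where 1 marks global attention):

```python
import torch

# toy multiple-choice input: (batch_size, num_choices, seq_len)
input_ids = torch.randint(0, 100, (2, 4, 16))
# the model flattens the choices to (batch_size * num_choices, seq_len)
flat_input_ids = input_ids.view(-1, input_ids.size(-1))
global_attention_mask = torch.zeros_like(flat_input_ids)
global_attention_mask[:, 0] = 1  # global attention on the first token of every choice
```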
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4645/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4645",
"html_url": "https://github.com/huggingface/transformers/pull/4645",
"diff_url": "https://github.com/huggingface/transformers/pull/4645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4645.patch",
"merged_at": 1590752769000
} |
https://api.github.com/repos/huggingface/transformers/issues/4644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4644/comments | https://api.github.com/repos/huggingface/transformers/issues/4644/events | https://github.com/huggingface/transformers/pull/4644 | 626,458,053 | MDExOlB1bGxSZXF1ZXN0NDI0NDQ0Mzc3 | 4,644 | LongformerForMultipleChoice | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Haha, I shouldn't have touched this - I kinda already assumed you are working on it :D See PR here: #4645",
"Yes 😀 , I was just waiting for `LongformerForTokenClassifiction` to be merged",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4644?src=pr&el=h1) Report\n> Merging [#4644](https://codecov.io/gh/huggingface/transformers/pull/4644?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e444648a302dc8520beec96356a4bf500944355c&el=desc) will **decrease** coverage by `2.80%`.\n> The diff coverage is `27.58%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4644?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4644 +/- ##\n==========================================\n- Coverage 77.42% 74.62% -2.81% \n==========================================\n Files 128 128 \n Lines 21017 21046 +29 \n==========================================\n- Hits 16273 15705 -568 \n- Misses 4744 5341 +597 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4644?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.80% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.16% <27.58%> (-77.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `28.00% <0.00%> (-68.00%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `76.92% <0.00%> (-23.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `72.81% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.94% <0.00%> (-2.71%)` | :arrow_down: |\n| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/4644/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4644?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4644?src=pr&el=footer). Last update [e444648...5237ac2](https://codecov.io/gh/huggingface/transformers/pull/4644?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Closing in favor of #4645 - sorry should have communicated here better!"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | This PR adds `LongformerForMultipleChoice`, following `RobertaForMultipleChoice`.
@patrickvonplaten, @ibeltagy
Same question as before: do we need any automatic global attention here? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4644/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4644",
"html_url": "https://github.com/huggingface/transformers/pull/4644",
"diff_url": "https://github.com/huggingface/transformers/pull/4644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4644.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4643/comments | https://api.github.com/repos/huggingface/transformers/issues/4643/events | https://github.com/huggingface/transformers/issues/4643 | 626,339,249 | MDU6SXNzdWU2MjYzMzkyNDk= | 4,643 | question-answering examples bug in pipelines document | {
"login": "sakares",
"id": 1306031,
"node_id": "MDQ6VXNlcjEzMDYwMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1306031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sakares",
"html_url": "https://github.com/sakares",
"followers_url": "https://api.github.com/users/sakares/followers",
"following_url": "https://api.github.com/users/sakares/following{/other_user}",
"gists_url": "https://api.github.com/users/sakares/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sakares/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sakares/subscriptions",
"organizations_url": "https://api.github.com/users/sakares/orgs",
"repos_url": "https://api.github.com/users/sakares/repos",
"events_url": "https://api.github.com/users/sakares/events{/privacy}",
"received_events_url": "https://api.github.com/users/sakares/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | CONTRIBUTOR | null | Regarding
https://github.com/huggingface/transformers/blob/96f57c9ccb6363623005fb3f05166dfd7acb3f53/src/transformers/pipelines.py#L1739
the docstring example at that line causes a reproducible bug:
```python
from transformers import pipeline
nlp_qa = pipeline('question-answering', model='distilbert-base-cased-distilled-squad', tokenizer='bert-base-cased')
nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/my_name/Library/Python/3.7/lib/python/site-packages/transformers/pipelines.py", line 1188, in __call__
start, end = self.model(**fw_args)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
```
It should instead use `tokenizer='distilbert-base-cased'`:
```python
from transformers import pipeline
nlp_qa = pipeline('question-answering', model='distilbert-base-cased-distilled-squad', tokenizer='distilbert-base-cased')
nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?')
{'score': 0.9632966867654424, 'start': 42, 'end': 50, 'answer': 'New-York.'}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4643/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4643/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4642/comments | https://api.github.com/repos/huggingface/transformers/issues/4642/events | https://github.com/huggingface/transformers/pull/4642 | 626,284,158 | MDExOlB1bGxSZXF1ZXN0NDI0MzAyNDg5 | 4,642 | [Longformer] Notebook to train Longformer | {
"login": "ibeltagy",
"id": 2287797,
"node_id": "MDQ6VXNlcjIyODc3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibeltagy",
"html_url": "https://github.com/ibeltagy",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions",
"organizations_url": "https://api.github.com/users/ibeltagy/orgs",
"repos_url": "https://api.github.com/users/ibeltagy/repos",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibeltagy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4642?src=pr&el=h1) Report\n> Merging [#4642](https://codecov.io/gh/huggingface/transformers/pull/4642?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5e737018e1fcb22c8b76052058279552a8d6c806&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4642?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4642 +/- ##\n=======================================\n Coverage 77.19% 77.19% \n=======================================\n Files 128 128 \n Lines 21021 21021 \n=======================================\n Hits 16228 16228 \n Misses 4793 4793 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4642?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.47% <0.00%> (+0.23%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4642?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4642?src=pr&el=footer). Last update [5e73701...b86c516](https://codecov.io/gh/huggingface/transformers/pull/4642?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"BTW @patil-suraj - you can also add your longformer squad notebook to the community notebooks if you want",
"Made a copy and added some comments here: https://colab.research.google.com/drive/1kS2yerrLwLnc-hM6PlisFVaJT98kUKRV#scrollTo=4TTSvW8MlKJJ \r\n\r\nI think the use case is really nice! For better engagement with the notebook, I think it can be polished a bit in terms of small code refactoring / better descriptions. I left some TODO: there as suggestions :-) ",
"@patrickvonplaten, thanks for the review, the notebook is much better now after incorporating your suggestions. ",
"Looks great merging!"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | This PR adds a community notebook that demonstrates how we pretrained Longformer starting from the RoBERTa checkpoint. The same procedure can be followed to convert other existing pretrained models into their Long version.
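A condensed sketch of the two-step conversion idea the notebook implements (illustrative only, not the notebook's exact code; `make_roberta_long` and its defaults are placeholder names, and the full, tested version lives in the notebook):

```python
import copy

from transformers import RobertaForMaskedLM
from transformers.modeling_longformer import LongformerSelfAttention

def make_roberta_long(max_pos=4096, attention_window=512):
    model = RobertaForMaskedLM.from_pretrained("roberta-base")
    config = model.config
    # 1) grow the position-embedding table by repeatedly copying the
    #    pretrained embeddings (RoBERTa reserves positions 0 and 1)
    old_pos = model.roberta.embeddings.position_embeddings.weight
    max_pos += 2
    new_pos = old_pos.new_empty(max_pos, old_pos.size(1))
    new_pos[:2] = old_pos[:2]
    k = 2
    while k < max_pos:
        step = min(old_pos.size(0) - 2, max_pos - k)
        new_pos[k:k + step] = old_pos[2:2 + step]
        k += step
    model.roberta.embeddings.position_embeddings.weight.data = new_pos
    config.max_position_embeddings = max_pos
    # 2) swap each layer's self-attention for LongformerSelfAttention,
    #    reusing the pretrained query/key/value projections (the global
    #    projections are initialized from the same pretrained weights)
    config.attention_window = [attention_window] * config.num_hidden_layers
    for i, layer in enumerate(model.roberta.encoder.layer):
        long_attn = LongformerSelfAttention(config, layer_id=i)
        long_attn.query = layer.attention.self.query
        long_attn.key = layer.attention.self.key
        long_attn.value = layer.attention.self.value
        long_attn.query_global = copy.deepcopy(layer.attention.self.query)
        long_attn.key_global = copy.deepcopy(layer.attention.self.key)
        long_attn.value_global = copy.deepcopy(layer.attention.self.value)
        layer.attention.self = long_attn
    return model
```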
@patrickvonplaten, @patil-suraj, it would be great if you could also check the notebook. Any comments are welcome. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4642/reactions",
"total_count": 6,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4642/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4642",
"html_url": "https://github.com/huggingface/transformers/pull/4642",
"diff_url": "https://github.com/huggingface/transformers/pull/4642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4642.patch",
"merged_at": 1590698116000
} |
https://api.github.com/repos/huggingface/transformers/issues/4641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4641/comments | https://api.github.com/repos/huggingface/transformers/issues/4641/events | https://github.com/huggingface/transformers/pull/4641 | 626,275,750 | MDExOlB1bGxSZXF1ZXN0NDI0Mjk2MDIw | 4,641 | Fix onnx export input names order | {
"login": "RensDimmendaal",
"id": 9828683,
"node_id": "MDQ6VXNlcjk4Mjg2ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9828683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RensDimmendaal",
"html_url": "https://github.com/RensDimmendaal",
"followers_url": "https://api.github.com/users/RensDimmendaal/followers",
"following_url": "https://api.github.com/users/RensDimmendaal/following{/other_user}",
"gists_url": "https://api.github.com/users/RensDimmendaal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RensDimmendaal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RensDimmendaal/subscriptions",
"organizations_url": "https://api.github.com/users/RensDimmendaal/orgs",
"repos_url": "https://api.github.com/users/RensDimmendaal/repos",
"events_url": "https://api.github.com/users/RensDimmendaal/events{/privacy}",
"received_events_url": "https://api.github.com/users/RensDimmendaal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4641?src=pr&el=h1) Report\n> Merging [#4641](https://codecov.io/gh/huggingface/transformers/pull/4641?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/14cb5b35faeda7881341656aacf89d12a8a7e07b&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4641?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4641 +/- ##\n==========================================\n- Coverage 78.04% 78.01% -0.03% \n==========================================\n Files 123 123 \n Lines 20477 20477 \n==========================================\n- Hits 15981 15975 -6 \n- Misses 4496 4502 +6 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4641?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/hf\\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4641/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4641/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4641?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4641?src=pr&el=footer). Last update [14cb5b3...a4d4611](https://codecov.io/gh/huggingface/transformers/pull/4641?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@mfuntowicz what do you think of my proposed changes?\r\n\r\nI'm not sure why the CI fails on `TFAutoModelTest.test_from_identifier_from_model_type` as I haven't touched it.",
"LGTM! Thanks @RensDimmendaal for looking at this :)",
"@LysandreJik should we merge with the failing test? It seems totally unrelated "
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | This PR makes it possible to export custom BERT models to ONNX.
It resolves the issue addressed [here](https://github.com/huggingface/transformers/issues/4523#issuecomment-634920569).
TODO (the input-ordering idea is sketched below):
- [x] update the ensure_valid_input function
- [x] update the test for the ensure_valid_input function
- [x] update convert_pytorch
- [x] add a test for exporting a custom PyTorch model
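A rough sketch of the input-ordering idea behind `ensure_valid_input` (the helper name and exact shape below are assumptions for illustration): ONNX export passes inputs positionally, so the tokenizer's dict has to be reordered to match the model's `forward()` signature.

```python
import inspect

def order_inputs_for_export(model, encoded):
    # Keep only the inputs the model accepts, in forward-signature order.
    forward_params = inspect.signature(model.forward).parameters
    ordered_names = [name for name in forward_params if name in encoded]
    ordered_values = tuple(encoded[name] for name in ordered_names)
    return ordered_names, ordered_values
```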
@mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4641/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4641",
"html_url": "https://github.com/huggingface/transformers/pull/4641",
"diff_url": "https://github.com/huggingface/transformers/pull/4641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4641.patch",
"merged_at": 1591020769000
} |
https://api.github.com/repos/huggingface/transformers/issues/4640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4640/comments | https://api.github.com/repos/huggingface/transformers/issues/4640/events | https://github.com/huggingface/transformers/issues/4640 | 626,262,099 | MDU6SXNzdWU2MjYyNjIwOTk= | 4,640 | Error when loading a trained Encoder-Decoder model. | {
"login": "Xunzhuo",
"id": 48784001,
"node_id": "MDQ6VXNlcjQ4Nzg0MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48784001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xunzhuo",
"html_url": "https://github.com/Xunzhuo",
"followers_url": "https://api.github.com/users/Xunzhuo/followers",
"following_url": "https://api.github.com/users/Xunzhuo/following{/other_user}",
"gists_url": "https://api.github.com/users/Xunzhuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xunzhuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xunzhuo/subscriptions",
"organizations_url": "https://api.github.com/users/Xunzhuo/orgs",
"repos_url": "https://api.github.com/users/Xunzhuo/repos",
"events_url": "https://api.github.com/users/Xunzhuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xunzhuo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Xunzhuo,\r\nMultiple bugs were fixed in #4680 . Can you please take a look whether this error persists?",
"okay tks!"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug Report
When loading the config in configuration_auto.py, the model_type is expected to be of the form encoder-decoder,
but in configuration_encoder_decoder.py
model_type is of the form encoder_decoder, which raises a KeyError. (An illustrative reproduction follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4640/timeline | completed | null | null |
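An illustrative reproduction of the mismatch described in the record above, using a toy registry rather than the real `CONFIG_MAPPING`:

```python
# The auto-config registry keys on the hyphenated form...
CONFIG_MAPPING = {"encoder-decoder": "EncoderDecoderConfig"}

# ...but the config class declares the underscored form.
model_type = "encoder_decoder"
config_cls = CONFIG_MAPPING[model_type]  # raises KeyError: 'encoder_decoder'
```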
https://api.github.com/repos/huggingface/transformers/issues/4639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4639/comments | https://api.github.com/repos/huggingface/transformers/issues/4639/events | https://github.com/huggingface/transformers/issues/4639 | 626,253,663 | MDU6SXNzdWU2MjYyNTM2NjM= | 4,639 | How to generate prediction/answer from a custom model fined-tuned/trained for self-defined questions? | {
"login": "ZhiliWang",
"id": 16056781,
"node_id": "MDQ6VXNlcjE2MDU2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/16056781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhiliWang",
"html_url": "https://github.com/ZhiliWang",
"followers_url": "https://api.github.com/users/ZhiliWang/followers",
"following_url": "https://api.github.com/users/ZhiliWang/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhiliWang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhiliWang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhiliWang/subscriptions",
"organizations_url": "https://api.github.com/users/ZhiliWang/orgs",
"repos_url": "https://api.github.com/users/ZhiliWang/repos",
"events_url": "https://api.github.com/users/ZhiliWang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhiliWang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Considering you saved the model(the files you mentioned above) in 'model_dir' here's how you can do it.\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForQuestionAnswering\r\nimport torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"model_dir\")\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(\"model_dir\")\r\n\r\nquestion, text = \"Who was Jim Henson?\", \"Jim Henson was a nice puppet\"\r\nencoding = tokenizer.encode_plus(question, text, return_tensors=\"pt\")\r\n\r\ninput_ids = encoding[\"input_ids\"]\r\nattention_mask = encoding[\"attention_mask\"]\r\n\r\nstart_scores, end_scores = model(input_ids, attention_mask=attention_mask)\r\nall_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())\r\n\r\nanswer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]\r\nanswer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))\r\n```\r\n\r\nOr better yet, use the `pipline`\r\n```python\r\n\r\nfrom transformers import pipeline\r\n\r\nnlp = pipeline('question-answering', model='model_dir', tokenizer='model_dir')\r\n\r\nnlp({\r\n 'question': \"Who was Jim Henson?\"\r\n 'context': \"Jim Henson was a nice puppet\"\r\n})\r\n```",
"> Considering you saved the model(the files you mentioned above) in 'model_dir' here's how you can do it.\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer, AutoModelForQuestionAnswering\r\n> import torch\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(\"model_dir\")\r\n> model = AutoModelForQuestionAnswering.from_pretrained(\"model_dir\")\r\n> \r\n> question, text = \"Who was Jim Henson?\", \"Jim Henson was a nice puppet\"\r\n> encoding = tokenizer.encode_plus(question, text, return_tensors=\"pt\")\r\n> \r\n> input_ids = encoding[\"input_ids\"]\r\n> attention_mask = encoding[\"attention_mask\"]\r\n> \r\n> start_scores, end_scores = model(input_ids, attention_mask=attention_mask)\r\n> all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())\r\n> \r\n> answer_tokens = all_tokens[torch.argmax(start_scores) :torch.argmax(end_scores)+1]\r\n> answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))\r\n> ```\r\n> \r\n> Or better yet, use the `pipline`\r\n> \r\n> ```python\r\n> from transformers import pipeline\r\n> \r\n> nlp = pipeline('question-answering', model='model_dir', tokenizer='model_dir')\r\n> \r\n> nlp({\r\n> 'question': \"Who was Jim Henson?\"\r\n> 'context': \"Jim Henson was a nice puppet\"\r\n> })\r\n> ```\r\n\r\nHey Suraj thank you so much! My mistake was coding the pipeline incorrectly, but now it's doing fine. Thank you!\r\n\r\nHowever, I have another question if you don't mind: \r\nWhen I used your method above by coding the system from scratch without using \"pipeline\", I would get this \"index out of range\" error, unless i limit my input context within about 2,103 characters (my full input text contains 57,373 characters). Nevertheless, this issue never occurred in pipeline. Do you have any idea why this happened? I think I fine-tuned a large-distilled-BERT-uncase so sequence length should not be an issue here. Is there a significant difference between the pipeline and coding the components from scratch?",
"Hi @ZhiliWang, the pipeline is a high level abstraction that takes care of several things so that you don't have to. Overflowing sequences is a good example, where the pipeline will automatically truncate it. You can specify `max_seq_length=n` to the pipeline if you want to manage that parameter yourself.\r\n\r\nYou can do the same with `encode_plus` by specifying the `max_length` argument.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | I used run_squad.py to fine-tuned a pre-trained DistilledBERT, and I wonder how exactly to implement my model to answer a list of self-defined questions. It seems to me that the pipeline only works with one of the "regular" models (BERT DistBert, XLM, XLNET, etc.), or a model that has already been uploaded to the community. I spent a lot of time researching this but couldn't find a solution that suits the best for my case. Would anyone please explain, and if possibly, provide a demo? Here are all the files generated after my fine-tuning and evaluation:
config.json
pytorch_model.bin
training_args.bin
nbest_predictions_.json
special_tokens_map.json
vocab.txt
predictions_.json
tokenizer_config.json
Thank you!
SO Link (no answer yet): https://stackoverflow.com/questions/62057333/how-exactly-to-generate-prediction-answer-from-a-custom-model-fined-tuned-traine | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4639/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4638/comments | https://api.github.com/repos/huggingface/transformers/issues/4638/events | https://github.com/huggingface/transformers/pull/4638 | 626,227,433 | MDExOlB1bGxSZXF1ZXN0NDI0MjU4MzA1 | 4,638 | LongformerForTokenClassification | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4638?src=pr&el=h1) Report\n> Merging [#4638](https://codecov.io/gh/huggingface/transformers/pull/4638?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96f57c9ccb6363623005fb3f05166dfd7acb3f53&el=desc) will **increase** coverage by `0.04%`.\n> The diff coverage is `96.55%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4638?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4638 +/- ##\n==========================================\n+ Coverage 77.39% 77.43% +0.04% \n==========================================\n Files 128 128 \n Lines 20989 21018 +29 \n==========================================\n+ Hits 16244 16275 +31 \n+ Misses 4745 4743 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4638?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `73.80% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `96.83% <96.55%> (-0.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4638?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4638?src=pr&el=footer). Last update [96f57c9...c8bcd05](https://codecov.io/gh/huggingface/transformers/pull/4638?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"No. We didn't use any global attention for token classification tasks. It is possible that certain choices of global attention will improve results, but it is task-specific and better be left to the user. ",
"Great! Then I think it can be merged.",
"Awesome! LGTM"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | This PR adds `LongformerForTokenClassification`
@patrickvonplaten @ibeltagy
do we need any automatic global attention here? (A usage sketch follows this record.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4638/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4638/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4638",
"html_url": "https://github.com/huggingface/transformers/pull/4638",
"diff_url": "https://github.com/huggingface/transformers/pull/4638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4638.patch",
"merged_at": 1590662899000
} |
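A hedged usage sketch for the head added in the record above, with no global attention configured (matching the discussion that token classification used none); the tuple-style outputs reflect the library's API at the time:

```python
import torch
from transformers import LongformerTokenizer, LongformerForTokenClassification

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForTokenClassification.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("HuggingFace is based in NYC", return_tensors="pt")
outputs = model(**inputs)  # first element holds the per-token classification scores
predictions = torch.argmax(outputs[0], dim=2)
```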
https://api.github.com/repos/huggingface/transformers/issues/4637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4637/comments | https://api.github.com/repos/huggingface/transformers/issues/4637/events | https://github.com/huggingface/transformers/pull/4637 | 626,215,465 | MDExOlB1bGxSZXF1ZXN0NDI0MjQ4OTY1 | 4,637 | Movement pruning | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
},
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
},
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for the valuable inputs on quantization @mfuntowicz! --> `Saving_PruneBERT.ipynb`",
"> This is great. Didn't you want to include the `MaskedBertXXX` directly in the library, as was done with `DistilBERT`?\r\n> \r\n> We can also do it at a later date, as we'll need to write the tests (kudos on the docs, looks nice!)\r\n\r\nGood question. I think it's fair for the moment to leave it outside of the library itself: once a pre-trained model has been fine-pruned, it can be pruned once for all and you end up with a standard `BertForSequenceClassification` for instance. So I see it more as an \"intermediate tool\".",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4637?src=pr&el=h1) Report\n> Merging [#4637](https://codecov.io/gh/huggingface/transformers/pull/4637?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ec4cdfdd05d89b243d6d842fce019959291dd92a&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4637?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4637 +/- ##\n==========================================\n- Coverage 78.04% 78.04% -0.01% \n==========================================\n Files 124 124 \n Lines 20676 20676 \n==========================================\n- Hits 16137 16136 -1 \n- Misses 4539 4540 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4637?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4637?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4637?src=pr&el=footer). Last update [ec4cdfd...bee5496](https://codecov.io/gh/huggingface/transformers/pull/4637?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Looks great. Well organized & clearly communicated.\r\n> \r\n> Would it be worthwhile to add the released prunebert encoders to exBERT? I don't think we have any fine-tuned models there right now but it'd be cool to let people see how the movement pruning process affects the attention distributions.\r\n\r\nGood point! I'll have a look."
] | 1,590 | 1,591 | 1,591 | MEMBER | null | This PR adds the code to reproduce the results of our recent work on Movement pruning.
Some supplemental treats:
- A notebook showcasing how to efficiently store an extremely sparse model (a toy sketch follows this list)
- Sharing a couple of fine-pruned checkpoints (PruneBERT)
- Details of all hyper-parameters and results
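A toy version of the sparse-storage idea from the notebook mentioned above (the real recipe also quantizes the surviving weights; the file names below are illustrative assumptions):

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.load("pruned_weight.npy")  # hypothetical fine-pruned weight matrix
sparse = csr_matrix(dense)            # keep only the non-zero entries
np.savez_compressed(
    "pruned_weight_csr.npz",
    data=sparse.data, indices=sparse.indices,
    indptr=sparse.indptr, shape=sparse.shape,
)
```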
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4637/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4637/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4637",
"html_url": "https://github.com/huggingface/transformers/pull/4637",
"diff_url": "https://github.com/huggingface/transformers/pull/4637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4637.patch",
"merged_at": 1591017812000
} |
https://api.github.com/repos/huggingface/transformers/issues/4636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4636/comments | https://api.github.com/repos/huggingface/transformers/issues/4636/events | https://github.com/huggingface/transformers/pull/4636 | 626,192,126 | MDExOlB1bGxSZXF1ZXN0NDI0MjMxNDMy | 4,636 | Kill model archive maps | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Love this! is there a plan to allow aliases moving forward?\r\ntyping `'Helsinki-NLP/opus-mt-romance-en'` is not awful so I don't feel very strongly that we should, but interested in your thoughts.",
"Hmm no I don't see an obvious need for aliases personally.",
"Awesome!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4636?src=pr&el=h1) Report\n> Merging [#4636](https://codecov.io/gh/huggingface/transformers/pull/4636?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/88762a2f8cc409fe15a9e6a049fe69ae3197fc49&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4636?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4636 +/- ##\n==========================================\n- Coverage 77.12% 77.04% -0.09% \n==========================================\n Files 128 128 \n Lines 21071 20977 -94 \n==========================================\n- Hits 16252 16162 -90 \n+ Misses 4819 4815 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4636?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.33% <ø> (-0.15%)` | :arrow_down: |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2NhbWVtYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (-0.09%)` | :arrow_down: |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (-0.08%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| ... and [56 more](https://codecov.io/gh/huggingface/transformers/pull/4636/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4636?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4636?src=pr&el=footer). Last update [88762a2...a5f6993](https://codecov.io/gh/huggingface/transformers/pull/4636?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"For configs, I decided to leave the URLs (even if they're not used) to have quick reference and be able to open them from the code. We can always delete them later though.\r\n\r\nOk, merging this!"
] | 1,590 | 1,591 | 1,591 | MEMBER | null | Links to model weights inside the code are not useful anymore; on the contrary, defining those shortcuts to URLs in code tends to lead to discrepancies with the canonical naming scheme of models at huggingface.co
As an example, we cannot cleanly load [`facebook/bart-large-cnn`](https://huggingface.co/facebook/bart-large-cnn) from either huggingface.co or the inference API because it's aliased to bart-large-cnn in the code.
If this PR is approved, before merging I will:
- do the same thing for configs (easy)
- just rename the renamed model identifiers for tokenizers. (completely getting rid of "archive maps" for tokenizers is way harder because of the hardcoded maps like `PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES` and `PRETRAINED_INIT_CONFIGURATION`)
---
⚠️ **Note that the change in this PR is breaking for the names of the following models:**
```
"cl-tohoku/bert-base-japanese"
"cl-tohoku/bert-base-japanese-whole-word-masking"
"cl-tohoku/bert-base-japanese-char"
"cl-tohoku/bert-base-japanese-char-whole-word-masking"
"TurkuNLP/bert-base-finnish-cased-v1"
"TurkuNLP/bert-base-finnish-uncased-v1"
"wietsedv/bert-base-dutch-cased"
"flaubert/flaubert_small_cased"
"flaubert/flaubert_base_uncased"
"flaubert/flaubert_base_cased"
"flaubert/flaubert_large_cased"
all variants of "facebook/bart"
```
^^ You'll need to specify the organization prefix for those models from now on. However, no files were moved on S3 so this doesn't change anything for all current versions of the library.
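For instance, loading one of the models listed above now looks like this (a usage sketch, not part of the diff):

```python
from transformers import AutoModel, AutoTokenizer

# The organization prefix is required from now on.
tok = AutoTokenizer.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")
model = AutoModel.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")
# The bare shortcut "bert-base-finnish-cased-v1" no longer resolves.
```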
However, it's a breaking change, so we'll be sure to point it out in the next release. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4636/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4636",
"html_url": "https://github.com/huggingface/transformers/pull/4636",
"diff_url": "https://github.com/huggingface/transformers/pull/4636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4636.patch",
"merged_at": 1591105174000
} |
https://api.github.com/repos/huggingface/transformers/issues/4635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4635/comments | https://api.github.com/repos/huggingface/transformers/issues/4635/events | https://github.com/huggingface/transformers/issues/4635 | 626,191,613 | MDU6SXNzdWU2MjYxOTE2MTM= | 4,635 | 03-pipelines.ipynb on Colab: error on "Summarization" | {
"login": "ArtyomSalnikov",
"id": 66049691,
"node_id": "MDQ6VXNlcjY2MDQ5Njkx",
"avatar_url": "https://avatars.githubusercontent.com/u/66049691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArtyomSalnikov",
"html_url": "https://github.com/ArtyomSalnikov",
"followers_url": "https://api.github.com/users/ArtyomSalnikov/followers",
"following_url": "https://api.github.com/users/ArtyomSalnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/ArtyomSalnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArtyomSalnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArtyomSalnikov/subscriptions",
"organizations_url": "https://api.github.com/users/ArtyomSalnikov/orgs",
"repos_url": "https://api.github.com/users/ArtyomSalnikov/repos",
"events_url": "https://api.github.com/users/ArtyomSalnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArtyomSalnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,590 | 1,590 | 1,590 | NONE | null | ` ` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4635/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4634/comments | https://api.github.com/repos/huggingface/transformers/issues/4634/events | https://github.com/huggingface/transformers/issues/4634 | 626,178,795 | MDU6SXNzdWU2MjYxNzg3OTU= | 4,634 | tensorflow2_gpt2 Slow speed | {
"login": "only-yao",
"id": 36235579,
"node_id": "MDQ6VXNlcjM2MjM1NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/36235579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/only-yao",
"html_url": "https://github.com/only-yao",
"followers_url": "https://api.github.com/users/only-yao/followers",
"following_url": "https://api.github.com/users/only-yao/following{/other_user}",
"gists_url": "https://api.github.com/users/only-yao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/only-yao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/only-yao/subscriptions",
"organizations_url": "https://api.github.com/users/only-yao/orgs",
"repos_url": "https://api.github.com/users/only-yao/repos",
"events_url": "https://api.github.com/users/only-yao/events{/privacy}",
"received_events_url": "https://api.github.com/users/only-yao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"With tf. Varibable model for packaging\r\n```\r\n @tf.function(experimental_relax_shapes=True)\r\n def model_static(self, model_inputs):\r\n outputs = self(**model_inputs)\r\n return outputs\r\n\r\n```",
"Same problem here\r\n@only-yao Did you find a solution yet?",
"> @ only-yao这是同样的问题,您找到解决方案了吗?\r\n\r\nWrap the model with @tf.function,the model as a static diagram",
"Very interesting! Thanks for your code snippet @only-yao - I will take a closer look in a week or so :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # ❓ The speed of pytorch_gpt2 (based on transformers) is 4-5 times faster than that of tensorflow2_gpt2, which seems unreasonable. Where is my problem?
**pytorch_gpt2**
```
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.to(dev)
raw_text_2 = "And if so be ye can descrive what ye bear,"
inputs = tokenizer.encode(raw_text_2, add_special_tokens=False, return_tensors="pt")
inputs = inputs.to(dev)
generated = model.generate(inputs, top_k=0, max_length=512)
```
**tensorflow2_gpt2**
```
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2LMHeadModel.from_pretrained('gpt2')
raw_text = "And if so be ye can descrive what ye bear,"
tokens = tokenizer.encode(raw_text, return_tensors="tf")
output_ids = model.generate(tokens, top_k=0, max_length=512)
```
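For reference, the workaround suggested in the comments above wraps the forward pass in `tf.function` so it runs as a compiled static graph. This is a hedged sketch; the subclass and method name are the commenter's illustrative choices, not a library API:

```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel

class StaticTFGPT2(TFGPT2LMHeadModel):
    @tf.function(experimental_relax_shapes=True)
    def model_static(self, model_inputs):
        # Compiled call; avoids eager-mode overhead on repeated decoding steps.
        return self(**model_inputs)
```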
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4634/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4633/comments | https://api.github.com/repos/huggingface/transformers/issues/4633/events | https://github.com/huggingface/transformers/pull/4633 | 626,130,139 | MDExOlB1bGxSZXF1ZXN0NDI0MTg0NzU5 | 4,633 | Merge pull request #1 from huggingface/master | {
"login": "stjordanis",
"id": 4212985,
"node_id": "MDQ6VXNlcjQyMTI5ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4212985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stjordanis",
"html_url": "https://github.com/stjordanis",
"followers_url": "https://api.github.com/users/stjordanis/followers",
"following_url": "https://api.github.com/users/stjordanis/following{/other_user}",
"gists_url": "https://api.github.com/users/stjordanis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stjordanis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stjordanis/subscriptions",
"organizations_url": "https://api.github.com/users/stjordanis/orgs",
"repos_url": "https://api.github.com/users/stjordanis/repos",
"events_url": "https://api.github.com/users/stjordanis/events{/privacy}",
"received_events_url": "https://api.github.com/users/stjordanis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | NONE | null | - | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4633/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4633",
"html_url": "https://github.com/huggingface/transformers/pull/4633",
"diff_url": "https://github.com/huggingface/transformers/pull/4633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4633.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4632/comments | https://api.github.com/repos/huggingface/transformers/issues/4632/events | https://github.com/huggingface/transformers/pull/4632 | 626,095,495 | MDExOlB1bGxSZXF1ZXN0NDI0MTU4NDEx | 4,632 | Pipelines: miscellanea of QoL improvements and small features... | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,591 | 1,591 | MEMBER | null | ...needed for inference API.
See individual commits for descriptions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4632/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4632",
"html_url": "https://github.com/huggingface/transformers/pull/4632",
"diff_url": "https://github.com/huggingface/transformers/pull/4632.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4632.patch",
"merged_at": 1591170692000
} |
https://api.github.com/repos/huggingface/transformers/issues/4631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4631/comments | https://api.github.com/repos/huggingface/transformers/issues/4631/events | https://github.com/huggingface/transformers/issues/4631 | 626,087,281 | MDU6SXNzdWU2MjYwODcyODE= | 4,631 | Numpy format string issue in TFTrainer | {
"login": "VDCN12593",
"id": 29075149,
"node_id": "MDQ6VXNlcjI5MDc1MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/29075149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VDCN12593",
"html_url": "https://github.com/VDCN12593",
"followers_url": "https://api.github.com/users/VDCN12593/followers",
"following_url": "https://api.github.com/users/VDCN12593/following{/other_user}",
"gists_url": "https://api.github.com/users/VDCN12593/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VDCN12593/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VDCN12593/subscriptions",
"organizations_url": "https://api.github.com/users/VDCN12593/orgs",
"repos_url": "https://api.github.com/users/VDCN12593/repos",
"events_url": "https://api.github.com/users/VDCN12593/events{/privacy}",
"received_events_url": "https://api.github.com/users/VDCN12593/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"same here ",
"How did you resolve it? Do we have to wait till TFTrainer developer change the code in trainer_tf.py?\r\nIs there a way around it?",
"Hello,\r\n\r\nCan you give a sample of data and a command line with which I can reproduce the issue? Thanks!",
"> Hello,\r\n> \r\n> Can you give a sample of data and a command line with which I can reproduce the issue? Thanks!\r\n\r\nI run into the same problem with the exact setting as I posted here. #4664 (comment)\r\nAfter I fixed the TFTrainer parameter issue, the training started but it returned this error after the normal BERT log.\r\n\r\n",
"Sorry I don't succeed to reproduce the issue with the command line in #4664 for me it works. Which dataset are you using? Germeval?",
"yes, I followed all the process here: https://github.com/huggingface/transformers/tree/master/examples/token-classification",
"Sorry, impossible to reproduce the issue :(\r\n\r\nI tried with different NER datasets including germeval and everything goes well.",
"I would suggest to wait the next version of the TF Trainer to see if it solves your problem or not. It should arrives soon. Sorry :(",
"I am trying to reproduce it to see where the glitch is. Unfortunately colab gpu is too busy for me to get connected at the moment. I will post here once I locate the problem. \r\n\r\nNo worries. Thanks for bring out the TFTrainer!",
"I experienced the same issue while trying the latest **run_tf_ner.py**. I have almost no problem with the old version (months ago) of run_tf_ner.py and **utils_ner.py**, Trained several models and got very good predictions. But after update to the latest **run_tf_ner.py,** I got several problems: (1) logging_dir none (this already solved by passing the parameter) (2) the value of pad_token_label_id. In the old version I used this value was set to 0, but in the latest run_tf_ner.py it set to -1, but I got wrong prediction results if this set to -1. (3) The third issue is this. \r\n\r\nTo force the training process moving, I created a new class inherit from TFTrainer, modified the train method --> added except TypeError logger.info(\"Epoch {} Step {} Train Loss {}\".format(epoch, step, 'TypeError')) \r\n\r\nHere is the training_loss and trining_loss.numpy() printed\r\n\r\n<class 'tensorflow.python.framework.ops.EagerTensor'>\r\n<class 'numpy.ndarray'>\r\n[3.86757078e-04 6.49182359e-04 1.50194198e-01 1.72556902e-03\r\n 7.37545686e-03 7.55832903e-03 2.59326249e-01 1.65126711e-01\r\n 1.45479038e-01 2.91670375e-02 1.02433632e-03 1.09142391e-03\r\n 7.45586725e-03 1.56116625e-03 6.97672069e-02 6.09296076e-02\r\n 1.59586817e-02 2.96084117e-02 3.36027122e-04 2.67877331e-04\r\n 2.72625312e-02 3.24607291e-03 2.79245054e-04 8.95933714e-04\r\n 1.38876194e-05 4.55974305e-06 7.18232468e-06 6.49688218e-06\r\n 4.67895006e-06 4.67895188e-06 4.08290907e-06 5.72202407e-06\r\n 5.99023815e-06 5.48360913e-06 1.09671510e-05 1.32022615e-05\r\n 7.30153261e-06 4.67895097e-06 4.88756723e-06 4.73855425e-06\r\n 4.70875511e-06 5.33459615e-06 4.35112906e-06 8.13599218e-06\r\n 4.14251372e-06 3.48686262e-06 7.68894461e-06 4.14251281e-06\r\n 4.55974168e-06 4.29152169e-06 9.68567110e-06 2.68220538e-06\r\n 3.63587583e-06 4.14251235e-06 3.18884304e-06 4.38093048e-06\r\n 4.52994209e-06 4.70875284e-06 3.30805187e-06 5.63261574e-06\r\n 3.15904026e-06 6.55648546e-06 5.87103386e-06 4.14251190e-06\r\n 3.81468908e-06 3.39745884e-06 4.47033653e-06 6.49688172e-06\r\n 6.25846224e-06 4.08290816e-06 4.08290680e-06 3.69548002e-06\r\n 4.35112725e-06 3.60607328e-06 4.97697329e-06 6.88430828e-06\r\n 5.72202634e-06 4.79816072e-06 5.75182776e-06 6.43727981e-06\r\n 3.78488676e-06 1.53479104e-05 6.70549389e-06 7.03331716e-06\r\n 3.18884258e-06 7.18232604e-06 5.27499060e-06 6.07965376e-06\r\n 3.72528302e-06 9.03003547e-06 5.03657793e-06 6.43727435e-06\r\n 5.33459661e-06 4.85776036e-06 9.38766698e-06 4.11270958e-06\r\n 3.36765652e-06 5.42400539e-06 5.18558409e-06 6.73529667e-06\r\n 9.03001182e-06 4.47033699e-06 3.51666586e-06 5.15578267e-06\r\n 3.87429282e-06 3.39745884e-06 4.08290725e-06 7.48034654e-06\r\n 7.71875875e-06 3.75508489e-06 3.60607396e-06 3.72528302e-06\r\n 5.84123518e-06 2.89082072e-06 4.32132674e-06 6.37766652e-06\r\n 4.64915001e-06 7.03332262e-06 3.99350029e-06 9.14925931e-06\r\n 4.32132583e-06 5.66242352e-06 3.75508489e-06 6.10945517e-06\r\n 4.85776673e-06 5.60281842e-06 4.70875375e-06 3.75508534e-06]\r\ntf.Tensor(\r\n[3.86757078e-04 6.49182359e-04 1.50194198e-01 1.72556902e-03\r\n 7.37545686e-03 7.55832903e-03 2.59326249e-01 1.65126711e-01\r\n 1.45479038e-01 2.91670375e-02 1.02433632e-03 1.09142391e-03\r\n 7.45586725e-03 1.56116625e-03 6.97672069e-02 6.09296076e-02\r\n 1.59586817e-02 2.96084117e-02 3.36027122e-04 2.67877331e-04\r\n 2.72625312e-02 3.24607291e-03 2.79245054e-04 8.95933714e-04\r\n 1.38876194e-05 4.55974305e-06 7.18232468e-06 6.49688218e-06\r\n 4.67895006e-06 4.67895188e-06 4.08290907e-06 5.72202407e-06\r\n 
5.99023815e-06 5.48360913e-06 1.09671510e-05 1.32022615e-05\r\n 7.30153261e-06 4.67895097e-06 4.88756723e-06 4.73855425e-06\r\n 4.70875511e-06 5.33459615e-06 4.35112906e-06 8.13599218e-06\r\n 4.14251372e-06 3.48686262e-06 7.68894461e-06 4.14251281e-06\r\n 4.55974168e-06 4.29152169e-06 9.68567110e-06 2.68220538e-06\r\n 3.63587583e-06 4.14251235e-06 3.18884304e-06 4.38093048e-06\r\n 4.52994209e-06 4.70875284e-06 3.30805187e-06 5.63261574e-06\r\n 3.15904026e-06 6.55648546e-06 5.87103386e-06 4.14251190e-06\r\n 3.81468908e-06 3.39745884e-06 4.47033653e-06 6.49688172e-06\r\n 6.25846224e-06 4.08290816e-06 4.08290680e-06 3.69548002e-06\r\n 4.35112725e-06 3.60607328e-06 4.97697329e-06 6.88430828e-06\r\n 5.72202634e-06 4.79816072e-06 5.75182776e-06 6.43727981e-06\r\n 3.78488676e-06 1.53479104e-05 6.70549389e-06 7.03331716e-06\r\n 3.18884258e-06 7.18232604e-06 5.27499060e-06 6.07965376e-06\r\n 3.72528302e-06 9.03003547e-06 5.03657793e-06 6.43727435e-06\r\n 5.33459661e-06 4.85776036e-06 9.38766698e-06 4.11270958e-06\r\n 3.36765652e-06 5.42400539e-06 5.18558409e-06 6.73529667e-06\r\n 9.03001182e-06 4.47033699e-06 3.51666586e-06 5.15578267e-06\r\n 3.87429282e-06 3.39745884e-06 4.08290725e-06 7.48034654e-06\r\n 7.71875875e-06 3.75508489e-06 3.60607396e-06 3.72528302e-06\r\n 5.84123518e-06 2.89082072e-06 4.32132674e-06 6.37766652e-06\r\n 4.64915001e-06 7.03332262e-06 3.99350029e-06 9.14925931e-06\r\n 4.32132583e-06 5.66242352e-06 3.75508489e-06 6.10945517e-06\r\n 4.85776673e-06 5.60281842e-06 4.70875375e-06 3.75508534e-06], shape=(128,), dtype=float32)\r\n\r\n",
"@xl2602 Thanks for your feedback, `-1` was also the default value of `pad_token_label_id` in the previous version of the script.\r\n\r\n@jx669 and @xl2602 Can you try to add the `--mode token-classification` parameter?",
"@jplu I think this has nothing to do with the context of the training script. To reproduce, just run `logging.info(\"Here is an error example {:.4f}\".format(np.array([1,2,3])))` in python console. Maybe this is related to the `numpy` version. I've tried 1.16.4 and 1.18 and they both failed.",
"Tested with 1.18.4 only, I'm gonna try with other versions to see if I succeed to get the same issue.",
"numpy 1.18.4 is the same as what I installed. \r\n\r\nI just reproduced the same error message with colab gpu:\r\n\r\nThese are what I installed:\r\n!pip install transformers\r\n!pip install seqeval\r\n!pip install wandb; wandb login\r\n\r\nI did not install numpy or TF separately. think they come with the transformers package.\r\nI checked the numpy version:\r\n'1.18.4'\r\nTF version:\r\n'2.2.0'\r\n\r\n\r\n",
"Ok, still don't get any error, including with different versions of Numpy.\r\n\r\n@jx669 @xl2602 @VDCN12593 Can you please tell me if you do the exact same thing than in this colab please https://colab.research.google.com/drive/19zAfUN8EEmiT4imwzLeFv6q1PJ5CgcRb?usp=sharing",
"It might have something to do with the mode command: `--mode token-classification `\r\n\r\nIf you remove that line in your colab notebook, the same error message will reoccur. ",
"Cool! Happy we found the problem.\r\n\r\nWhen you run the TF Trainer you have to specify over which task it will be trained on, here for example it is `token-classification` when it on text content it will be `text-classification` (the default) and the same for the two other tasks QA and MC.\r\n\r\nThis behavior will be removed in the next version of the TF trainer.",
"I see. Good to learn. Thanks!\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: tf_ner
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Running the `run_tf_ner` example raises the following exception:
```
Traceback (most recent call last):
  File "run_tf_ner.py", line 282, in <module>
    main()
  File "run_tf_ner.py", line 213, in main
    trainer.train()
  File "venv/lib/python3.7/site-packages/transformers/trainer_tf.py", line 308, in train
    logger.info("Epoch {} Step {} Train Loss {:.4f}".format(epoch, step, training_loss.numpy()))
TypeError: unsupported format string passed to numpy.ndarray.__format__
```
This issue was reported by multiple people:
https://github.com/numpy/numpy/issues/12491
https://github.com/numpy/numpy/issues/5543
I think the easiest solution is to avoid using the numpy format string this way in `TFTrainer`.
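A minimal sketch of that fix, for illustration (a hypothetical patch, not the actual `trainer_tf.py` code): reduce the loss tensor to a plain Python float before formatting, so `numpy.ndarray.__format__` is never invoked.

```python
import logging

import tensorflow as tf

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

# Stand-ins for the values trainer_tf.py has at this point in its loop.
epoch, step = 1, 100
training_loss = tf.constant([0.1, 0.2, 0.3])  # per-example losses

# Reducing to a scalar float sidesteps the failing format call.
loss_value = float(tf.reduce_mean(training_loss).numpy())
logger.info("Epoch {} Step {} Train Loss {:.4f}".format(epoch, step, loss_value))
```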
## Environment info
- `transformers` version: 2.1.0
- Platform: Ubuntu-18.04
- Python version: 3.7.7
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4631/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4631/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4630/comments | https://api.github.com/repos/huggingface/transformers/issues/4630/events | https://github.com/huggingface/transformers/issues/4630 | 626,079,533 | MDU6SXNzdWU2MjYwNzk1MzM= | 4,630 | Model evaluated at each checkpoint, but results not in checkpoint file | {
"login": "CMobley7",
"id": 10121829,
"node_id": "MDQ6VXNlcjEwMTIxODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/10121829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CMobley7",
"html_url": "https://github.com/CMobley7",
"followers_url": "https://api.github.com/users/CMobley7/followers",
"following_url": "https://api.github.com/users/CMobley7/following{/other_user}",
"gists_url": "https://api.github.com/users/CMobley7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CMobley7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CMobley7/subscriptions",
"organizations_url": "https://api.github.com/users/CMobley7/orgs",
"repos_url": "https://api.github.com/users/CMobley7/repos",
"events_url": "https://api.github.com/users/CMobley7/events{/privacy}",
"received_events_url": "https://api.github.com/users/CMobley7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What is the proper way to get an `eval.txt` file for each checkpoint?",
"No, this is not a built-in feature. I'd suggest you install from source and modify the code directly. \r\n\r\nThe code in this repo is meant to be optimized for \"hackability\", feel free to open a new issue if needed."
] | 1,590 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
I start `run_language_modeling.py` and `run_glue.py` with the `--do_eval` and `--evaluate_during_training` arguments. While they checkpoint and evaluate the model at each saving step, the evaluation results are not written to the checkpoint folder; they only appear in the terminal at each saving and logging step. However, the final model is evaluated and its results are placed in the appropriate location. I'd like the performance of each checkpoint. Am I doing something wrong? Is there an additional argument I must specify? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4630/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4630/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4629/comments | https://api.github.com/repos/huggingface/transformers/issues/4629/events | https://github.com/huggingface/transformers/pull/4629 | 626,015,473 | MDExOlB1bGxSZXF1ZXN0NDI0MTAyNzc4 | 4,629 | gpt2 typo | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4629?src=pr&el=h1) Report\n> Merging [#4629](https://codecov.io/gh/huggingface/transformers/pull/4629?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ec4cdfdd05d89b243d6d842fce019959291dd92a&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4629?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4629 +/- ##\n=======================================\n Coverage 78.04% 78.04% \n=======================================\n Files 124 124 \n Lines 20676 20676 \n=======================================\n Hits 16137 16137 \n Misses 4539 4539 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4629?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.20% <ø> (ø)` | |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <ø> (ø)` | |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.49% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <ø> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <ø> (ø)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.94% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `78.74% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.89% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.66% <ø> (ø)` | |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/4629/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4629?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4629?src=pr&el=footer). Last update [ec4cdfd...95e5c41](https://codecov.io/gh/huggingface/transformers/pull/4629?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I think it's a bogus search and replace by @LysandreJik in #2532 but it applies for all models not just GPT2.",
"can you fix in all files?",
"Ups, I didn't saw this.\r\nI will do this in a few minutes",
"LGTM, thanks!"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | From #4572 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4629/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4629",
"html_url": "https://github.com/huggingface/transformers/pull/4629",
"diff_url": "https://github.com/huggingface/transformers/pull/4629.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4629.patch",
"merged_at": 1590698683000
} |
https://api.github.com/repos/huggingface/transformers/issues/4628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4628/comments | https://api.github.com/repos/huggingface/transformers/issues/4628/events | https://github.com/huggingface/transformers/pull/4628 | 625,992,602 | MDExOlB1bGxSZXF1ZXN0NDI0MDg1NzY0 | 4,628 | [Longformer] more models + model cards | {
"login": "ibeltagy",
"id": 2287797,
"node_id": "MDQ6VXNlcjIyODc3OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibeltagy",
"html_url": "https://github.com/ibeltagy",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions",
"organizations_url": "https://api.github.com/users/ibeltagy/orgs",
"repos_url": "https://api.github.com/users/ibeltagy/repos",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibeltagy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4628?src=pr&el=h1) Report\n> Merging [#4628](https://codecov.io/gh/huggingface/transformers/pull/4628?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a801c7fd74f56a651ba43bfc93eba93c63e84766&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4628?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4628 +/- ##\n==========================================\n- Coverage 78.02% 78.02% -0.01% \n==========================================\n Files 124 124 \n Lines 20626 20625 -1 \n==========================================\n- Hits 16094 16093 -1 \n Misses 4532 4532 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4628?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4628/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4628/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.40% <100.00%> (-0.01%)` | :arrow_down: |\n| [src/transformers/tokenization\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4628/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbG9uZ2Zvcm1lci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4628/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4628/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4628?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4628?src=pr&el=footer). Last update [a801c7f...06e80ad](https://codecov.io/gh/huggingface/transformers/pull/4628?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | This PR adds the following:
- Longformer models trained with frozen-roberta weights
- Model cards
- Model names start with `allenai/`
- Remove unnecessary type casting
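For example, loading one of the renamed checkpoints (a usage sketch; `allenai/longformer-base-4096` follows the new prefix convention):

```python
from transformers import LongformerModel, LongformerTokenizer

# Checkpoint id now carries the allenai/ organization prefix.
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
```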
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4628/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4628",
"html_url": "https://github.com/huggingface/transformers/pull/4628",
"diff_url": "https://github.com/huggingface/transformers/pull/4628.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4628.patch",
"merged_at": 1590657066000
} |
https://api.github.com/repos/huggingface/transformers/issues/4627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4627/comments | https://api.github.com/repos/huggingface/transformers/issues/4627/events | https://github.com/huggingface/transformers/pull/4627 | 625,989,076 | MDExOlB1bGxSZXF1ZXN0NDI0MDgzMjA0 | 4,627 | [WIP] lightning glue example uses nlp package | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@julien-c I'm not sure why my checks for isort aren't passing on run_pl_glue.py. Could it be because I imported `pyarrow` which is not listed in third party within `setup.cfg`?",
"lmk when you’re ready for me to give this a look over :)\r\n\r\none major feature we added is the option to not rely on hparams, they are of course backward compatible but now you have the option to instead pass all the args to init directly and we’ll still save the correct stuff in the checkpoint",
"@sshleifer can you advise on the style check issue when you get the chance, please?",
"Adding note for myself...seems `pandas` has been removed from lightning's requirements.txt so we may need to add that to `examples/requirements.txt`.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4627?src=pr&el=h1) Report\n> Merging [#4627](https://codecov.io/gh/huggingface/transformers/pull/4627?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4402879ee48dcff0f657738d8af5e35b266bd0ed&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `79.52%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4627?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4627 +/- ##\n==========================================\n- Coverage 78.02% 76.99% -1.03% \n==========================================\n Files 124 128 +4 \n Lines 20635 21602 +967 \n==========================================\n+ Hits 16100 16633 +533 \n- Misses 4535 4969 +434 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4627?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.33% <ø> (-0.15%)` | :arrow_down: |\n| [src/transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2NhbWVtYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (-0.08%)` | :arrow_down: |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21hcmlhbi5weQ==) | `100.00% <ø> (ø)` | |\n| ... and [119 more](https://codecov.io/gh/huggingface/transformers/pull/4627/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4627?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4627?src=pr&el=footer). Last update [4402879...c394f68](https://codecov.io/gh/huggingface/transformers/pull/4627?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Closing and will reopen cleaner one later."
] | 1,590 | 1,592 | 1,592 | CONTRIBUTOR | null | Main goal here was to use `nlp` library to load datasets instead of custom scripts. I've made some changes to the GLUE example, and will reflect those changes elsewhere if the patterns used seem reasonable. Feedback would be greatly appreciated.
**So far, this PR:**
- Uses `nlp` library instead of manual processing scripts to download/process the benchmark datasets (Ref #4494)
- Uses lightning `Trainer` default arguments instead of previous custom logic. See the [list of available args](https://pytorch-lightning.readthedocs.io/en/latest/trainer.html#trainer-class) in their documentation. Resolves #3925
- Generates submission files on test sets (partially resolves #3692)
- Fixes bug mentioned in #4214
- Pins lightning version to latest stable release, as the master branch is a little too volatile for my taste.
- Upgrades to pytorch-lightning 0.7.6
**TODOs**
- Validate multi-gpu logic is up to date with lightning base practices
- Validate TPU logic is up to date with lightning best practices
- See if there's a better way to save dataset with `nlp` lib directly instead of current `torch.save` logic
- Run exhaustive test over model types over each dataset
- Optionally generate larger output table reporting benchmarks across models
**Note on Trainer args**
The argument parser will now accept any kwargs from the `Trainer` class's init function. For example, to load a checkpoint and run predictions to get submission files, you could run something like this to get a submission file at `./submissions/mrpc_submission.csv`:
```bash
export TASK=mrpc
export DATA_DIR=./cached_glue_data
export MAX_LENGTH=128
export LEARNING_RATE=2e-5
export BERT_MODEL=bert-base-cased
export BATCH_SIZE=32
export NUM_EPOCHS=3
export SEED=2
export GPUS=1
export NUM_WORKERS=4
# Add parent directory to python path to access lightning_base.py
export PYTHONPATH="../":"${PYTHONPATH}"
python3 -i run_pl_glue.py \
--model_name_or_path $BERT_MODEL \
--task $TASK \
--data_dir $DATA_DIR \
--max_seq_length $MAX_LENGTH \
--max_epochs $NUM_EPOCHS \
--learning_rate $LEARNING_RATE \
--seed $SEED \
--gpus $GPUS \
--num_workers $NUM_WORKERS \
--train_batch_size $BATCH_SIZE \
--resume_from_checkpoint ./lightning_logs/version_0/checkpoints/epoch=1.ckpt \
--output_dir ./submissions/ \
--do_predict
```
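For reference, a minimal sketch of how this kind of pass-through is typically wired with Lightning's argparse helpers (an illustration, not necessarily the exact code in this PR):

```python
import argparse

import pytorch_lightning as pl

parser = argparse.ArgumentParser()
# Registers every pl.Trainer.__init__ kwarg (--gpus, --max_epochs, ...) as a CLI flag.
parser = pl.Trainer.add_argparse_args(parser)
args = parser.parse_args(["--gpus", "1", "--max_epochs", "3"])
trainer = pl.Trainer.from_argparse_args(args)
```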
CC: @srush @williamFalcon | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4627/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4627",
"html_url": "https://github.com/huggingface/transformers/pull/4627",
"diff_url": "https://github.com/huggingface/transformers/pull/4627.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4627.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4626/comments | https://api.github.com/repos/huggingface/transformers/issues/4626/events | https://github.com/huggingface/transformers/issues/4626 | 625,952,668 | MDU6SXNzdWU2MjU5NTI2Njg= | 4,626 | How to use run_glue.py with tensorboard? | {
"login": "ayrtondenner",
"id": 13112588,
"node_id": "MDQ6VXNlcjEzMTEyNTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/13112588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayrtondenner",
"html_url": "https://github.com/ayrtondenner",
"followers_url": "https://api.github.com/users/ayrtondenner/followers",
"following_url": "https://api.github.com/users/ayrtondenner/following{/other_user}",
"gists_url": "https://api.github.com/users/ayrtondenner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayrtondenner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayrtondenner/subscriptions",
"organizations_url": "https://api.github.com/users/ayrtondenner/orgs",
"repos_url": "https://api.github.com/users/ayrtondenner/repos",
"events_url": "https://api.github.com/users/ayrtondenner/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayrtondenner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"What `--logging_dir` have you specified to the `run_glue.py` script?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm having this problem how do you fixed this?\r\nI put the direction to the folder to --logging_dir but nothing is written there",
"> What `--logging_dir` have you specified to the `run_glue.py` script?\r\n\r\nI'm having this problem how do you fixed this?\r\nI put the direction to the folder to --logging_dir but nothing is written there"
] | 1,590 | 1,659 | 1,596 | NONE | null | # ❓ Questions & Help
I'm running the `run_glue.py` script, where I added a new task_name in `data/metrics` and `data/processors`. The training runs OK and the checkpoints are being saved, but no tfevents file is being written. Shouldn't it be written in the checkpoints folder during training? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4626/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4626/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4625/comments | https://api.github.com/repos/huggingface/transformers/issues/4625/events | https://github.com/huggingface/transformers/pull/4625 | 625,854,895 | MDExOlB1bGxSZXF1ZXN0NDIzOTc5MjE3 | 4,625 | [Model Card] model card for longformer-base-4096-finetuned-squadv1 | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4625?src=pr&el=h1) Report\n> Merging [#4625](https://codecov.io/gh/huggingface/transformers/pull/4625?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6a17688021268fe429e78c66ea0932cb55cd03b1&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4625?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4625 +/- ##\n==========================================\n- Coverage 78.02% 78.00% -0.02% \n==========================================\n Files 124 124 \n Lines 20635 20635 \n==========================================\n- Hits 16100 16097 -3 \n- Misses 4535 4538 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4625?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4625/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4625/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4625/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4625?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4625?src=pr&el=footer). Last update [6a17688...8c230e9](https://codecov.io/gh/huggingface/transformers/pull/4625?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That's great thanks @patil-suraj "
] | 1,590 | 1,590 | 1,590 | MEMBER | null | @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4625/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4625",
"html_url": "https://github.com/huggingface/transformers/pull/4625",
"diff_url": "https://github.com/huggingface/transformers/pull/4625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4625.patch",
"merged_at": 1590598084000
} |
https://api.github.com/repos/huggingface/transformers/issues/4624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4624/comments | https://api.github.com/repos/huggingface/transformers/issues/4624/events | https://github.com/huggingface/transformers/issues/4624 | 625,851,238 | MDU6SXNzdWU2MjU4NTEyMzg= | 4,624 | Can't see logger output | {
"login": "parmarsuraj99",
"id": 9317265,
"node_id": "MDQ6VXNlcjkzMTcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parmarsuraj99",
"html_url": "https://github.com/parmarsuraj99",
"followers_url": "https://api.github.com/users/parmarsuraj99/followers",
"following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}",
"gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions",
"organizations_url": "https://api.github.com/users/parmarsuraj99/orgs",
"repos_url": "https://api.github.com/users/parmarsuraj99/repos",
"events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}",
"received_events_url": "https://api.github.com/users/parmarsuraj99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, have you tried setting the logging level to `INFO`? You can do so with the following lines:\r\n\r\n```py\r\nimport logging\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\n```",
"It worked! Thanks",
"Hey, this doesn't log the training progress by trainer.train() into a log file. I want to keep appending the training progress to my log file but all I get are the prints and the parameters info at the end of trainer.train(). What would be a way around to achieve this? @parmarsuraj99 @LysandreJik ",
"+1\r\n\r\nsame request. @parmarsuraj99 @LysandreJik ",
"Share a solution, not so elegant but works. \r\n\r\nI define a new `Callback` function, which logging the logs using the outside logger. And then pass it to the trainer.\r\n\r\n```python\r\nclass LoggerLogCallback(transformers.TrainerCallback):\r\n def on_log(self, args, state, control, logs=None, **kwargs):\r\n control.should_log = False\r\n _ = logs.pop(\"total_flos\", None)\r\n if state.is_local_process_zero:\r\n logger.info(logs) # using your custom logger\r\n```"
] | 1,590 | 1,636 | 1,590 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): RoBERTa
Language I am using the model on (English, Chinese ...): Sanskrit
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I can't see the logger output showing the model config and other parameters in Trainer that used to be printed by the training scripts.
1.
```
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="./model_path",
overwrite_output_dir=True,
num_train_epochs=1,
per_gpu_train_batch_size=128,
per_gpu_eval_batch_size =256,
save_steps=1_000,
save_total_limit=2,
logging_first_step = True,
do_train=True,
do_eval = True,
evaluate_during_training=True,
logging_steps = 1000
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset = valid_dataset,
prediction_loss_only=True,
)
```
2.
```
%%time
trainer.train(model_path="./model_path")
```
Is it overridden by tqdm?
but I can still see `Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.`
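For anyone landing here: the resolution from the comments boils down to raising the root logging level before building the `Trainer`, since `transformers` emits these messages at `INFO` while the root logger defaults to `WARNING`:

```python
import logging

# Without this, the model config / parameter INFO messages are filtered out.
logging.basicConfig(level=logging.INFO)
```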
## Environment info
- `transformers` version: 2.10.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+916084d (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: TPU
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4624/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4623/comments | https://api.github.com/repos/huggingface/transformers/issues/4623/events | https://github.com/huggingface/transformers/issues/4623 | 625,825,359 | MDU6SXNzdWU2MjU4MjUzNTk= | 4,623 | `train.jsonl` file missing in MM-IMDb task | {
"login": "prabhakar267",
"id": 10768588,
"node_id": "MDQ6VXNlcjEwNzY4NTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/10768588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabhakar267",
"html_url": "https://github.com/prabhakar267",
"followers_url": "https://api.github.com/users/prabhakar267/followers",
"following_url": "https://api.github.com/users/prabhakar267/following{/other_user}",
"gists_url": "https://api.github.com/users/prabhakar267/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prabhakar267/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabhakar267/subscriptions",
"organizations_url": "https://api.github.com/users/prabhakar267/orgs",
"repos_url": "https://api.github.com/users/prabhakar267/repos",
"events_url": "https://api.github.com/users/prabhakar267/events{/privacy}",
"received_events_url": "https://api.github.com/users/prabhakar267/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"It seems like this is the method to get the jsonl files: https://github.com/facebookresearch/mmbt/blob/master/scripts/mmimdb.py"
] | 1,590 | 1,606 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: MM-IMDb
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Download the **raw** MM-IMDb dataset from http://lisi1.unal.edu.co/mmimdb/
2. Download `run_mmimdb.py` and `utils_mmimdb.py` from `/examples/contrib/mm-imdb`
3. Run the command given in [README.md](https://github.com/huggingface/transformers/blob/master/examples/contrib/mm-imdb/README.md#training-on-mm-imdb)
```
Traceback (most recent call last):
File "run_mmimdb.py", line 614, in <module>
main()
File "run_mmimdb.py", line 555, in main
train_dataset = load_examples(args, tokenizer, evaluate=False)
File "run_mmimdb.py", line 339, in load_examples
dataset = JsonlDataset(path, tokenizer, transforms, labels, args.max_seq_length - args.num_image_embeds - 2)
File ".../mmimdb/utils_mmimdb.py", line 50, in __init__
self.data = [json.loads(l) for l in open(data_path)]
FileNotFoundError: [Errno 2] No such file or directory: '.../mmimdb/dataset/train.jsonl'
```
## Expected behavior
+ `train.jsonl` should be present in the script directory
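As noted in the comments, the `*.jsonl` splits are produced by a preprocessing script in the MMBT repo rather than shipped with the raw dataset. A rough sketch of the workflow (the exact CLI arguments are an assumption; inspect the script before running):

```bash
# Hypothetical invocation; check scripts/mmimdb.py for the real arguments.
git clone https://github.com/facebookresearch/mmbt.git
python mmbt/scripts/mmimdb.py /path/to/raw/mmimdb /path/to/output/dataset
```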
## Environment info
- `transformers` version:
- Platform: Linux
- Python version: Python 3.6.5
- PyTorch version (GPU?): `1.4.0` (yes)
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4623/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4622/comments | https://api.github.com/repos/huggingface/transformers/issues/4622/events | https://github.com/huggingface/transformers/issues/4622 | 625,806,171 | MDU6SXNzdWU2MjU4MDYxNzE= | 4,622 | GPU memory usage | {
"login": "008karan",
"id": 18630864,
"node_id": "MDQ6VXNlcjE4NjMwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/008karan",
"html_url": "https://github.com/008karan",
"followers_url": "https://api.github.com/users/008karan/followers",
"following_url": "https://api.github.com/users/008karan/following{/other_user}",
"gists_url": "https://api.github.com/users/008karan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/008karan/subscriptions",
"organizations_url": "https://api.github.com/users/008karan/orgs",
"repos_url": "https://api.github.com/users/008karan/repos",
"events_url": "https://api.github.com/users/008karan/events{/privacy}",
"received_events_url": "https://api.github.com/users/008karan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"How do you launch your training? Can you paste your command?",
"I am using the trainer.\r\n```\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"albert_model\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_gpu_train_batch_size=85,\r\n learning_rate=5e-5,\r\n save_steps=50,\r\n save_total_limit=20,\r\n do_train=True,\r\n logging_steps=100\r\n )\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n prediction_loss_only=True,\r\n)\r\n\r\ntrainer.train()\r\n```",
"But how do you launch the actual script? \r\n\r\nTo efficiently harness your 8 V100s you should probably use torch.distributed, not nn.DataParallel.\r\n\r\nSo you would need to launch your script with e.g. \r\n```\r\npython -m torch.distributed.launch \\\r\n --nproc_per_node 8 your_script.py\r\n```\r\n\r\n",
"Just tried this. this is loading pretraining data in each GPU independently. Shouldn't data be read once?",
"It should, yes.",
" Previously this was not the case and all transformer models utilize all GPUs automatically.\r\n.\r\ngetting this : reading for each gpu using ` python -m torch.distributed.launch --nproc_per_node 8 test_lm.py \r\n`\r\n```*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\n*****************************************\r\nCalling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\nCalling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\n/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.\r\n category=FutureWarning,\r\n/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.\r\n category=FutureWarning,\r\nCalling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\n/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.\r\n category=FutureWarning,\r\nCalling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\n/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.\r\n category=FutureWarning,\r\nCalling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\n/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.\r\n category=FutureWarning,\r\nCalling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\n/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.\r\n category=FutureWarning,\r\nCalling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\n/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.\r\n category=FutureWarning,\r\nCalling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated\r\n/language_model/lm/lib/python3.6/site-packages/transformers/tokenization_utils.py:830: FutureWarning: Parameter max_len is deprecated and will be removed in a future release. Use model_max_length instead.\r\n category=FutureWarning,\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I face a similar problem. I wonder if this problem is ever resolved? \r\n\r\nI try to fine-tune a bert model and found the gpu memory usage behaves exactly like what the original post described \"gpu 0 is almost completely used but others have around 50% ram unused\"\r\n\r\n```\r\npython run_mlm.py \\\r\n--model_name_or_path bert-base-uncased \\\r\n--dataset_name wikitext \\\r\n--dataset_config_name wikitext-2-raw-v1 \\\r\n--per_device_train_batch_size 10 \\\r\n--max_seq_length=256 \\\r\n--do_train \\\r\n--output_dir /tmp/test \\\r\n```\r\n\r\nIf I run the example training script above, Here's my gpu usage:\r\n\r\n\r\nAfter some research, I found people has the same issue when training different model using pytorch. like [this](https://forums.fast.ai/t/training-language-model-with-nn-dataparallel-has-unbalanced-gpu-memory-usage/42494)\r\n\r\nThen I found this [post ](https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255) written by Thomas Wolf suggesting using the [PyTorch-Encoding](https://github.com/zhanghang1989/PyTorch-Encoding) library. However, this post is 2 years old. So I was wondering if there's a more recent solution for it.\r\n\r\n@thomwolf @sgugger \r\n\r\nThanks!"
] | 1,590 | 1,605 | 1,596 | NONE | null | # ❓ Questions & Help
## Details
I am training ALBERT from scratch on 8 V100s. The issue is that GPU 0 is almost completely used while the others have around 50% of their RAM unused. I can only fit a batch size of 85 on this system; anything above that hits OOM.
Using transformers from source 2.10.0
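For context, the comments suggest a `torch.distributed` launch instead of the implicit `nn.DataParallel` (which gathers outputs on GPU 0, hence the imbalance visible in the `nvidia-smi` snapshot below). A sketch, where `train_albert.py` is a placeholder for the actual training script:

```bash
# One process per GPU instead of nn.DataParallel:
python -m torch.distributed.launch --nproc_per_node 8 train_albert.py
```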
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:00:16.0 Off | 0 |
| N/A 77C P0 291W / 300W | 30931MiB / 32510MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:00:17.0 Off | 0 |
| N/A 71C P0 255W / 300W | 18963MiB / 32510MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-SXM2... On | 00000000:00:18.0 Off | 0 |
| N/A 71C P0 95W / 300W | 18963MiB / 32510MiB | 98% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-SXM2... On | 00000000:00:19.0 Off | 0 |
| N/A 68C P0 89W / 300W | 18963MiB / 32510MiB | 72% Default |
+-------------------------------+----------------------+----------------------+
| 4 Tesla V100-SXM2... On | 00000000:00:1A.0 Off | 0 |
| N/A 68C P0 78W / 300W | 18963MiB / 32510MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 5 Tesla V100-SXM2... On | 00000000:00:1B.0 Off | 0 |
| N/A 69C P0 96W / 300W | 18963MiB / 32510MiB | 65% Default |
+-------------------------------+----------------------+----------------------+
| 6 Tesla V100-SXM2... On | 00000000:00:1C.0 Off | 0 |
| N/A 69C P0 79W / 300W | 18963MiB / 32510MiB | 95% Default |
+-------------------------------+----------------------+----------------------+
| 7 Tesla V100-SXM2... On | 00000000:00:1D.0 Off | 0 |
| N/A 74C P0 80W / 300W | 18963MiB / 32510MiB | 12% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 28066 C python 30917MiB |
| 1 28066 C python 18949MiB |
| 2 28066 C python 18949MiB |
| 3 28066 C python 18949MiB |
| 4 28066 C python 18949MiB |
| 5 28066 C python 18949MiB |
| 6 28066 C python 18949MiB |
| 7 28066 C python 18949MiB |
+-----------------------------------------------------------------------------+
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4622/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4622/timeline | completed | null | null |
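The imbalance described in this issue (GPU 0 nearly full while the other devices sit around 50% used) is the classic signature of `torch.nn.DataParallel`, which scatters the batch but gathers outputs and computes the loss on device 0. The usual remedy is one process per GPU via `DistributedDataParallel`. Below is a minimal sketch, assuming a `torchrun --nproc_per_node=8 script.py` style launch; the tiny `Linear` model is an illustrative stand-in, not the ALBERT/BERT setup from the issue.

```python
# Minimal DistributedDataParallel (DDP) sketch: each GPU gets its own process
# and its own model replica, so no single device accumulates everyone's
# activations or loss the way nn.DataParallel's device 0 does.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")         # rendezvous info comes from the launcher
    local_rank = int(os.environ["LOCAL_RANK"])      # set per process by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 10).to(local_rank)  # illustrative stand-in model
    model = DDP(model, device_ids=[local_rank])     # gradients are all-reduced across ranks
    # ... usual training loop goes here; memory stays balanced across GPUs ...

if __name__ == "__main__":
    main()
```

The transformers example scripts accept the same kind of distributed launch, which typically evens out per-GPU memory relative to plain `DataParallel`.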
https://api.github.com/repos/huggingface/transformers/issues/4621 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4621/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4621/comments | https://api.github.com/repos/huggingface/transformers/issues/4621/events | https://github.com/huggingface/transformers/pull/4621 | 625,805,256 | MDExOlB1bGxSZXF1ZXN0NDIzOTM5Njk4 | 4,621 | Cleanup glue | {
"login": "jysohn23",
"id": 19496130,
"node_id": "MDQ6VXNlcjE5NDk2MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/19496130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jysohn23",
"html_url": "https://github.com/jysohn23",
"followers_url": "https://api.github.com/users/jysohn23/followers",
"following_url": "https://api.github.com/users/jysohn23/following{/other_user}",
"gists_url": "https://api.github.com/users/jysohn23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jysohn23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jysohn23/subscriptions",
"organizations_url": "https://api.github.com/users/jysohn23/orgs",
"repos_url": "https://api.github.com/users/jysohn23/repos",
"events_url": "https://api.github.com/users/jysohn23/events{/privacy}",
"received_events_url": "https://api.github.com/users/jysohn23/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4621?src=pr&el=h1) Report\n> Merging [#4621](https://codecov.io/gh/huggingface/transformers/pull/4621?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/003c4771290b00e6d14b871210c3a369edccaeed&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `33.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4621?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4621 +/- ##\n=======================================\n Coverage 78.03% 78.04% \n=======================================\n Files 124 124 \n Lines 20626 20627 +1 \n=======================================\n+ Hits 16096 16098 +2 \n+ Misses 4530 4529 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4621?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.15% <ø> (ø)` | |\n| [src/transformers/data/metrics/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `26.66% <0.00%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <100.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.99% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4621?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4621?src=pr&el=footer). Last update [003c477...01a48df](https://codecov.io/gh/huggingface/transformers/pull/4621?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c bump..!"
] | 1,590 | 1,591 | 1,591 | COLLABORATOR | null | * Make sure that the MNLI accuracy metrics are named differently for matched vs. mismatched
* Allow writing to `cache_dir` from the GLUE dataset for the case where datasets live on a read-only filesystem
* Flush the TensorBoard writer at the end so that no metrics are missed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4621/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4621/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4621",
"html_url": "https://github.com/huggingface/transformers/pull/4621",
"diff_url": "https://github.com/huggingface/transformers/pull/4621.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4621.patch",
"merged_at": 1591119614000
} |
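The third change in this PR, flushing the TensorBoard writer, corresponds to a small but easy-to-miss pattern: `SummaryWriter` buffers events, so scalars logged shortly before the process exits can be silently dropped without a final flush. A minimal sketch, with an illustrative log directory, tag, and stand-in loop:

```python
# TensorBoard writers buffer events in memory; flush()/close() at the end
# guarantees the final metrics actually reach the event file on disk.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/glue-demo")   # illustrative path
for step, loss in enumerate([0.9, 0.7, 0.5]):      # stand-in for a training loop
    writer.add_scalar("train/loss", loss, step)
writer.flush()   # push buffered events to disk
writer.close()   # flush once more and release the event file
```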
https://api.github.com/repos/huggingface/transformers/issues/4620 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4620/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4620/comments | https://api.github.com/repos/huggingface/transformers/issues/4620/events | https://github.com/huggingface/transformers/issues/4620 | 625,796,457 | MDU6SXNzdWU2MjU3OTY0NTc= | 4,620 | The new abstractions in /master are counterproductive | {
"login": "shenkev",
"id": 5405172,
"node_id": "MDQ6VXNlcjU0MDUxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5405172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shenkev",
"html_url": "https://github.com/shenkev",
"followers_url": "https://api.github.com/users/shenkev/followers",
"following_url": "https://api.github.com/users/shenkev/following{/other_user}",
"gists_url": "https://api.github.com/users/shenkev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shenkev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shenkev/subscriptions",
"organizations_url": "https://api.github.com/users/shenkev/orgs",
"repos_url": "https://api.github.com/users/shenkev/repos",
"events_url": "https://api.github.com/users/shenkev/events{/privacy}",
"received_events_url": "https://api.github.com/users/shenkev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Thanks for your feedback, it's interesting.\r\n\r\nWe've also heard good feedback on the refactoring you mention (because for instance opening an example script that was 700 lines of code long could be daunting) so it's good to hear a diversity of feedback.\r\n\r\nI think the crux of it is that it's not realistic to maintain dozens of combinations – e.g. `{apex, TPU, TF, distributed} x {all possible tasks}` – of self-contained example scripts. \r\n\r\nUnless you use code generation (which comes with its own set of problems), things will get out-of-sync and break really fast. CI would be really hard too.\r\n\r\nTo give a concrete example, adding TPU support to each individual script without refactoring would have been an overwhelming task. With the Trainer, we've been able to do it in a decently easy way and we'll have robust CI in place in the coming weeks.\r\n\r\nI do agree that \"hackability\" of this library is very important and we're trying to find a good trade-off for this. I feel like we don't have many layers of abstraction. (models are mostly self-contained, etc.)\r\n\r\nWe're very open to any suggestion to improving things so let us know of any ideas you'd have to make this hackability easier.",
"I have a different perspective here. Transformer is pretty hackable as it is now. It's very easy to take any model, add any task specific heads on it in all sorts of exotic ways. Also the recently introduced `Trainer` is pretty impressive, it removes lot of the boilerplate from previous examples and I didn't find that it limits hackability. The way Trainer code is structured, its very easy to modify it.\r\n\r\nThe models are also pretty hackable, here are two really great examples of this\r\n1. [This](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) notebook shows how you can replace the existing attention mechanism in BERT like models and replace them with `LongformerSelfAttention` to convert them to long versions, also trains using new `Trainer`\r\n2. This second [notebook](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb) shows how you can train HF models with fastai.\r\n\r\nAlso I've recently started contributing, and navigating though the codebase and making changes was a breeze.\r\n\r\nAlso HF models can be trained with all sorts trainers. I've personally trained HF models with my own training loop, HF Trainer, pytorch-lightning, ignite, fastai and it plays nicely with all of these.\r\n\r\nAnd I think the goal of the examples is to give standard templates for doing certain tasks but it doesn't limit or discourage from modifying them in any way. \r\n\r\nSo considering all this I would say that Transformers is pretty hackable and also provides nice light-weight abstractions wherever needed. I really appreciate this!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | I understand the desire to use abstraction to make the library easier to use out of the box, especially for newer users who just want to call run_model.py with some hyperparameters.
However, as someone who considers himself a novice-intermediate user and who frequently needs to look at or modify the source code to achieve what I want, I've found adapting to the new changes a huge pain.
I'll try to be as concrete as possible; here are some big pain points:
(I will say, these pain points may come from inexperience with the library rather than from anything being genuinely hard or impossible to achieve, but either way, as a novice-intermediate user, the end result is the same: I have a hard time navigating the library.)
- arguments are hidden: for example, in run_language_modeling.py I'm not able to see what all the available parameters are. I feel that using abstraction over argument types like training_args, model_args, etc. is overkill and just programming fluff
- model source code is hidden under many layers of abstraction
- support for basic functionality is dropped when code is refactored. One example: the new run_language_modeling.py doesn't support resuming training from a checkpoint
For example, it's very hard to tell what any of the XXX.from_pretrained methods are actually doing.
These are just the pain points I can think of off the top of my head. Overall, using the new example files like run_language_modeling.py and the rest of the API just hasn't been smooth.
My main suggestion is to rethink the trade-off between simplicity and abstraction on the one hand, and flexibility and the ability to hack/modify the code on the other. Generally I think abstraction is good, but it seems excessive in certain places, and I wonder if you can achieve the same goal while doing a bit less.
A tangential observation: the example scripts get way too cumbersome when you try to support all of these things within the same file (e.g. run_language_modeling.py): Apex, TPU, TensorFlow, distributed. There are flags everywhere. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4620/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 2,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/4620/timeline | completed | null | null |
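The notebooks cited in the second comment on this issue rely on the fact that every model in the library is a plain `torch.nn.Module`, so heads and components can be wrapped or swapped freely and trained with any loop. A minimal sketch of that wrapping pattern; the checkpoint name and two-label head are illustrative choices, not taken from either notebook:

```python
# Wrapping a pretrained encoder with a custom classification head and calling
# it directly, no Trainer required.
import torch
from transformers import AutoModel, AutoTokenizer

class CustomClassifier(torch.nn.Module):
    def __init__(self, name="bert-base-uncased", num_labels=2):  # illustrative defaults
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask)[0]
        return self.head(hidden[:, 0])  # classify from the first ([CLS]) position

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["a short example"], return_tensors="pt")
logits = CustomClassifier()(batch["input_ids"], batch["attention_mask"])
```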