url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
⌀ | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
⌀ | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/4619 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4619/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4619/comments | https://api.github.com/repos/huggingface/transformers/issues/4619/events | https://github.com/huggingface/transformers/pull/4619 | 625,781,893 | MDExOlB1bGxSZXF1ZXN0NDIzOTIwODcx | 4,619 | removed deprecated use of Variable API from pplm example | {
"login": "prajjwal1",
"id": 24690051,
"node_id": "MDQ6VXNlcjI0NjkwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prajjwal1",
"html_url": "https://github.com/prajjwal1",
"followers_url": "https://api.github.com/users/prajjwal1/followers",
"following_url": "https://api.github.com/users/prajjwal1/following{/other_user}",
"gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions",
"organizations_url": "https://api.github.com/users/prajjwal1/orgs",
"repos_url": "https://api.github.com/users/prajjwal1/repos",
"events_url": "https://api.github.com/users/prajjwal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/prajjwal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4619?src=pr&el=h1) Report\n> Merging [#4619](https://codecov.io/gh/huggingface/transformers/pull/4619?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/842588c12ffbbe3502c5ab4a18646ad31d9c1e34&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4619?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4619 +/- ##\n=======================================\n Coverage 78.02% 78.03% \n=======================================\n Files 124 124 \n Lines 20626 20626 \n=======================================\n+ Hits 16093 16095 +2 \n+ Misses 4533 4531 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4619?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4619/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4619/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4619?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4619?src=pr&el=footer). Last update [842588c...8588578](https://codecov.io/gh/huggingface/transformers/pull/4619?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c Can you please see this ?",
"@sgugger Can you please see it ?"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | This is same as previous [PR](https://github.com/huggingface/transformers/pull/4156) which I closed due to code styling issue. Didn't know that specific isort version was supposed to be used. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4619/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4619",
"html_url": "https://github.com/huggingface/transformers/pull/4619",
"diff_url": "https://github.com/huggingface/transformers/pull/4619.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4619.patch",
"merged_at": 1591308470000
} |
https://api.github.com/repos/huggingface/transformers/issues/4618 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4618/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4618/comments | https://api.github.com/repos/huggingface/transformers/issues/4618/events | https://github.com/huggingface/transformers/pull/4618 | 625,739,652 | MDExOlB1bGxSZXF1ZXN0NDIzODg2NzEz | 4,618 | per_device instead of per_gpu/error thrown when argument unknown | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4618?src=pr&el=h1) Report\n> Merging [#4618](https://codecov.io/gh/huggingface/transformers/pull/4618?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/842588c12ffbbe3502c5ab4a18646ad31d9c1e34&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4618?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4618 +/- ##\n==========================================\n- Coverage 78.02% 78.01% -0.01% \n==========================================\n Files 124 124 \n Lines 20626 20635 +9 \n==========================================\n+ Hits 16093 16099 +6 \n- Misses 4533 4536 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4618?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/hf\\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/4618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `61.11% <0.00%> (-0.87%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.47% <0.00%> (ø)` | |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.40% <58.33%> (-2.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4618/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4618?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4618?src=pr&el=footer). Last update [842588c...eae844b](https://codecov.io/gh/huggingface/transformers/pull/4618?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | Modified the trainer argument so that `per_device_train_batch_size` and `per_device_eval_batch_size` are preferred over `per_gpu_*`.
`per_gpu_*` still works when `per_device_*` isn't used, but is deprecated.
The trainer argument parser now throws an error if an argument is unknown, only if the `return_remaining_strings` flag is kept to `False`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4618/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4618",
"html_url": "https://github.com/huggingface/transformers/pull/4618",
"diff_url": "https://github.com/huggingface/transformers/pull/4618.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4618.patch",
"merged_at": 1590593816000
} |
https://api.github.com/repos/huggingface/transformers/issues/4617 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4617/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4617/comments | https://api.github.com/repos/huggingface/transformers/issues/4617/events | https://github.com/huggingface/transformers/issues/4617 | 625,729,545 | MDU6SXNzdWU2MjU3Mjk1NDU= | 4,617 | run evalation after every epoch in Trainer | {
"login": "prajjwal1",
"id": 24690051,
"node_id": "MDQ6VXNlcjI0NjkwMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prajjwal1",
"html_url": "https://github.com/prajjwal1",
"followers_url": "https://api.github.com/users/prajjwal1/followers",
"following_url": "https://api.github.com/users/prajjwal1/following{/other_user}",
"gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions",
"organizations_url": "https://api.github.com/users/prajjwal1/orgs",
"repos_url": "https://api.github.com/users/prajjwal1/repos",
"events_url": "https://api.github.com/users/prajjwal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/prajjwal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should use `--evaluate_during_training` which should do mostly what you're looking for",
"@prajjwal1 , you should be able to achieve this with `--evaluate_during_training` provided you set `--save_steps` to `number_of_samples/batch_size`. However, I'm currently having trouble achieving this with that option when using both `run_language_modeling.py` and `run_glue.py` as I specify in https://github.com/huggingface/transformers/issues/4630. Any ideas @julien-c ? Thanks in advance.",
"There's a problem with MNLI though. In the example, arguments are changed from `mnli` to `mnli-mm`, so running evaluation after each epoch will happen on MNLI and not the mismatched one with the current implementation. "
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # 🚀 Feature request
With the current Trainer implementation:
`trainer.train(..)` is called first followed by `trainer.evaluate(..)`. It would be nice if the user can pass the flag `--run_eval` (something similar) to run evaluation after every epoch. It would be nice for users who want to see how model performs on validation set as training progresses. In some cases, this is the general norm (run evaluation after every epoch). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4617/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4617/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4616 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4616/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4616/comments | https://api.github.com/repos/huggingface/transformers/issues/4616/events | https://github.com/huggingface/transformers/pull/4616 | 625,678,877 | MDExOlB1bGxSZXF1ZXN0NDIzODM4MDk2 | 4,616 | [testing] LanguageModelGenerationTests require_tf or require_torch | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4616?src=pr&el=h1) Report\n> Merging [#4616](https://codecov.io/gh/huggingface/transformers/pull/4616?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a9aa7456ac824c9027385b149f405e4f5649273f&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4616?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4616 +/- ##\n==========================================\n+ Coverage 78.02% 78.04% +0.01% \n==========================================\n Files 124 124 \n Lines 20626 20626 \n==========================================\n+ Hits 16093 16097 +4 \n+ Misses 4533 4529 -4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4616?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4616/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4616/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.32%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4616/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4616?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4616?src=pr&el=footer). Last update [a9aa745...4d147b2](https://codecov.io/gh/huggingface/transformers/pull/4616?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4616/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4616/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4616",
"html_url": "https://github.com/huggingface/transformers/pull/4616",
"diff_url": "https://github.com/huggingface/transformers/pull/4616.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4616.patch",
"merged_at": 1590585027000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4615 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4615/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4615/comments | https://api.github.com/repos/huggingface/transformers/issues/4615/events | https://github.com/huggingface/transformers/pull/4615 | 625,671,611 | MDExOlB1bGxSZXF1ZXN0NDIzODMyMzg2 | 4,615 | [Longformer] longformer in question-answering pipeline | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This works well however I noticed some discrepancy in answers generated with pipeline and without pipeline\r\n\r\nfor this example\r\n```\r\nquestion = 'Who was Jim Henson?'\r\ntext = 'Jim Henson was a nice puppet.'\r\n```\r\n\r\npipeline produces `nice puppet.`\r\n without pipeline `a nice puppet`\r\n\r\n@patrickvonplaten is this expected or there's something wrong ? ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4615?src=pr&el=h1) Report\n> Merging [#4615](https://codecov.io/gh/huggingface/transformers/pull/4615?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8cc6807e8997b8b7404c07037bd02c578da98baf&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4615?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4615 +/- ##\n==========================================\n- Coverage 78.03% 78.02% -0.01% \n==========================================\n Files 124 124 \n Lines 20647 20647 \n==========================================\n- Hits 16111 16110 -1 \n- Misses 4536 4537 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4615?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4615/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4615/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.24% <0.00%> (-0.24%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4615?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4615?src=pr&el=footer). Last update [8cc6807...8d7469b](https://codecov.io/gh/huggingface/transformers/pull/4615?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@patrickvonplaten \r\nI seem to have figured out why this is happening.\r\n\r\nThis line https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L103\r\ntokenizes the doc text into individual tokens and then this line \r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L134\r\n\r\nuses the list of those tokens for encoding\r\n\r\nWhile this works for other BERT models , for roberta and and longformer tokenizer, the final \r\nencoding results in this\r\n`'<s> Who was Jim Henson?</s></s>JimHensonwasanicepuppet</s>'`\r\n\r\nChanging `span_doc_tokens` with `example.context_text` at L134 seems to solve the problem. But I'm not sure if doing this will cause other things to break. ",
"Thanks for the PR @patil-suraj. \r\n\r\nI will put this on hold for a week though since we will most likely do some major changes very soon here.\r\n\r\n1) I think the squad preprocessing functions will probably be refactoring building on the new `nlp` library @thomwolf \r\n\r\n2) IMO, the function `squad_convert_examples_to_features` should in general not be used in the `QuestionAnsweringPipeline` since we only need `input_ids` and `attention_mask` for inference and some other values for the `score`. Also, we should also be able to use the pipeline for `TriviaQA` (with Longformer, we now have a pretrained model that works very well on TriviaQA). The pipeline should not be dataset specific. I think it might be a good idea to do a bigger refactoring of `QuestionAnsweringPipeline` and make it independent from the `squad_convert_examples_to_features` function. What do you think @julien-c @thomwolf @LysandreJik ",
"Hi @patrickvonplaten , I think we should fix this now, as the newly launched model inference api uses qa pipeline its failing or giving weird answers for longformer qa models on model hub. This might discourage the users from using them. ",
"@patrickvonplaten I agree with you, especially because that method is made for heavy processing of large data, which is not the case with pipelines. It's slow to start, and uses multiprocessing by default, something we don't necessarily want with the pipelines.",
"Also putting @mfuntowicz in cc here",
"Ok we'll have conflicts here (#5496), we need to handle with care.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I believe this issue with roberta models for QA never got fixed. Any plans to continue working here? \r\n\r\n> While this works for other BERT models, for roberta and and longformer tokenizer, the final\r\nencoding results in this \r\n< s> Who was Jim Henson?</s></s>JimHensonwasanicepuppet</s>\r\n\r\nAs mentioned by @patil-suraj, we don't respect whitespace before words in the passage. Therefore, we currently use the input id for \"ĠHenson\" in the question, but the one for \"Henson\" in the passage.\r\n\r\nThe current implementation also leads to quite poor results of our QA models. For example, F1 of `deepset/roberta-base-squad2` on SQuAD 2 dev is down to 0.69 whereas it gets 0.81 with \"whitespace preserving\" tokenization.\r\n\r\nA simple fix could be to add `add_prefix_space=True` here in the tokenizer call for Roberta tokenizers (or similar), but might not be the most elegant solution. \r\nhttps://github.com/huggingface/transformers/blob/28cf873036d078b47fb9dd38ac3421a7c874da44/src/transformers/data/processors/squad.py#L112\r\n\r\nI can to do a PR for this, if that's how you want to fix it. \r\n",
"A PR would be very welcome :-) ",
"@patrickvonplaten Added a PR https://github.com/huggingface/transformers/pull/7387"
] | 1,590 | 1,601 | 1,599 | MEMBER | null | This PR adds `LongformerForQuestionAnswering` in `QuestionAnsweringPipeline`
@patrickvonplaten @ibeltagy | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4615/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4615/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4615",
"html_url": "https://github.com/huggingface/transformers/pull/4615",
"diff_url": "https://github.com/huggingface/transformers/pull/4615.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4615.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4614 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4614/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4614/comments | https://api.github.com/repos/huggingface/transformers/issues/4614/events | https://github.com/huggingface/transformers/pull/4614 | 625,574,625 | MDExOlB1bGxSZXF1ZXN0NDIzNzU2NzQz | 4,614 | [Contributing Doc] Update version command when contributing | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4614?src=pr&el=h1) Report\n> Merging [#4614](https://codecov.io/gh/huggingface/transformers/pull/4614?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a9aa7456ac824c9027385b149f405e4f5649273f&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4614?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4614 +/- ##\n=======================================\n Coverage 78.02% 78.03% \n=======================================\n Files 124 124 \n Lines 20626 20626 \n=======================================\n+ Hits 16093 16095 +2 \n+ Misses 4533 4531 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4614?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4614/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4614?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4614?src=pr&el=footer). Last update [a9aa745...cbab365](https://codecov.io/gh/huggingface/transformers/pull/4614?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good catch, thanks!"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | According to PR: #4131, the `CONTRIBUTING.md` should be updated a bit. @BramVanroy | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4614/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4614/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4614",
"html_url": "https://github.com/huggingface/transformers/pull/4614",
"diff_url": "https://github.com/huggingface/transformers/pull/4614.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4614.patch",
"merged_at": 1590592751000
} |
https://api.github.com/repos/huggingface/transformers/issues/4613 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4613/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4613/comments | https://api.github.com/repos/huggingface/transformers/issues/4613/events | https://github.com/huggingface/transformers/issues/4613 | 625,535,809 | MDU6SXNzdWU2MjU1MzU4MDk= | 4,613 | What does the output of feature-extraction pipeline represent? | {
"login": "orenpapers",
"id": 28626773,
"node_id": "MDQ6VXNlcjI4NjI2Nzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/28626773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orenpapers",
"html_url": "https://github.com/orenpapers",
"followers_url": "https://api.github.com/users/orenpapers/followers",
"following_url": "https://api.github.com/users/orenpapers/following{/other_user}",
"gists_url": "https://api.github.com/users/orenpapers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orenpapers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orenpapers/subscriptions",
"organizations_url": "https://api.github.com/users/orenpapers/orgs",
"repos_url": "https://api.github.com/users/orenpapers/repos",
"events_url": "https://api.github.com/users/orenpapers/events{/privacy}",
"received_events_url": "https://api.github.com/users/orenpapers/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"They are embeddings generated from the model. (Bert -Base Model I guess. cause it has a hidden representation of 768 dim). You get 9 elements:- one contextual embedding for each word in your sequence. These values of embeddings represent some hidden features that are not easy to interpret.",
"So the pipeline will just return the last layer encoding of Bert?\r\nSo what is the differance with a code like\r\n\r\n```\r\ninput_ids = torch.tensor(bert_tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) \r\noutputs = bert_model(input_ids)\r\nhidden_states = outputs[-1][1:] # The last hidden-state is the first element of the output tuple\r\nlayer_hidden_state = hidden_states[n_layer]\r\nreturn layer_hidden_state\r\n```\r\nAlso, does BERT encoding have similar traits as word2vec? e.g. similar word will be closer, France - Paris = England - London , etc?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> So the pipeline will just return the last layer encoding of Bert?\r\n> So what is the differance with a code like\r\n> \r\n> ```\r\n> input_ids = torch.tensor(bert_tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) \r\n> outputs = bert_model(input_ids)\r\n> hidden_states = outputs[-1][1:] # The last hidden-state is the first element of the output tuple\r\n> layer_hidden_state = hidden_states[n_layer]\r\n> return layer_hidden_state\r\n> ```\r\n> \r\n> Also, does BERT encoding have similar traits as word2vec? e.g. similar word will be closer, France - Paris = England - London , etc?\r\n\r\nHi @orko19,\r\nDid you understand the difference from 'hidden_states' vs. 'feature-extraction pipeline'? I'd like to understand it as well\r\nThanks!",
"@merleyc I do not! Please share if you do :)",
"The outputs between \"last_hidden_state\" and \"feature-extraction pipeline\" are same, you can try by yourself\r\n\r\n\"feature-extraction pipeline\" just helps us do some jobs from tokenize words to embedding "
] | 1,590 | 1,621 | 1,596 | NONE | null | I am using the feature-extraction pipeline:
```
nlp_fe = pipeline('feature-extraction')
nlp_fe('there is a book on the desk')
```
As an output I get a list with one element - that is a list with 9 elements - that is a list of 768 features (floats).
What is the output represent? What is every element of the lists, and what is the meaning of the 768 float values?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4613/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4612/comments | https://api.github.com/repos/huggingface/transformers/issues/4612/events | https://github.com/huggingface/transformers/issues/4612 | 625,532,857 | MDU6SXNzdWU2MjU1MzI4NTc= | 4,612 | Use fill-mask pipeline to get probability of specific token | {
"login": "orenpapers",
"id": 28626773,
"node_id": "MDQ6VXNlcjI4NjI2Nzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/28626773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orenpapers",
"html_url": "https://github.com/orenpapers",
"followers_url": "https://api.github.com/users/orenpapers/followers",
"following_url": "https://api.github.com/users/orenpapers/following{/other_user}",
"gists_url": "https://api.github.com/users/orenpapers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orenpapers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orenpapers/subscriptions",
"organizations_url": "https://api.github.com/users/orenpapers/orgs",
"repos_url": "https://api.github.com/users/orenpapers/repos",
"events_url": "https://api.github.com/users/orenpapers/events{/privacy}",
"received_events_url": "https://api.github.com/users/orenpapers/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, the pipeline doesn't offer such a functionality yet. You're better off using the model directly. Here's an example of how you would replicate the pipeline's behavior, and get a token score at the end:\r\n\r\n```py\r\nfrom transformers import AutoModelWithLMHead, AutoTokenizer\r\nimport torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilroberta-base\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"distilroberta-base\")\r\n\r\nsequence = f\"Hugging Face is a French company based in {tokenizer.mask_token}\"\r\n\r\ninput_ids = tokenizer.encode(sequence, return_tensors=\"pt\")\r\nmask_token_index = torch.where(input_ids == tokenizer.mask_token_id)[1]\r\n\r\ntoken_logits = model(input_ids)[0]\r\nmask_token_logits = token_logits[0, mask_token_index, :]\r\nmask_token_logits = torch.softmax(mask_token_logits, dim=1)\r\n\r\ntop_5 = torch.topk(mask_token_logits, 5, dim=1)\r\ntop_5_tokens = zip(top_5.indices[0].tolist(), top_5.values[0].tolist())\r\n\r\nfor token, score in top_5_tokens:\r\n print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])), f\"(score: {score})\")\r\n\r\n# Get the score of token_id\r\nsought_after_token = \"London\"\r\nsought_after_token_id = tokenizer.encode(sought_after_token, add_special_tokens=False, add_prefix_space=True)[0] # 928\r\n\r\ntoken_score = mask_token_logits[:, sought_after_token_id]\r\nprint(f\"Score of {sought_after_token}: {mask_token_logits[:, sought_after_token_id]}\")\r\n```\r\n\r\nOutputs:\r\n\r\n```\r\nHugging Face is a French company based in Paris (score: 0.2310674488544464)\r\nHugging Face is a French company based in Lyon (score: 0.08198253810405731)\r\nHugging Face is a French company based in Geneva (score: 0.04769456014037132)\r\nHugging Face is a French company based in Brussels (score: 0.047622524201869965)\r\nHugging Face is a French company based in France (score: 0.04130581393837929)\r\nScore of London: tensor([0.0343], grad_fn=<SelectBackward>)\r\n```\r\n\r\nLet me know if it helps.",
"@lavanyashukla Great thanks! \r\nAnd if I want a predicability of a whole sentence, the best way will be just to average all words scores?\r\n",
"Yes, that's one way to do it.",
"@LysandreJik I get an error:\r\n\r\n```\r\n\"NLP_engine.py\", line 120, in _word_in_sentence_prob\r\n mask_token_index = torch.where(input_ids == bert_tokenizer.mask_token_id)[1]\r\nTypeError: where(): argument 'condition' (position 1) must be Tensor, not bool\r\n\r\n```\r\nFor the code:\r\n ```\r\ndef _word_in_sentence_prob(self, sentence, word):\r\n\r\n sequence = f\"{sentence} {bert_tokenizer.mask_token}\"\r\n\r\n input_ids = bert_tokenizer.encode(sequence, bert_tokenizer=\"pt\")\r\n mask_token_index = torch.where(input_ids == bert_tokenizer.mask_token_id)[1]\r\n\r\n token_logits = bert_model(input_ids)[0]\r\n mask_token_logits = token_logits[0, mask_token_index, :]\r\n mask_token_logits = torch.softmax(mask_token_logits, dim=1)\r\n\r\n top_5 = torch.topk(mask_token_logits, 5, dim=1)\r\n top_5_tokens = zip(top_5.indices[0].tolist(), top_5.values[0].tolist())\r\n\r\n for token, score in top_5_tokens:\r\n print(sequence.replace(bert_tokenizer.mask_token, bert_tokenizer.decode([token])), f\"(score: {score})\")\r\n\r\n # Get the score of token_id\r\n sought_after_token = word\r\n sought_after_token_id = bert_tokenizer.encode(sought_after_token, add_special_tokens=False, add_prefix_space=True)[\r\n 0] # 928\r\n\r\n token_score = mask_token_logits[:, sought_after_token_id]\r\n print(f\"Score of {sought_after_token}: {mask_token_logits[:, sought_after_token_id]}\")\r\n return token_score\r\n```\r\n\r\nAny idea why?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@LysandreJik I also get the error:\r\n```\r\n mask_token_index = torch.where(input_ids == bert_tokenizer.mask_token_id)[1]\r\nTypeError: where(): argument 'condition' (position 1) must be Tensor, not bool\r\n```\r\nfor this code.\r\nI have torch version 1.7.1\r\nAny idea what is the problem? Might it be version-related?\r\nIf so, what changes should be made in the code? Or what version should I downgrade to?",
"For those still searching for a solution (after nearly 3 years), just convert the condition into a tensor:\r\n\r\n`mask_token_index = torch.where(torch.tensor(input_ids == tokenizer.mask_token_id))[1]`"
] | 1,590 | 1,703 | 1,596 | NONE | null | Hi,
I am trying to use the fill-mask pipeline:
```
nlp_fm = pipeline('fill-mask')
nlp_fm('Hugging Face is a French company based in <mask>')
```
And get the output:
```
[{'sequence': '<s> Hugging Face is a French company based in Paris</s>',
'score': 0.23106734454631805,
'token': 2201},
{'sequence': '<s> Hugging Face is a French company based in Lyon</s>',
'score': 0.08198195695877075,
'token': 12790},
{'sequence': '<s> Hugging Face is a French company based in Geneva</s>',
'score': 0.04769458621740341,
'token': 11559},
{'sequence': '<s> Hugging Face is a French company based in Brussels</s>',
'score': 0.04762236401438713,
'token': 6497},
{'sequence': '<s> Hugging Face is a French company based in France</s>',
'score': 0.041305914521217346,
'token': 1470}]
```
But let's say I want to get the score & rank on other word - such as London - is this possible?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4612/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4611/comments | https://api.github.com/repos/huggingface/transformers/issues/4611/events | https://github.com/huggingface/transformers/issues/4611 | 625,521,391 | MDU6SXNzdWU2MjU1MjEzOTE= | 4,611 | Key error while evaluating the Language Model finetuning | {
"login": "graviraja",
"id": 7556119,
"node_id": "MDQ6VXNlcjc1NTYxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7556119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/graviraja",
"html_url": "https://github.com/graviraja",
"followers_url": "https://api.github.com/users/graviraja/followers",
"following_url": "https://api.github.com/users/graviraja/following{/other_user}",
"gists_url": "https://api.github.com/users/graviraja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/graviraja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/graviraja/subscriptions",
"organizations_url": "https://api.github.com/users/graviraja/orgs",
"repos_url": "https://api.github.com/users/graviraja/repos",
"events_url": "https://api.github.com/users/graviraja/events{/privacy}",
"received_events_url": "https://api.github.com/users/graviraja/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks like your training script is out of sync with the library. Can you install the library from source, as documented in https://github.com/huggingface/transformers/tree/master/examples#important-note ?",
"Thanks, @julien-c building from source, solves the issue. ",
"I also find a problem about this......\r\nIf we set the lables_name =\"labels\" in the TrainerAugments, it would be wrong.\r\nBecause lables_name must be a list in TrainerAugments. If we set the labels_name = \"labels\", the function prediction_steps() in Trainer will set has_lables equal to None. For this line:\r\nhas_labels = all(inputs.get(k) is not None for k in self.label_names) in Trainer.py 1462."
] | 1,590 | 1,606 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilBert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. python run_language_modeling.py \
--output_dir=output \
--model_type=distilbert\
--model_name_or_path=distilbert-base-uncased \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```code
05/27/2020 08:44:42 - INFO - __main__ - *** Evaluate ***
05/27/2020 08:44:42 - INFO - transformers.trainer - ***** Running Evaluation *****
05/27/2020 08:44:42 - INFO - transformers.trainer - Num examples = 329
05/27/2020 08:44:42 - INFO - transformers.trainer - Batch size = 8
Evaluation: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 42/42 [00:04<00:00, 9.09it/s]
Traceback (most recent call last):
File "run_language_modeling.py", line 281, in <module>
main()
File "run_language_modeling.py", line 259, in main
perplexity = math.exp(eval_output["eval_loss"])
KeyError: 'eval_loss'
```
## Expected behavior
Evaluation of the validation data and output the perplexity.
Upon debugging the code, the eval_output doesn't have the key `eval_loss`
```code
-> perplexity = math.exp(eval_output["eval_loss"])
(Pdb) eval_output
{'loss': 1.8573534346762157}
```
Please change the key value accordingly.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.0
- Platform: RHEL
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4611/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4610/comments | https://api.github.com/repos/huggingface/transformers/issues/4610/events | https://github.com/huggingface/transformers/pull/4610 | 625,466,809 | MDExOlB1bGxSZXF1ZXN0NDIzNjc0Nzg0 | 4,610 | README for HooshvareLab | {
"login": "m3hrdadfi",
"id": 2601833,
"node_id": "MDQ6VXNlcjI2MDE4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m3hrdadfi",
"html_url": "https://github.com/m3hrdadfi",
"followers_url": "https://api.github.com/users/m3hrdadfi/followers",
"following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}",
"gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions",
"organizations_url": "https://api.github.com/users/m3hrdadfi/orgs",
"repos_url": "https://api.github.com/users/m3hrdadfi/repos",
"events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/m3hrdadfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4610?src=pr&el=h1) Report\n> Merging [#4610](https://codecov.io/gh/huggingface/transformers/pull/4610?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a9aa7456ac824c9027385b149f405e4f5649273f&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4610?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4610 +/- ##\n=======================================\n Coverage 78.02% 78.02% \n=======================================\n Files 124 124 \n Lines 20626 20626 \n=======================================\n+ Hits 16093 16094 +1 \n+ Misses 4533 4532 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4610?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4610/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4610?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4610?src=pr&el=footer). Last update [a9aa745...3db079d](https://codecov.io/gh/huggingface/transformers/pull/4610?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | HooshvareLab/bert-base-parsbert-uncased | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4610/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4610",
"html_url": "https://github.com/huggingface/transformers/pull/4610",
"diff_url": "https://github.com/huggingface/transformers/pull/4610.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4610.patch",
"merged_at": 1590593136000
} |
https://api.github.com/repos/huggingface/transformers/issues/4609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4609/comments | https://api.github.com/repos/huggingface/transformers/issues/4609/events | https://github.com/huggingface/transformers/issues/4609 | 625,409,155 | MDU6SXNzdWU2MjU0MDkxNTU= | 4,609 | How to deal with summarization task to long sequences input? | {
"login": "laetokang",
"id": 49485939,
"node_id": "MDQ6VXNlcjQ5NDg1OTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/49485939?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laetokang",
"html_url": "https://github.com/laetokang",
"followers_url": "https://api.github.com/users/laetokang/followers",
"following_url": "https://api.github.com/users/laetokang/following{/other_user}",
"gists_url": "https://api.github.com/users/laetokang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laetokang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laetokang/subscriptions",
"organizations_url": "https://api.github.com/users/laetokang/orgs",
"repos_url": "https://api.github.com/users/laetokang/repos",
"events_url": "https://api.github.com/users/laetokang/events{/privacy}",
"received_events_url": "https://api.github.com/users/laetokang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Usually, the input is simply cut in this case. Bart cuts the input to 1024 tokens when training on CNN Daily Mail. T5 cuts the input to 512 tokens when training on CNN Daily Mail.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I am going to carry out the sumarization task using the 'transformers' module you provided. But there's a problem. The sequence which i have is too long, so an error occurs when inserting it. Is there any way to summarize the entire document by sliding it?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4609/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4608/comments | https://api.github.com/repos/huggingface/transformers/issues/4608/events | https://github.com/huggingface/transformers/pull/4608 | 625,402,703 | MDExOlB1bGxSZXF1ZXN0NDIzNjI2Njc5 | 4,608 | uncased readme | {
"login": "kldarek",
"id": 15803781,
"node_id": "MDQ6VXNlcjE1ODAzNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/15803781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kldarek",
"html_url": "https://github.com/kldarek",
"followers_url": "https://api.github.com/users/kldarek/followers",
"following_url": "https://api.github.com/users/kldarek/following{/other_user}",
"gists_url": "https://api.github.com/users/kldarek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kldarek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kldarek/subscriptions",
"organizations_url": "https://api.github.com/users/kldarek/orgs",
"repos_url": "https://api.github.com/users/kldarek/repos",
"events_url": "https://api.github.com/users/kldarek/events{/privacy}",
"received_events_url": "https://api.github.com/users/kldarek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4608?src=pr&el=h1) Report\n> Merging [#4608](https://codecov.io/gh/huggingface/transformers/pull/4608?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a9aa7456ac824c9027385b149f405e4f5649273f&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4608?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4608 +/- ##\n=======================================\n Coverage 78.02% 78.02% \n=======================================\n Files 124 124 \n Lines 20626 20626 \n=======================================\n+ Hits 16093 16094 +1 \n+ Misses 4533 4532 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4608?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4608/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4608?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4608?src=pr&el=footer). Last update [a9aa745...470e98f](https://codecov.io/gh/huggingface/transformers/pull/4608?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | updates to the model card for uncased model with more evaluation results and recommendation to switch to cased model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4608/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4608",
"html_url": "https://github.com/huggingface/transformers/pull/4608",
"diff_url": "https://github.com/huggingface/transformers/pull/4608.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4608.patch",
"merged_at": 1590587405000
} |
https://api.github.com/repos/huggingface/transformers/issues/4607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4607/comments | https://api.github.com/repos/huggingface/transformers/issues/4607/events | https://github.com/huggingface/transformers/pull/4607 | 625,394,688 | MDExOlB1bGxSZXF1ZXN0NDIzNjIwNjMy | 4,607 | Create README.md | {
"login": "kldarek",
"id": 15803781,
"node_id": "MDQ6VXNlcjE1ODAzNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/15803781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kldarek",
"html_url": "https://github.com/kldarek",
"followers_url": "https://api.github.com/users/kldarek/followers",
"following_url": "https://api.github.com/users/kldarek/following{/other_user}",
"gists_url": "https://api.github.com/users/kldarek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kldarek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kldarek/subscriptions",
"organizations_url": "https://api.github.com/users/kldarek/orgs",
"repos_url": "https://api.github.com/users/kldarek/repos",
"events_url": "https://api.github.com/users/kldarek/events{/privacy}",
"received_events_url": "https://api.github.com/users/kldarek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4607?src=pr&el=h1) Report\n> Merging [#4607](https://codecov.io/gh/huggingface/transformers/pull/4607?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a9aa7456ac824c9027385b149f405e4f5649273f&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4607?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4607 +/- ##\n=======================================\n Coverage 78.02% 78.02% \n=======================================\n Files 124 124 \n Lines 20626 20626 \n=======================================\n+ Hits 16093 16094 +1 \n+ Misses 4533 4532 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4607?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4607?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4607?src=pr&el=footer). Last update [a9aa745...efcdd13](https://codecov.io/gh/huggingface/transformers/pull/4607?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great – [model page](https://huggingface.co/dkleczek/bert-base-polish-cased-v1)"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | Model card for cased model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4607/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4607",
"html_url": "https://github.com/huggingface/transformers/pull/4607",
"diff_url": "https://github.com/huggingface/transformers/pull/4607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4607.patch",
"merged_at": 1590586581000
} |
https://api.github.com/repos/huggingface/transformers/issues/4606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4606/comments | https://api.github.com/repos/huggingface/transformers/issues/4606/events | https://github.com/huggingface/transformers/issues/4606 | 625,337,409 | MDU6SXNzdWU2MjUzMzc0MDk= | 4,606 | Inconsistency in how Electra doing sentence level prediction | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Are there any updates on this issue?"
] | 1,590 | 1,617 | 1,596 | NONE | null | In `ElectraForSequenceClassification`:
We have docstring says, `ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of
the pooled output) e.g. for GLUE tasks.`
This is also what I observed in the official repository.
https://github.com/google-research/electra/blob/81f7e5fc98b0ad8bfd20b641aa8bc9e6ac00c8eb/finetune/classification/classification_tasks.py#L270
https://github.com/google-research/electra/blob/79111328070e491b287c307906701ebc61091eb2/model/modeling.py#L254
which is
```
nn.Sequential( nn.Dropout(config.hidden_dropout_prob),
nn.Linear(config.hidden_size, config.num_labels)
```
**But** the implementation of `ElectraClassificationHead` (used by `ElectraForSequenceClassification`) is
```
def forward(self, features, **kwargs):
x = features[:, 0, :] # take <s> token (equiv. to [CLS])
x = self.dropout(x)
x = self.dense(x)
x = get_activation("gelu")(x) # although BERT uses tanh here, it seems Electra authors used gelu here
x = self.dropout(x)
x = self.out_proj(x)
return x
```
Is there something I overlooked in the official repository ? Hot to explain the inconsistency in doc and implementaion of `ElectraForSequenceClassification`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4606/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4606/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4605/comments | https://api.github.com/repos/huggingface/transformers/issues/4605/events | https://github.com/huggingface/transformers/pull/4605 | 625,266,179 | MDExOlB1bGxSZXF1ZXN0NDIzNTE4OTc5 | 4,605 | Glue task cleanup | {
"login": "jysohn23",
"id": 19496130,
"node_id": "MDQ6VXNlcjE5NDk2MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/19496130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jysohn23",
"html_url": "https://github.com/jysohn23",
"followers_url": "https://api.github.com/users/jysohn23/followers",
"following_url": "https://api.github.com/users/jysohn23/following{/other_user}",
"gists_url": "https://api.github.com/users/jysohn23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jysohn23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jysohn23/subscriptions",
"organizations_url": "https://api.github.com/users/jysohn23/orgs",
"repos_url": "https://api.github.com/users/jysohn23/repos",
"events_url": "https://api.github.com/users/jysohn23/events{/privacy}",
"received_events_url": "https://api.github.com/users/jysohn23/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | COLLABORATOR | null | * Enable writing cache to cache_dir in case dataset lives in readOnly
filesystem
* Differentiate match vs mismatch for MNLI metrics
* Manually flush tensorboard writer to avoid missing metrics. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4605/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4605",
"html_url": "https://github.com/huggingface/transformers/pull/4605",
"diff_url": "https://github.com/huggingface/transformers/pull/4605.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4605.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4604/comments | https://api.github.com/repos/huggingface/transformers/issues/4604/events | https://github.com/huggingface/transformers/pull/4604 | 625,143,527 | MDExOlB1bGxSZXF1ZXN0NDIzNDE5MDMx | 4,604 | updated model cards for both models at aubmindlab | {
"login": "WissamAntoun",
"id": 44616226,
"node_id": "MDQ6VXNlcjQ0NjE2MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44616226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WissamAntoun",
"html_url": "https://github.com/WissamAntoun",
"followers_url": "https://api.github.com/users/WissamAntoun/followers",
"following_url": "https://api.github.com/users/WissamAntoun/following{/other_user}",
"gists_url": "https://api.github.com/users/WissamAntoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WissamAntoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WissamAntoun/subscriptions",
"organizations_url": "https://api.github.com/users/WissamAntoun/orgs",
"repos_url": "https://api.github.com/users/WissamAntoun/repos",
"events_url": "https://api.github.com/users/WissamAntoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WissamAntoun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Great logo!",
"link seems broken on huggingface.co but I'll fix directly",
"Thank you Julien!"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | - added AraBERT image.
- updated usage examples
- updated results | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4604/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4604",
"html_url": "https://github.com/huggingface/transformers/pull/4604",
"diff_url": "https://github.com/huggingface/transformers/pull/4604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4604.patch",
"merged_at": 1590526363000
} |
https://api.github.com/repos/huggingface/transformers/issues/4603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4603/comments | https://api.github.com/repos/huggingface/transformers/issues/4603/events | https://github.com/huggingface/transformers/pull/4603 | 624,972,134 | MDExOlB1bGxSZXF1ZXN0NDIzMjc4MTcx | 4,603 | Creating a readme for ALBERT in Mongolian | {
"login": "bayartsogt-ya",
"id": 43239645,
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayartsogt-ya",
"html_url": "https://github.com/bayartsogt-ya",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"That is awesome, thank you"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | Here I am uploading Mongolian masked language model (ALBERT) on your platform.
https://en.wikipedia.org/wiki/Mongolia | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4603/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4603",
"html_url": "https://github.com/huggingface/transformers/pull/4603",
"diff_url": "https://github.com/huggingface/transformers/pull/4603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4603.patch",
"merged_at": 1590526483000
} |
https://api.github.com/repos/huggingface/transformers/issues/4602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4602/comments | https://api.github.com/repos/huggingface/transformers/issues/4602/events | https://github.com/huggingface/transformers/pull/4602 | 624,958,111 | MDExOlB1bGxSZXF1ZXN0NDIzMjY2NDk3 | 4,602 | Remove MD emojis | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4602?src=pr&el=h1) Report\n> Merging [#4602](https://codecov.io/gh/huggingface/transformers/pull/4602?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ddd8d6531c8c49fdd281b55b93f6c81c9826f4b&el=desc) will **increase** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4602?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4602 +/- ##\n==========================================\n+ Coverage 78.03% 78.11% +0.08% \n==========================================\n Files 124 124 \n Lines 20647 20647 \n==========================================\n+ Hits 16111 16128 +17 \n+ Misses 4536 4519 -17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4602?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4602/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4602/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4602/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `34.07% <0.00%> (+5.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4602?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4602?src=pr&el=footer). Last update [5ddd8d6...a5049de](https://codecov.io/gh/huggingface/transformers/pull/4602?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4602/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4602",
"html_url": "https://github.com/huggingface/transformers/pull/4602",
"diff_url": "https://github.com/huggingface/transformers/pull/4602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4602.patch",
"merged_at": 1590525519000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4601/comments | https://api.github.com/repos/huggingface/transformers/issues/4601/events | https://github.com/huggingface/transformers/issues/4601 | 624,953,220 | MDU6SXNzdWU2MjQ5NTMyMjA= | 4,601 | Which models can be using for encoder-decoder? | {
"login": "blizda",
"id": 9090456,
"node_id": "MDQ6VXNlcjkwOTA0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9090456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blizda",
"html_url": "https://github.com/blizda",
"followers_url": "https://api.github.com/users/blizda/followers",
"following_url": "https://api.github.com/users/blizda/following{/other_user}",
"gists_url": "https://api.github.com/users/blizda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blizda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blizda/subscriptions",
"organizations_url": "https://api.github.com/users/blizda/orgs",
"repos_url": "https://api.github.com/users/blizda/repos",
"events_url": "https://api.github.com/users/blizda/events{/privacy}",
"received_events_url": "https://api.github.com/users/blizda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@blizda did you find an answer to your query - \"Only Bert can be using as encoder and decoder? If so, can you add list of available models for encoder-decoder in documentation?\"? ",
"There's an error-message saying which models can be used now; \r\n\"... Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig.\""
] | 1,590 | 1,619 | 1,590 | NONE | null | Hi, I trying to use EncoderDecoderModel. I tried google/electra-base-discriminator, google/electra-small-discriminator, albert-base-v2 as encoder and decoder:
```python
from transformers import EncoderDecoderModel
model = EncoderDecoderModel.from_encoder_decoder_pretrained('google/electra-small-discriminator', 'google/electra-small-discriminator')
```
but, I always gate the same error
```python
TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states'
```
Only Bert can be using as encoder and decoder? If so, can you add list of available models for encoder-decoder in documentation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4601/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/4601/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4600/comments | https://api.github.com/repos/huggingface/transformers/issues/4600/events | https://github.com/huggingface/transformers/issues/4600 | 624,867,331 | MDU6SXNzdWU2MjQ4NjczMzE= | 4,600 | Functionality for addressing imbalance data points? | {
"login": "innat",
"id": 17668390,
"node_id": "MDQ6VXNlcjE3NjY4Mzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/17668390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/innat",
"html_url": "https://github.com/innat",
"followers_url": "https://api.github.com/users/innat/followers",
"following_url": "https://api.github.com/users/innat/following{/other_user}",
"gists_url": "https://api.github.com/users/innat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/innat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/innat/subscriptions",
"organizations_url": "https://api.github.com/users/innat/orgs",
"repos_url": "https://api.github.com/users/innat/repos",
"events_url": "https://api.github.com/users/innat/events{/privacy}",
"received_events_url": "https://api.github.com/users/innat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,594 | 1,594 | NONE | null | Is there yet any functionality from transformers library to address or tackle imbalance classes in data points? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4600/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4599/comments | https://api.github.com/repos/huggingface/transformers/issues/4599/events | https://github.com/huggingface/transformers/issues/4599 | 624,840,655 | MDU6SXNzdWU2MjQ4NDA2NTU= | 4,599 | ImportError: cannot import name 'AutoModelForQuestionAnswering' from 'transformers | {
"login": "OguzKircicek",
"id": 11234183,
"node_id": "MDQ6VXNlcjExMjM0MTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/11234183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OguzKircicek",
"html_url": "https://github.com/OguzKircicek",
"followers_url": "https://api.github.com/users/OguzKircicek/followers",
"following_url": "https://api.github.com/users/OguzKircicek/following{/other_user}",
"gists_url": "https://api.github.com/users/OguzKircicek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OguzKircicek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OguzKircicek/subscriptions",
"organizations_url": "https://api.github.com/users/OguzKircicek/orgs",
"repos_url": "https://api.github.com/users/OguzKircicek/repos",
"events_url": "https://api.github.com/users/OguzKircicek/events{/privacy}",
"received_events_url": "https://api.github.com/users/OguzKircicek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What is your `transformers` version? Do you have PyTorch installed?",
"Thank you. I solved.",
"Great to hear!",
"> Thank you. I solved.\r\n\r\nHow did you solve it?",
"I installed pytorch library. "
] | 1,590 | 1,592 | 1,590 | NONE | null | Hi friends,
I would like to use transformers library. But while import I received this error.
This code:
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
import torch
# LOAD MODEL
tokenizer = AutoTokenizer.from_pretrained("savasy/bert-base-turkish-squad")
model = AutoModelForQuestionAnswering.from_pretrained("savasy/bert-base-turkish-squad")
```
**Error**
**ImportError: cannot import name 'AutoModelForQuestionAnswering' from 'transformers' (C:\Users\oguzk\anaconda3\lib\site-packages\transformers\__init__.py)** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4599/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4599/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4598/comments | https://api.github.com/repos/huggingface/transformers/issues/4598/events | https://github.com/huggingface/transformers/pull/4598 | 624,835,267 | MDExOlB1bGxSZXF1ZXN0NDIzMTY2ODA3 | 4,598 | [Reformer] automate axial_pos_shape | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @flozi00,\r\n\r\nThanks for the PR! To be honest I don't think we should merge this. A couple of reasons:\r\n\r\n1. In contrast to `num_buckets` which is said in the paper to be always around ~ `2 * sequence_length / chunk_length` `axial_pos_shape` can be freely set by the user. \r\n2. We are trying to add as little automatic settings that are not visible to the user as possible (also @thomwolf here). The reason is that it can later lead to errors that are hard to understand for the user. In this case, I don't think the user should use AxialPositionEmbeddings before having read the docs and understood how it works. Automatically setting the `num_buckets` is already suboptimal in this sense. ",
"BTW, I just answered your email - sorry I forgot about this"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | This PR automates the calculation of the axial_pos_shape.
checked that every combination of 2**n works using this sheet: https://docs.google.com/spreadsheets/d/19gnP1ve2fT2F59LNiky44SPpmtmOegfIlk5_3OFXjyU/edit?usp=sharing
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4598/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4598",
"html_url": "https://github.com/huggingface/transformers/pull/4598",
"diff_url": "https://github.com/huggingface/transformers/pull/4598.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4598.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4597/comments | https://api.github.com/repos/huggingface/transformers/issues/4597/events | https://github.com/huggingface/transformers/pull/4597 | 624,830,156 | MDExOlB1bGxSZXF1ZXN0NDIzMTYyNjY0 | 4,597 | [Draft] Bharaax outputattentions | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,591 | 1,591 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4597/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4597",
"html_url": "https://github.com/huggingface/transformers/pull/4597",
"diff_url": "https://github.com/huggingface/transformers/pull/4597.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4597.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4596/comments | https://api.github.com/repos/huggingface/transformers/issues/4596/events | https://github.com/huggingface/transformers/issues/4596 | 624,805,729 | MDU6SXNzdWU2MjQ4MDU3Mjk= | 4,596 | AttributeError: 'Namespace' object has no attribute 'to_json_string' | {
"login": "manhlab",
"id": 47383746,
"node_id": "MDQ6VXNlcjQ3MzgzNzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/47383746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manhlab",
"html_url": "https://github.com/manhlab",
"followers_url": "https://api.github.com/users/manhlab/followers",
"following_url": "https://api.github.com/users/manhlab/following{/other_user}",
"gists_url": "https://api.github.com/users/manhlab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manhlab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manhlab/subscriptions",
"organizations_url": "https://api.github.com/users/manhlab/orgs",
"repos_url": "https://api.github.com/users/manhlab/repos",
"events_url": "https://api.github.com/users/manhlab/events{/privacy}",
"received_events_url": "https://api.github.com/users/manhlab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm not sure I understand what you're trying to do. Do you mind explaining? Showing the code you're using would help as well.",
"```\r\ntraining_args = dict( num_cores= 8, model_name_or_path= 't5-base',\r\n max_len= 512 ,target_max_len= 2, output_dir= './models',\r\n overwrite_output_dir= True,\r\n per_gpu_train_batch_size= 8,\r\n per_gpu_eval_batch_size= 8,\r\n gradient_accumulation_steps= 4,\r\n learning_rate= 1e-4,\r\n tpu_num_cores= 8,\r\n logging_dir='/log',\r\n do_train= True, weight_decay=0.00,\r\n device='xla',local_rank=-1,\r\n max_steps=10000, adam_epsilon=1e-8,\r\n warmup_steps=0,\r\n train_batch_size=8,\r\n eval_batch_size=8,\r\n num_train_epochs=1,\r\n early_stop_callback=False,\r\n fp_16=False, # if you want to enable 16-bit training then install apex and set this to true\r\n opt_level='O1', # you can find out more on optimisation levels here https://nvidia.github.io/apex/amp.html#opt-levels-and-properties\r\n max_grad_norm=1.0, # if you enable 16-bit training then set this to a sensible value, 0.5 is a good default\r\n seed=42, fp16=False, n_gpu=0,SummaryWriter=None)\r\n from transformers import Trainer\r\ntrainer = Trainer(\r\n model=model,\r\n args=argparse.Namespace(**training_args),\r\n train_dataset=train_dataset,\r\n data_collator=T2TDataCollator(),\r\n prediction_loss_only=True, tb_writer=None\r\n )\r\n\r\ntrainer.train()\r\n\r\n````",
"here is my code and i got \r\nAttributeError: 'Namespace' object has no attribute 'to_json_string'",
"Trainer's args should be a `TrainingArguments` instance, not a dict or a Namespace.\r\n\r\nTry:\r\n```python\r\nfrom transformers import TrainingArguments\r\n```"
] | 1,590 | 1,590 | 1,590 | NONE | null | trainer.train()
don't know how to set parameters self.args.to_json_string()
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4596/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4595/comments | https://api.github.com/repos/huggingface/transformers/issues/4595/events | https://github.com/huggingface/transformers/issues/4595 | 624,801,242 | MDU6SXNzdWU2MjQ4MDEyNDI= | 4,595 | KeyError when loading a trained EncoderDecoder model | {
"login": "gustavscholin",
"id": 35476152,
"node_id": "MDQ6VXNlcjM1NDc2MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/35476152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gustavscholin",
"html_url": "https://github.com/gustavscholin",
"followers_url": "https://api.github.com/users/gustavscholin/followers",
"following_url": "https://api.github.com/users/gustavscholin/following{/other_user}",
"gists_url": "https://api.github.com/users/gustavscholin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gustavscholin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gustavscholin/subscriptions",
"organizations_url": "https://api.github.com/users/gustavscholin/orgs",
"repos_url": "https://api.github.com/users/gustavscholin/repos",
"events_url": "https://api.github.com/users/gustavscholin/events{/privacy}",
"received_events_url": "https://api.github.com/users/gustavscholin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @gustavscholin, \r\n\r\nThanks for you issue!\r\nCould you please provide a code example so that I can reproduce the error? ",
"Hi, @patrickvonplaten @gustavscholin \r\nFor me, setting \"base_model_prefix\" in modeling_encoder_decoder.py fixed this problem, as finding params is based on self.base_model_prefix. \r\n\r\nIs it fundamental solution? or just short-sighted? ",
"@patrickvonplaten, here's a colab notebook to reproduce the error:\r\n\r\nhttps://colab.research.google.com/drive/102U7pJJcyw__Yq0PERxAKvKPSx3bvNSi?usp=sharing",
"> Hi, @patrickvonplaten @gustavscholin\r\n> For me, setting \"base_model_prefix\" in modeling_encoder_decoder.py fixed this problem, as finding params is based on self.base_model_prefix.\r\n> \r\n> Is it fundamental solution? or just short-sighted?\r\n\r\nNo that was the right solution :-) I did exactly the same in this fix: #4680",
"> @patrickvonplaten, here's a colab notebook to reproduce the error:\r\n> \r\n> https://colab.research.google.com/drive/102U7pJJcyw__Yq0PERxAKvKPSx3bvNSi?usp=sharing\r\n\r\nA saved encoder-decoder model will always only be saved in a single folder. A single folder can always be loaded with `.from_pretrained()`. So to make your notebook work, you simply have to replace this line:\r\n\r\n```python \r\nsaved_model = EncoderDecoderModel.from_encoder_decoder_pretrained('test_run', 'test_run')\r\n```\r\n\r\nby \r\n\r\n```python \r\nsaved_model = EncoderDecoderModel.from_pretrained('test_run')\r\n```"
] | 1,590 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Error when loading a trained EncoderDecoder model. When loading the config the in `configuration_auto.py` the `model_type` is expected on the form `encoder-decoder` but in `configuration_encoder_decoder.py` `model_type` is on the form `encoder_decoder` which raises a KeyError.
The hyphen version seems to be convention in the other model configuration files.
I guess this is something for @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4595/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4594/comments | https://api.github.com/repos/huggingface/transformers/issues/4594/events | https://github.com/huggingface/transformers/issues/4594 | 624,774,931 | MDU6SXNzdWU2MjQ3NzQ5MzE= | 4,594 | KeyError: "Unable to open object (object 'bias:0' doesn't exist)" | {
"login": "yuimo",
"id": 22741826,
"node_id": "MDQ6VXNlcjIyNzQxODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/22741826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuimo",
"html_url": "https://github.com/yuimo",
"followers_url": "https://api.github.com/users/yuimo/followers",
"following_url": "https://api.github.com/users/yuimo/following{/other_user}",
"gists_url": "https://api.github.com/users/yuimo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuimo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuimo/subscriptions",
"organizations_url": "https://api.github.com/users/yuimo/orgs",
"repos_url": "https://api.github.com/users/yuimo/repos",
"events_url": "https://api.github.com/users/yuimo/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuimo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hi, i tried again, and write the class of ClsNerModel to model_tf_bert.py, and then reinstall transformers again. it works!!\r\nso, if i create my class outside the lib of transformers, what else should i do to make it work, and avoid the mistakes above\r\nthanks a lot!"
] | 1,590 | 1,590 | 1,590 | NONE | null | hi, i have create a new class named ClsNerModel inherited from TFBertPreTrainedModel
i have trained it successfully and save the model using model.save_pretrained to some dir
however, when i loaded from that dir using ClsNerModel.from_pretrained, it fails and report some error below.
sorry to report this, and any advice is appreciated.
File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 410, in from_pretrained
model.load_weights(resolved_archive_file, by_name=True)
File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 250, in load_weights
return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1264, in load_weights
f, self.layers, skip_mismatch=skip_mismatch)
File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 753, in load_weights_from_hdf5_group_by_name
weight_values = [np.asarray(g[weight_name]) for weight_name in weight_names]
File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 753, in <listcomp>
weight_values = [np.asarray(g[weight_name]) for weight_name in weight_names]
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/home/yuanhao/anaconda3/envs/tf2/lib/python3.7/site-packages/h5py/_hl/group.py", line 264, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'bias:0' doesn't exist)"
my code is something like this:
##############################################################################
class ClsNerModel(TFBertPreTrainedModel):
def __init__(self, config, *inputs, cls_num_labels:int=2, **kwargs):
super().__init__(config, *inputs, **kwargs)
self.num_labels = config.num_labels
self.cls_num_labels = cls_num_labels
self.bert = TFBertMainLayer(config, name="bert")
self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
self.classifier = tf.keras.layers.Dense(
config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier"
)
self.cls_dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
self.cls_classifier = tf.keras.layers.Dense(
self.cls_num_labels, kernel_initializer=get_initializer(config.initializer_range), name="cls_classifier"
)
def call(self, inputs, **kwargs):
outputs = self.bert(inputs, **kwargs)
sequence_output = outputs[0] # (b, t, d)
pool_output = outputs[1] # (b, d) only for cls token
sequence_output = self.dropout(sequence_output, training=kwargs.get("training", False))
token_logits = self.classifier(sequence_output)
pool_output = self.cls_dropout(pool_output, training=kwargs.get("training", False))
cls_logits = self.cls_classifier(pool_output)
outputs = (token_logits, cls_logits) + outputs[2:] # add hidden states and attention if they are here
return outputs # scores, (hidden_states), (attentions)
##############################################################################
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
num_labels=num_labels,
id2label=label_map,
label2id={label: i for i, label in enumerate(label_map)},
cache_dir=model_args.cache_dir,
)
model = ClsNerModel.from_pretrained(
model_path,
from_pt=bool(".bin" in model_args.model_name_or_path),
output_loading_info=True,
config=config,
cls_num_labels=cls_num_labels,
cache_dir=model_args.cache_dir,
) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4594/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4594/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4593/comments | https://api.github.com/repos/huggingface/transformers/issues/4593/events | https://github.com/huggingface/transformers/pull/4593 | 624,739,183 | MDExOlB1bGxSZXF1ZXN0NDIzMDkwNzAz | 4,593 | [Longformer For Question Answering] Conversion script, doc, small fixes | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=h1) Report\n> Merging [#4593](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b86e42e0ac1b59f21f0eccf351d3346bbe3ed4eb&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4593 +/- ##\n=======================================\n Coverage 78.09% 78.09% \n=======================================\n Files 123 123 \n Lines 20624 20625 +1 \n=======================================\n+ Hits 16106 16108 +2 \n+ Misses 4518 4517 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.41% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/tokenization\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbG9uZ2Zvcm1lci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4593/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=footer). Last update [b86e42e...d8d4187](https://codecov.io/gh/huggingface/transformers/pull/4593?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | This PR adds
- Longformer For Question Answering doc
- Adds the link to the (official) uploaded model: https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa
- Some minor refactoring
- Conversion script
@ibeltagy @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4593/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4593",
"html_url": "https://github.com/huggingface/transformers/pull/4593",
"diff_url": "https://github.com/huggingface/transformers/pull/4593.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4593.patch",
"merged_at": 1590497928000
} |
https://api.github.com/repos/huggingface/transformers/issues/4592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4592/comments | https://api.github.com/repos/huggingface/transformers/issues/4592/events | https://github.com/huggingface/transformers/issues/4592 | 624,727,423 | MDU6SXNzdWU2MjQ3Mjc0MjM= | 4,592 | IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | {
"login": "VaibhavSxn",
"id": 48293555,
"node_id": "MDQ6VXNlcjQ4MjkzNTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/48293555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VaibhavSxn",
"html_url": "https://github.com/VaibhavSxn",
"followers_url": "https://api.github.com/users/VaibhavSxn/followers",
"following_url": "https://api.github.com/users/VaibhavSxn/following{/other_user}",
"gists_url": "https://api.github.com/users/VaibhavSxn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VaibhavSxn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VaibhavSxn/subscriptions",
"organizations_url": "https://api.github.com/users/VaibhavSxn/orgs",
"repos_url": "https://api.github.com/users/VaibhavSxn/repos",
"events_url": "https://api.github.com/users/VaibhavSxn/events{/privacy}",
"received_events_url": "https://api.github.com/users/VaibhavSxn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, what's your transformers and pytorch version? Is that the entire code you're using?\r\n\r\nYour code doesn't crash here but crashes at the line\r\n\r\n```py\r\nanswers = nlp(question=ques0, context=abstract, topk = 10)\r\n```\r\n\r\nDo you mind providing the question and the context you're using?",
"Sorry, its a mistake on my part. Thanks for your reply."
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
I am running Huggingface's nlp pipeline. The code is below:
```
nlp = pipeline("question-answering",
model = 'distilbert-base-cased-distilled-squad',
tokenizer='distilbert-base-cased-distilled-squad')
```
Model I am using is pipeline.
I try run an example using docker and I get the following error:
```
convert squad examples to features: 100%|██████████| 1/1 [00:00<00:00, 360.86it/s]
add example index and unique id: 100%|██████████| 1/1 [00:00<00:00, 3120.76it/s]
myimage_1 | [2020-05-26 08:38:22,060] ERROR in app: Exception on /deep/search [POST]
myimage_1 | Traceback (most recent call last):
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
myimage_1 | response = self.full_dispatch_request()
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
myimage_1 | rv = self.handle_user_exception(e)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
myimage_1 | reraise(exc_type, exc_value, tb)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
myimage_1 | raise value
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
myimage_1 | rv = self.dispatch_request()
myimage_1 | File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
myimage_1 | return self.view_functions[rule.endpoint](**req.view_args)
myimage_1 | File "main.py", line 50, in launch_app
myimage_1 | answers = nlp(question=ques0, context=abstract, topk = 10)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/transformers/pipelines.py", line 1010, in __call__
myimage_1 | start, end = self.model(**fw_args)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
myimage_1 | result = self.forward(*input, **kwargs)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/transformers/modeling_distilbert.py", line 720, in forward
myimage_1 | input_ids=input_ids, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds
myimage_1 | File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
myimage_1 | result = self.forward(*input, **kwargs)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/transformers/modeling_distilbert.py", line 482, in forward
myimage_1 | inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
myimage_1 | result = self.forward(*input, **kwargs)
myimage_1 | File "/usr/local/lib/python3.7/site-packages/transformers/modeling_distilbert.py", line 86, in forward
myimage_1 | seq_length = input_ids.size(1)
myimage_1 | IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
```
My python version in 3.7.4.
Please help in fixing this.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4592/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4591/comments | https://api.github.com/repos/huggingface/transformers/issues/4591/events | https://github.com/huggingface/transformers/pull/4591 | 624,694,960 | MDExOlB1bGxSZXF1ZXN0NDIzMDU4NTQz | 4,591 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=h1) Report\n> Merging [#4591](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b86e42e0ac1b59f21f0eccf351d3346bbe3ed4eb&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4591 +/- ##\n=======================================\n Coverage 78.09% 78.09% \n=======================================\n Files 123 123 \n Lines 20624 20624 \n=======================================\n+ Hits 16106 16107 +1 \n+ Misses 4518 4517 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4591/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=footer). Last update [b86e42e...1e39227](https://codecov.io/gh/huggingface/transformers/pull/4591?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4591/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4591",
"html_url": "https://github.com/huggingface/transformers/pull/4591",
"diff_url": "https://github.com/huggingface/transformers/pull/4591.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4591.patch",
"merged_at": 1590526209000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4590/comments | https://api.github.com/repos/huggingface/transformers/issues/4590/events | https://github.com/huggingface/transformers/issues/4590 | 624,669,986 | MDU6SXNzdWU2MjQ2Njk5ODY= | 4,590 | [Model hub web parsing MD code error] | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, that is a GitHub-only (GFM) feature, while we use marked.js (https://marked.js.org/) for markdown parsing.\r\n\r\nYou'll have to use the actual emojis for now"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Hi guys!
If you navigate to https://github.com/huggingface/transformers/blob/master/model_cards/mrm8488/bert-italian-finedtuned-squadv1-it-alfa/README.md
You will see the emojis without problem. But if you go to its HTML page on the model hub: https://huggingface.co/mrm8488/bert-italian-finedtuned-squadv1-it-alfa
Emojis are not shown | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4590/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4589/comments | https://api.github.com/repos/huggingface/transformers/issues/4589/events | https://github.com/huggingface/transformers/pull/4589 | 624,669,663 | MDExOlB1bGxSZXF1ZXN0NDIzMDM5NzMz | 4,589 | [LongformerForQuestionAnswering] fix qa example in docstring | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=h1) Report\n> Merging [#4589](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b86e42e0ac1b59f21f0eccf351d3346bbe3ed4eb&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4589 +/- ##\n=======================================\n Coverage 78.09% 78.09% \n=======================================\n Files 123 123 \n Lines 20624 20624 \n=======================================\n Hits 16106 16106 \n Misses 4518 4518 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4589/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.40% <ø> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=footer). Last update [b86e42e...368ce8e](https://codecov.io/gh/huggingface/transformers/pull/4589?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for the PR! I actually just uploaded a pretrained question answering model from allen ai and changed the docs accordingly. So I think we don't need this PR anymore ;-). See #4593"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4589/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4589",
"html_url": "https://github.com/huggingface/transformers/pull/4589",
"diff_url": "https://github.com/huggingface/transformers/pull/4589.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4589.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4588/comments | https://api.github.com/repos/huggingface/transformers/issues/4588/events | https://github.com/huggingface/transformers/issues/4588 | 624,570,550 | MDU6SXNzdWU2MjQ1NzA1NTA= | 4,588 | Help Wanted: Predict Next Two Tokens | {
"login": "BigSalmon2",
"id": 61605789,
"node_id": "MDQ6VXNlcjYxNjA1Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigSalmon2",
"html_url": "https://github.com/BigSalmon2",
"followers_url": "https://api.github.com/users/BigSalmon2/followers",
"following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}",
"gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions",
"organizations_url": "https://api.github.com/users/BigSalmon2/orgs",
"repos_url": "https://api.github.com/users/BigSalmon2/repos",
"events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigSalmon2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | Is it possible to change this in order to predct the next two tokens?
```
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch.nn.functional as F
import re
def grow_branches(sentence_so_far, probs, input_probability,past, h):
#recursive function to find all sentence completions
global branch_list
global leaf_list
global complete_list
global model
sorted_probability_list = sorted(enumerate(probs), key=lambda x: x[1], reverse=True)
has_children = False
for (this_token,this_probability) in sorted_probability_list:
next_probability = this_probability * input_probability
out_sentence = sentence_so_far.copy()
sentence_and_probability = (out_sentence, input_probability)
pattern = ' [A-Z]{1,1}'
pattern2 = '[A-Z]{1,1}'
test_string = tokenizer.decode(out_sentence[-1])
result = re.match(pattern, test_string) or re.match(pattern2, test_string)
if not (result or (out_sentence[-1] in {1583,1770,6997,19090,9074,7504})) and (this_token == 13):
#if the next token is going to be a period, then no need to carry out that step.
#except allow Mr., Dr., Mrs., Ms., Lt., Sgt., Jr. or single initials.
sentence_and_probability = (out_sentence, next_probability)
complete_list.append(sentence_and_probability)
return
if next_probability < h:
if has_children == True:
branch_list.append(sentence_and_probability)
else:
leaf_list.append(sentence_and_probability)
return
else:
has_children = True
next_sentence = sentence_so_far.copy()
next_sentence.append(this_token)
(next_probability_list,next_past) = expand_node(next_sentence,past)
grow_branches(next_sentence,next_probability_list, next_probability, next_past, h)
def expand_node(sentence, past):
#finds probabilities for the next token using gpt-2
global model
if past == None:
input_ids = torch.tensor(sentence).unsqueeze(0)
else:
input_ids = torch.tensor([sentence[-1]]).unsqueeze(0)
inputs = {'input_ids': input_ids}
with torch.no_grad():
logits, past = model(**inputs, past=past)
logits = logits[:, -1, :]
probs = F.softmax(logits, dim=-1).tolist()[0]
return (probs, past)
# globals here
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
leaf_list = []
branch_list = []
complete_list = []
probability_threshhold=float(input("probability cutoff (e.g. .001 or less):"))
raw_prompt = input("partial sentence to complete:")
prompt=tokenizer.encode(raw_prompt)
(probs, past) = expand_node(prompt, None)
grow_branches(prompt,probs,1,past,probability_threshhold)
sorted_complete_list = sorted(complete_list, reverse=True,key=lambda x: x[1])
sorted_leaf_list = sorted(leaf_list, reverse=True,key=lambda x: x[1])
sorted_branch_list = sorted(branch_list, reverse=True,key=lambda x: x[1])
# to get the most probable completed sentence:
#tokenizer.decode(sorted_complete_list[0])
#print just the completions
for (sentence, prob) in sorted_complete_list:
#print(round(prob,6),end=':')
if prob>probability_threshhold - 1:
print(repr(tokenizer.decode(sentence[len(prompt):])).strip("'"),end='|')
else:
print(repr(tokenizer.decode(sentence[len(prompt):])).strip("'"),end='\\')
for (sentence, prob) in sorted_leaf_list:
if prob>probability_threshhold:
print(repr(tokenizer.decode(sentence[len(prompt):])).strip("'"),end='|')
else:
print(repr(tokenizer.decode(sentence[len(prompt):])).strip("'"),end='\\')
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4588/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4587/comments | https://api.github.com/repos/huggingface/transformers/issues/4587/events | https://github.com/huggingface/transformers/pull/4587 | 624,557,258 | MDExOlB1bGxSZXF1ZXN0NDIyOTUyMDIw | 4,587 | ensure_ascii=False | {
"login": "Traeyee",
"id": 12761196,
"node_id": "MDQ6VXNlcjEyNzYxMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/12761196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Traeyee",
"html_url": "https://github.com/Traeyee",
"followers_url": "https://api.github.com/users/Traeyee/followers",
"following_url": "https://api.github.com/users/Traeyee/following{/other_user}",
"gists_url": "https://api.github.com/users/Traeyee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Traeyee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Traeyee/subscriptions",
"organizations_url": "https://api.github.com/users/Traeyee/orgs",
"repos_url": "https://api.github.com/users/Traeyee/repos",
"events_url": "https://api.github.com/users/Traeyee/events{/privacy}",
"received_events_url": "https://api.github.com/users/Traeyee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4587/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4587",
"html_url": "https://github.com/huggingface/transformers/pull/4587",
"diff_url": "https://github.com/huggingface/transformers/pull/4587.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4587.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4586/comments | https://api.github.com/repos/huggingface/transformers/issues/4586/events | https://github.com/huggingface/transformers/issues/4586 | 624,507,513 | MDU6SXNzdWU2MjQ1MDc1MTM= | 4,586 | T5Model in fp16 still yield nan with more complex examples | {
"login": "rpowalski",
"id": 10357417,
"node_id": "MDQ6VXNlcjEwMzU3NDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/10357417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rpowalski",
"html_url": "https://github.com/rpowalski",
"followers_url": "https://api.github.com/users/rpowalski/followers",
"following_url": "https://api.github.com/users/rpowalski/following{/other_user}",
"gists_url": "https://api.github.com/users/rpowalski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rpowalski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rpowalski/subscriptions",
"organizations_url": "https://api.github.com/users/rpowalski/orgs",
"repos_url": "https://api.github.com/users/rpowalski/repos",
"events_url": "https://api.github.com/users/rpowalski/events{/privacy}",
"received_events_url": "https://api.github.com/users/rpowalski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I got the same issue - seems to happen with the larger models (t5 small is fine)",
"I can reproduce the error - will investigate :-) ",
"Okey this took me quite some time to figure out...\r\n\r\nSo what happens is the following. When setting **all** modules in half as is done in the code snippet above, the following happens. At some point in line:\r\nhttps://github.com/huggingface/transformers/blob/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6/src/transformers/modeling_t5.py#L188\r\nthe tensor `layer_output` contains `inf` values and then later in:\r\nhttps://github.com/huggingface/transformers/blob/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6/src/transformers/modeling_t5.py#L156\r\n`nan` values enter the game... \r\n\r\nI don't really think this is a bug in T5, but it's just due to T5's rather unstable architecture. `model.half()` essentially corresponds to an apex level O3: https://nvidia.github.io/apex/amp.html#o3-fp16-training which in itself tends to become unstable...\r\n\r\nSo using your code above and using the `apex` package instead of calling `half()` on the model, you can notice the following. The code snippet which is essentially the same as yours:\r\n\r\n```python\r\nfrom transformers import T5Model\r\nfrom apex import amp\r\nimport torch\r\n\r\nmodel = T5Model.from_pretrained(\"t5-base\").cuda().eval()\r\nmodel = amp.initialize(model, opt_level=\"O3\") \r\n\r\ninputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda()\r\ndecoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda()\r\n\r\nout = model(input_ids=inputs, decoder_input_ids=decoder_input_ids)\r\n# encoder outputs\r\nout[2][:,:2] # nan output\r\n```\r\n\r\nyields the same output consisting of `nan` values. The same happens for `opt_level` O2. \r\nUsing the recommended O1 level of optimization:\r\n\r\n```python\r\nfrom transformers import T5Model\r\nfrom apex import amp\r\nimport torch\r\n\r\nmodel = T5Model.from_pretrained(\"t5-base\").cuda().eval()\r\nmodel = amp.initialize(model, opt_level=\"O1\") \r\n\r\ninputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda()\r\ndecoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda()\r\n\r\nout = model(input_ids=inputs, decoder_input_ids=decoder_input_ids)\r\n# encoder outputs\r\nout[2][:,:2] # valid output\r\n```\r\n\r\nhowever does not produce any `nan` values. As far as I know O1 is also the recommended setting: https://nvidia.github.io/apex/amp.html#o1-mixed-precision-recommended-for-typical-use .\r\nAs far as I know O1 can already greatly speed up your calculations and save quite some memory, so that I would recommend going for this.\r\n\r\nAlso pinging @mfuntowicz, @julien-c and @LysandreJik for verification",
"@patrickvonplaten Even with O1 I tried fine-tuning T5-base, and in less than 100 iterations, it will converge to nan values quickly. Seems like the stability of this model is poor. Perhaps first few iterations of fine-tuning require FP32.",
"~I am having issues even in fp32 with everything besides t5-small.~\r\nI am having issues in `O1` with t5-large and t5-base.\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Having the same issue with loss going to `nan` when fine-tuning tf-base with fp16. tf-small works fine though.",
"Ran into this issue and found a workaround to get FP16 training working. \r\nT5DenseGatedGeluDense doesn't play nice with FP16, specifically the final dense layer to resize from d_ff to d_model.\r\nI used pytorch's autocast/gradscaler mixed precision implementation and created an exception for that specific dense layer.\r\n\r\n```\r\nclass T5DenseGatedGeluDense(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)\r\n self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)\r\n self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)\r\n self.dropout = nn.Dropout(config.dropout_rate)\r\n self.gelu_act = ACT2FN[\"gelu_new\"]\r\n\r\n def forward(self, hidden_states):\r\n hidden_gelu = self.gelu_act(self.wi_0(hidden_states))\r\n hidden_linear = self.wi_1(hidden_states)\r\n hidden_states = hidden_gelu * hidden_linear\r\n hidden_states = self.dropout(hidden_states)\r\n with autocast(enabled=False):\r\n hidden_states = self.wo(hidden_states)\r\n return hidden_states\r\n```",
"@leecming Have you also tried the fix with `T5DenseReluDense`?",
"Great qusetion @j-min - I actually didn't find the time yet to test the \"new\" t5 model with fp16. It might very well be that the following models work fine with fp16:\r\nhttps://huggingface.co/models?search=mt5\r\nand \r\nhttps://huggingface.co/models?search=t5-v1",
"@patrickvonplaten @leecming I'm trying the fix as below.\r\n```python3\r\nclass T5DenseReluDense(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n self.wi = nn.Linear(config.d_model, config.d_ff, bias=False)\r\n self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)\r\n self.dropout = nn.Dropout(config.dropout_rate)\r\n\r\n def forward(self, hidden_states):\r\n hidden_states = self.wi(hidden_states)\r\n hidden_states = F.relu(hidden_states)\r\n hidden_states = self.dropout(hidden_states)\r\n with autocast(enabled=False):\r\n hidden_states = self.wo(hidden_states)\r\n return hidden_states\r\n\r\n\r\nclass T5DenseGatedGeluDense(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)\r\n self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)\r\n self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)\r\n self.dropout = nn.Dropout(config.dropout_rate)\r\n self.gelu_act = ACT2FN[\"gelu_new\"]\r\n\r\n def forward(self, hidden_states):\r\n hidden_gelu = self.gelu_act(self.wi_0(hidden_states))\r\n hidden_linear = self.wi_1(hidden_states)\r\n hidden_states = hidden_gelu * hidden_linear\r\n hidden_states = self.dropout(hidden_states)\r\n with autocast(enabled=False):\r\n hidden_states = self.wo(hidden_states)\r\n return hidden_states\r\n```\r\n\r\nBtw it results in the error `expected scalar type Half but found Float`, since `hidden_states` parameters are float while self.wo parameters are half.\r\nCould you please guide how I bypass the error?\r\n```python3\r\nimport torch\r\nfrom torch.cuda.amp import autocast\r\nfrom transformers import T5Model\r\n\r\nmodel = T5Model.from_pretrained(\"t5-base\").cuda().eval()\r\ninputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda()\r\ndecoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda()\r\n\r\nout = model(input_ids=inputs, decoder_input_ids=decoder_input_ids)\r\n# encoder outputs\r\nout[2][:,:2]\r\n\r\nwith autocast():\r\n out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids)\r\n loss = out.last_hidden_state.exp().mean()\r\n```\r\n\r\n",
"Oh adding `hidden_states = hidden_states.to(torch.float32)` worked, never mind.\r\nIs there a more concrete script to check if this fixes T5's fp16 training? @patrickvonplaten \r\n\r\n```python3\r\nclass T5DenseReluDense(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n self.wi = nn.Linear(config.d_model, config.d_ff, bias=False)\r\n self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)\r\n self.dropout = nn.Dropout(config.dropout_rate)\r\n\r\n def forward(self, hidden_states):\r\n hidden_states = self.wi(hidden_states)\r\n hidden_states = F.relu(hidden_states)\r\n hidden_states = self.dropout(hidden_states)\r\n with autocast(enabled=False):\r\n hidden_states = hidden_states.to(torch.float32)\r\n hidden_states = self.wo(hidden_states)\r\n return hidden_states\r\n\r\n\r\nclass T5DenseGatedGeluDense(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False)\r\n self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False)\r\n self.wo = nn.Linear(config.d_ff, config.d_model, bias=False)\r\n self.dropout = nn.Dropout(config.dropout_rate)\r\n self.gelu_act = ACT2FN[\"gelu_new\"]\r\n\r\n def forward(self, hidden_states):\r\n hidden_gelu = self.gelu_act(self.wi_0(hidden_states))\r\n hidden_linear = self.wi_1(hidden_states)\r\n hidden_states = hidden_gelu * hidden_linear\r\n hidden_states = self.dropout(hidden_states)\r\n with autocast(enabled=False):\r\n hidden_states = hidden_states.to(torch.float32)\r\n hidden_states = self.wo(hidden_states)\r\n return hidden_states\r\n```\r\n\r\n```python3\r\nimport torch\r\nfrom torch.cuda.amp import autocast\r\nfrom transformers import T5Model\r\n\r\nmodel = T5Model.from_pretrained(\"t5-base\").cuda().eval()\r\ninputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda()\r\ndecoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda()\r\n\r\nout = model(input_ids=inputs, decoder_input_ids=decoder_input_ids)\r\n# encoder outputs\r\nout[2][:,:2]\r\n\r\nwith autocast():\r\n out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids)\r\n loss = out.last_hidden_state.exp().mean()\r\n\r\nprint(loss)\r\n>>> tensor(1.1017, device='cuda:0', grad_fn=<MeanBackward0>)\r\n```\r\n",
"This is actually a topic I wanted to look into more closely and didn't manage to do so time-wise...maybe next week. \r\n\r\nBut in short, one should try to train a whole T5 model with your suggested fix.\r\n\r\nWhat I would recommend doing is to take your guys' fix from above and open a PR with it. Then with this PR we should fine-tune a whole t5 model on some task, *e.g.* using the Seq2SeqTrainer.\r\n\r\nE.g. one could adapt this script:https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing and instead of using a `Bert2Bert` model one could just use a `google/t5v1_1-small` or base model and see whether there are any problem in training. \r\n\r\nalso cc @patil-suraj in case he has better pointers/ideas",
"I'll try to do a run next week though :-) ",
"It’s not a good fix since it relies on a specific AMP implementation (autocast) and wouldn’t work on others (e.g., Nvidia APEX). It also uses more memory than a clean AMP implementation.\r\n\r\nA cleaner quick fix would be to copy BERT’s gradient checkpointing code and train in FP32 mode with checkpointing. \r\n\r\nAlso, Nvidia with the latest Ampere cards has started supporting bf16 which is good news.",
"I am having the same issue with mt5-small getting nan with deepspeed, I really appreciate any advice on this. I am having really a hard time with it, thanks a lot \r\n@patrickvonplaten @patil-suraj @sgugger Do you mind sharing the current state of mt5 training with fp16? thanks a lot",
"see: https://github.com/huggingface/transformers/issues/10830",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"anyone coming after some years, try this https://huggingface.co/google/umt5-small instead",
"no luck with https://huggingface.co/google/umt5-small as well even though I was training using `FP32`",
"I got into this w/ T5-3b https://huggingface.co/t5-3b/tree/main, using the more recent T5ForSequenceClassification head. I thought it was due to that newer head but now I'm seeing the issue's been more profound. \r\n\r\nI'll see what my fp32 fine-tuning gives tomorrow, as I believe no other comprehensive solution has been put into place just yet. "
] | 1,590 | 1,696 | 1,619 | NONE | null | # 🐛 Bug
Hello, thank you for the recent [PR](https://github.com/huggingface/transformers/pull/4436) with fp16 fixes. It seems to work well with short inputs, but once the model is fed with some more complex data it still yields nans.
## Information
Model I am using: T5
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Run the code:
```
from transformers import T5Model
import torch
model = T5Model.from_pretrained("t5-base").cuda().half().eval()
inputs = torch.tensor([[37,423,215,1504,13,8,1186,10670,11,10449,49,1152,11363,15465,1514,5,4433,399,7863,24766,15,17,965,594,5386,14286,28,8,6,5,755,5781,32099,993,3744,21,8,2367,18,458,53,16616,32098,16,32097,7660,16409,77,19,3,107,13164,1054,32096,993,1970,9368,948,147,8,15465,5861,87,25481,788,12,8,32095,1300,61,37,423,215,1504,13,3,24151,40,3,19668,594,5386,14286,28,8,3,115,13164]]).cuda()
decoder_input_ids = torch.tensor([[21820, 296, 55]]).cuda()
out = model(input_ids=inputs, decoder_input_ids=decoder_input_ids)
# encoder outputs
out[2][:,:2]
```
output:
```
tensor([[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]]], device='cuda:0',
dtype=torch.float16, grad_fn=<SliceBackward>)
```
## Expected behavior
Output with non-nan values.
## Environment info
- `transformers` version: 2.10.0
- Platform: Linux-4.15.0-88-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4586/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4585/comments | https://api.github.com/repos/huggingface/transformers/issues/4585/events | https://github.com/huggingface/transformers/pull/4585 | 624,505,323 | MDExOlB1bGxSZXF1ZXN0NDIyOTEyNDMy | 4,585 | Introduce a new tensor type for return_tensors on tokenizer for NumPy | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=h1) Report\n> Merging [#4585](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e3e552125e86824239e445dd3c659df0aea4db9&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `94.11%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4585 +/- ##\n==========================================\n- Coverage 78.09% 78.09% -0.01% \n==========================================\n Files 123 123 \n Lines 20624 20622 -2 \n==========================================\n- Hits 16106 16104 -2 \n Misses 4518 4518 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <93.93%> (+0.32%)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.01% <0.00%> (-0.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=footer). Last update [3e3e552...7f19c32](https://codecov.io/gh/huggingface/transformers/pull/4585?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@LysandreJik I pushed a initial version of `test_np_encode_plus_sent_to_model` which converts input to numpy tensor. \r\n\r\nFor the moment we don't have any model to forward through (JAX/Flax PR is not merged). I added a note to complete the unittests when we have the full pipeline available."
] | 1,590 | 1,591 | 1,591 | MEMBER | null | Two changes in this PR:
- As we're introducing more than two tensor backend alternatives I created an enum `TensorType` listing all the possible tensor we can create `TensorType.TENSORFLOW`, `TensorType.PYTORCH`, `TensorType.NUMPY`. This might help newcomers who don't know about `"tf"`, `"pt"`.
_->Note: TensorType are compatible with previous `"tf"`, `"pt"` and now `"np"` str to allow backward compatbility (+unittest)_
- Numpy is now a possible target when creating tensors. This is usefull for JAX :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4585/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4585",
"html_url": "https://github.com/huggingface/transformers/pull/4585",
"diff_url": "https://github.com/huggingface/transformers/pull/4585.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4585.patch",
"merged_at": 1591246621000
} |
https://api.github.com/repos/huggingface/transformers/issues/4584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4584/comments | https://api.github.com/repos/huggingface/transformers/issues/4584/events | https://github.com/huggingface/transformers/pull/4584 | 624,502,888 | MDExOlB1bGxSZXF1ZXN0NDIyOTEwNTE3 | 4,584 | [ci] fix 3 remaining slow GPU failures | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Failure is `tests/test_hf_api.py::HfApiEndpointsTest::test_presign_and_upload`, which seems unrelated, so going to merge."
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4584/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4584",
"html_url": "https://github.com/huggingface/transformers/pull/4584",
"diff_url": "https://github.com/huggingface/transformers/pull/4584.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4584.patch",
"merged_at": 1590448851000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4583/comments | https://api.github.com/repos/huggingface/transformers/issues/4583/events | https://github.com/huggingface/transformers/issues/4583 | 624,493,156 | MDU6SXNzdWU2MjQ0OTMxNTY= | 4,583 | Provide simple way to train a new translation model from scratch | {
"login": "eraoul",
"id": 1067070,
"node_id": "MDQ6VXNlcjEwNjcwNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1067070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eraoul",
"html_url": "https://github.com/eraoul",
"followers_url": "https://api.github.com/users/eraoul/followers",
"following_url": "https://api.github.com/users/eraoul/following{/other_user}",
"gists_url": "https://api.github.com/users/eraoul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eraoul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eraoul/subscriptions",
"organizations_url": "https://api.github.com/users/eraoul/orgs",
"repos_url": "https://api.github.com/users/eraoul/repos",
"events_url": "https://api.github.com/users/eraoul/events{/privacy}",
"received_events_url": "https://api.github.com/users/eraoul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"unstale",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"unstale",
"@sshleifer I was wondering whether there is any activity on this issue. I have trained some models with MarianMT, but I am really interested in training a model from scratch with the transformers library. ",
"This isn't supported by default, but is definitely possible.\r\n\r\n\r\nRough steps would be:\r\n1) Make a local directory with your intialized model and tokenizer.\r\n\r\n2) Run a command like [this](https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/distil_marian_no_teacher.sh) where `$m` is the path your your local dir.\r\n\r\ncc @patil-suraj",
"@sshleifer could you please repost the command that the web-page does not exist anymore?",
"https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/distil_marian_no_teacher.sh",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"unstale",
"unstale"
] | 1,590 | 1,675 | 1,621 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
Huggingface just released a huge pile of pretrained translation models. I just want to train a completely custom model on a custom language pair, without pretraining etc.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4583/reactions",
"total_count": 17,
"+1": 17,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4583/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4582/comments | https://api.github.com/repos/huggingface/transformers/issues/4582/events | https://github.com/huggingface/transformers/pull/4582 | 624,492,175 | MDExOlB1bGxSZXF1ZXN0NDIyOTAyNDUy | 4,582 | Improve model card for Tereveni-AI/gpt2-124M-uk-fiction | {
"login": "obsh",
"id": 1974420,
"node_id": "MDQ6VXNlcjE5NzQ0MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1974420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/obsh",
"html_url": "https://github.com/obsh",
"followers_url": "https://api.github.com/users/obsh/followers",
"following_url": "https://api.github.com/users/obsh/following{/other_user}",
"gists_url": "https://api.github.com/users/obsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/obsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/obsh/subscriptions",
"organizations_url": "https://api.github.com/users/obsh/orgs",
"repos_url": "https://api.github.com/users/obsh/repos",
"events_url": "https://api.github.com/users/obsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/obsh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | Add language metadata, training and evaluation corpora details.
Add example output. Fix inconsistent use of quotes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4582/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4582/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4582",
"html_url": "https://github.com/huggingface/transformers/pull/4582",
"diff_url": "https://github.com/huggingface/transformers/pull/4582.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4582.patch",
"merged_at": 1590526301000
} |
https://api.github.com/repos/huggingface/transformers/issues/4581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4581/comments | https://api.github.com/repos/huggingface/transformers/issues/4581/events | https://github.com/huggingface/transformers/pull/4581 | 624,459,316 | MDExOlB1bGxSZXF1ZXN0NDIyODc2NTkz | 4,581 | [GPT2, CTRL] Allow input of input_ids and past of variable length | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=h1) Report\n> Merging [#4581](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4c6b21805647f3a96737a50390a4c3e9463d8ef7&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4581 +/- ##\n=======================================\n Coverage 78.09% 78.09% \n=======================================\n Files 123 123 \n Lines 20617 20596 -21 \n=======================================\n- Hits 16100 16084 -16 \n+ Misses 4517 4512 -5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <ø> (+0.83%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <ø> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.66% <ø> (+0.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4581/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=footer). Last update [4c6b218...8350637](https://codecov.io/gh/huggingface/transformers/pull/4581?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> LGTM, awesome!\r\n> \r\n> Should we fix some examples or pipeline accordingly?\r\n\r\nThe generation method works fine, since `prepare_input_ids` for GPT2 and CTRL only took the last input_ids anyways. So all methods relying on `generate()` are fine including the pipeline and `run_generation` examples => so we should be good!"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | ## Description
This PR reverts the automatic cutting of input ids as introduced in PR: https://github.com/huggingface/transformers/pull/3734 and fixes issue https://github.com/huggingface/transformers/issues/4368 .
Currently, when `past` is used in combination with `input_ids`, the `input_ids` are cut to just the last token. This breaks certain functionality as explained in Issue: #4368.
Also, the documentation is made more precise for GPT2 and CTRL.
## Backward Compatibility
This PR slightly breaks backward compatibility since `input_ids` now have to be input according to `past` and are **not** cut automatically for example for automatic language generation. So the functionality is as it was before version 2.8.0. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4581/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4581",
"html_url": "https://github.com/huggingface/transformers/pull/4581",
"diff_url": "https://github.com/huggingface/transformers/pull/4581.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4581.patch",
"merged_at": 1590515039000
} |
https://api.github.com/repos/huggingface/transformers/issues/4580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4580/comments | https://api.github.com/repos/huggingface/transformers/issues/4580/events | https://github.com/huggingface/transformers/pull/4580 | 624,432,748 | MDExOlB1bGxSZXF1ZXN0NDIyODU2MTU5 | 4,580 | LongformerForSequenceClassification | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=h1) Report\n> Merging [#4580](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8cc6807e8997b8b7404c07037bd02c578da98baf&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `92.85%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4580 +/- ##\n==========================================\n+ Coverage 78.03% 78.05% +0.02% \n==========================================\n Files 124 124 \n Lines 20647 20688 +41 \n==========================================\n+ Hits 16111 16148 +37 \n- Misses 4536 4540 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.57% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `96.85% <92.85%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4580/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=footer). Last update [8cc6807...a9afa7b](https://codecov.io/gh/huggingface/transformers/pull/4580?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This is great. Thanks, @patil-suraj",
"Is it better to use pooled output for sequence classification like in BertForSequenceClassification? @ibeltagy @patil-suraj \r\n\r\n```\r\npooled_output = outputs[1]\r\npooled_output = self.dropout(pooled_output)\r\nlogits = self.classifier(pooled_output)\r\n```",
"@leslyarun \r\n`LongformerClassificationHead` does the pooling ",
"> @leslyarun\r\n> `LongformerClassificationHead` does the pooling\r\n\r\nThat's great. Fine then 👍 ",
"Awesome thanks @patil-suraj! \r\n\r\nMerging",
"@patil-suraj Thanks for this! I'm working on a multi-task version of `LongformerForSequenceClassification`. For my context, why did you decide to implement pooling separately from the [pooling done in `LongformerModel`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/longformer/modeling_longformer.py#L1377-L1383)? It seems like the key differences between the pooling done in `LongformerClassificationHead` vs. `LongformerPooler` are:\r\n\r\n1. a dropout layer before the dense layer ([source](https://github.com/huggingface/transformers/blob/9ade58f0555430cec851e307c83c3a56c4a77d0b/src/transformers/models/longformer/modeling_longformer.py#L2004))\r\n2. additional dropout and dense layers ([source](https://github.com/huggingface/transformers/blob/9ade58f0555430cec851e307c83c3a56c4a77d0b/src/transformers/models/longformer/modeling_longformer.py#L2007-L2008))\r\n\r\nI see that this mimics the [`RobertaForSequenceClassification` implementation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py#L1449-L1468). Is the goal to avoid the pooler parameters learned during pre-training a `LongformerModel`? I see that this topic has been discussed in general (https://github.com/huggingface/transformers/issues/1328), but I am curious to learn more specifically for Longformer!"
] | 1,590 | 1,683 | 1,590 | MEMBER | null | This PR adds `LongformerForSequenceClassification`
@patrickvonplaten @ibeltagy
All the changes here are as per we discussed in `LongformerForQuestionAnswering`.
`forward` method automatically sets global attention on cls token. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4580/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4580/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4580",
"html_url": "https://github.com/huggingface/transformers/pull/4580",
"diff_url": "https://github.com/huggingface/transformers/pull/4580.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4580.patch",
"merged_at": 1590611401000
} |
https://api.github.com/repos/huggingface/transformers/issues/4579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4579/comments | https://api.github.com/repos/huggingface/transformers/issues/4579/events | https://github.com/huggingface/transformers/issues/4579 | 624,409,674 | MDU6SXNzdWU2MjQ0MDk2NzQ= | 4,579 | How to save tokenize data when training from scratch | {
"login": "008karan",
"id": 18630864,
"node_id": "MDQ6VXNlcjE4NjMwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/008karan",
"html_url": "https://github.com/008karan",
"followers_url": "https://api.github.com/users/008karan/followers",
"following_url": "https://api.github.com/users/008karan/following{/other_user}",
"gists_url": "https://api.github.com/users/008karan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/008karan/subscriptions",
"organizations_url": "https://api.github.com/users/008karan/orgs",
"repos_url": "https://api.github.com/users/008karan/repos",
"events_url": "https://api.github.com/users/008karan/events{/privacy}",
"received_events_url": "https://api.github.com/users/008karan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"There is a method to save tokenizer. Check this notebook: https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb\r\n\r\n",
"Thats what I am using.\r\nits saving it in the `dataset` variable not in any file. ByTokenize data I mean pretraining data.",
"You can look at serialization practices, you should able to do with torch at least.\r\nhttps://huggingface.co/transformers/serialization.html#serialization-best-practices",
"well thats for all required model files. I am not getting how to save pretraining data",
"Once you have your data you can pickle it or use `torch.save` to save it to your disk and reload it later.",
"That worked @LysandreJik thanks! \r\nI still not getting how you can prepare pretraining data on the fly while training. I got large training data and don't want to wait until it gets prepared for training.",
"Have you taken a look at PyTorch's Dataset/Dataloader utilities? I recommend taking a look at [loading hude data functionality](https://discuss.pytorch.org/t/loading-huge-data-functionality/346) or [how to use a dataset larger than memory](https://discuss.pytorch.org/t/how-to-use-dataset-larger-than-memory/37785/8).\r\n\r\nI personnally prefer using IterableDatasets when loading large files, as I find the API easier to use to limit large memory usage. This [tutorial](https://medium.com/swlh/how-to-use-pytorch-dataloaders-to-work-with-enormously-large-text-files-bbd672e955a0) is interesting on that subject.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
I am training Allbert from scratch following the blog post by hugging face. As it mentions that :
> If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step.
How this can be done any suggestions?
As of now , using the method given in the notebook:
```
from transformers import TextDataset
dataset = TextDataset(
tokenizer=tokenizer,
file_path="./oscar.eo.txt",
block_size=128,
)
```
there is no method to save tokenize data, can anyone suggest how to save that as its already taking long enough before starting training.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4579/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4578/comments | https://api.github.com/repos/huggingface/transformers/issues/4578/events | https://github.com/huggingface/transformers/pull/4578 | 624,395,058 | MDExOlB1bGxSZXF1ZXN0NDIyODI3MTMx | 4,578 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=h1) Report\n> Merging [#4578](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4578 +/- ##\n==========================================\n- Coverage 77.87% 77.86% -0.01% \n==========================================\n Files 123 123 \n Lines 20566 20566 \n==========================================\n- Hits 16016 16014 -2 \n- Misses 4550 4552 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=footer). Last update [a34a989...03b9ffc](https://codecov.io/gh/huggingface/transformers/pull/4578?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4578/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4578",
"html_url": "https://github.com/huggingface/transformers/pull/4578",
"diff_url": "https://github.com/huggingface/transformers/pull/4578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4578.patch",
"merged_at": 1590434911000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4577/comments | https://api.github.com/repos/huggingface/transformers/issues/4577/events | https://github.com/huggingface/transformers/issues/4577 | 624,336,072 | MDU6SXNzdWU2MjQzMzYwNzI= | 4,577 | Using whole word masking on training LM from scratch | {
"login": "uunal",
"id": 2520197,
"node_id": "MDQ6VXNlcjI1MjAxOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2520197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/uunal",
"html_url": "https://github.com/uunal",
"followers_url": "https://api.github.com/users/uunal/followers",
"following_url": "https://api.github.com/users/uunal/following{/other_user}",
"gists_url": "https://api.github.com/users/uunal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/uunal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uunal/subscriptions",
"organizations_url": "https://api.github.com/users/uunal/orgs",
"repos_url": "https://api.github.com/users/uunal/repos",
"events_url": "https://api.github.com/users/uunal/events{/privacy}",
"received_events_url": "https://api.github.com/users/uunal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think it's not implemented yet.\r\n\r\n@julien-c any suggestion/thoughts for pretraining with wwm?",
"NVIDIA/Megatron-LM does wwm on the fly in __ getitem __\r\n\r\nWe can do something similar in DataCollatorForLanguageModeling or in the dataset \r\n\r\nhttps://github.com/NVIDIA/Megatron-LM/blob/22c0e300670672e4e0a8604bd6ab89bc28c970a6/megatron/data/bert_dataset.py#L148",
"Thanks for the suggestion, I'll look into it.",
"@usuyama The Megatron example is for the BERT dataset which uses wordpiece tokenization. Any suggestions how to do wwm for GPT2 tokenizer?",
"related #6491",
"Check if still looking for an answer:\r\nhttps://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/data/data_collator.py#L301"
] | 1,590 | 1,604 | 1,604 | NONE | null | # ❓ Questions & Help
## Details
Hello everyone,
I wanted to use _whole-word-masking_ in training LM from scratch. I could not have found how to apply this option using Trainer.
I thought this option should be managed in "class DataCollatorForLanguageModeling", but I could not find options for _whole-word-masking._
Am I looking at wrong place OR it is not implemented yet?
If not, is it possible to do with run_language_modeling.py?
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62061578/how-to-use-whole-word-masking-on-training-lm-from-scratch
Any help is appreciated!
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4577/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4577/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4576/comments | https://api.github.com/repos/huggingface/transformers/issues/4576/events | https://github.com/huggingface/transformers/issues/4576 | 624,330,916 | MDU6SXNzdWU2MjQzMzA5MTY= | 4,576 | OSError: Model name 'transfo-xl-wt103' was not found in tokenizers model name list (transfo-xl-wt103). We assumed 'transfo-xl-wt103' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.bin', 'vocab.txt'] but couldn't find such vocabulary files at this path or url. | {
"login": "Amling2017",
"id": 31263594,
"node_id": "MDQ6VXNlcjMxMjYzNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/31263594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Amling2017",
"html_url": "https://github.com/Amling2017",
"followers_url": "https://api.github.com/users/Amling2017/followers",
"following_url": "https://api.github.com/users/Amling2017/following{/other_user}",
"gists_url": "https://api.github.com/users/Amling2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Amling2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Amling2017/subscriptions",
"organizations_url": "https://api.github.com/users/Amling2017/orgs",
"repos_url": "https://api.github.com/users/Amling2017/repos",
"events_url": "https://api.github.com/users/Amling2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/Amling2017/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.10.0
- Platform: ubantu
- Python version:3.6
- PyTorch version (GPU?):yes
- Tensorflow version (GPU?):yes
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4576/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4575/comments | https://api.github.com/repos/huggingface/transformers/issues/4575/events | https://github.com/huggingface/transformers/issues/4575 | 624,329,536 | MDU6SXNzdWU2MjQzMjk1MzY= | 4,575 | Onnx notebook problem | {
"login": "amy-hyunji",
"id": 44370759,
"node_id": "MDQ6VXNlcjQ0MzcwNzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/44370759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amy-hyunji",
"html_url": "https://github.com/amy-hyunji",
"followers_url": "https://api.github.com/users/amy-hyunji/followers",
"following_url": "https://api.github.com/users/amy-hyunji/following{/other_user}",
"gists_url": "https://api.github.com/users/amy-hyunji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amy-hyunji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amy-hyunji/subscriptions",
"organizations_url": "https://api.github.com/users/amy-hyunji/orgs",
"repos_url": "https://api.github.com/users/amy-hyunji/repos",
"events_url": "https://api.github.com/users/amy-hyunji/events{/privacy}",
"received_events_url": "https://api.github.com/users/amy-hyunji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can try newer version of PyTorch (like 1.3~1.5). The problem shall be resolved.",
"Hi thanks for the reply\r\nI tried with pytorch 1.4 but I got another error on the below\r\n\r\ndo you have any idea about this one?\r\nthanks!\r\n",
"@amy-hyunji, this option (use_external_data_format) need PyTorch 1.5. This option is not needed for model < 2GB.\r\n\r\nIf you do not want to upgrade to PyTorch 1.5. You can install transformers from source, and modify the convert_graph_to_onnx.py (by removing the parameter during calling onnx.export function).",
"@tianleiwu Thanks a lot :)"
] | 1,590 | 1,590 | 1,590 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I saw the issue related to mine in #260 so I changed env to python 3.6 and torch 1.1 but I didn't help.
When I run onnx notebook I get an error [TypeError: export() got an unexpected keyword argument 'dynamic_axes']
Does anyone have guess what's wrong?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4575/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4574/comments | https://api.github.com/repos/huggingface/transformers/issues/4574/events | https://github.com/huggingface/transformers/pull/4574 | 624,275,836 | MDExOlB1bGxSZXF1ZXN0NDIyNzMxNjEw | 4,574 | Fix longformer attention mask type casting when using apex | {
"login": "wfangtw",
"id": 8427857,
"node_id": "MDQ6VXNlcjg0Mjc4NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8427857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wfangtw",
"html_url": "https://github.com/wfangtw",
"followers_url": "https://api.github.com/users/wfangtw/followers",
"following_url": "https://api.github.com/users/wfangtw/following{/other_user}",
"gists_url": "https://api.github.com/users/wfangtw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wfangtw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wfangtw/subscriptions",
"organizations_url": "https://api.github.com/users/wfangtw/orgs",
"repos_url": "https://api.github.com/users/wfangtw/repos",
"events_url": "https://api.github.com/users/wfangtw/events{/privacy}",
"received_events_url": "https://api.github.com/users/wfangtw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=h1) Report\n> Merging [#4574](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b86e42e0ac1b59f21f0eccf351d3346bbe3ed4eb&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4574 +/- ##\n==========================================\n- Coverage 78.09% 78.08% -0.01% \n==========================================\n Files 123 123 \n Lines 20624 20624 \n==========================================\n- Hits 16106 16105 -1 \n- Misses 4518 4519 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `97.40% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4574/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=footer). Last update [b86e42e...8528090](https://codecov.io/gh/huggingface/transformers/pull/4574?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | Fix for issue [#4525](https://github.com/huggingface/transformers/issues/4525). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4574/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4574",
"html_url": "https://github.com/huggingface/transformers/pull/4574",
"diff_url": "https://github.com/huggingface/transformers/pull/4574.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4574.patch",
"merged_at": 1590768811000
} |
https://api.github.com/repos/huggingface/transformers/issues/4573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4573/comments | https://api.github.com/repos/huggingface/transformers/issues/4573/events | https://github.com/huggingface/transformers/issues/4573 | 624,269,517 | MDU6SXNzdWU2MjQyNjk1MTc= | 4,573 | Transformers' trainer sequence classification problem | {
"login": "minhtriet",
"id": 2603847,
"node_id": "MDQ6VXNlcjI2MDM4NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2603847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minhtriet",
"html_url": "https://github.com/minhtriet",
"followers_url": "https://api.github.com/users/minhtriet/followers",
"following_url": "https://api.github.com/users/minhtriet/following{/other_user}",
"gists_url": "https://api.github.com/users/minhtriet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minhtriet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minhtriet/subscriptions",
"organizations_url": "https://api.github.com/users/minhtriet/orgs",
"repos_url": "https://api.github.com/users/minhtriet/repos",
"events_url": "https://api.github.com/users/minhtriet/events{/privacy}",
"received_events_url": "https://api.github.com/users/minhtriet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I ended up using the vanilla way to train",
"I think compute_metrics should return a dictionary string to metric values. That is how it is written in the docstring of the train function"
] | 1,590 | 1,602 | 1,593 | CONTRIBUTOR | null | # ❓ Transformers' trainer sequence classification problem
## Details
I wanted to use `XLMRobertaForSequenceClassification` to classify a sequence into `1` or `0`.
```python
MODEL_NAME = 'xlm-roberta-base'
def multilingual_model(max_seq_length=SEQUENCE_LENGTH, trainable=False):
"""Build and return a multilingual BERT model and tokenizer."""
model = XLMRobertaForSequenceClassification.from_pretrained(
MODEL_NAME,
num_labels = 2,
output_attentions = False,
output_hidden_states = False,
)
return model
```
The trainer is
```python
from transformers import Trainer
model = multilingual_model()
trainer = Trainer(
model=model,
args=training_args,
train_dataset=part_train_dataset,
eval_dataset=part_valid_dataset,
compute_metrics=compute_metrics)
```
`training_args`
```python
from transformers import TrainingArguments
BATCH_SIZE = 32
DEVICE = torch.device("cpu")
training_args = TrainingArguments("/kaggle/working")
training_args.do_train = True
training_args.evaluate_during_training = True
training_args.adam_epsilon = 1e-8
training_args.learning_rate = 1e-5
training_args.per_gpu_train_batch_size = BATCH_SIZE
training_args.num_train_epochs=TRAIN_EPOCH
```
`compute_metrics`
```python
from transformers import EvalPrediction
from typing import Dict
import numpy as np
def compute_metrics(p: EvalPrediction) -> Dict:
preds = np.argmax(p.predictions, axis=1)
return metrics.roc_auc_score(preds, p.label_ids)
```
An exerpt of `part_train_dataset`
```
[InputFeatures(input_ids=[0, 99070, 1159, 11050, 8108, 398, 6244, 7, 10932, 98, 759, 4488, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], token_type_ids=None, label=1),
InputFeatures(input_ids=[0, 28192, 2367, 83, 442, 22120, 2367, 83, 442, 142, 97629, 21115, 111, 3060, 102172, 20397, 761, 7, 2750, 621, 4127, 99, 163684, 214, 15970, 6, 140545, 297, 7398, 1419, 2750, 2], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], token_type_ids=None, label=1)
```
Similarly, one of `part_valid_dataset`:
```
[InputFeatures(input_ids=[0, 99070, 1159, 11050, 8108, 398, 6244, 7, 10932, 98, 759, 4488, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], token_type_ids=None, label=1),
InputFeatures(input_ids=[0, 28192, 2367, 83, 442, 22120, 2367, 83, 442, 142, 97629, 21115, 111, 3060, 102172, 20397, 761, 7, 2750, 621, 4127, 99, 163684, 214, 15970, 6, 140545, 297, 7398, 1419, 2750, 2], attention_mask=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], token_type_ids=None, label=1),
```
When running `trainer.train()`, I received the following error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-11-3435b262f1ae> in <module>
----> 1 trainer.train()
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path)
380 continue
381
--> 382 tr_loss += self._training_step(model, inputs, optimizer)
383
384 if (step + 1) % self.args.gradient_accumulation_steps == 0 or (
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer)
465 inputs[k] = v.to(self.args.device)
466
--> 467 outputs = model(**inputs)
468 loss = outputs[0] # model outputs are always tuple in transformers (see doc)
469
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels)
355 else:
356 loss_fct = CrossEntropyLoss()
--> 357 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
358 outputs = (loss,) + outputs
359
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
930 def forward(self, input, target):
931 return F.cross_entropy(input, target, weight=self.weight,
--> 932 ignore_index=self.ignore_index, reduction=self.reduction)
933
934
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2315 if size_average is not None or reduce is not None:
2316 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2317 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2318
2319
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2113 .format(input.size(0), target.size(0)))
2114 if dim == 2:
-> 2115 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2116 elif dim == 4:
2117 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: expected scalar type Long but found Float
```
which does not exist if `num_labels` is 1. From `transformers`'s github, it seems that 2 labels is standard for binary classification.
Beside how to fix the error, I wanted to ask why there are zeroes in `attention_mask` in `part_train`/`valid_dataset`
[**Link to original question on Stack Overflow**:](https://stackoverflow.com/questions/61987904/transformers-trainer-sequence-classification-problem)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4573/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4572/comments | https://api.github.com/repos/huggingface/transformers/issues/4572/events | https://github.com/huggingface/transformers/issues/4572 | 624,249,526 | MDU6SXNzdWU2MjQyNDk1MjY= | 4,572 | Typo in GPT2 documentation | {
"login": "ELotfi",
"id": 59686647,
"node_id": "MDQ6VXNlcjU5Njg2NjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/59686647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ELotfi",
"html_url": "https://github.com/ELotfi",
"followers_url": "https://api.github.com/users/ELotfi/followers",
"following_url": "https://api.github.com/users/ELotfi/following{/other_user}",
"gists_url": "https://api.github.com/users/ELotfi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ELotfi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ELotfi/subscriptions",
"organizations_url": "https://api.github.com/users/ELotfi/orgs",
"repos_url": "https://api.github.com/users/ELotfi/repos",
"events_url": "https://api.github.com/users/ELotfi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ELotfi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
In the GPT2 documentation [page](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2lmheadmodel), in the parameters of both LMHead and DoubleHeads model, 'inputs_embeds' is registered as 'input_embeds' which leads to an error upon implementation.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4572/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4571/comments | https://api.github.com/repos/huggingface/transformers/issues/4571/events | https://github.com/huggingface/transformers/issues/4571 | 624,190,467 | MDU6SXNzdWU2MjQxOTA0Njc= | 4,571 | cannot import name 'TFElectraModel' from 'transformers' | {
"login": "lijinfeng0713",
"id": 9656113,
"node_id": "MDQ6VXNlcjk2NTYxMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9656113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lijinfeng0713",
"html_url": "https://github.com/lijinfeng0713",
"followers_url": "https://api.github.com/users/lijinfeng0713/followers",
"following_url": "https://api.github.com/users/lijinfeng0713/following{/other_user}",
"gists_url": "https://api.github.com/users/lijinfeng0713/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lijinfeng0713/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lijinfeng0713/subscriptions",
"organizations_url": "https://api.github.com/users/lijinfeng0713/orgs",
"repos_url": "https://api.github.com/users/lijinfeng0713/repos",
"events_url": "https://api.github.com/users/lijinfeng0713/events{/privacy}",
"received_events_url": "https://api.github.com/users/lijinfeng0713/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello !\r\n\r\nWhich version of the lib do you use?",
"Hello!\r\n\r\nI think it's a problem with your GPU rather than from transformers. Check your TensorFlow whether \"Failed to load the native TensorFlow runtime.\" appears once it is imported.",
"That's probably because you don't have `tensorflow>=2.0` installed while you're trying to load a TensorFlow model.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
hi, thanks for your nice tool for NLP.
However, when I use transformers on MacOs to load Electra model I get an import error
> ImportError: cannot import name 'TFElectraModel' from 'transformers'
How can I fix this issue?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4571/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4570/comments | https://api.github.com/repos/huggingface/transformers/issues/4570/events | https://github.com/huggingface/transformers/pull/4570 | 624,134,115 | MDExOlB1bGxSZXF1ZXN0NDIyNjE2MTI3 | 4,570 | Model card: Updated the link to the paper | {
"login": "oliverguhr",
"id": 3495355,
"node_id": "MDQ6VXNlcjM0OTUzNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3495355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverguhr",
"html_url": "https://github.com/oliverguhr",
"followers_url": "https://api.github.com/users/oliverguhr/followers",
"following_url": "https://api.github.com/users/oliverguhr/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverguhr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverguhr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverguhr/subscriptions",
"organizations_url": "https://api.github.com/users/oliverguhr/orgs",
"repos_url": "https://api.github.com/users/oliverguhr/repos",
"events_url": "https://api.github.com/users/oliverguhr/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverguhr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=h1) Report\n> Merging [#4570](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4570 +/- ##\n==========================================\n- Coverage 77.87% 77.87% -0.01% \n==========================================\n Files 123 123 \n Lines 20566 20566 \n==========================================\n- Hits 16016 16015 -1 \n- Misses 4550 4551 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4570/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4570/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4570/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=footer). Last update [a34a989...78cf772](https://codecov.io/gh/huggingface/transformers/pull/4570?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | The conference has changed the link to the paper, so I updated it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4570/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4570",
"html_url": "https://github.com/huggingface/transformers/pull/4570",
"diff_url": "https://github.com/huggingface/transformers/pull/4570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4570.patch",
"merged_at": 1590434991000
} |
https://api.github.com/repos/huggingface/transformers/issues/4569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4569/comments | https://api.github.com/repos/huggingface/transformers/issues/4569/events | https://github.com/huggingface/transformers/issues/4569 | 624,107,500 | MDU6SXNzdWU2MjQxMDc1MDA= | 4,569 | bert embedding make OOM in albert | {
"login": "urekalion",
"id": 4244158,
"node_id": "MDQ6VXNlcjQyNDQxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4244158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/urekalion",
"html_url": "https://github.com/urekalion",
"followers_url": "https://api.github.com/users/urekalion/followers",
"following_url": "https://api.github.com/users/urekalion/following{/other_user}",
"gists_url": "https://api.github.com/users/urekalion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/urekalion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/urekalion/subscriptions",
"organizations_url": "https://api.github.com/users/urekalion/orgs",
"repos_url": "https://api.github.com/users/urekalion/repos",
"events_url": "https://api.github.com/users/urekalion/events{/privacy}",
"received_events_url": "https://api.github.com/users/urekalion/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
## Details
hi, i made item collaborative filtering model by albert model.
i set vocab size to 600k, it made OOM.
(after initialized model, memory used 3 GB. but During initialization, it takes 11 GB)
i tracked source code and i found this:
albert model initialize albert word embedding layer after initializing bert word embedding layer.
embedding layer is initialized twice. so it made OOM.
- bert word embedding : vocab size x hidden layer size(4048)
- albert word embedding : vocab size x embedding size(128) => bert word embedding free
is there any problem if i fix AlbertEmbeddings to nn.Module?
thanks
`
class AlbertEmbeddings(BertEmbeddings):
"""
Construct the embeddings from word, position and token_type embeddings.
"""
def __init__(self, config):
super().__init__(config)
self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size, padding_idx=config.pad_token_id)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size)
self.LayerNorm = torch.nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps)
`
`
class BertEmbeddings(nn.Module):
"""Construct the embeddings from word, position and token_type embeddings.
"""
def __init__(self, config):
super().__init__()
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
# self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
# any TensorFlow checkpoint file
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4569/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4568/comments | https://api.github.com/repos/huggingface/transformers/issues/4568/events | https://github.com/huggingface/transformers/issues/4568 | 624,101,901 | MDU6SXNzdWU2MjQxMDE5MDE= | 4,568 | ❓ [BART] Different embedding sizes between pre-trained / fine-tuned checkpoint | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Good catch. There is no mask token in the second checkpoint. I believe that is the same as the authors' implementation.\r\n\r\nCompletely off topic: if you still have the xsum data you used I would love a copy. I'm sam [at] huggingface.co . ",
"Thanks for your fast answer ! \r\n\r\nDo you know why there is no mask token in the second checkpoint ? And if it has any impact on score ?",
"I have a hunch the there is no `<mask>` token because of fairseq's `--find-unused-parameters` clarg, but I'm not certain.\r\n\r\nI would guess no impact on score because `<mask>` does not show up in the finetuning data.\r\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # ❓ Questions & Help
Running this code :
```python
from transformers import BartModel
x = BartModel.from_pretrained('bart-large')
x2 = BartModel.from_pretrained('bart-large-cnn')
print(x.shared)
print(x2.shared)
```
Gives :
>Embedding(50265, 1024, padding_idx=1)
Embedding(50264, 1024, padding_idx=1)
---
Why the vocabulary size is different ? Isn't it supposed to be the same ? Is it just from the original authors' checkpoint ?
@sshleifer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4568/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4567/comments | https://api.github.com/repos/huggingface/transformers/issues/4567/events | https://github.com/huggingface/transformers/issues/4567 | 624,058,797 | MDU6SXNzdWU2MjQwNTg3OTc= | 4,567 | ❓ [BART] Why using bias for LM head if not trained ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Great Q! The point of using them is that `MarianMTModel`, which inherits from `BartForConditionalGeneration` uses them. You're correct for the bart checkpoints they stay 0. If you think there is a comment or different approach that would be clearer, I'm very open to PR/other ideas."
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # ❓ Questions & Help
As I understood, BART is not using a regular Linear layer as LM head, but instead reuse the weights of the shared embeddings.
As show here, biases are added :
https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_bart.py#L876
But these biases are registered as buffer, not as parameter. **Since they are not trained, they will always stay 0 ?**
If they stay 0, what's the point of having bias at all ?
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4567/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4566/comments | https://api.github.com/repos/huggingface/transformers/issues/4566/events | https://github.com/huggingface/transformers/pull/4566 | 624,053,746 | MDExOlB1bGxSZXF1ZXN0NDIyNTUxNzg4 | 4,566 | variable name changes for Issue #4141 | {
"login": "NSanjay",
"id": 36938597,
"node_id": "MDQ6VXNlcjM2OTM4NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/36938597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NSanjay",
"html_url": "https://github.com/NSanjay",
"followers_url": "https://api.github.com/users/NSanjay/followers",
"following_url": "https://api.github.com/users/NSanjay/following{/other_user}",
"gists_url": "https://api.github.com/users/NSanjay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NSanjay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NSanjay/subscriptions",
"organizations_url": "https://api.github.com/users/NSanjay/orgs",
"repos_url": "https://api.github.com/users/NSanjay/repos",
"events_url": "https://api.github.com/users/NSanjay/events{/privacy}",
"received_events_url": "https://api.github.com/users/NSanjay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=h1) Report\n> Merging [#4566](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4566 +/- ##\n=======================================\n Coverage 77.87% 77.87% \n=======================================\n Files 123 123 \n Lines 20566 20569 +3 \n=======================================\n+ Hits 16016 16019 +3 \n Misses 4550 4550 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.20% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.82% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.25% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.84% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `78.74% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `99.06% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.43% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.86% <100.00%> (ø)` | |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/4566/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=footer). Last update [a34a989...e31fdfa](https://codecov.io/gh/huggingface/transformers/pull/4566?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, I'm not sure we want to change this. I do agree that `adder` is more explicit than `attention_mask` and better models what it does. However, imo the API isn't limited to high-level modules but even to lower level modules such as `AlbertAttention` and `AlbertTransformer`.\r\n\r\nThese modules may be used by users in their specific applications: `from transformers.modeling_albert import AlbertTransformer`.\r\n\r\nI think the gain here is not worth the loss of compatibility with all previous versions. What do you think @patrickvonplaten, @thomwolf, @julien-c ?",
"Yes, I agree...sorry @NSanjay I didn't think this fully through when answering here: https://github.com/huggingface/transformers/issues/4141#issuecomment-629875496 :-/ ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | Hi
Let me know if additional changes are required for Issue #4141. Thank you for this awesome repository. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4566/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4566",
"html_url": "https://github.com/huggingface/transformers/pull/4566",
"diff_url": "https://github.com/huggingface/transformers/pull/4566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4566.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4565/comments | https://api.github.com/repos/huggingface/transformers/issues/4565/events | https://github.com/huggingface/transformers/issues/4565 | 624,006,767 | MDU6SXNzdWU2MjQwMDY3Njc= | 4,565 | changing config.axial_pos_shape for 'ReformerModelWithLMHead' when fine-tuning | {
"login": "D-i-l-r-u-k-s-h-i",
"id": 47185867,
"node_id": "MDQ6VXNlcjQ3MTg1ODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i",
"html_url": "https://github.com/D-i-l-r-u-k-s-h-i",
"followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers",
"following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}",
"gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions",
"organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs",
"repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos",
"events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}",
"received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 2052904485,
"node_id": "MDU6TGFiZWwyMDUyOTA0NDg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/reformer",
"name": "reformer",
"color": "5319e7",
"default": false,
"description": "Everything related to the reformer model"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I would not recommend to set `axial_pos_shape` to (512 * 1024). In the notebook I just used that to demonstrate how far the limits can be pushed for Reformer. Half a million token is extremely long and usually unnecessary. \r\n\r\nMake sure you have read and understood how AxialPostionEmbeddings work: https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings . \r\n\r\nFor \"normal\" language modeling it might make much more sense to start from the Reformer-wiken8 model and finetune it: https://huggingface.co/google/reformer-enwik8",
"Greetings,\r\nWould fine tuning https://huggingface.co/google/reformer-enwik8 work normally with run_language_modeling.py script?\r\nThanks",
"Hmm, for the most part but you will have to define your own tokenzer function as can be seen here: https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8\r\n",
"So instead of sticking to the script, I would recommend slightly changing this notebook: https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb. Instead of creating the dataset by using a tokenizer, you should use the function linked above. Does that make sense? Also linking: https://github.com/huggingface/transformers/pull/4480. If someone has an easy script for Reformer Char LM it'd be great to post it here or add a notebook. ",
"Ok, thanks. So the function _flatten_and_tokenize_ (in the notebook) shall be replaced by the _encode_\r\nfunction (in the enwik8 model card), am I following right?",
"> I would not recommend to set `axial_pos_shape` to (512 * 1024). In the notebook I just used that to demonstrate how far the limits can be pushed for Reformer. Half a million token is extremely long and usually unnecessary.\r\n> \r\nI've been using 'google/reformer-crime-and-punishment' model from [https://huggingface.co/transformers/model_doc/reformer.html#reformermodelwithlmhead](url)\r\n\r\nI get this error after I padded the sequence lengths to be a multiple of least common multiple chunk_length 64.\r\n\r\n```\r\n...\r\nfor epoch in range(EPOCHS):\r\n print(f\"EPOCH {epoch} started\" + '=' * 30)\r\n for idx,article in tqdm_notebook(enumerate(article_loader)):\r\n \r\n article_tens = tokenizer.encode(article[0], return_tensors='pt').to(device)\r\n \r\n print(article_tens.shape)\r\n #multiple of least common multiple chunk_length 64.\r\n pads_to_be_filled=getNoOfPads(article_tens.size()[1])\r\n \r\n padded_tens= torch.cat((article_tens[0],Variable(torch.zeros((pads_to_be_filled),dtype=torch.long).cuda())) )\r\n \r\n print(padded_tens.unsqueeze(0).shape)\r\n\r\n outputs = model(padded_tens.unsqueeze(0), labels=padded_tens.unsqueeze(0))[0]\r\n ...\r\n```\r\n```\r\n\r\nEPOCH 0 started==============================\r\n0/? [00:00<?, ?it/s]\r\ntorch.Size([1, 131])\r\ntorch.Size([1, 192])\r\n\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-11-81c445097515> in <module>()\r\n 29 print(padded_tens.unsqueeze(0).shape)\r\n 30 \r\n---> 31 outputs = model(padded_tens.unsqueeze(0), labels=padded_tens.unsqueeze(0))[0]\r\n 32 print(outputs)\r\n 33 \r\n\r\n7 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/modeling_reformer.py in forward(self, position_ids)\r\n 127 reduce(mul, self.axial_pos_shape) == sequence_length\r\n 128 ), \"If training, make sure that config.axial_pos_shape factors: {} multiply to sequence length. Got prod({}) != sequence_length: {}. You might want to consider padding your sequence length to {} or changing config.axial_pos_shape.\".format(\r\n--> 129 self.axial_pos_shape, self.axial_pos_shape, sequence_length, reduce(mul, self.axial_pos_shape)\r\n 130 )\r\n 131 if self.dropout > 0:\r\n\r\nAssertionError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 192. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape.\r\n\r\n```\r\n\r\n>If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 384. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape.\r\n\r\nSo I guess that is because its default set to (512, 1024), and if so, how can I change it to a smaller value? \r\n\r\nReformerConfig {\r\n \"architectures\": [\r\n \"ReformerModelWithLMHead\"\r\n ],\r\n \"attention_head_size\": 64,\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"attn_layers\": [\r\n \"local\",\r\n \"lsh\",\r\n \"local\",\r\n \"lsh\",\r\n \"local\",\r\n \"lsh\"\r\n ],\r\n \"axial_norm_std\": 1.0,\r\n \"axial_pos_embds\": true,\r\n \"axial_pos_embds_dim\": [\r\n 64,\r\n 192\r\n ],\r\n \"axial_pos_shape\": [\r\n 512,\r\n 1024\r\n ],\r\n \"chunk_size_feed_forward\": 0,\r\n \"chunk_size_lm_head\": 0,\r\n \"eos_token_id\": 2,\r\n \"feed_forward_size\": 512,\r\n \"hash_seed\": null,\r\n \"hidden_act\": \"relu\",\r\n \"hidden_dropout_prob\": 0.05,\r\n \"hidden_size\": 256,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": true,\r\n \"layer_norm_eps\": 1e-12,\r\n \"local_attention_probs_dropout_prob\": 0.05,\r\n \"local_attn_chunk_length\": 64,\r\n \"local_num_chunks_after\": 0,\r\n \"local_num_chunks_before\": 1,\r\n \"lsh_attention_probs_dropout_prob\": 0.0,\r\n \"lsh_attn_chunk_length\": 64,\r\n \"lsh_num_chunks_after\": 0,\r\n \"lsh_num_chunks_before\": 1,\r\n \"max_position_embeddings\": 524288,\r\n \"model_type\": \"reformer\",\r\n \"num_attention_heads\": 2,\r\n \"num_buckets\": [\r\n 64,\r\n 128\r\n ],\r\n \"num_chunks_after\": 0,\r\n \"num_chunks_before\": 1,\r\n \"num_hashes\": 1,\r\n \"num_hidden_layers\": 6,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"task_specific_params\": {\r\n \"text-generation\": {\r\n \"do_sample\": true,\r\n \"max_length\": 100\r\n }\r\n },\r\n \"vocab_size\": 320\r\n}\r\n\r\nGiven above is the default configuration of the model before training/finetuning\r\n\r\n> For \"normal\" language modeling it might make much more sense to start from the Reformer-wiken8 model and finetune it: https://huggingface.co/google/reformer-enwik8\r\n\r\nWill try that too.\r\n\r\nThank you.\r\n\r\n",
"yeah the google/crime-and-punishment is not a good model for fine-tuning. It assumes you use a sequence length of > 500K tokens, which is not really reasonable.",
"> Ok, thanks. So the function _flatten_and_tokenize_ (in the notebook) shall be replaced by the _encode_\r\n> function (in the enwik8 model card), am I following right?\r\n\r\nexactly. You should be able to just enwik8 function I linked above. The enwik8 model has a maximum length of ~65K tokens, which is very long but very feasible for reformer.",
"> yeah the google/crime-and-punishment is not a good model for fine-tuning. It assumes you use a sequence length of > 500K tokens, which is not really reasonable.\r\n\r\nOh okay. Thank you very much for the clarification. Will try finetuning reformer-enwik8.",
"It would be awesome if you could upload your training script here - people seem very interested in it :-) ",
"\r\n @patrickvonplaten, Sure, will do when everything is sorted.",
"> \r\n> \r\n> > Ok, thanks. So the function _flatten_and_tokenize_ (in the notebook) shall be replaced by the _encode_\r\n> > function (in the enwik8 model card), am I following right?\r\n> \r\n> exactly. You should be able to just enwik8 function I linked above. The enwik8 model has a maximum length of ~65K tokens, which is very long but very feasible for reformer.\r\n\r\nFrom the notebook am struggling to adapt the DataCollator, how to define it properly in this context? \r\nThanks",
"Someone effectivelly fine tune on enwiki8 pre trained model? using colab with P100 gpu i was not able to load model yet due to memory limitation",
"Unfortunately facing the same issue now.",
"Can you add a link to your notebook here @lucashueda ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@lucashueda Did you manage to fine-tune enwiki8 pre trained model ? or other datasets ? Would you mind sharing your Colab? ",
"@epetros did you manage to perform the fine-tuning ? ",
"@patrickvonplaten any update on this? \r\n\r\nOs is there a notebook where we can pre-train this model from wiki or huge corpus ourselves and then fine-tune it to downstream tasks? ",
"Hi, I'm trying to fine-tune the ReformerModelWithLMHead (google/reformer-enwik8) for NER. I used the padding sequence length same as in the encode method (max_length = max([len(string) for string in list_of_strings])) along with attention_masks. And I got this error: \r\n\r\n**ValueError:** If training, make sure that config.axial_pos_shape factors: (128, 512) multiply to sequence length. Got prod((128, 512)) != sequence_length: 2248. You might want to consider padding your sequence length to 65536 or changing config.axial_pos_shape.\r\n\r\n1) When I changed the sequence length to 65536, my colab session crashed by getting all the inputs of 65536 lengths. \r\n2) According to the second option(changing config.axial_pos_shape), I cannot change it. \r\n\r\nI would like to know, Is there any chance to change config.axial_pos_shape while fine-tuning the model? Or I'm missing something in encoding the input strings for reformer-enwik8? Are there any additional steps to forward the input to the model after encoding?\r\n\r\nThanks!\r\n\r\n"
] | 1,590 | 1,628 | 1,598 | NONE | null | # ❓ Questions & Help
I'm trying to fine-tune the Reformer for the language generation task and I padded the sequence lengths to be a multiple of least common multiple chunk_length 64, and now I'm asked to pad the sequence to 524288(512 * 1024), which will give me an out of memory error.
I would like to know a workaround for this, since the error message also gives an alternative to 'pad_to_max_length', which is 'changing config.axial_pos_shape' and specially since this is known to be a memory efficient transformer. Thank you.
**A link to original question on Stack Overflow**: [https://stackoverflow.com/questions/61986452/fine-tuning-reformer-gives-out-of-memory-error-when-sequence-length-is-padded-t](https://stackoverflow.com/questions/61986452/fine-tuning-reformer-gives-out-of-memory-error-when-sequence-length-is-padded-t)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4565/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4565/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4564/comments | https://api.github.com/repos/huggingface/transformers/issues/4564/events | https://github.com/huggingface/transformers/pull/4564 | 623,923,843 | MDExOlB1bGxSZXF1ZXN0NDIyNDU0NjYy | 4,564 | [Reformer] fix reformer num buckets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=h1) Report\n> Merging [#4564](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `14.28%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4564 +/- ##\n==========================================\n- Coverage 77.87% 77.86% -0.02% \n==========================================\n Files 123 123 \n Lines 20566 20569 +3 \n==========================================\n Hits 16016 16016 \n- Misses 4550 4553 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.94% <14.28%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=footer). Last update [a34a989...7b40493](https://codecov.io/gh/huggingface/transformers/pull/4564?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | Fix automatic setting of `num_buckets` by making sure `num_buckets` is always a power of 2 and set as default.
The idea behind this whole function is that `num_buckets` should not be set by the user, but calculated on the fly to a good value before training (`num_buckets` ~ 2 * sequence length / chunk length as recommend in the paper). This value will then be saved in the config and can be reapplied for inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4564/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4564/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4564",
"html_url": "https://github.com/huggingface/transformers/pull/4564",
"diff_url": "https://github.com/huggingface/transformers/pull/4564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4564.patch",
"merged_at": 1590437085000
} |
https://api.github.com/repos/huggingface/transformers/issues/4563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4563/comments | https://api.github.com/repos/huggingface/transformers/issues/4563/events | https://github.com/huggingface/transformers/issues/4563 | 623,919,777 | MDU6SXNzdWU2MjM5MTk3Nzc= | 4,563 | Decoding with DistilmBERT to generate text in different languages | {
"login": "javismiles",
"id": 1000452,
"node_id": "MDQ6VXNlcjEwMDA0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1000452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/javismiles",
"html_url": "https://github.com/javismiles",
"followers_url": "https://api.github.com/users/javismiles/followers",
"following_url": "https://api.github.com/users/javismiles/following{/other_user}",
"gists_url": "https://api.github.com/users/javismiles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/javismiles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/javismiles/subscriptions",
"organizations_url": "https://api.github.com/users/javismiles/orgs",
"repos_url": "https://api.github.com/users/javismiles/repos",
"events_url": "https://api.github.com/users/javismiles/events{/privacy}",
"received_events_url": "https://api.github.com/users/javismiles/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"So I can get the generation working well with distilgpt2, the thing is that I would like to do it multilingual using the light multilingual model DistilmBERT (distilbert-base-multilingual-cased), any tips? thank you :)\r\n\r\n```py\r\nimport torch\r\nfrom transformers import *\r\nfrom transformers import TFGPT2LMHeadModel, GPT2Tokenizer\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\ninput_ids = torch.tensor(tokenizer.encode(\"Once upon a time\")).unsqueeze(0)\r\nmodel = GPT2LMHeadModel.from_pretrained(\"distilgpt2\", pad_token_id=tokenizer.eos_token_id)\r\ngreedy_output = model.generate(input_ids, max_length=50) #greedy search\r\n\r\nsample_outputs = model.generate(\r\n input_ids,\r\n do_sample=True, \r\n max_length=50, \r\n top_k=50, \r\n top_p=0.95, \r\n temperature=1,\r\n num_return_sequences=3\r\n)\r\n\r\nprint(\"Output:\\n\" + 100 * '-')\r\nfor i, sample_output in enumerate(sample_outputs):\r\n print(\"{}: {}\".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))\r\n```",
"Hi, I took the liberty of editing your comments with triple backticks ```py\\`\\`\\` to be more readable.\r\n\r\nUnfortunately DistilmBERT can't be used for generation. This is due to the way the original BERT models were pre-trained, using masked language modeling (MLM). It therefore attends to both the left and right contexts (tokens on the left and right of the token you're trying to generate), while for generation the model only has access to the left context.\r\n\r\nGPT-2 was trained with causal language modeling (CLM), which is why it can generate such coherent sequences. We implement the `generation` method only for CLM models, as MLM models do not generate anything coherent. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | Good day and congrats for your great library
If I want to decode and get new generated text with the GPT2 heads, that works great like you suggest:
```py
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
input_ids = torch.tensor(tokenizer.encode("Once upon a time there was")).unsqueeze(0)
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
greedy_output = model.generate(input_ids, max_length=50)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))
```
but my issue is that now I want to do the same but with the smaller simpler DistilmBERT model which is also multilingual in 104 languages, so I want to generate text in for example Spanish and English and with this lighter model. So I do this:
```py
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-multilingual-cased')
model = DistilBertForMaskedLM.from_pretrained('distilbert-base-multilingual-cased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
```
but now, how do I get the continuation of the phrase at that point? I tried to apply tokenizer.decode with no luck there, thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4563/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4562/comments | https://api.github.com/repos/huggingface/transformers/issues/4562/events | https://github.com/huggingface/transformers/issues/4562 | 623,908,303 | MDU6SXNzdWU2MjM5MDgzMDM= | 4,562 | implementation of transformers for abstractive summarization task | {
"login": "fatihbeyhan",
"id": 48058209,
"node_id": "MDQ6VXNlcjQ4MDU4MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/48058209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fatihbeyhan",
"html_url": "https://github.com/fatihbeyhan",
"followers_url": "https://api.github.com/users/fatihbeyhan/followers",
"following_url": "https://api.github.com/users/fatihbeyhan/following{/other_user}",
"gists_url": "https://api.github.com/users/fatihbeyhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fatihbeyhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fatihbeyhan/subscriptions",
"organizations_url": "https://api.github.com/users/fatihbeyhan/orgs",
"repos_url": "https://api.github.com/users/fatihbeyhan/repos",
"events_url": "https://api.github.com/users/fatihbeyhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/fatihbeyhan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer might be able to help you here. @sshleifer - hope it's fine that I link you here :-) ",
"This might also help: https://github.com/huggingface/transformers/pull/4539/files?short_path=3a2ba7b#diff-3a2ba7b492f00029d14cec3994b73ac7",
"> This might also help: https://github.com/huggingface/transformers/pull/4539/files?short_path=3a2ba7b#diff-3a2ba7b492f00029d14cec3994b73ac7\r\n\r\nThank you very much. I will look and try to implement and let you know about the result!",
"> This might also help: https://github.com/huggingface/transformers/pull/4539/files?short_path=3a2ba7b#diff-3a2ba7b492f00029d14cec3994b73ac7\r\n\r\nit seems working! thank you!"
] | 1,590 | 1,590 | 1,590 | NONE | null | Hello, I am new to whole NLP world and PyTorch. I am trying to learn the concepts and that is taking some time for a rookie. I have a project to finish and I want to implement transformers & BERT on my abstractive summarization project. I tried to find some implementation tutorial on this topic but I could not find any. Do you guys have any suggestions about clear implementation of any pre-trained model that I can fine-tune on my dataset to get some solid results. I am not looking for this just for finishing up the project but also learn how to implement. Therefore, I need a clear tutorial.
Data:
I am using 0.25 of XSum dataset so I have 45k news and their one-sentence summary.
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4562/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4561/comments | https://api.github.com/repos/huggingface/transformers/issues/4561/events | https://github.com/huggingface/transformers/pull/4561 | 623,881,735 | MDExOlB1bGxSZXF1ZXN0NDIyNDI1NDIx | 4,561 | Fix the example command for SQuAD | {
"login": "kaniblu",
"id": 938822,
"node_id": "MDQ6VXNlcjkzODgyMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/938822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaniblu",
"html_url": "https://github.com/kaniblu",
"followers_url": "https://api.github.com/users/kaniblu/followers",
"following_url": "https://api.github.com/users/kaniblu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaniblu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaniblu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaniblu/subscriptions",
"organizations_url": "https://api.github.com/users/kaniblu/orgs",
"repos_url": "https://api.github.com/users/kaniblu/repos",
"events_url": "https://api.github.com/users/kaniblu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaniblu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=h1) Report\n> Merging [#4561](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4561 +/- ##\n==========================================\n- Coverage 77.87% 77.86% -0.02% \n==========================================\n Files 123 123 \n Lines 20566 20566 \n==========================================\n- Hits 16016 16013 -3 \n- Misses 4550 4553 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4561/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4561/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4561/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=footer). Last update [a34a989...b44c742](https://codecov.io/gh/huggingface/transformers/pull/4561?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Can you add the do_lower_case flag for all other instances in that README that use bert uncased? Thanks!",
"Sure. Presumably for the tf version, the script will handle the casings automatically.",
"From the code it would seem that way, but I am not sure actually. \r\n\r\ncc @thomwolf @LysandreJik: `do_lower_case` was missing in the commands to run squad with bert-base-uncased. Is this flag also necessary in the Tensorflow version? It's not present in the code, so I would assume not.\r\n\r\nThe other changes LGTM!",
"Closed by #4245 (we still need to investigate why the lowercasing is not properly populated by the model's config)"
] | 1,590 | 1,592 | 1,590 | NONE | null | Issue #4549: added the missing argument `--do_lower_case` for reproducing the intended results. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4561/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4561",
"html_url": "https://github.com/huggingface/transformers/pull/4561",
"diff_url": "https://github.com/huggingface/transformers/pull/4561.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4561.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4560/comments | https://api.github.com/repos/huggingface/transformers/issues/4560/events | https://github.com/huggingface/transformers/issues/4560 | 623,877,661 | MDU6SXNzdWU2MjM4Nzc2NjE= | 4,560 | Albert Tokenizer hangs | {
"login": "LeonieWeissweiler",
"id": 30300891,
"node_id": "MDQ6VXNlcjMwMzAwODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeonieWeissweiler",
"html_url": "https://github.com/LeonieWeissweiler",
"followers_url": "https://api.github.com/users/LeonieWeissweiler/followers",
"following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}",
"gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions",
"organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs",
"repos_url": "https://api.github.com/users/LeonieWeissweiler/repos",
"events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Does this also happen when you use the slow tokenizers?\r\n\r\n```python\r\ntokenizer = AlbertTokenizer.from_pretrained(\"albert-base-v2\", use_fast=False)\r\n```",
"Thanks for the suggestion @BramVanroy . I just tried this and the tokenizer has now been running for 70 minutes, so I think that's a yes, it also happens when I use slow mode.",
"cc @mfuntowicz AlbertTokenizer seems to hang in fast mode but not in slow mode",
"Hi, there is no fast mode for `AlbertTokenizer`, it's a SentencePiece based tokenizer which is not currently supported by `tokenizers`.\r\n\r\nDo you think you can find more information about where the tokenizer actually hang?\r\nCan you reproduce the behavior with a shorter input?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,598 | 1,598 | NONE | null | # 🐛 Bug
## Information
I am following the language modeling tutorial to train a LM on a simple wikipedia corpus from scratch. I am trying to use Albert instead of Roberta. As I couldn't find information on how to train an Albert Tokenizer from scratch, I'm loading the albert-base-v2 tokenizer. The Dataset creation doesn't work, it hangs for ages and when I stop it, I can see that it is always stuck in tokenization_albert.py, line 193:
```python
outputs = "".join([c for c in outputs if not unicodedata.combining(c)])
```
A week ago, it crashed consistently in this line due to large RAM allocations, but I can't reproduce that behaviour right now.
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
from transformers import TextDataset
train_set = TextDataset(
tokenizer=tokenizer,
file_path="./drive/My Drive/datasets/simplewiki_train.txt",
block_size=128,
)
```
## Expected behavior
I expected the tokenizer to run through in less than an hour for 100MB input.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.10.0
- Platform: Colab
- Python version: 3.6
Is anyone else experiencing this? I read in another issue that Albert should work with run_language_modeling out of the box.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4560/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4559/comments | https://api.github.com/repos/huggingface/transformers/issues/4559/events | https://github.com/huggingface/transformers/issues/4559 | 623,876,175 | MDU6SXNzdWU2MjM4NzYxNzU= | 4,559 | XLnet loss and accuracy not decreasing | {
"login": "jetjodh",
"id": 20591775,
"node_id": "MDQ6VXNlcjIwNTkxNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/20591775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jetjodh",
"html_url": "https://github.com/jetjodh",
"followers_url": "https://api.github.com/users/jetjodh/followers",
"following_url": "https://api.github.com/users/jetjodh/following{/other_user}",
"gists_url": "https://api.github.com/users/jetjodh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jetjodh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jetjodh/subscriptions",
"organizations_url": "https://api.github.com/users/jetjodh/orgs",
"repos_url": "https://api.github.com/users/jetjodh/repos",
"events_url": "https://api.github.com/users/jetjodh/events{/privacy}",
"received_events_url": "https://api.github.com/users/jetjodh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I recommend to change your preproccessing to:\r\n\r\n```python\r\nfrom transformers import XLNetTokenizer \r\nfrom keras.preprocessing.sequence import pad_sequences\r\ntokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')\r\n\r\nencoded_sent = tokenizer.encode_plus(\r\n text, # Sentence to encode.\r\n return_tensors='pt',\r\n max_length=128)\r\n# ...\r\noutput = model(**encoded_sent )\r\n```\r\n\r\nHowever, the real problem is probably due to hyperparameters. You cannot simply use different models with the same hyperparameters and immediately expect results. You'll have to fiddle with the hyperparameters and find something that works for your case. \r\n",
"I am converting the text to tensor in a later step.\r\nI also tried changing the lr ranging from 0.05 to 5e-8 but still loss did not changing and also applied a lr sheduler. Maybe I should try other optimizers other than AdamW?",
"Since this is not a bug I am closing this. It is impossible for us to help with this any further since hyperparamter optimization is different for each task. This is something that you have to test yourself.\r\n\r\nFor a starting point, you can have a look at Table 8 in [the original paper](https://arxiv.org/pdf/1906.08237.pdf) where they suggest some good hyperparameter settings. But again, even then it depends on your specific case what would help and what wouldn't. Try and test!"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLnet base cased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Can check the notebook below:
https://colab.research.google.com/drive/132r5kb1G5oG0yi-qnymBsMBPCGP5Gu85
Can only give access to a few people.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Binary classification.
## To reproduce
Steps to reproduce the behavior:
Preprocessing:
```
from transformers import XLNetTokenizer
from keras.preprocessing.sequence import pad_sequences
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
encoded_sent = tokenizer.encode(
text, # Sentence to encode.
add_special_tokens = True,
)
MAX_LEN = 128
encoded_sent = pad_sequences([encoded_sent], maxlen=MAX_LEN, dtype="long",
value=0, truncating="post", padding="post")
attention_masks=[]
att_mask = [int(token_id > 0) for token_id in encoded_sent[0]]
attention_masks.append(att_mask)
```
Model definition:
```
from transformers import BertForSequenceClassification, XLNetForSequenceClassification, AdamW, BertConfig
# bert = BertForSequenceClassification.from_pretrained(
# "bert-base-uncased",
# num_labels = 2, .
# output_attentions = False,
# output_hidden_states = False,
xlnet = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased',
num_labels = 2,
output_attentions = False,
output_hidden_states = False,
)
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.features2 = xlnet
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, x2, x3):
x2 = x2.to(torch.int64)
x2 = self.features2(x2,x3)[0]
x = self.softmax(x2)
return x
model = MyModel()
torch.cuda.empty_cache()
model.to('cuda')
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer = optim.AdamW(model.parameters(), lr=0.0005)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Loss and accuracy should decrease but are not changing at all( both training and valid).
This script worked while training BERT model.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Latest
- Platform: Colab
- Python version: 3.6
- PyTorch version (GPU?): Latest
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4559/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4558/comments | https://api.github.com/repos/huggingface/transformers/issues/4558/events | https://github.com/huggingface/transformers/pull/4558 | 623,875,045 | MDExOlB1bGxSZXF1ZXN0NDIyNDIwNzI1 | 4,558 | Add DistilBERT to supported run_language_modeling models | {
"login": "antmarakis",
"id": 17463361,
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antmarakis",
"html_url": "https://github.com/antmarakis",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=h1) Report\n> Merging [#4558](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4558 +/- ##\n==========================================\n- Coverage 77.87% 77.86% -0.01% \n==========================================\n Files 123 123 \n Lines 20566 20566 \n==========================================\n- Hits 16016 16014 -2 \n- Misses 4550 4552 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=footer). Last update [a34a989...ce96482](https://codecov.io/gh/huggingface/transformers/pull/4558?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! That's correct\r\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | As per the code, distilbert is indeed supported by the `run_language_modeling.py` script, even though the README states otherwise. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4558/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4558",
"html_url": "https://github.com/huggingface/transformers/pull/4558",
"diff_url": "https://github.com/huggingface/transformers/pull/4558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4558.patch",
"merged_at": 1590432645000
} |
https://api.github.com/repos/huggingface/transformers/issues/4557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4557/comments | https://api.github.com/repos/huggingface/transformers/issues/4557/events | https://github.com/huggingface/transformers/pull/4557 | 623,849,855 | MDExOlB1bGxSZXF1ZXN0NDIyNDAyNDg3 | 4,557 | Cleaner warning when loading pretrained models | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=h1) Report\n> Merging [#4557](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `86.95%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4557 +/- ##\n=======================================\n Coverage 77.87% 77.87% \n=======================================\n Files 123 123 \n Lines 20566 20580 +14 \n=======================================\n+ Hits 16016 16027 +11 \n- Misses 4550 4553 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <78.57%> (-0.63%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.70% <100.00%> (+0.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.56% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=footer). Last update [a34a989...0323876](https://codecov.io/gh/huggingface/transformers/pull/4557?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good stuff!"
] | 1,590 | 1,592 | 1,592 | MEMBER | null | Give more explicit logging messages when using the various `from_pretrained` methods in the lib.
Also makes these messages as `logging.warning` because it's a common source of silent mistakes.
cc @BramVanroy
Happy to improve the language further if people have advice. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4557/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4557",
"html_url": "https://github.com/huggingface/transformers/pull/4557",
"diff_url": "https://github.com/huggingface/transformers/pull/4557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4557.patch",
"merged_at": 1592855928000
} |
https://api.github.com/repos/huggingface/transformers/issues/4556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4556/comments | https://api.github.com/repos/huggingface/transformers/issues/4556/events | https://github.com/huggingface/transformers/pull/4556 | 623,832,900 | MDExOlB1bGxSZXF1ZXN0NDIyMzkwMzUz | 4,556 | Added reference to use for citing this model | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=h1) Report\n> Merging [#4556](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4556 +/- ##\n==========================================\n+ Coverage 77.87% 77.93% +0.05% \n==========================================\n Files 123 123 \n Lines 20566 20566 \n==========================================\n+ Hits 16016 16028 +12 \n+ Misses 4550 4538 -12 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.84% <0.00%> (-0.83%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `34.07% <0.00%> (+5.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=footer). Last update [a34a989...433d479](https://codecov.io/gh/huggingface/transformers/pull/4556?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4556/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4556",
"html_url": "https://github.com/huggingface/transformers/pull/4556",
"diff_url": "https://github.com/huggingface/transformers/pull/4556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4556.patch",
"merged_at": 1590433943000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4549/comments | https://api.github.com/repos/huggingface/transformers/issues/4549/events | https://github.com/huggingface/transformers/issues/4549 | 623,818,001 | MDU6SXNzdWU2MjM4MTgwMDE= | 4,549 | Example script for SQuAD question answering unable to reproduce the claimed performance | {
"login": "kaniblu",
"id": 938822,
"node_id": "MDQ6VXNlcjkzODgyMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/938822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaniblu",
"html_url": "https://github.com/kaniblu",
"followers_url": "https://api.github.com/users/kaniblu/followers",
"following_url": "https://api.github.com/users/kaniblu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaniblu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaniblu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaniblu/subscriptions",
"organizations_url": "https://api.github.com/users/kaniblu/orgs",
"repos_url": "https://api.github.com/users/kaniblu/repos",
"events_url": "https://api.github.com/users/kaniblu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaniblu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052333,
"node_id": "MDU6TGFiZWwxODM0MDUyMzMz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Question%20Answering",
"name": "Ex: Question Answering",
"color": "86FFCF",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"My results are different as well: \r\n\r\n\"exact_match\": 71.92999053926206\r\n\"f1\": 80.70949484221217\r\n\r\nMy guess is that this occurs because we are not using a fixed seed. The runs are not deterministic so difference _will_ occur.",
"Possibly but the difference of 7~8 points in f1 and EM scores is way above the usual variance due to random seeds.",
"Found the bug. `--do_lower_case` was missing in the script arguments.\r\n\r\nNow the results are pretty close to the ones mentioned in the tutorial.\r\n\r\n05/24/2020 23:50:04 - INFO - __main__ - Results: {'exact': 80.26490066225166, 'f1': 88.01726518927101, 'total': 10570, 'HasAns_exact': 80.26490066225166, 'HasAns_f1': 88.01726518927101, 'HasAns_total': 10570, 'best_exact': 80.26490066225166, 'best_exact_thresh': 0.0, 'best_f1': 88.01726518927101, 'best_f1_thresh': 0.0}",
"> Possibly but the difference of 7~8 points in f1 and EM scores is way above the usual variance due to random seeds.\r\n\r\nUnfortunately not. Have a look at these experiments by my friends over at NLP Town. They did sentiment analyses and ran the experiments ten times (each time with a different seed). https://www.linkedin.com/posts/nlp-town_sentimentanalysis-camembert-xlm-activity-6605379961111007232-KJy3 \r\n\r\nThat being said, I do think you are right, good catch! ",
"Closing this b/c #4245 was merged\r\n\r\n(we still need to investigate why the lowercasing is not properly populated by the model's config)\r\n"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
The example script for SQuAD question answering (`examples/question-answering/run-squad.py`) fails to produce the correct results as claimed in the tutorial.
The correct performance is around f1 = 88.52, exact_match = 81.22 on SQuAD v1.1, but the script produces f1 = 81.97 and exact match = 73.80 instead.
## To reproduce
Steps to reproduce the behavior:
1. Install with the latest commit (a34a989)
2. Download the SQuAD v1.1 dataset.
3. Run `examples/question-answering/run-squad.py`. with the exact same arguments as seen in the tutorial.
```
export SQUAD_DIR=/path/to/SQUAD
python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--do_train \
--do_eval \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
Following is the final result.
05/24/2020 16:10:09 - INFO - __main__ - ***** Running evaluation *****
05/24/2020 16:10:09 - INFO - __main__ - Num examples = 10789
05/24/2020 16:10:09 - INFO - __main__ - Batch size = 8
Evaluating: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 1349/1349 [01:31<00:00, 14.81it/s]
05/24/2020 16:11:41 - INFO - __main__ - Evaluation done in total 91.079697 secs (0.008442 sec per example)
05/24/2020 16:11:41 - INFO - transformers.data.metrics.squad_metrics - Writing predictions to: out-noamp/predictions_.json
05/24/2020 16:11:41 - INFO - transformers.data.metrics.squad_metrics - Writing nbest to: out-noamp/nbest_predictions_.json
05/24/2020 16:12:09 - INFO - __main__ - Results: {'exact': 73.80321665089878, 'f1': 81.96651715123286, 'total': 10570, 'HasAns_exact': 73.80321665089878, 'HasAns_f1': 81.96651715123286, 'HasAns_total': 10570, 'best_exact': 73.80321665089878, 'best_exact_thresh': 0.0, 'best_f1': 81.96651715123286, 'best_f1_thresh': 0.0}
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The script should produce f1 = 88.52, exact_match = 81.22.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.10.0
- Platform: Linux-4.15.0-99-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4549/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4548/comments | https://api.github.com/repos/huggingface/transformers/issues/4548/events | https://github.com/huggingface/transformers/pull/4548 | 623,815,835 | MDExOlB1bGxSZXF1ZXN0NDIyMzc4NDM1 | 4,548 | [WIP] Replace instances of `config.output_hidden_states` with function argument `output_hidden_states` in all possible models. | {
"login": "drjosephliu",
"id": 22230085,
"node_id": "MDQ6VXNlcjIyMjMwMDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/22230085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drjosephliu",
"html_url": "https://github.com/drjosephliu",
"followers_url": "https://api.github.com/users/drjosephliu/followers",
"following_url": "https://api.github.com/users/drjosephliu/following{/other_user}",
"gists_url": "https://api.github.com/users/drjosephliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drjosephliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drjosephliu/subscriptions",
"organizations_url": "https://api.github.com/users/drjosephliu/orgs",
"repos_url": "https://api.github.com/users/drjosephliu/repos",
"events_url": "https://api.github.com/users/drjosephliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/drjosephliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @drjosephliu - thanks so much for opening this PR! \r\nSorry for being lazy here - could you check out the comments I added in PR #4538 - I think they apply 1-to-1 the same way here.",
"No problem. I'm a bit busy this week, but will aim to get it done by the end of the week.",
"> No problem. I'm a bit busy this week, but will aim to get it done by the end of the week.\r\n\r\nSure, take your time :-) ",
"I'm just in the middle of fixing up the tests and I've noticed that the `ReformerModel` has a different method signature than other models because it takes in `do_output_hidden_states`:\r\n\r\n```\r\n def forward(\r\n self,\r\n input_ids=None,\r\n attention_mask=None,\r\n position_ids=None,\r\n head_mask=None,\r\n inputs_embeds=None,\r\n num_hashes=None,\r\n do_output_hidden_states=False,\r\n do_output_attentions=False,\r\n ):\r\n```\r\n\r\nShould I be converting it to `output_hidden_states` here?",
"> I'm just in the middle of fixing up the tests and I've noticed that the `ReformerModel` has a different method signature than other models because it takes in `do_output_hidden_states`:\r\n> \r\n> ```\r\n> def forward(\r\n> self,\r\n> input_ids=None,\r\n> attention_mask=None,\r\n> position_ids=None,\r\n> head_mask=None,\r\n> inputs_embeds=None,\r\n> num_hashes=None,\r\n> do_output_hidden_states=False,\r\n> do_output_attentions=False,\r\n> ):\r\n> ```\r\n> \r\n> Should I be converting it to `output_hidden_states` here?\r\n\r\nyes please!",
"Before starting on the TF implementation, it might be a good idea how it is handled in `modeling_tf_bert.py` of PR: #4538 :-) ",
"So all pytorch tests are passing, but I still haven't really figured out the TF ones. I copied what you did for `output_attention_heads` applied to `output_hidden_states` to TF Bert almost verbatim, but I'm still getting some failing tests",
"Hey @drjosephliu,\r\n\r\nThis looks great already :-) We will probably have a lot of conflicts with master once #4538 is merged (I should have thought about this before ... ). Would it be ok for you to wait a couple of days on this PR until we merge #4538. Then I can rebase this PR to master and it will be easier to work from then on :-) \r\n\r\nCan you do the following change to the branch `hidden_states` of your fork, so that I can commit directly to your branch? :-)\r\n\r\nhttps://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork\r\n\r\nIf this doesn't work, maybe just send me a collaboration invite to @patrickvonplaten :-)\r\n\r\nLooking forward to merge this soon :-) ",
"Hey, the checkbox \"Allow edits by maintainer\" is already checked - I guess I should leave it checked right?",
"Hey @drjosephliu,\r\n\r\nI tried to rebase to main, but it's just not reasonable. We have to many merge conflicts with `output_attentions` (in every file in every forward function) and not just for one commit, but for 7 commits :-/ At this point it would be much faster to just open a new PR. \r\n\r\nI should have seen this coming, so that you guys would not have started at the same time :-/ \r\nSuper sorry about that! \r\n\r\nThere are two solutions:\r\n1) You can open a new PR and sadly starting from scratch (Recommended):\r\nIf you look at the merged PR here: https://github.com/huggingface/transformers/pull/4538, you can see that we actually decided to keep `config.output_attentions` for now and only overwrite it with the arguments of the forward function. This means that the high-level forward functions all have the argument `output_hidden_states = None`. \r\n\r\n2) You can try to go through the whole rebase yourself (Not recommended):\r\n- In your repo if you run `git rebase main/master` from this branch, you see a bunch of merge conflicts arising (and that's just for the first commit of this branch). You could try to solve them correctly one-by-one, but you have to be careful to not introduce new bugs here. So I strongly advise against this.\r\n\r\n3) You don't feel like doing the same work again (Very much understandable :D). I would totally understand it, if you don't want to do the same work again. In this case I would re-open the issue to the community or do it myself. I would totally understand this - we have other \"good first issue\" tags.\r\n\r\nI hope, you give it a try with 1) :-) Very sorry about the merge conflicts again - let me know what you think",
"Not a problem. I'm quite familiar with the codebase now so it shouldn't take too long.",
"Saw your new PR @drjosephliu ! Thanks a lot - it's great that you tackle this :-) I will take a look tomorrow at the new PR :-) "
] | 1,590 | 1,592 | 1,592 | CONTRIBUTOR | null | Attempts to close #3879 by refactoring `config.output_hidden_states` as an argument,`output_hidden_states`, to functions `forward()` and `call()`. Affects all PT and TF models that output hidden states.
Currently it is failing the following tests: `run_tests_tf`, `run_tests_torch` and `run_tests_torch_and_tf` because they are still using `config.output_hidden_states`. Please advise on how I should go about testing this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4548/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4548",
"html_url": "https://github.com/huggingface/transformers/pull/4548",
"diff_url": "https://github.com/huggingface/transformers/pull/4548.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4548.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4547/comments | https://api.github.com/repos/huggingface/transformers/issues/4547/events | https://github.com/huggingface/transformers/pull/4547 | 623,804,462 | MDExOlB1bGxSZXF1ZXN0NDIyMzcwNDA0 | 4,547 | LongformerTokenizerFast | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=h1) Report\n> Merging [#4547](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4547 +/- ##\n=======================================\n Coverage 77.87% 77.87% \n=======================================\n Files 123 123 \n Lines 20566 20569 +3 \n=======================================\n+ Hits 16016 16019 +3 \n Misses 4550 4550 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4547/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.13% <100.00%> (ø)` | |\n| [src/transformers/tokenization\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4547/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbG9uZ2Zvcm1lci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4547/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=footer). Last update [a34a989...453d496](https://codecov.io/gh/huggingface/transformers/pull/4547?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM, but adding @mfuntowicz here, since I'm not very familiar with the FastTokenizers. @mfuntowicz - do we also have to apply changes on the Rust side for this? ",
"LGTM if the tokenizer doesn't have any different pre-processing / post-processing than the current Roberta Tokenizer 👍 ",
"Great I think it's good to merge then :-) "
] | 1,590 | 1,590 | 1,590 | MEMBER | null | This PR adds `LongformerTokenizerFast` by sub-classing `RobertaTokenizerFast`.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4547/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4547",
"html_url": "https://github.com/huggingface/transformers/pull/4547",
"diff_url": "https://github.com/huggingface/transformers/pull/4547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4547.patch",
"merged_at": 1590437036000
} |
https://api.github.com/repos/huggingface/transformers/issues/4546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4546/comments | https://api.github.com/repos/huggingface/transformers/issues/4546/events | https://github.com/huggingface/transformers/pull/4546 | 623,803,732 | MDExOlB1bGxSZXF1ZXN0NDIyMzY5ODUz | 4,546 | Fix two bugs on MNLI dataset and SST-2 respectively. | {
"login": "stdcoutzyx",
"id": 1142862,
"node_id": "MDQ6VXNlcjExNDI4NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1142862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stdcoutzyx",
"html_url": "https://github.com/stdcoutzyx",
"followers_url": "https://api.github.com/users/stdcoutzyx/followers",
"following_url": "https://api.github.com/users/stdcoutzyx/following{/other_user}",
"gists_url": "https://api.github.com/users/stdcoutzyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stdcoutzyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stdcoutzyx/subscriptions",
"organizations_url": "https://api.github.com/users/stdcoutzyx/orgs",
"repos_url": "https://api.github.com/users/stdcoutzyx/repos",
"events_url": "https://api.github.com/users/stdcoutzyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/stdcoutzyx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=h1) Report\n> Merging [#4546](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `42.85%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4546 +/- ##\n==========================================\n- Coverage 77.87% 77.85% -0.02% \n==========================================\n Files 123 123 \n Lines 20566 20568 +2 \n==========================================\n- Hits 16016 16014 -2 \n- Misses 4550 4554 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.44% <0.00%> (-0.19%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.36% <60.00%> (+0.20%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4546/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=footer). Last update [a34a989...6e60ed9](https://codecov.io/gh/huggingface/transformers/pull/4546?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | The text index of test data of SST-2 are 1 rather than 0.
The label of MNLI task has a tricky swap on MNLI dataset, which should also be involved in get_labels() of dataset for correctness. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4546/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4546",
"html_url": "https://github.com/huggingface/transformers/pull/4546",
"diff_url": "https://github.com/huggingface/transformers/pull/4546.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4546.patch",
"merged_at": 1590765145000
} |
https://api.github.com/repos/huggingface/transformers/issues/4545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4545/comments | https://api.github.com/repos/huggingface/transformers/issues/4545/events | https://github.com/huggingface/transformers/issues/4545 | 623,800,183 | MDU6SXNzdWU2MjM4MDAxODM= | 4,545 | pass lowercase to fast tokenizer | {
"login": "boy2000-007man",
"id": 4197489,
"node_id": "MDQ6VXNlcjQxOTc0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boy2000-007man",
"html_url": "https://github.com/boy2000-007man",
"followers_url": "https://api.github.com/users/boy2000-007man/followers",
"following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}",
"gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions",
"organizations_url": "https://api.github.com/users/boy2000-007man/orgs",
"repos_url": "https://api.github.com/users/boy2000-007man/repos",
"events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}",
"received_events_url": "https://api.github.com/users/boy2000-007man/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/tokenization_gpt2.py#L338-L343
only passes four parameters without `lowercase` to
https://github.com/huggingface/tokenizers/blob/704cf3fdd2f607ead58a561b892b510b49c301db/bindings/python/tokenizers/implementations/byte_level_bpe.py#L15 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4545/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4544/comments | https://api.github.com/repos/huggingface/transformers/issues/4544/events | https://github.com/huggingface/transformers/issues/4544 | 623,786,946 | MDU6SXNzdWU2MjM3ODY5NDY= | 4,544 | seems that run_ner.py cannot handle the situation when example length exceed max_length? | {
"login": "ZihaoZheng98",
"id": 22414831,
"node_id": "MDQ6VXNlcjIyNDE0ODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/22414831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihaoZheng98",
"html_url": "https://github.com/ZihaoZheng98",
"followers_url": "https://api.github.com/users/ZihaoZheng98/followers",
"following_url": "https://api.github.com/users/ZihaoZheng98/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihaoZheng98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZihaoZheng98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihaoZheng98/subscriptions",
"organizations_url": "https://api.github.com/users/ZihaoZheng98/orgs",
"repos_url": "https://api.github.com/users/ZihaoZheng98/repos",
"events_url": "https://api.github.com/users/ZihaoZheng98/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZihaoZheng98/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Do you mind filling-in the template? It will help us help you better!",
"> Hi! Do you mind filling-in the template? It will help us help you better!\r\n\r\nOK!I think this will be a great practice for me",
"Great, thanks"
] | 1,590 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4544/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4543/comments | https://api.github.com/repos/huggingface/transformers/issues/4543/events | https://github.com/huggingface/transformers/issues/4543 | 623,786,631 | MDU6SXNzdWU2MjM3ODY2MzE= | 4,543 | Automatically setting number of LSH buckets in Reformer may give invalid value | {
"login": "erickrf",
"id": 294483,
"node_id": "MDQ6VXNlcjI5NDQ4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erickrf",
"html_url": "https://github.com/erickrf",
"followers_url": "https://api.github.com/users/erickrf/followers",
"following_url": "https://api.github.com/users/erickrf/following{/other_user}",
"gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erickrf/subscriptions",
"organizations_url": "https://api.github.com/users/erickrf/orgs",
"repos_url": "https://api.github.com/users/erickrf/repos",
"events_url": "https://api.github.com/users/erickrf/events{/privacy}",
"received_events_url": "https://api.github.com/users/erickrf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2052904485,
"node_id": "MDU6TGFiZWwyMDUyOTA0NDg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/reformer",
"name": "reformer",
"color": "5319e7",
"default": false,
"description": "Everything related to the reformer model"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @erickrf, \r\n\r\nThanks a lot for catching the error. The linked PR should solve it by making sure `num_buckets` is always a power of 2."
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Reformer
transformers version: 2.9.0
When using a Reformer model such that `config.num_buckets` is set to `None` (as recommended), the model automatically determines the number of necessary buckets. However, depending on some hyperparameters, it may compute an odd number of buckets, which is invalid.
It happens at this line, because of the +1 in the second element:
https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_reformer.py#L541
This triggers the assertion in `_hash_vectors`: https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_reformer.py#L454
I think a simple fix is just to check if the number is odd, and add one in that case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4543/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4542/comments | https://api.github.com/repos/huggingface/transformers/issues/4542/events | https://github.com/huggingface/transformers/issues/4542 | 623,785,492 | MDU6SXNzdWU2MjM3ODU0OTI= | 4,542 | RuntimeError: The size of tensor a (1025) must match the size of tensor b (1024) at non-singleton dimension 3 | {
"login": "rautnikita77",
"id": 48254334,
"node_id": "MDQ6VXNlcjQ4MjU0MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/48254334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rautnikita77",
"html_url": "https://github.com/rautnikita77",
"followers_url": "https://api.github.com/users/rautnikita77/followers",
"following_url": "https://api.github.com/users/rautnikita77/following{/other_user}",
"gists_url": "https://api.github.com/users/rautnikita77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rautnikita77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rautnikita77/subscriptions",
"organizations_url": "https://api.github.com/users/rautnikita77/orgs",
"repos_url": "https://api.github.com/users/rautnikita77/repos",
"events_url": "https://api.github.com/users/rautnikita77/events{/privacy}",
"received_events_url": "https://api.github.com/users/rautnikita77/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Length of 2000 is too long for GPT2, this won't be possible. \r\nYou can do two things here:\r\n\r\n1) You chunk your generation which means that you first produce a length of up to 1000 and then use a bit of that (100 or so tokens) as context to generate the next 900 tokens and the same again until you hit 2000.\r\n\r\n2) You can use this Reformer model: https://huggingface.co/google/reformer-enwik8 which can handle sequences up to 65000. Currently generation with Reformer is painfully slow though :-/ This should be improved in the coming weeks :-) "
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): gpt2
Language I am using the model on (English, Chinese ...): english
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I want to generate samples with length = 2000
## To reproduce
Steps to reproduce the behavior:
1. run the command
python run_generation.py --model_type=gpt2 --model_name_or_path=<output_dir_of_finetuned_model> --length=2000 --num_return_sequences=10 --stop_token='<|endoftext|>
'
## Expected behavior
The error that I am getting is
w = torch.where(mask.bool(), w, self.masked_bias.to(w.dtype))
RuntimeError: The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Python 3.8
- Python version:
- PyTorch version (GPU?): 10.1
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4542/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4541/comments | https://api.github.com/repos/huggingface/transformers/issues/4541/events | https://github.com/huggingface/transformers/issues/4541 | 623,785,341 | MDU6SXNzdWU2MjM3ODUzNDE= | 4,541 | 'use_fast=True' results in 'TypeError' when trying to save tokenizer via AutoTokenizer | {
"login": "lingdoc",
"id": 15827864,
"node_id": "MDQ6VXNlcjE1ODI3ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/15827864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lingdoc",
"html_url": "https://github.com/lingdoc",
"followers_url": "https://api.github.com/users/lingdoc/followers",
"following_url": "https://api.github.com/users/lingdoc/following{/other_user}",
"gists_url": "https://api.github.com/users/lingdoc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lingdoc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lingdoc/subscriptions",
"organizations_url": "https://api.github.com/users/lingdoc/orgs",
"repos_url": "https://api.github.com/users/lingdoc/repos",
"events_url": "https://api.github.com/users/lingdoc/events{/privacy}",
"received_events_url": "https://api.github.com/users/lingdoc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @lingdoc, \r\n\r\nThanks for reporting this. Unfortunately, I'm not able to reproduce currently ... Loading, then training and finally saving works as espected on my side, with various tokenizers.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)\r\n\r\n# Some training ...\r\n\r\ntokenizer.save_pretrained(\"test_bert_tokenizer\")\r\n('test_bert_tokenizer\\\\vocab.txt',\r\n 'test_bert_tokenizer\\\\special_tokens_map.json',\r\n 'test_bert_tokenizer\\\\added_tokens.json')\r\n```\r\n\r\nCan you give the path you're trying to save ? Just to make sure we're ending having a `None` somewhere in the `save_pretrained` that would explain the `TypeError` raised.\r\n\r\nThanks!",
"Hm, this is really strange. I can't reproduce it either. Maybe something was wonky with my virtualenv - now it works fine! Next time I'll try a restart & run before I post.",
"Oops, I closed it too soon. I'm still getting the issue. Models I have tried:\r\n\r\n`bert-base-uncased`\r\n`distibert-base-uncased`\r\n`google/electra-small-discriminator`",
"The path I am trying to save to is `\"output/model_out\"` - but it's generated using `Path()`, in case that makes a difference (not sure why it would make a difference for saving the `fast` tokenizer and not the regular one though).",
"Ok, that seems to be the issue after all - when I explicitly cast the `Path()`-generated path to `str`, it saves fine. I guess the regular tokenizer/save function does this somehow but the `fast` version doesn't..",
"Thanks for digging this further. \r\n\r\nI'll check what is the behaviour discrepancy between both version of the tokenizers when using `Path()` and I'll post here 👍 ",
"For what it is worth, I have this exact same issue with the `\"distilroberta-base\"` tokenizer when `use_fast=True`. Casting my `Path` object to a `str` solved the issue."
] | 1,590 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using: all/any
Language I am using the model on: English
The problem arises when using:
* [x] the official example scripts: `AutoTokenizer.from_pretrained([model], use_fast=True)`
After updating to Transformers v2.10.0, when setting `use_fast=True` as in `tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)`, when trying to save the model by using `tokenizer.save_pretrained(path)` I get the following error and the process quits:
```
File "../python3.6/site-packages/transformers/tokenization_utils.py", line 1117,
in save_pretrained
vocab_files = self.save_vocabulary(save_directory)
File "../python3.6/site-packages/transformers/tokenization_utils.py", line 2657,
in save_vocabulary
files = self._tokenizer.save(save_directory)
File "../python3.6/site-packages/tokenizers/implementations/base_tokenizer.py",
line 328, in save
return self._tokenizer.model.save(directory, name=name)
TypeError
```
When I omit the `use_fast=True` flag, the tokenizer saves fine.
The tasks I am working on is:
* [x] my own task or dataset: Text classification
## To reproduce
Steps to reproduce the behavior:
1. Upgrade to `transformers==2.10.0` (requires `tokenizers==0.7.0`)
2. Load a tokenizer using `AutoTokenizer.from_pretrained()` with flag `use_fast=True`
3. Train for one epoch on any dataset, then try to save the tokenizer.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The tokenizer/file should save into the chosen path, as it does with the regular tokenizer.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.10.0
- Platform: Linux-5.0.0-37-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.4.0+cu100 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4541/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4541/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4540/comments | https://api.github.com/repos/huggingface/transformers/issues/4540/events | https://github.com/huggingface/transformers/issues/4540 | 623,759,614 | MDU6SXNzdWU2MjM3NTk2MTQ= | 4,540 | InvalidArgumentError while using GRU layer in custom training loop | {
"login": "dimitreOliveira",
"id": 16668746,
"node_id": "MDQ6VXNlcjE2NjY4NzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/16668746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimitreOliveira",
"html_url": "https://github.com/dimitreOliveira",
"followers_url": "https://api.github.com/users/dimitreOliveira/followers",
"following_url": "https://api.github.com/users/dimitreOliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/dimitreOliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimitreOliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimitreOliveira/subscriptions",
"organizations_url": "https://api.github.com/users/dimitreOliveira/orgs",
"repos_url": "https://api.github.com/users/dimitreOliveira/repos",
"events_url": "https://api.github.com/users/dimitreOliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimitreOliveira/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"This is not enough information to help us. Can you post a minimal reproducible example, or at least how you construct the model. Also show where the error is triggered. (And just my two cents, but adding RNNs on top of transformer-based models seems redundant... But I guess you can try it out!)",
"Hi @BramVanroy , thanks for the tips, much appreciated, I was just trying different things to see how it performs, this is the model architecture:\r\n\r\n```\r\nmodule_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)\r\n\r\ndef model_fn(MAX_LEN):\r\n input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')\r\n attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')\r\n \r\n base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name=\"base_model\")\r\n last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})\r\n \r\n x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(last_hidden_state)\r\n x = layers.Dropout(.1)(x)\r\n \r\n x_start = layers.TimeDistributed(layers.Dense(1))(x)\r\n x_start = layers.Flatten()(x_start)\r\n y_start = layers.Activation('softmax', name='y_start')(x_start)\r\n \r\n x_end = layers.TimeDistributed(layers.Dense(1))(x)\r\n x_end = layers.Flatten()(x_end)\r\n y_end = layers.Activation('softmax', name='y_end')(x_end)\r\n\r\n model = Model(inputs=[input_ids, attention_mask], outputs=[y_start, y_end])\r\n \r\n return model\r\n```\r\n\r\nAnd I was using it for a QA problem, so I also did:\r\n\r\n```\r\nmodel.compile(optimizer, loss={'y_start': losses.CategoricalCrossentropy(),\r\n 'y_end': losses.CategoricalCrossentropy()})\r\n```\r\n\r\nThe tricky part is that is jsut happens inside a custom training loop. Here are some of the code I have used.\r\n\r\n```\r\n# Step functions\r\n @tf.function\r\n def train_step(data_iter):\r\n def train_step_fn(x, y):\r\n with tf.GradientTape() as tape:\r\n probabilities = model(x, training=True)\r\n loss_start = loss_fn_start(y['y_start'], probabilities[0])\r\n loss_end = loss_fn_end(y['y_end'], probabilities[1])\r\n loss = tf.math.add(loss_start, loss_end)\r\n grads = tape.gradient(loss, model.trainable_variables)\r\n optimizer.apply_gradients(zip(grads, model.trainable_variables))\r\n # update metrics\r\n train_acc_start.update_state(y['y_start'], probabilities)\r\n train_acc_end.update_state(y['y_end'], probabilities)\r\n train_loss.update_state(loss)\r\n train_loss_start.update_state(loss_start)\r\n train_loss_end.update_state(loss_end)\r\n for _ in tf.range(step_size):\r\n strategy.experimental_run_v2(train_step_fn, next(data_iter))\r\n\r\nloss_fn_start = losses.categorical_crossentropy\r\nloss_fn_end = losses.categorical_crossentropy\r\n\r\ntrain_acc_start = metrics.CategoricalAccuracy()\r\ntrain_acc_end = metrics.CategoricalAccuracy()\r\ntrain_loss = metrics.Sum()\r\ntrain_loss_start = metrics.Sum()\r\ntrain_loss_end = metrics.Sum()\r\n```\r\n\r\nLet me know if you need any more information.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | CONTRIBUTOR | null | **System information**
- TensorFlow version `2.1.0`
- Python version: `3`
- GPU model and memory: `NVIDIA Tesla P100`
- CUDA Version: `10.1`
- Environment: This happens both on Kaggle and Colab
**Describe the current behavior**
I'm trying to train a Hugging face transformer model (roBERTa base) with a custom training loop, and got the error below:
```
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: InstantiateOptions.input_devices must have the same length as the number of arguments: input_devices length = 23 number of arguments = 24
[[{{node while/body/_1/StatefulPartitionedCall}}]]
(1) Invalid argument: InstantiateOptions.input_devices must have the same length as the number of arguments: input_devices length = 23 number of arguments = 24
[[{{node while/body/_1/StatefulPartitionedCall}}]]
[[while/body/_1/Adam/Cast_6/ReadVariableOp/_30]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_step_35635]
Function call stack:
train_step -> train_step
```
The thing is I can run the same model using `model.fit()` API, and this error only happens when I use a LSTM or GRU layer on top of the transformer
**Describe the expected behavior**
Training should go normal | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4540/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4539/comments | https://api.github.com/repos/huggingface/transformers/issues/4539/events | https://github.com/huggingface/transformers/pull/4539 | 623,743,812 | MDExOlB1bGxSZXF1ZXN0NDIyMzI4MDY0 | 4,539 | Add BART fine-tuning summarization community notebook | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=h1) Report\n> Merging [#4539](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4539 +/- ##\n==========================================\n- Coverage 77.87% 77.87% -0.01% \n==========================================\n Files 123 123 \n Lines 20566 20566 \n==========================================\n- Hits 16016 16015 -1 \n- Misses 4550 4551 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4539/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4539/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=footer). Last update [a34a989...e8c7951](https://codecov.io/gh/huggingface/transformers/pull/4539?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome thanks for the great notebook. Did a tiny change in the github link :-) "
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | An example of how to train, evaluate, deploy a BART summarization model with fastai using the blurr library | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4539/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4539",
"html_url": "https://github.com/huggingface/transformers/pull/4539",
"diff_url": "https://github.com/huggingface/transformers/pull/4539.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4539.patch",
"merged_at": 1590504222000
} |
https://api.github.com/repos/huggingface/transformers/issues/4538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4538/comments | https://api.github.com/repos/huggingface/transformers/issues/4538/events | https://github.com/huggingface/transformers/pull/4538 | 623,708,960 | MDExOlB1bGxSZXF1ZXN0NDIyMzA1NzA3 | 4,538 | [All models] Extend config.output_attentions with output_attentions function arguments | {
"login": "bharatr21",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bharatr21",
"html_url": "https://github.com/bharatr21",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Also removed the tests for `output_attentions` since all of them were fetching values from the config",
"@Bharat123rox - that's awesome, thanks a lot! It's a lot of manual work, but it will be super useful once it's merged :-)\r\n\r\nI added a bunch of comments - let me know if something is unclear.\r\n\r\nRegarding the tests, let's try to first make all torch tests pass and then check the TF tests. \r\n\r\nRegarding the torch tests:\r\n\r\nMost tests that fail are those where you removed `config.output_attentions` in the test, but didn't set `output_attentions=True` for the forward call. These tests previously were outputting the attentions but don't do this anymore. You should fix these tests if you set `output_attentions=True` in the forward pass. \r\n\r\nLooking forward to have this merged soon :-) Let me know if something is unclear!",
" Now, most of the tests are giving new `AssertionError`",
"OK! Let's maybe try first to fix all the test `test_attention_outputs` tests. You can run this test for a specific model using the following command: \r\n```\r\npytest tests/test_modeling_openai.py::OpenAIGPTModelTest::test_attention_outputs\r\n```\r\nI fixed this test as an example for `openai` on this commit: https://github.com/huggingface/transformers/pull/4597/commits/e8efd72fce1be304043863cbab4cd7a61a39e434\r\n\r\nIt's gonna be a longer process to fix all tests. Let's try to start with the `test_attention_outputs` tests for all PyTorch models :-) \r\n\r\nBtw, can you do the following change to the branch `outputattentions` of your fork, so that I can commit directly to your branch? :-) \r\n\r\nhttps://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork",
"https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork was not clear as I've already checked the \"Allow edits by maintainers\" option, however I have sent a collaboration invite to you @patrickvonplaten which I think should give enough permissions",
"@patrickvonplaten @thomwolf please help me in fixing the remaining PyTorch and TensorFlow test failures, they are of various kinds, mostly `AssertionErrors`",
"This is really great work! I will take a look at the failing tests :-) It would be great if you can rename some of the variables as mentioned above.",
"Ok great, I can take a look into the remaining tests now :-) ",
"> Ok great, I can take a look into the remaining tests now :-)\r\n\r\nYes, please do, thank you! there are only 3 Assertion failures in Torch and hopefully all failures in TF are also similar 🤞 ",
"Hey @Bharat123rox,\r\n\r\nI think from the PyTorch side we are ready now :-) \r\nRegarding the TF side, my last commit shows how to implement it for TF Models - could you give it a try for the other models? You always have to pay attention to points 1) - 3) as mentioned above to make sure that the TF Models can be trained, complised and serialized with keras.\r\n\r\nBe careful to make a `git fetch` and `git pull` on your branch now before continuing to work since I added two commits. \r\n\r\nLet me know if you have any questions! :-) \r\nReally great work so far, I think we are almost finished :-) ",
"@patrickvonplaten Most of the TF tests are fixed, and the remaining seem to be different `AssertionErrors`, please help with the remaining TF test failures",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=h1) Report\n> Merging [#4538](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c58e6c129a153ca1a5021e5d7e642d00bf011e20&el=desc) will **increase** coverage by `1.39%`.\n> The diff coverage is `91.38%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4538 +/- ##\n==========================================\n+ Coverage 74.52% 75.91% +1.39% \n==========================================\n Files 128 128 \n Lines 21497 21515 +18 \n==========================================\n+ Hits 16021 16334 +313 \n+ Misses 5476 5181 -295 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <ø> (ø)` | |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.73% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.65% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.74% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `75.85% <73.33%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.33% <75.00%> (-0.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `80.30% <76.19%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.00% <80.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `93.71% <86.66%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.57% <88.88%> (-0.31%)` | :arrow_down: |\n| ... and [33 more](https://codecov.io/gh/huggingface/transformers/pull/4538/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=footer). Last update [c58e6c1...b541a08](https://codecov.io/gh/huggingface/transformers/pull/4538?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I got an email with a comment from @patrickvonplaten (which I can't find) expressing the opinion that \r\n>IMO use_cache, output_attentions and output_hidden_states all are not really parameters of the model or its decoding strategy, but switches for all (or some for use_cache)models that can be turned on and off, but essentially don't influence the model's output logits. In contrast the other config attributes (\"max_length\", \"do_sample\", ...) do influence the output of the model and therefore should stay in the config. Therefore I would be ok with removing use_cache, output_attentions and output_hidden_states completely from the config.\r\n\r\nAnd I completely agree with that conclusion! We should make sure to highlight it in release notes.",
"> I got an email with a comment from @patrickvonplaten (which I can't find) expressing the opinion that\r\n> \r\n> > IMO use_cache, output_attentions and output_hidden_states all are not really parameters of the model or its decoding strategy, but switches for all (or some for use_cache)models that can be turned on and off, but essentially don't influence the model's output logits. In contrast the other config attributes (\"max_length\", \"do_sample\", ...) do influence the output of the model and therefore should stay in the config. Therefore I would be ok with removing use_cache, output_attentions and output_hidden_states completely from the config.\r\n> \r\n> And I completely agree with that conclusion! We should make sure to highlight it in release notes.\r\n\r\n@sshleifer \r\nI just deleted this comment :D I rethought this a bit. I think the better solution is what we have now: Have `output_attentions` in both the config and as a forward argument. This way we can still use the keras serialize function, don't break backward compatibility and have the same logic as we do in the generate() method. ",
"@Bharat123rox - thanks a million for your work here! It was a lot of manual work in a lot of files! This PR is VERY useful for the library! "
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | Attempts to close #3880 by refactoring `config.output_attentions` => `output_attentions` in `forward()`/`call()` functions
**UPDATE** from Patrick
@thomwolf @LysandreJik
This PR adds the argument `output_attentions` to every forward function for more flexibility. This PR makes it possible to easily switch the output attention on/off without having to instantiate a new model every time.
The logic is the following. If 'output_attentions` is configured in the `forwrd()` fn => use that. If not => use `config.output_attentions`.
IMPORTANT: This PR does **not** change backward compatibility since output_attentions can still be configured using the config.
TESTS:
An additional test is added to the `test_output_attentions()` common test.
FUTURE PR:
- [ ] Clean the documentation. We still need to add this argument to the docs of all models and make sure the docs are clean. Lemme know @Bharat123rox if you want to tackle this in a new PR or if I should do it :-) It's not the most interesting PR, so I fully understand if you don't feel like doing it anymore ;-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4538/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4538",
"html_url": "https://github.com/huggingface/transformers/pull/4538",
"diff_url": "https://github.com/huggingface/transformers/pull/4538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4538.patch",
"merged_at": 1591738747000
} |
https://api.github.com/repos/huggingface/transformers/issues/4537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4537/comments | https://api.github.com/repos/huggingface/transformers/issues/4537/events | https://github.com/huggingface/transformers/pull/4537 | 623,708,016 | MDExOlB1bGxSZXF1ZXN0NDIyMzA1MDc1 | 4,537 | DOC: Make `import torch` explicit for "Quick tour TF 2.0" example | {
"login": "petervandenabeele",
"id": 55656,
"node_id": "MDQ6VXNlcjU1NjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/55656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petervandenabeele",
"html_url": "https://github.com/petervandenabeele",
"followers_url": "https://api.github.com/users/petervandenabeele/followers",
"following_url": "https://api.github.com/users/petervandenabeele/following{/other_user}",
"gists_url": "https://api.github.com/users/petervandenabeele/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petervandenabeele/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petervandenabeele/subscriptions",
"organizations_url": "https://api.github.com/users/petervandenabeele/orgs",
"repos_url": "https://api.github.com/users/petervandenabeele/repos",
"events_url": "https://api.github.com/users/petervandenabeele/events{/privacy}",
"received_events_url": "https://api.github.com/users/petervandenabeele/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=h1) Report\n> Merging [#4537](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34a9896ac2a4a33ff9cd805c76eed914c8d8965&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4537 +/- ##\n==========================================\n- Coverage 77.87% 77.86% -0.01% \n==========================================\n Files 123 123 \n Lines 20566 20566 \n==========================================\n- Hits 16016 16014 -2 \n- Misses 4550 4552 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4537/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4537/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.41% <0.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4537/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=footer). Last update [a34a989...fc189e7](https://codecov.io/gh/huggingface/transformers/pull/4537?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | I tried to run the Quick Tour example with only the `tensorflow` and the `transformers` imports (as shown literally in the code snippet), and _obviously_ (in hint sight) this fails with:
```
pytorch_model = BertForSequenceClassification.from_pretrained('./models/', from_tf=True)
NameError: name 'BertForSequenceClassification' is not defined
```
The trivial fix was to add `import torch` to the snippet.
When running all examples in sequence, this is not an issue, but I
was running the `tensorflow 2` example in a separate project.
Adding this line may avoid this confusion for the next newcomer :-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4537/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4537",
"html_url": "https://github.com/huggingface/transformers/pull/4537",
"diff_url": "https://github.com/huggingface/transformers/pull/4537.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4537.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4536/comments | https://api.github.com/repos/huggingface/transformers/issues/4536/events | https://github.com/huggingface/transformers/issues/4536 | 623,624,571 | MDU6SXNzdWU2MjM2MjQ1NzE= | 4,536 | resize_token_embeddings not implemented for TFGPT2LMHeadModel | {
"login": "virginianegri",
"id": 15633036,
"node_id": "MDQ6VXNlcjE1NjMzMDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/15633036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/virginianegri",
"html_url": "https://github.com/virginianegri",
"followers_url": "https://api.github.com/users/virginianegri/followers",
"following_url": "https://api.github.com/users/virginianegri/following{/other_user}",
"gists_url": "https://api.github.com/users/virginianegri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/virginianegri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/virginianegri/subscriptions",
"organizations_url": "https://api.github.com/users/virginianegri/orgs",
"repos_url": "https://api.github.com/users/virginianegri/repos",
"events_url": "https://api.github.com/users/virginianegri/events{/privacy}",
"received_events_url": "https://api.github.com/users/virginianegri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @virginianegri, \r\n\r\nCould you specific for which model the method `resize_token_embeddings` does not work? \r\nCan you add a code snippet that reproduces the error?",
"The model is the TFGPT2LMHeadModel. This is my code:\r\n```\r\nfrom transformers import GPT2Tokenizer, TFGPT2LMHeadModel \r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nspecial_tokens_dict = {'cls_token': '<CLS>'}\r\nnum_added_toks = tokenizer.add_special_tokens(special_tokens_dict)\r\n\r\ngpt2 = TFGPT2LMHeadModel.from_pretrained('gpt2')\r\ngpt2.resize_token_embeddings(len(tokenizer)) \r\n```\r\n\r\nWhen running the resize_token_embeddings method it launches a NotImplementedError",
"Yes we should implement this soon!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | I am using a TFGPT2LMHeadModel pretrained model and new special tokens to the gpt tokenizer.
However, the method resize_token_embeddings is not implemented in all gpt2 tf models.
Will it be added? Or are there any workarounds?
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4536/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4535/comments | https://api.github.com/repos/huggingface/transformers/issues/4535/events | https://github.com/huggingface/transformers/issues/4535 | 623,605,657 | MDU6SXNzdWU2MjM2MDU2NTc= | 4,535 | How to speed up inference step in BertQuestionAnswering? | {
"login": "minhnguyenth",
"id": 4896054,
"node_id": "MDQ6VXNlcjQ4OTYwNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4896054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minhnguyenth",
"html_url": "https://github.com/minhnguyenth",
"followers_url": "https://api.github.com/users/minhnguyenth/followers",
"following_url": "https://api.github.com/users/minhnguyenth/following{/other_user}",
"gists_url": "https://api.github.com/users/minhnguyenth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minhnguyenth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minhnguyenth/subscriptions",
"organizations_url": "https://api.github.com/users/minhnguyenth/orgs",
"repos_url": "https://api.github.com/users/minhnguyenth/repos",
"events_url": "https://api.github.com/users/minhnguyenth/events{/privacy}",
"received_events_url": "https://api.github.com/users/minhnguyenth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
},
{
"id": 1838876023,
"node_id": "MDU6TGFiZWwxODM4ODc2MDIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation",
"name": "Distillation",
"color": "d4c5f9",
"default": false,
"description": "Related to model distillation"
}
] | closed | false | null | [] | [
"To improve inference speed you can use ONNX (also see here: https://github.com/huggingface/transformers/issues/260). In addition, you can opt for a distilled model rather than the full model. ",
"Hi! I've used this:\r\n\r\n\r\ndistilbert-base-uncased-distilled-squad | \r\n-- | --\r\nor \r\n\r\ndistilbert-base-cased-distilled-squad | \r\n-- | --\r\n\r\nIt improved quite a bit! \r\n",
"@ZordoC how much speed improvement did you observe?"
] | 1,590 | 1,590 | 1,590 | NONE | null | I'm working on a QA system which uses a pre-trained BertQA model. At this point, even if I use GPU, the step generating start_scores and end_scores for a set of 20 candidate passages till takes a few seconds which is the bottleneck of my application.
I just wonder do we have any strategies/tricks to speed up this step? So far, it seems using multiple GPUs at the inference step does not help at all. Any advice greatly appreciated! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4535/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4534/comments | https://api.github.com/repos/huggingface/transformers/issues/4534/events | https://github.com/huggingface/transformers/pull/4534 | 623,590,056 | MDExOlB1bGxSZXF1ZXN0NDIyMjI3OTkx | 4,534 | DOC: Fix typos in modeling_auto | {
"login": "bharatr21",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bharatr21",
"html_url": "https://github.com/bharatr21",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=h1) Report\n> Merging [#4534](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e19b978151419fe0756ba852b145fccfc96dbeb4&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4534 +/- ##\n==========================================\n- Coverage 77.86% 77.86% -0.01% \n==========================================\n Files 123 123 \n Lines 20566 20566 \n==========================================\n- Hits 16014 16013 -1 \n- Misses 4552 4553 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4534/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.57% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4534/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=footer). Last update [e19b978...847fb92](https://codecov.io/gh/huggingface/transformers/pull/4534?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | Fix typo of word `dictionnary` => `dictionary` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4534/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4534",
"html_url": "https://github.com/huggingface/transformers/pull/4534",
"diff_url": "https://github.com/huggingface/transformers/pull/4534.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4534.patch",
"merged_at": 1590241260000
} |
https://api.github.com/repos/huggingface/transformers/issues/4533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4533/comments | https://api.github.com/repos/huggingface/transformers/issues/4533/events | https://github.com/huggingface/transformers/pull/4533 | 623,580,180 | MDExOlB1bGxSZXF1ZXN0NDIyMjIwNjQ4 | 4,533 | Add nn.Module as superclass | {
"login": "shoarora",
"id": 16643856,
"node_id": "MDQ6VXNlcjE2NjQzODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/16643856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoarora",
"html_url": "https://github.com/shoarora",
"followers_url": "https://api.github.com/users/shoarora/followers",
"following_url": "https://api.github.com/users/shoarora/following{/other_user}",
"gists_url": "https://api.github.com/users/shoarora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shoarora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoarora/subscriptions",
"organizations_url": "https://api.github.com/users/shoarora/orgs",
"repos_url": "https://api.github.com/users/shoarora/repos",
"events_url": "https://api.github.com/users/shoarora/events{/privacy}",
"received_events_url": "https://api.github.com/users/shoarora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are there upstream issues? I can't see how my one-liner is breaking these tests",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=h1) Report\n> Merging [#4533](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e19b978151419fe0756ba852b145fccfc96dbeb4&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4533 +/- ##\n=======================================\n Coverage 77.86% 77.87% \n=======================================\n Files 123 123 \n Lines 20566 20566 \n=======================================\n+ Hits 16014 16015 +1 \n+ Misses 4552 4551 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/4533/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4533/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4533/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=footer). Last update [e19b978...42d544b](https://codecov.io/gh/huggingface/transformers/pull/4533?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Re-ran the tests and they pass, this was a transient (connectivity?) error.\r\n\r\nThis PR looks reasonable to me. I'll just cc @suvrat96 for information"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | Add nn.Module as superclass of `MMBTModel` to fix #4532 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4533/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4533",
"html_url": "https://github.com/huggingface/transformers/pull/4533",
"diff_url": "https://github.com/huggingface/transformers/pull/4533.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4533.patch",
"merged_at": 1590434973000
} |
https://api.github.com/repos/huggingface/transformers/issues/4532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4532/comments | https://api.github.com/repos/huggingface/transformers/issues/4532/events | https://github.com/huggingface/transformers/issues/4532 | 623,579,477 | MDU6SXNzdWU2MjM1Nzk0Nzc= | 4,532 | MMBT doesn't inherit from nn.Module | {
"login": "shoarora",
"id": 16643856,
"node_id": "MDQ6VXNlcjE2NjQzODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/16643856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoarora",
"html_url": "https://github.com/shoarora",
"followers_url": "https://api.github.com/users/shoarora/followers",
"following_url": "https://api.github.com/users/shoarora/following{/other_user}",
"gists_url": "https://api.github.com/users/shoarora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shoarora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoarora/subscriptions",
"organizations_url": "https://api.github.com/users/shoarora/orgs",
"repos_url": "https://api.github.com/users/shoarora/repos",
"events_url": "https://api.github.com/users/shoarora/events{/privacy}",
"received_events_url": "https://api.github.com/users/shoarora/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): MMBT
Language I am using the model on (English, Chinese ...): not related
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Minimal reproduction:
```python
from transformers import MMBTConfig, MMBTModel, AutoConfig, AutoModel
electra_config = AutoConfig.from_pretrained("google/electra-small-discriminator")
mmbt_config = MMBTConfig(electra_config)
model = AutoModel.from_config(electra_config)
mmbt = MMBTModel(mmbt_config, model, None)
mmbt()
```
output:
```
Traceback (most recent call last):
File "mmbt_debug.py", line 11, in <module>
mmbt()
TypeError: 'MMBTModel' object is not callable
```
You can see in the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_mmbt.py#L152) that it's currently only inheriting from `ModuleUtilsMixin`, but not `torch.nn.Module`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
We should be seeing a downstream error since I didn't pass in a real modal encoder or any input. It should at least call `forward()`
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.1 (also tried 2.10.0)
- Platform: Darwin-19.4.0-x86_64-i386-64bit
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no (doesn't matter)
- Using distributed or parallel set-up in script?: no (doesn't matter)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4532/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4531/comments | https://api.github.com/repos/huggingface/transformers/issues/4531/events | https://github.com/huggingface/transformers/pull/4531 | 623,538,577 | MDExOlB1bGxSZXF1ZXN0NDIyMTg2Nzgy | 4,531 | Fix add_special_tokens on fast tokenizers | {
"login": "n1t0",
"id": 1217986,
"node_id": "MDQ6VXNlcjEyMTc5ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1217986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n1t0",
"html_url": "https://github.com/n1t0",
"followers_url": "https://api.github.com/users/n1t0/followers",
"following_url": "https://api.github.com/users/n1t0/following{/other_user}",
"gists_url": "https://api.github.com/users/n1t0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n1t0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n1t0/subscriptions",
"organizations_url": "https://api.github.com/users/n1t0/orgs",
"repos_url": "https://api.github.com/users/n1t0/repos",
"events_url": "https://api.github.com/users/n1t0/events{/privacy}",
"received_events_url": "https://api.github.com/users/n1t0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=h1) Report\n> Merging [#4531](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e19b978151419fe0756ba852b145fccfc96dbeb4&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4531 +/- ##\n==========================================\n+ Coverage 77.86% 77.88% +0.01% \n==========================================\n Files 123 123 \n Lines 20566 20570 +4 \n==========================================\n+ Hits 16014 16020 +6 \n+ Misses 4552 4550 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.55% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4531/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=footer). Last update [e19b978...afcf0c9](https://codecov.io/gh/huggingface/transformers/pull/4531?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | MEMBER | null | Fix #4457
By using `flatten`, the following
```
dict_values(['[EOS]', '[BOS]'])
```
was being transformed into this:
```
['[', 'E', 'O', 'S', ']', '[', 'B', 'O', 'S', ']']
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4531/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4531",
"html_url": "https://github.com/huggingface/transformers/pull/4531",
"diff_url": "https://github.com/huggingface/transformers/pull/4531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4531.patch",
"merged_at": 1590677686000
} |
https://api.github.com/repos/huggingface/transformers/issues/4530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4530/comments | https://api.github.com/repos/huggingface/transformers/issues/4530/events | https://github.com/huggingface/transformers/pull/4530 | 623,509,197 | MDExOlB1bGxSZXF1ZXN0NDIyMTYzODMz | 4,530 | Tensorflow improvements | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=h1) Report\n> Merging [#4530](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d976ef262e0b2c52363d201b2e14e5ecc42abbb3&el=desc) will **increase** coverage by `0.38%`.\n> The diff coverage is `41.45%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4530 +/- ##\n==========================================\n+ Coverage 75.63% 76.01% +0.38% \n==========================================\n Files 128 128 \n Lines 20979 21417 +438 \n==========================================\n+ Hits 15867 16280 +413 \n- Misses 5112 5137 +25 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <ø> (ø)` | |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `51.16% <ø> (-4.16%)` | :arrow_down: |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.86% <17.94%> (+0.94%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `76.10% <27.47%> (-14.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `80.53% <27.50%> (-9.80%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `82.88% <32.00%> (-12.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `74.74% <34.21%> (-25.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `91.17% <38.70%> (-7.89%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.39% <45.45%> (-3.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <50.00%> (-1.60%)` | :arrow_down: |\n| ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/4530/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=footer). Last update [d976ef2...5b456e2](https://codecov.io/gh/huggingface/transformers/pull/4530?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Some commits are missing... I think it is due to the high number of error rate from Github.",
"Thanks @LysandreJik for your constructive comments!\r\n\r\nFor the second point, before to answer in order to be sure, you mean that it would be more convenient that the output of the `call(...)` methods in the TF tasks model returns the same tuple `(loss), logits, (hidden_states), (attentions)` than the `forward(...)` methods in PT tasks model? ",
"Yes, that's what I mean. I think having this to be the same as the PyTorch API would make sense. It wouldn't be a breaking change either, as it would require the `labels` to be passed to the model.\r\n\r\nI think doing this could still leverage Mixins, by calling a `self._compute_loss` or `self.compute_loss` if we want to expose this method as well. I have no strong opinion on that last item.",
"Ok, indeed makes sense and I don't think it is a problem to do that way, I will work on this today to see if there is any issue that would not allow us to do that.",
"I agree with @LysandreJik's 2nd point – maybe we can even take advantage of this to implement named tuples for TF models output, like @thomwolf and @patrickvonplaten intend to do for PyTorch (as it's going to be a breaking change in TF models anyways, maybe we can do this at the same time?)",
"Since my last commit, now the TF models return the loss such as the PT ones if the labels are given. \r\n\r\nAbout the named tuples, looks to be a good idea indeed, but I think we should implement this in another PR in order to release this in same time than for PT. No?",
"> About the named tuples [...] we should implement this in another PR in order to release this in same time than for PT. No?\r\n\r\nYes, makes sense!",
"Ok, looks good to me, I have tested the new models with different examples that use the trainer and they all work, tests looks to be ok as well except the quality one that I don't know how to fix :smile: ",
"A more general question regarding training in TensorFlow (I'm not super familiar with TF 2.0 training, so I'm asking primarily to learn a bit :-) ): \r\nI remember that when TF 2.0 was not out, most people used Keras to train a model with \r\n`model.fit(x_train, y_train)` => is this still the case?\r\nor are people more and more switching to the TF 2.0 training style as shown here: https://www.tensorflow.org/tutorials/quickstart/advanced and which basically consists of using \r\n`optimizer.apply_gradients(zip(gradients, model.trainable_variables))`. This is also what we do in the TF trainer right? \r\n\r\nWas it possible and recommended to train transformer models with keras' `model.train()` before TF Trainer and is it still possible now?",
"This is a good question! Short answer: yes it is still possible but witthout any gradient accumulation, that's mostly why the trainer uses the advanced training of TensorFlow.\r\n\r\nI'm currently preparing a next PR that will integrate the new `Model.train_step` feature added in [TF 2.2](https://github.com/tensorflow/tensorflow/releases/tag/v2.2.0). Basically this update allows you to create your own train step, and then integrate the missing gradient accumulation but this new PR will be only for TF >= 2.2.",
"@patrickvonplaten It was possible and we definitely aim to keep compatibility with keras' `fit` method. We don't have many tutorials that cover it, though, having some would probably make it easier for new users coming from Keras to use our lib.\r\n\r\n@julien-c, we've had the offline approval from @thomwolf, feel free to merge when you want. Glad to welcome this in the library!",
"Just tweaked the training_args.logging_dir to keep the same default as pytorch (I like that it creates a new subfolder each time you relaunch a training)\r\n\r\nGreat job @jplu, thank you 💪"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | Hello,
Here a quite big PR that propose the following updates:
- Loss computation is now attached to their respective class, such as PyTorch.
- Remove useless `mode` and `loss_name` parameters for the TF Trainer.
- Add missing task models to different Transformers
- Bugfix on T5 keras serialization + tests
- Add tests for TF Flaubert and XLM-Roberta
- Bugfix in TF Trainer for Tensorflow 2.2
Reviews are welcome :)
/cc @julien-c @LysandreJik @thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4530/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4530/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4530",
"html_url": "https://github.com/huggingface/transformers/pull/4530",
"diff_url": "https://github.com/huggingface/transformers/pull/4530.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4530.patch",
"merged_at": 1591314354000
} |
https://api.github.com/repos/huggingface/transformers/issues/4529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4529/comments | https://api.github.com/repos/huggingface/transformers/issues/4529/events | https://github.com/huggingface/transformers/issues/4529 | 623,508,427 | MDU6SXNzdWU2MjM1MDg0Mjc= | 4,529 | Minor correction in Roberta Model docs, Roberta doesn't use NSP | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"You're absolutely right. Feel free to submit a PR to rectify this!",
"> You're absolutely right. RoBERTa swaps NSP for sentence order prediction (SOP). \r\n\r\nI think SOP was introduced into Albert \r\n\r\n>To further improve the performance of ALBERT, we also introduce a self-supervised loss for\r\nsentence-order prediction (SOP). SOP primary focuses on inter-sentence coherence and is designed\r\nto address the ineffectiveness (Yang et al., 2019; Liu et al., 2019) of the next sentence prediction\r\n(NSP) loss proposed in the original BERT.\r\n\r\nhttps://arxiv.org/pdf/1909.11942.pdf\r\n\r\nSo it looks like the Albert document needs to be changed as well\r\n\r\nhttps://huggingface.co/transformers/model_doc/albert.html#transformers.AlbertModel.forward\r\n\r\nRoberta uses something called FULL-SENTENCES \r\n\r\n>FULL-SENTENCES: Each input is packed with\r\nfull sentences sampled contiguously from one\r\nor more documents, such that the total length is\r\nat most 512 tokens. Inputs may cross document\r\nboundaries. When we reach the end of one document, we begin sampling sentences from the\r\nnext document and add an extra separator token\r\nbetween documents. We remove the NSP loss.\r\n\r\nhttps://arxiv.org/pdf/1907.11692.pdf\r\n\r\nIt sound like NSP isn't replaced, the task/training objective is removed altogether. So would that mean that that the linear layer which processes the CLS token outputs is untrained? It sounds like it, but I am not 100% sure. This is the linear layer I am talking about\r\n\r\n>Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pre-training.\r\n\r\n- \r\n\r\n>Feel free to submit a PR to rectify this!\r\n\r\nSure would love to. Would need to 100% figure out if the aforementioned roberta tanh ff layer is trained, or if it's just random initialization. \r\n\r\nAre the docs on Github? I tried looking around, and found these\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/docs/source/model_doc/roberta.rst\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/docs/source/model_doc/albert.rst\r\n\r\nBut it doesn't seem to be the full documentation of the models.\r\n\r\nI also tried looking up \"next sentence prediction\" in the repo but only found comments for the model code, which I can also update in the PR. \r\n\r\n\r\n",
"I sent one of the authors an email asking about the layer, just wanted to be 100% sure before I make a PR. ",
"> I think SOP was introduced into Albert\r\n\r\nOof, sorry for the slip up. I've been working with different models these days so I sometimes mix 'em up.\r\n\r\nThe docs are generated from docstrings in the code. So you seem to be looking for this:\r\n\r\nhttps://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_tf_roberta.py#L194-L201\r\n\r\nand for Albert:\r\n\r\nhttps://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/src/transformers/modeling_tf_albert.py#L686-L692\r\n\r\nYou can check which weights are (not) loaded by setting the logger to level INFO. In such a case, you'll see a message of the layers whose layers were not loaded.\r\n\r\n```python\r\nfrom transformers import BertForNextSentencePrediction\r\n\r\nimport logging\r\n\r\nif __name__ == '__main__':\r\n logging.basicConfig(level=logging.INFO)\r\n model = BertForNextSentencePrediction.from_pretrained('bert-base-cased')\r\n```\r\n\r\nAs the last line in the log, you'll see:\r\n\r\n> Weights from pretrained model not used in BertForNextSentencePrediction: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']",
"Thanks, will use this info. For roberta it looks like all the weights are loaded; I didn't see a message about any weights not being loaded. I was expecting this since the architecture is the same, just the training is different. \r\n\r\nJust waiting to hear back if the tanh layer is untrained in roberta. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Ah, I messaged one of the authors but didn't hear anything back. But I'm pretty sure by now that there is no training of the pooler layer, so I'll start on an update. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Whoops, sort of fell off this. Will start looking into this soon. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,607 | 1,607 | CONTRIBUTOR | null |
In the roberta model docs
https://huggingface.co/transformers/model_doc/roberta.html
>pooler_output (tf.Tensor of shape (batch_size, hidden_size)):
Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during Bert pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence.
Roberta uses something else though
>FULL-SENTENCES: Each input is packed with
full sentences sampled contiguously from one
or more documents, such that the total length is
at most 512 tokens. Inputs may cross document
boundaries. When we reach the end of one document, we begin sampling sentences from the
next document and add an extra separator token
between documents. We remove the NSP loss.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4529/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4528/comments | https://api.github.com/repos/huggingface/transformers/issues/4528/events | https://github.com/huggingface/transformers/pull/4528 | 623,477,859 | MDExOlB1bGxSZXF1ZXN0NDIyMTQwMDM0 | 4,528 | Warn the user about max_len being on the path to be deprecated. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,590 | 1,590 | 1,590 | MEMBER | null | Makes it clear the parameter `model_max_length` is preferred over `max_len` by writting a warning to the logger.
https://github.com/huggingface/transformers/issues/4527 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4528/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4528",
"html_url": "https://github.com/huggingface/transformers/pull/4528",
"diff_url": "https://github.com/huggingface/transformers/pull/4528.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4528.patch",
"merged_at": 1590185311000
} |
https://api.github.com/repos/huggingface/transformers/issues/4527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4527/comments | https://api.github.com/repos/huggingface/transformers/issues/4527/events | https://github.com/huggingface/transformers/issues/4527 | 623,460,038 | MDU6SXNzdWU2MjM0NjAwMzg= | 4,527 | Tokenizers bug: version 2.10 doesn't honor `max_len` when instantiating a pretrained model | {
"login": "soldni",
"id": 3913506,
"node_id": "MDQ6VXNlcjM5MTM1MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3913506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soldni",
"html_url": "https://github.com/soldni",
"followers_url": "https://api.github.com/users/soldni/followers",
"following_url": "https://api.github.com/users/soldni/following{/other_user}",
"gists_url": "https://api.github.com/users/soldni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soldni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soldni/subscriptions",
"organizations_url": "https://api.github.com/users/soldni/orgs",
"repos_url": "https://api.github.com/users/soldni/repos",
"events_url": "https://api.github.com/users/soldni/events{/privacy}",
"received_events_url": "https://api.github.com/users/soldni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Additional info: I ran git-blame, and determined that the change was introduced in PR [#3706](https://github.com/huggingface/transformers/pull/3706). ",
"Hi @soldni, thanks for reporting the issue.\r\n\r\nThe behavior you mention can now be achieved through: \r\n\r\n```python\r\n>>> tok.encode('This is a sentence', pad_to_max_length=True, max_length=16)\r\n[0, 152, 16, 10, 3645, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\r\n```\r\n\r\nAlso, please note `RobertaTokenizer` now has it \"fast\" counterpart, `RobertaTokenizerFast` which is implemented in Rust and can greatly improve the performances of the tokenizer. API stays the same between both implementations.\r\n\r\nIf I'm not mistaken, the name was changed because it was misleading in the context of generation (i.e. `generate(...)`).\r\n\r\nMorgan",
"Hi @mfuntowicz!\r\n\r\nThank you for the quick response! The issue remains that the tokenizer fails to initialize properly without raising an error. I guess I don't understand why `max_len` is still supported in some situations, but not others. I would have been fine with an error being raised, but hunting for this issue took quite a bit of time. \r\n\r\n-Luca ",
"You're right about the conflict if both are provided. \r\n\r\nI've opened a PR to at least write a warning the `max_len` parameter is being deprecated and `model_max_length` is now preferred.",
"Awesome! In the meantime, I've updated my code as you recommended. \r\n\r\nThanks again for the super quick response on this. \r\n\r\n-Luca ",
"@soldni I've fixed the issue when both are provided in the same PR, it will be included in the next patch release.\r\n\r\nThanks for reporting! I'm closing, feel free to reopen if needed 👍 \r\nMorgan"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
Hello! I've just upgraded from Transformers 2.8 to Transformers 2.10, and noticed that parameter `max_len` is not properly honored when instantiating a pretrained model. For example, in Transformer 2.8.0, I was able to limit the length of a tokenized sequence as follows:
```python
import transformers
>>> tok = transformers.RobertaTokenizer.from_pretrained('roberta-base', max_len=16)
>>> tok.encode('This is a sentence', pad_to_max_length=True)
[0, 152, 16, 10, 3645, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
>>>print(tok.max_len)
16
```
However, on version 2.10, `max_len` is ignored when loading a pretrained tokenizer:
```python
import transformers
>>> tok = transformers.RobertaTokenizer.from_pretrained('roberta-base', max_len=16)
>>> tok.encode('This is a sentence', pad_to_max_length=True)
[0, 152, 16, 10, 3645, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...] # 512 tokens
>>>print(tok.max_len)
512
```
This bug can be temporary solved by using `model_max_length` instead of `max_len`, but it broke the all my scripts that relied on that attribute. It seems that this issue was introduced in a recent change in [`tokenization_utils.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py) (line 825):
```python
# For backward compatibility we fallback to set model_max_length from max_len if provided
model_max_length = model_max_length if model_max_length is not None else kwargs.pop("max_len", None)
```
This compatibility is not guaranteed if the pretrained model contains `model_max_length` among its parameters, but `max_len` is specified in `from_pretrained`.
Model I am using (Bert, XLNet ...): As far as I can tell, this affects all pretrained models. Observed on BERT, RoBERTa, and DistilBERT.
Language I am using the model on (English, Chinese ...): As far as I can tell, this affects all pretrained models. Observed on English.
The problem arises when using:
* [ ] the official example scripts
* [x] my own modified scripts: See above.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: It's a classification task.
## To reproduce
See above.
## Expected behavior
See above.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.10.0
- Platform: Linux-4.15.0-1060-aws-x86_64-with-debian-buster-sid
- Python version: 3.6.5
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes; 4 x Tesla V100
- Using distributed or parallel set-up in script?: parallel, but not relevant
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4527/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4526/comments | https://api.github.com/repos/huggingface/transformers/issues/4526/events | https://github.com/huggingface/transformers/pull/4526 | 623,427,535 | MDExOlB1bGxSZXF1ZXN0NDIyMTAwMzUy | 4,526 | link to paper was broken | {
"login": "ameasure",
"id": 571959,
"node_id": "MDQ6VXNlcjU3MTk1OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/571959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ameasure",
"html_url": "https://github.com/ameasure",
"followers_url": "https://api.github.com/users/ameasure/followers",
"following_url": "https://api.github.com/users/ameasure/following{/other_user}",
"gists_url": "https://api.github.com/users/ameasure/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ameasure/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ameasure/subscriptions",
"organizations_url": "https://api.github.com/users/ameasure/orgs",
"repos_url": "https://api.github.com/users/ameasure/repos",
"events_url": "https://api.github.com/users/ameasure/events{/privacy}",
"received_events_url": "https://api.github.com/users/ameasure/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | changed from https://https://arxiv.org/abs/2001.04451.pdf to https://arxiv.org/abs/2001.04451.pdf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4526/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4526",
"html_url": "https://github.com/huggingface/transformers/pull/4526",
"diff_url": "https://github.com/huggingface/transformers/pull/4526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4526.patch",
"merged_at": 1590175029000
} |
https://api.github.com/repos/huggingface/transformers/issues/4525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4525/comments | https://api.github.com/repos/huggingface/transformers/issues/4525/events | https://github.com/huggingface/transformers/issues/4525 | 623,422,954 | MDU6SXNzdWU2MjM0MjI5NTQ= | 4,525 | Error in Longformer attention mask using apex mixed precision | {
"login": "wfangtw",
"id": 8427857,
"node_id": "MDQ6VXNlcjg0Mjc4NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8427857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wfangtw",
"html_url": "https://github.com/wfangtw",
"followers_url": "https://api.github.com/users/wfangtw/followers",
"following_url": "https://api.github.com/users/wfangtw/following{/other_user}",
"gists_url": "https://api.github.com/users/wfangtw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wfangtw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wfangtw/subscriptions",
"organizations_url": "https://api.github.com/users/wfangtw/orgs",
"repos_url": "https://api.github.com/users/wfangtw/repos",
"events_url": "https://api.github.com/users/wfangtw/events{/privacy}",
"received_events_url": "https://api.github.com/users/wfangtw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Good catch! I guess that soon you'd want to move over PyTorch's built-in AMP which takes care of this automatically (I _think_), but for the time being your suggestion is a good fix. You can submit a PR if you want!",
"I think this is solved with the PR #4574 no?"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Longformer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install latest transformers (2.10.0) and apex (0.1)
2. Code:
```
import torch
from transformers import LongformerTokenizer, LongformerModel, LongformerConfig
from apex import amp
tokenizer = LongformerTokenizer.from_pretrained('longformer-base-4096')
config = LongformerConfig.from_pretrained('longformer-base-4096')
model = LongformerModel.from_pretrained('longformer-base-4096').cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
# toy input example
inputs = torch.randint(config.vocab_size, (1, 1024)).cuda() # randomly select tokens with sequence length 1024
mask = torch.ones(1, 1024).cuda() # set mask for every token to local attention
mask[0] = 2. # global attention for first token
outputs = model(inputs, attention_mask=mask)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Error message:
```
File "/home/miniconda3/envs/test_transformer/lib/python3.8/site-packages/transformers/modeling_longformer.py", line 374, in
forward
attn[extra_attention_mask_nonzeros[::-1]] = nonzero_selected_attn.view(
RuntimeError: expected dtype Half but got dtype Float
```
`attn` is half precision but is assigned a tensor that is casted into single precision `.type_as(hidden_states)` in line 376 .
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.10.0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4525/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4524/comments | https://api.github.com/repos/huggingface/transformers/issues/4524/events | https://github.com/huggingface/transformers/issues/4524 | 623,397,496 | MDU6SXNzdWU2MjMzOTc0OTY= | 4,524 | Codecov migration to marketplace app | {
"login": "thomasrockhu",
"id": 4213028,
"node_id": "MDQ6VXNlcjQyMTMwMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4213028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasrockhu",
"html_url": "https://github.com/thomasrockhu",
"followers_url": "https://api.github.com/users/thomasrockhu/followers",
"following_url": "https://api.github.com/users/thomasrockhu/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasrockhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasrockhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasrockhu/subscriptions",
"organizations_url": "https://api.github.com/users/thomasrockhu/orgs",
"repos_url": "https://api.github.com/users/thomasrockhu/repos",
"events_url": "https://api.github.com/users/thomasrockhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasrockhu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"cc @LysandreJik @thomwolf ",
"Hi @thomasrockhu, indeed we've faced such issues before. We'll take a look, thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | Hi, Tom from Codecov here.
We noticed that you are using our app with high frequency, and we’re so excited to see that! However, because you are not using our app, you may have experienced issues with uploading reports or viewing coverage information. This is due to rate-limiting from GitHub.
**In order to prevent any future outages, we ask that you move over to our GitHub marketplace app: https://github.com/marketplace/codecov.**
Let me know if you have any questions, or if I can help at all with this process. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4524/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4523/comments | https://api.github.com/repos/huggingface/transformers/issues/4523/events | https://github.com/huggingface/transformers/issues/4523 | 623,282,683 | MDU6SXNzdWU2MjMyODI2ODM= | 4,523 | Can't reproduce export to onnx with custom bert model | {
"login": "RensDimmendaal",
"id": 9828683,
"node_id": "MDQ6VXNlcjk4Mjg2ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9828683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RensDimmendaal",
"html_url": "https://github.com/RensDimmendaal",
"followers_url": "https://api.github.com/users/RensDimmendaal/followers",
"following_url": "https://api.github.com/users/RensDimmendaal/following{/other_user}",
"gists_url": "https://api.github.com/users/RensDimmendaal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RensDimmendaal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RensDimmendaal/subscriptions",
"organizations_url": "https://api.github.com/users/RensDimmendaal/orgs",
"repos_url": "https://api.github.com/users/RensDimmendaal/repos",
"events_url": "https://api.github.com/users/RensDimmendaal/events{/privacy}",
"received_events_url": "https://api.github.com/users/RensDimmendaal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834083927,
"node_id": "MDU6TGFiZWwxODM0MDgzOTI3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/External",
"name": "External",
"color": "fbca04",
"default": false,
"description": "Using the library with external tools (onnx, tflite, ...)"
}
] | closed | false | null | [] | [
"Pinging @mfuntowicz, chief onnx officer",
"Hi @RensDimmendaal,\r\n\r\nThanks for reporting this 👍.\r\nCan you share the shape of the input you're feeding to the ONNX model? ",
"Thanks for investigating!\r\n\r\n```python\r\nfor k,v in inputs_onnx.items():\r\n print(f\"{k}: shape: {v.shape}\")\r\n```\r\n```\r\n>>>\r\ninput_ids: shape: (1, 10)\r\ntoken_type_ids: shape: (1, 10)\r\nattention_mask: shape: (1, 10)\r\n```\r\n\r\nSource: https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH#scrollTo=64GodG5fKb0m&line=2&uniqifier=1",
"Interesting I'm not able to reproduce on my side (_see at the end_). \r\n\r\nCan you try restarting the Colab kernel? _(just to make sure the path are correctly updated)_. \r\nLet us know if it change something, if not I'll dig further on a custom colab.\r\n\r\n```python\r\n>>> import onnxruntime as ort\r\n>>> from transformers import BertTokenizerFast\r\n>>> session = ort.InferenceSession(\"onnx/bert-base-cased.onnx\")\r\n\r\n>>> tokenizer = BertTokenizerFast.from_pretrained(\"bert-base-cased\")\r\n>>> onnx_in = tokenizer.encode_plus(\"S E Q W E N C E\", return_tensors=\"pt\")\r\n\r\n>>> inputs_onnx = {k: v.cpu().detach().numpy() for k, v in onnx_in.items()}\r\n>>> sequence, pooled = session.run(None, inputs_onnx)\r\n\r\n>>> sequence.shape\r\n(1, 10, 768)\r\n\r\n>>> pooled.shape\r\n(1, 768)\r\n```",
"I've done a restart and run all and the problem persists.\r\n\r\ni ran your code too, and it gives the following error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\n\r\nFail Traceback (most recent call last)\r\n\r\n<ipython-input-13-6776c93b3fb0> in <module>()\r\n 18 \r\n 19 inputs_onnx = {k: v.cpu().detach().numpy() for k, v in onnx_in.items()}\r\n---> 20 sequence, pooled = session.run(None, inputs_onnx)\r\n 21 \r\n 22 print(sequence.shape, pooled.shape)\r\n\r\n/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/session.py in run(self, output_names, input_feed, run_options)\r\n 109 output_names = [output.name for output in self._outputs_meta]\r\n 110 try:\r\n--> 111 return self._sess.run(output_names, input_feed, run_options)\r\n 112 except C.EPFail as err:\r\n 113 if self._enable_fallback:\r\n\r\nFail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Attention node. Name:'Attention_1' Status Message: CUBLAS error executing cublasGemmHelper( cublas, CUBLAS_OP_N, CUBLAS_OP_N, n, m, 1, &one, reinterpret_cast<const CudaT*>(bias->template Data<T>()), n, GetConstOnes<CudaT>(m), 1, &zero, reinterpret_cast<CudaT*>(gemm_buffer.get()), n, device_prop)\r\n```\r\n\r\nsource: https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH#scrollTo=dXeNg37RTxl_&line=9&uniqifier=1",
"Ok , thanks for checking @RensDimmendaal .\r\n\r\nI'll do some experiments on a fresh notebook and post update here 👍 ",
"@mfuntowicz,\r\n\r\nI run the [notebook](https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH#scrollTo=64GodG5fKb0m&line=2&uniqifier=1) in my local machine, and look at the onnx model after export (and before optimization). I found that the exported onnx model has switched the position of \"attention_mask\" with \"token_type_ids\":\r\n\r\n\r\n\r\nThe above is a snapshot of embedding layer in exported graph. The \"attention_mask\" in the graph shall be named as \"token_type_ids\" since it is used to look up segment embeddings.",
"Hi,\r\n\r\nhow can I use it for my trained text classifier?",
"The onnx export script has assumption of order of inputs. If the class you used does not have same order (or there are other parameters in between), you can wrap a class to use the expected order for export like:\r\n```\r\nclass MyBertModel(BertForMaskedLM):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n\r\n def forward(self, input_ids, token_type_ids, attention_mask):\r\n return super().forward(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)\r\n\r\nmodel = MyBertModel(config)\r\nmodel.save_pretrained(\"./my_bert\")\r\n```\r\nIn this way, the exported model will have correct inputs.",
"Thanks for checking tianleiwu! \r\n\r\n(quick question in between: how do you make that plot of the onnx exported model?)\r\n\r\nIt does not solve the issue for me though. The error message remains the same.\r\n\r\nI've added your code here:\r\nhttps://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH#scrollTo=gXQ_JorGpAdI&line=1&uniqifier=1\r\n\r\nHowever, changing this in my inputs during inference did the trick:\r\n\r\n```python\r\n# CHANGE: SHUFFLE INPUTS\r\ninputs_onnx = {\r\n 'input_ids': inputs_onnx['input_ids'],\r\n 'attention_mask': inputs_onnx['token_type_ids'],\r\n 'token_type_ids': inputs_onnx['attention_mask'],\r\n }\r\n# Run the model (None = get all the outputs)\r\nsequence, pooled = cpu_model.run(None, inputs_onnx)\r\n```\r\n\r\n```\r\n>>>\r\nSequence output: (1, 10, 768), Pooled output: (1, 768)\r\n```\r\n\r\nTo me this seems like a bug that could be solved by having `transformers.convert_graph_to_onnx.ensure_valid_input` also return the reordered input_names.\r\nSomething like this:\r\n\r\n```python\r\ndef ensure_valid_input(model, tokens, input_names):\r\n \"\"\"\r\n Ensure input are presented in the correct order, without any None\r\n Args:\r\n model: The model used to forward the input data\r\n tokens: BatchEncoding holding the input data\r\n input_names: The name of the inputs\r\n\r\n Returns: Tuple\r\n\r\n \"\"\"\r\n model_args_name = model.forward.__code__.co_varnames\r\n model_args_pos = [(model_args_name.index(name) - 1, name) for name in input_names]\r\n model_args = [None] * (max(map(lambda x: x[0], model_args_pos)) + 1)\r\n ordered_input_names = [None] * len(model_args) # new\r\n\r\n for arg_pos, arg_name in model_args_pos:\r\n model_args[arg_pos] = tokens[arg_name]\r\n ordered_input_names[arg_pos] = arg_name # new\r\n\r\n model_args = tuple(takewhile(lambda arg: arg is not None, model_args)) # Need to be ordered\r\n return ordered_input_names, model_args # new\r\n```\r\n\r\nHowever, based on the test for this function it seems that it is also used for GPT2, and I don't know if this change will break anythinig for that model (test_onnx.py line 111 and 112).\r\n\r\nHappy to submit a PR if this seeems like the way to go.\r\n\r\n",
"@tianleiwu @RensDimmendaal I'll have a look on this asap. \r\n\r\nThe export should not permute inputs.",
"@RensDimmendaal I think your suggestion is the way to go, do you mind submitting a PR and assigning me as a reviewer ? 👍 ",
"Thanks @RensDimmendaal for submitting the PR, I'm closing this for now 👍.\r\n\r\nDon't hesitate to reopen / create a new issue if you ran into any problem!"
] | 1,590 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
I try to run onnx export on a custom bert model, but during inference I get the following error.
I share a google colab with the minimum changes to reproduce. All changes are marked with a `# CHANGE` comment. https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH?usp=sharing
```
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'Gather_32' Status Message: indices element out of data bounds, idx=1 must be within the inclusive range [-1,0]
```
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...):
None, I'm using a custom bert model, and for this bug report I'm using a random bert model.
The problem arises when using:
The official example notebook: https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb
## To reproduce
Steps to reproduce the behavior:
Run the convert to onxx script with a custom bert model. I've made a copy of the official notebook with the minimum changes required to illustrate the problem here: https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH?usp=sharing
```python
---------------------------------------------------------------------------
InvalidArgument Traceback (most recent call last)
<ipython-input-12-1d032f1e9ad0> in <module>()
9
10 # Run the model (None = get all the outputs)
---> 11 sequence, pooled = cpu_model.run(None, inputs_onnx)
12
13 # Print information about outputs
/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/session.py in run(self, output_names, input_feed, run_options)
109 output_names = [output.name for output in self._outputs_meta]
110 try:
--> 111 return self._sess.run(output_names, input_feed, run_options)
112 except C.EPFail as err:
113 if self._enable_fallback:
InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'Gather_32' Status Message: indices element out of data bounds, idx=1 must be within the inclusive range [-1,0]
```
## Expected behavior
Get pooled and sequence output of bert model.
## Environment info
- `transformers` version: 2.10.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4523/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4522/comments | https://api.github.com/repos/huggingface/transformers/issues/4522/events | https://github.com/huggingface/transformers/pull/4522 | 623,224,177 | MDExOlB1bGxSZXF1ZXN0NDIxOTMxNjA3 | 4,522 | Added huseinzol05/t5-small-bahasa-cased README.md | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=h1) Report\n> Merging [#4522](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bd6e3018322766b3a71ae6675552607923c02636&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4522 +/- ##\n=======================================\n Coverage 77.85% 77.85% \n=======================================\n Files 123 123 \n Lines 20551 20551 \n=======================================\n+ Hits 15999 16001 +2 \n+ Misses 4552 4550 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.53% <0.00%> (+0.11%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=footer). Last update [bd6e301...d1e772c](https://codecov.io/gh/huggingface/transformers/pull/4522?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4522/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4522",
"html_url": "https://github.com/huggingface/transformers/pull/4522",
"diff_url": "https://github.com/huggingface/transformers/pull/4522.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4522.patch",
"merged_at": 1590174247000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4521/comments | https://api.github.com/repos/huggingface/transformers/issues/4521/events | https://github.com/huggingface/transformers/issues/4521 | 623,195,430 | MDU6SXNzdWU2MjMxOTU0MzA= | 4,521 | Using DistillBert to train Bert (run_languge_modeling.py) for some languge from scratch | {
"login": "mahdirezaey",
"id": 34715488,
"node_id": "MDQ6VXNlcjM0NzE1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/34715488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahdirezaey",
"html_url": "https://github.com/mahdirezaey",
"followers_url": "https://api.github.com/users/mahdirezaey/followers",
"following_url": "https://api.github.com/users/mahdirezaey/following{/other_user}",
"gists_url": "https://api.github.com/users/mahdirezaey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahdirezaey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahdirezaey/subscriptions",
"organizations_url": "https://api.github.com/users/mahdirezaey/orgs",
"repos_url": "https://api.github.com/users/mahdirezaey/repos",
"events_url": "https://api.github.com/users/mahdirezaey/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahdirezaey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
},
{
"id": 1838876023,
"node_id": "MDU6TGFiZWwxODM4ODc2MDIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation",
"name": "Distillation",
"color": "d4c5f9",
"default": false,
"description": "Related to model distillation"
}
] | closed | false | null | [] | [
"I guess you can, since the architecture of distilbert is more or less the same as BERT (just half the layers), but I would not expect great performance. The power of distilling lies (next to the training objective) in having a teacher model (e.g.. a full BERT model) and initializing the distilled student network with weights from the teacher. If you don't do that it might not be easy to have good initialisation. In addition, the triple training loss would not make sense then since you have no teacher predictions to compare your distilled model with.\r\n\r\nI can see that the language_modeling script allows to use distilbert, but I would assume that is intended for fine-tuning rather than pre-training. cc @VictorSanh ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,590 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
Is it a right thing to use 'distilbert-base-cased' for training that with more data for some language ( which exists in the list of multilingual languages , and doesn't have a separate model still)
Will we reach good results as good as doing that with 6 layer BERT ?
Is there any differences between 6 layer BERT and distillBert (to use as the model for run_languge_modeling.py) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4521/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4520/comments | https://api.github.com/repos/huggingface/transformers/issues/4520/events | https://github.com/huggingface/transformers/issues/4520 | 623,112,792 | MDU6SXNzdWU2MjMxMTI3OTI= | 4,520 | How to use its own custom Optimizer (GLUE Example) | {
"login": "BelhalK",
"id": 15817944,
"node_id": "MDQ6VXNlcjE1ODE3OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15817944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BelhalK",
"html_url": "https://github.com/BelhalK",
"followers_url": "https://api.github.com/users/BelhalK/followers",
"following_url": "https://api.github.com/users/BelhalK/following{/other_user}",
"gists_url": "https://api.github.com/users/BelhalK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BelhalK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BelhalK/subscriptions",
"organizations_url": "https://api.github.com/users/BelhalK/orgs",
"repos_url": "https://api.github.com/users/BelhalK/repos",
"events_url": "https://api.github.com/users/BelhalK/events{/privacy}",
"received_events_url": "https://api.github.com/users/BelhalK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, the Trainer takes an optional `optimizers` arg which is a two-tuple of (optimizer, scheduler):\r\n\r\nhttps://github.com/huggingface/transformers/blob/95a26fcf2d8d7072e4e63129cea8605f756bba1d/src/transformers/trainer.py#L152-L181",
"I created the optimizers using the same way. But the model is not getting trained, because the training loss is not decreasing with time.\r\n\r\nimport transformers\r\ngrouped_params = model.parameters()\r\noptimizer=transformers.AdamW(grouped_params,\r\n lr=0.00025)\r\nscheduler=transformers.get_cosine_schedule_with_warmup(optimizer=optimizer, \r\n num_warmup_steps=2000, \r\n num_training_steps=60000)\r\noptimizers = optimizer, scheduler\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./test_checkpoint\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=15,\r\n per_device_train_batch_size=8,\r\n save_steps=1000,\r\n save_total_limit=3,\r\n logging_steps=50,\r\n dataloader_drop_last=True,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n prediction_loss_only=True,\r\n optimizers=optimizers\r\n)",
"Same issue here. Any solution?",
"I have the same issue as well.\r\n",
"> I have the same issue as well.\r\n\r\nHey Chris, I'd like to know how you actually found out that you cannot pass the custom optimizer to Trainer?\r\n\r\nIn my case, I create custom optim and lr scheduler by:\r\n\r\n```python\r\ntraining_steps = training_step_calc( # self-defined func\r\n encoded_dataset['train'],\r\n PER_DEVICE_TRAIN_BATCH_SIZE,\r\n gpu_count,\r\n NUM_TRAIN_EPOCHS\r\n)\r\n\r\nwarmup_steps = (training_steps * WARMUP_RATIO)\r\n\r\noptimizer = bnb.optim.AdamW8bit(\r\n model.parameters(),\r\n lr=LEARNING_RATE,\r\n weight_decay=WEIGHT_DECAY, \r\n)\r\n\r\nscheduler = transformers.get_cosine_schedule_with_warmup(\r\n optimizer, \r\n num_warmup_steps=warmup_steps, \r\n num_training_steps=training_steps, \r\n```\r\nThen I didn't specify the optim-related args in `TrainingArguments()`:\r\n\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir=SAVE_PATH,\r\n # basic hp\r\n num_train_epochs=NUM_TRAIN_EPOCHS,\r\n # auto_find_batch_size=True,\r\n per_device_train_batch_size=PER_DEVICE_TRAIN_BATCH_SIZE,\r\n per_device_eval_batch_size=PER_DEVICE_EVAL_BATCH_SIZE,\r\n gradient_checkpointing=GRADIENT_CHECKPOINTING,\r\n # optim-related, comment if use a custom optimiser\r\n # optim=\"adamw_hf\" if OPTIM is None else OPTIM,\r\n # learning_rate=LEARNING_RATE,\r\n # weight_decay=WEIGHT_DECAY, \r\n # lr_scheduler_type=LR_SCHEDULER_TYPE,\r\n # warmup_ratio=WARMUP_RATIO,\r\n # data related\r\n data_seed=DATA_SEED,\r\n dataloader_num_workers=DATALOADER_NUM_WORKERS,\r\n```\r\nAfter passing all the parameters to `Trainer()`, it ended up with this:\r\n```python\r\ntrainer = Trainer(\r\n model = model,\r\n tokenizer = tokenizer,\r\n args = training_args,\r\n train_dataset = encoded_dataset[\"train\"],\r\n eval_dataset = encoded_dataset[\"test\"],\r\n data_collator = data_collator,\r\n optimizers=(optimizer, scheduler),\r\n compute_metrics = compute_metrics,\r\n)\r\n```\r\nWhen I check the `trainer.args`, the optim in the args seems to be the default, and so it's shown on wandb run page. But the `trainer.optimizer` is shown as:\r\n\r\n```python\r\nAdamW8bit (\r\nParameter Group 0\r\n betas: (0.9, 0.999)\r\n eps: 1e-08\r\n initial_lr: 7e-06\r\n lr: 0.0\r\n weight_decay: 0.01\r\n)\r\n```\r\nIn fact, by manipulating the optimizer settings of the Trainer, even though the default adamw_hf optimizer is still displayed in the args in wandb and trainer, the optimizer is overridden by the custom optimizer and scheduler at training time."
] | 1,590 | 1,685 | 1,590 | NONE | null | # ❓ Questions & Help
I am referring to the example on GLUE text-classification/run_glue.py.
I would like to change the default optimizer (which I believe it to be ADAMW) to my own one.
Should be in the imported Trainer or TrainingArguments, but I have not found an example doing so.
Is it possible with any optimizer? (as long as they are written as a Torch Optimizer of course)
Thanks a lot! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4520/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4519/comments | https://api.github.com/repos/huggingface/transformers/issues/4519/events | https://github.com/huggingface/transformers/pull/4519 | 623,108,902 | MDExOlB1bGxSZXF1ZXN0NDIxODM5MzM1 | 4,519 | Specify device in DataCollator | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=h1) Report\n> Merging [#4519](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a08652772791fdaeed6f263b1a99926ca64be5dc&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `81.81%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4519 +/- ##\n==========================================\n- Coverage 77.83% 77.82% -0.02% \n==========================================\n Files 123 123 \n Lines 20514 20513 -1 \n==========================================\n- Hits 15968 15964 -4 \n- Misses 4546 4549 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/4519/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.70% <80.00%> (+0.47%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4519/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.37% <100.00%> (-0.11%)` | :arrow_down: |\n| [src/transformers/hf\\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4519/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=footer). Last update [a086527...7f579cb](https://codecov.io/gh/huggingface/transformers/pull/4519?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@mfuntowicz Perhaps it's useful to pull this through to the encode methods of the tokenizers so that you can pass a device and if return_tensors is used, the tensors are automatically pushed to the correct device? ",
"I tend to agree we need to have a device argument on the tokenizer `encode_like` methods, that would remove the need to iterate over the items to relocate on GPU/TPU.\r\n\r\nIs there a common scheme we can use to do this on both Pytorch & TensorFlow (_I'm not very familiar with TensorFlow_)? \r\n\r\nI suspect strings might be the easiest way to handle this: \r\n\r\n- PyTorch: `device=\"cpu\"` || `device=\"cuda:0\"`\r\n- TensorFlow: `device=\"/device:cpu\"` || `device=\"/device:gpu:0\"` \r\n\r\nI need to test things out futher here to see what the API may looks like! 👍 \r\nLet's see what the other think of the propal for the Trainer and we can follow up on a dedicated PR asap.",
"> I tend to agree we need to have a device argument on the tokenizer `encode_like` methods, that would remove the need to iterate over the items to relocate on GPU/TPU.\r\n> \r\n> Is there a common scheme we can use to do this on both Pytorch & TensorFlow (_I'm not very familiar with TensorFlow_)?\r\n> \r\n> I suspect strings might be the easiest way to handle this:\r\n> \r\n> * PyTorch: `device=\"cpu\"` || `device=\"cuda:0\"`\r\n> * TensorFlow: `device=\"/device:cpu\"` || `device=\"/device:gpu:0\"`\r\n> \r\n> I need to test things out futher here to see what the API may looks like! 👍\r\n> Let's see what the other think of the propal for the Trainer and we can follow up on a dedicated PR asap.\r\n\r\nSorry, I also don't have much experience with TF so I can't really chip in on that. I guess the encode methods can accept a `device=` property that can be of type `Union[str, int, tf.device, torch.device]`. The device constructor (pt or tf) can depend on the already available `return_tensors` part which can be None, pt, or tf.\r\n\r\n- `str`: use it to initialize the `torch.device(str)` or `tf.device(str))` - allows the users with a lot of freedom in case they want do something like `tf.device('/job:bar/task:0/device:gpu:2')` (example from [docs](https://www.tensorflow.org/api_docs/python/tf/device))\r\n- `int`: assume this is a GPU device id: `torch.device(f\"cuda:{int}\")` or `torch.device(f\"/device:gpu:{int}\")`\r\n- a `device`: use the device",
"@mfuntowicz is the performance improvement of this change significant? How can we measure it?",
"Let me run some epochs to report quantitative numbers",
"Perhaps useful discussions here: https://stackoverflow.com/questions/28597014/python-why-is-accessing-instance-attribute-is-slower-than-local\r\n\r\nIn 100 calls to the variable (self or local) the answer reports 3 seconds difference. Not sure if that is worth it.\r\n\r\nI'm all for speed improvements but I'm also really fond of readability. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"cc @LysandreJik ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Should this still be implemented, or is this PR superseded by another one? @mfuntowicz @julien-c ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Bump @julien-c @mfuntowicz ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Should I keep bumping this? @sgugger ",
"I don't think it's useful: the Trainer handles it already and for people that want to run their custom training loop, Accelerate can handle that.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,590 | 1,651 | 1,623 | MEMBER | null | By setting `device` parameter in `DataCollator` we're able to allocate tensors directly on the right device at tensor creation and avoid moving data around afterwards.
This effectively avoid some snippet like this in the Trainer:
```python
for k, v in some_dict.items():
v.to(device)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4519/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4519",
"html_url": "https://github.com/huggingface/transformers/pull/4519",
"diff_url": "https://github.com/huggingface/transformers/pull/4519.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4519.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4518/comments | https://api.github.com/repos/huggingface/transformers/issues/4518/events | https://github.com/huggingface/transformers/issues/4518 | 623,101,571 | MDU6SXNzdWU2MjMxMDE1NzE= | 4,518 | [marian] possible memory leak problem while translating & extracting internal representations | {
"login": "jrvc",
"id": 33348954,
"node_id": "MDQ6VXNlcjMzMzQ4OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/33348954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jrvc",
"html_url": "https://github.com/jrvc",
"followers_url": "https://api.github.com/users/jrvc/followers",
"following_url": "https://api.github.com/users/jrvc/following{/other_user}",
"gists_url": "https://api.github.com/users/jrvc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jrvc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jrvc/subscriptions",
"organizations_url": "https://api.github.com/users/jrvc/orgs",
"repos_url": "https://api.github.com/users/jrvc/repos",
"events_url": "https://api.github.com/users/jrvc/events{/privacy}",
"received_events_url": "https://api.github.com/users/jrvc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"ooook.... this is embarrassing. I just realized that I had to detach the variable, so GPU memory could be freed. \r\nThis does the trick:\r\n```\r\nencoded_sentences.append( [x.detach().to('cpu') for x in model_outputs[4]+model_outputs[1]] )\r\n```\r\n\r\nsorry for the trouble ;) and thanks for the repo and all your hard work"
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
I am extracting the internal representations of some of the Marian models.
There seems to be a memory leak problem. In this issue, you will find code for running the model sentence by sentence (bsz = 1) just to keep it simple. When I use batching, the problem persists and arises earlier.
Model I am using: MarianMT
` modelnames=[f'Helsinki-NLP/opus-mt-en-{tgt}' for tgt in ['de', 'fr', 'ee', 'sv', 'el', 'fi', 'cs', 'ru' ]]`
Language I am using the model on: en-{tgt]
The problem arises when using:
* [ ] a mix of official example scripts and my own:
on this code, I keep the lines used to see if it is a memory problem. Hence the `empty_cache()`, keeping track of the memory usage with `memory_stats()`, and passing things to 'cpu' (but this has not solved the problem for me)
```
import torch
import transformers
config_overrider={'output_attentions':True, 'output_hidden_states':True}
model = transformers.MarianMTModel.from_pretrained(modelname, **config_overrider)
tokenizer = transformers.MarianTokenizer.from_pretrained(modelname)
model.eval()
encoded_sentences = []
memdict=[]
for sent in tqdm(sentences):
tokdsent = self.tokenizer.prepare_translation_batch(src_texts=[' '.join(sent)])
tokdsent = {k:v.to(self.device) for k,v in tokdsent.items()}
model_outputs = self.model.forward(**tokdsent)
encoded_sentences.append( [x.to('cpu') for x in model_outputs[4]+model_outputs[1]] )
torch.cuda.empty_cache()
memdict.append(torch.cuda.memory_stats(self.device))
print(memdict[-1]['active.all.current'],memdict[-1]['active.all.peak']) # comment out this part
```
The tasks I am working on is:
* [ ] Using a dataset from an official task:
semantic textual similarity - STS 2012, 2013, 2014, 2015 and 2016 (can use [this one](https://github.com/Helsinki-NLP/Geometry/blob/refactorize/data/STS/allSTS.txt))
## To reproduce
Steps to reproduce the behavior:
1. Load and tokenize the sentences (need this for what I am doing, even when I detok when passing it to the tokenizer)
```
STS_path = "path/to/allSTS.txt"
with open(STS_path, 'r') as f:
samples = f.readlines()
sentences = []
for sent in samples:
sent = sent.strip()
sent = re.findall(r'[\w]+|\.|,|\?|\!|;|:|\'|\(|\)|/',sent)
sentences.append(sent)
```
2. Run the code above. The one on _"The problem arises when using:"_
3. For me, around line 3750 I get OOM:
```
RuntimeError('CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 31.75 GiB total capacity; 30.67 GiB already allocated; 17.69 MiB free; 30.67 GiB reserved in total by PyTorch)')
```
Here I copy some of the linesprinted from `active.all.current ` and `active.all.peak` (it never changes the upwards trend):
```
533 535
779 811
1025 1057
1271 1303
1517 1549
1763 1795
2009 2041
2255 2287
2501 2533
2747 2779
2993 3025
...
9635 9667
9881 9913
10127 10159
10373 10405
10619 10651
...
921311 921343
921557 921589
921803 921835
922049 922081
922295 922327
922541 922573
```
^-- these are the first 10 lines, somewhere around 40 sentences, and the last lines before running out of mem - close to 3750 sents.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would expect the memory on the cuda device to be freed after every iteration since I am overwriting the variable there and what I append to the list I want to keep is sent to cpu.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux 3.10.0-1062.7.1.el7.x86_64 x86_64, Red Hat Enterprise Linux Server 7.7 (Maipo)
- Python version: 3.7.3
- PyTorch version (GPU?): 1.5.0 for cuda 10.2 (Nvidia Volta V100 GPU with 32 GB of memory)
- Tensorflow version (GPU?): not using tf
- Using GPU in script?: yes (but I have seen the same problem on CPU)
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4518/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4517/comments | https://api.github.com/repos/huggingface/transformers/issues/4517/events | https://github.com/huggingface/transformers/issues/4517 | 623,098,629 | MDU6SXNzdWU2MjMwOTg2Mjk= | 4,517 | How to train a custom seq2seq model with BertModel | {
"login": "chenjunweii",
"id": 27069126,
"node_id": "MDQ6VXNlcjI3MDY5MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/27069126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenjunweii",
"html_url": "https://github.com/chenjunweii",
"followers_url": "https://api.github.com/users/chenjunweii/followers",
"following_url": "https://api.github.com/users/chenjunweii/following{/other_user}",
"gists_url": "https://api.github.com/users/chenjunweii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenjunweii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenjunweii/subscriptions",
"organizations_url": "https://api.github.com/users/chenjunweii/orgs",
"repos_url": "https://api.github.com/users/chenjunweii/repos",
"events_url": "https://api.github.com/users/chenjunweii/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenjunweii/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @chenjunweii - thanks for your issue! I will take a deeper look at the EncoderDecoder framework at the end of this week and should add a google colab on how to fine-tune it.",
"Using Bert - Bert model for seq2seq task should work using simpletransformers library, there is an working code.\r\nBut there is one strange thing that the saved models loads wrong weight's.\r\nPredicting the same string multiple times works correctly, loading the model each time again it's generating a new result every time @patrickvonplaten ",
"Hi @flozi00, \r\ncould you add a code snippet here that reproduces this bug?",
"Of course, it should be reproduceable using this code:\r\n\r\n```python\r\nimport logging\r\n\r\nimport pandas as pd\r\nfrom simpletransformers.seq2seq import Seq2SeqModel\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\ntransformers_logger = logging.getLogger(\"transformers\")\r\ntransformers_logger.setLevel(logging.WARNING)\r\n\r\n\r\ntrain_data = [\r\n [\"one\", \"1\"],\r\n [\"two\", \"2\"],\r\n]\r\n\r\ntrain_df = pd.DataFrame(train_data, columns=[\"input_text\", \"target_text\"])\r\n\r\neval_data = [\r\n [\"three\", \"3\"],\r\n [\"four\", \"4\"],\r\n]\r\n\r\neval_df = pd.DataFrame(eval_data, columns=[\"input_text\", \"target_text\"])\r\n\r\nmodel_args = {\r\n \"reprocess_input_data\": True,\r\n \"overwrite_output_dir\": True,\r\n \"max_seq_length\": 10,\r\n \"train_batch_size\": 2,\r\n \"num_train_epochs\": 10,\r\n \"save_eval_checkpoints\": False,\r\n \"save_model_every_epoch\": False,\r\n \"evaluate_generated_text\": True,\r\n \"evaluate_during_training_verbose\": True,\r\n \"use_multiprocessing\": False,\r\n \"max_length\": 15,\r\n \"manual_seed\": 4,\r\n}\r\n\r\nencoder_type = \"roberta\"\r\n\r\nmodel = Seq2SeqModel(\r\n encoder_type,\r\n \"roberta-base\",\r\n \"bert-base-cased\",\r\n args=model_args,\r\n use_cuda=True,\r\n)\r\n\r\nmodel.train_model(train_df)\r\n\r\nresults = model.eval_model(eval_df)\r\n\r\nprint(model.predict([\"five\"]))\r\n\r\n\r\nmodel1 = Seq2SeqModel(\r\n encoder_type,\r\n encoder_decoder_name=\"outputs\",\r\n args=model_args,\r\n use_cuda=True,\r\n)\r\nprint(model1.predict([\"five\"])\r\n```\r\n\r\nIt the sample code in documentation of simpletransformers library.\r\nThe dataset size doesn't matter.\r\n\r\nhttps://github.com/ThilinaRajapakse/simpletransformers/blob/master/README.md#encoder-decoder",
"Hey @flozi00, I think #4680 fixes the error.\r\n\r\n@chenjunweii - a Bert2Bert model using the `EncoderDecoder` framework should be the right approach here! You can use one `Bert` model as an encoder and the other `Bert` model as a decoder. You will have to fine-tune the `EncoderDecoder` model a bit, but it should work fine!\r\n\r\nYou can load the model via:\r\n```python\r\nfrom transformers import EncoderDecoder\r\n\r\nmodel = EncoderDecoder.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert\r\n```\r\n\r\nand train it on conditional language text generation providing the `input_ids` as context, the `decoder_input_ids` as the text to generate and `lm_labels` as your shifted text to generate. Think of it as `decoder_input_ids` and `lm_labels` being your normal inputs for causal text generation inputs and `input_ids` as your context to condition the model on. I will soon provide a notebook that makes this clearer.",
"Thank you for working on this problem and thank you for 🤗 !\r\nIt looks like it is finally possible to write seq2seq models in under 10 lines of code, yay!\r\n\r\nBut I still have some questions and concerns about the `EncoderDecoder`.\r\n\r\n1. It is not clear now, how masking now works in the decoder implementation. I spent quite some time to get into it.\r\n\r\nDocumentation says that \"Causal mask will also be used by default\", but I did not find how to change it. E.g. what if I am training model without teacher forcing (just generating words one by one during training) or if I am doing inference?\r\n\r\nI would suggest to add one more argument to the forward that would make it both more clear when causal masking is used and how to enable/disable it. What do you think?\r\n\r\n2. It is not clear what is the default decoder class.\r\n\r\nIt just feels weird to use BERT as a decoder. BERT is a mode that is a) non-autoregressive b) pre-trained without cross-attention modules. It is also unclear at which point the cross-attention modules are created. It would be great, if it is possible, to add something like `TransformerDecoder` model.\r\n\r\n",
"Hey @Guitaricet :-) ,\r\n\r\nFirst, at the moment only Bert2Bert works with the encoder-decoder framework. Also, if you use Bert as a decoder you will always use a causal mask. At the moment I cannot think of an encoder-decoder in which the decoder does not use a causal mask, so I don't see a reason why one would want to disable it. Can you give me an example where the decoder should not have a causal mask? \r\nDo you mean auto-regressive language generation by \"generating words one by one\"? Auto-regressive language modeling always requires a causal mask...\r\n\r\n2. Currently, only Bert works as a decoder. We might add GPT2 in a couple of weeks. Note that no model has `cross-attention` layers if it is not already an encoder-decoder model (like Bart or T5) and in this case it does not make sense to use the encoder-decoder wrapper. The model is initialized with random weights for the cross attention layers which will have to be fine-tuned. I agree, that this should be made clearer in the documentation! ",
"I'm trying to build a Bert2Bert model using EncoderDecoder, but I have a couple quick questions regarding the format of inputs and targets for the BERT decoder.\r\n\r\nWhat exactly is a good way to format the conditional mask to the decoder. For example, if I want to feed the decoder [I, am] and make it output [I, am, happy], how exactly do I mask the input? Do I give the decoder [CLS, I, am, MASK, ...., MASK, SEP] where the number of MASKs is such that the total number of tokens is a fixed length (like 512)? Or do I just input [CLS, I, am, MASK, SEP, PAD, ..., PAD]?\r\n\r\nSimilarly, what should the decoder's output be? Does the first token (the \"output\" of CLS) be the token \"I\"?\r\n\r\nLastly, is there a website or resource that explains the input and output representations of text given to the decoder in Bert2Bert? I don't think the authors of the paper have released their code yet.\r\n\r\nThanks!\r\n\r\n",
"I will soon release a bert2bert notebook that will show how to do this. You can also take a look at this: \r\nhttps://github.com/huggingface/transformers/issues/4647\r\n\r\nMaybe it helps.",
"Thank you @patrickvonplaten for clarification\r\n\r\n1. I see why not using a causal mask seems weird and I agree with you. I can think of two reasons not to use a causal mask for generation: 1) inference: you don't have any future to look into, thus the mask is not strictly needed (you won't be able to cache the decoder states though) 2) you can train a model without [teacher forcing](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/), i.e. during training forwarding your decoder tgt_len times only using the words that has been predicted by the model instead of feeding the ground truth.\r\n\r\nIt is very possible that both of these cases are rare, so the library may not need `causal_masking` argument, but at least some clarification may be needed. This is the reason why I found this issue in the first place.\r\n\r\n2. Yes, improving the documentation would help a lot! Still, I would argue that a designated `Decoder` class is a much more clear way if you want to train it from scratch.\r\n\r\nI also noticed that `config.is_decoder` option is only documented in BertModel and not in `BertConfig` class. Adding it would help a lot. (I only found it because I thought that it is not documented at all and wanted to check my claim via searching for \"is_decoder\" in the source code)\r\n\r\nAgain, thank you for you work, 🤗 is what NLP community needed for quite some time!\r\n\r\n**UPD:** more reasons to use a different attention mask (not for seq2seq though) XLNet-like or ULM-like pre-training",
"> I will soon release a bert2bert notebook that will show how to do this. You can also take a look at this:\r\n> #4647\r\n> \r\n> Maybe it helps.\r\n\r\nHi @patrickvonplaten ,\r\n\r\nThanks for the clarification on this topic and for the great work you've been doing on those seq2seq models.\r\nIs this notebook you mentioned here already available?\r\n\r\nThanks.",
"Yeah, the code is ready in this PR: https://github.com/huggingface/transformers/tree/more_general_trainer_metric .\r\nThe script to train an Encoder-Decoder model can be assessed here: https://github.com/huggingface/transformers/blob/more_general_trainer_metric/src/transformers/bert_encoder_decoder_summary.py\r\n\r\nAnd in order for the script to work, you need to use this Trainer class:\r\nhttps://github.com/huggingface/transformers/blob/more_general_trainer_metric/src/transformers/trainer.py\r\n\r\nI'm currently training the model myself. When the results are decent, I will publish a little notebook.",
"Hi @patrickvonplaten, thanks for sharing the scripts. However, the second link for training an encoder-decoder model is not found. Could you please upload this script? Thanks.",
"You ",
"Sorry, I deleted the second link. You can see all the necessary code on this model page: \r\nhttps://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#bert2bert-summarization-with-%F0%9F%A4%97-encoderdecoder-framework",
"Thanks for sharing this, Patrick. ",
"I am trying to implement a encoder decoder with BART but I have no idea how to do so, and I need to fine tune the decoder model, so eventually I need to train my decoder model. I am trying to use the `EncoderDecoder` model in my script but I don't know how to access the decoder model for training it. Instead of using the module, I initialized `BartModel` as encoder,whereas for decoder I used `BartForConditionalGeneration`. Here's the model I initialized\r\n```\r\nencoder = BartModel.from_pretrained('facebook/bart-base)\r\ndecoder = BartForConditionalGeneration.from_pretrained('facebook/bart-base)\r\n```\r\nAnd here's how I am using it.\r\n\r\n```\r\nfor epoch in range(epochs):\r\n #------------------------training------------------------\r\n decoder.train()\r\n losses = 0\r\n times = 0\r\n print('\\n'+'-'*20 + f'epoch {epoch}' + '-'*20)\r\n for batch in tqdm(train_dataloader):\r\n batch = [item.to(device) for item in batch]\r\n\r\n encoder_input, decoder_input, mask_encoder_input, mask_decoder_input = batch\r\n\r\n lhs,hs,att,_,_,_ = encoder(input_ids = encoder_input, attention_mask = mask_encoder_input,output_attentions = True,output_hidden_states = True)\r\n past = (lhs,hs,att)\r\n \r\n\r\n \r\n logits,_,_,_= decoder(input_ids = decoder_input, attention_mask = mask_decoder_input, encoder_outputs = past)\r\n \r\n \r\n out = logits[:, :-1].contiguous()\r\n target = decoder_input[:, 1:].contiguous()\r\n target_mask = mask_decoder_input[:, 1:].contiguous()\r\n \r\n \r\n loss = util.sequence_cross_entropy_with_logits(out, target, target_mask, average=\"token\")\r\n loss.backward()\r\n\r\n losses += loss.item()\r\n times += 1\r\n \r\n update_count += 1\r\n\r\n if update_count % num_gradients_accumulation == num_gradients_accumulation - 1:\r\n optimizer.step()\r\n scheduler.step()\r\n optimizer.zero_grad()\r\n```\r\nI am calculating perplexity from the loss, and I am getting a perplexity score of 1000+, which is bad. I would like to know whats my model is lacking and is it possible that I could use `EncoderDecoder` module",
"@AmbiTyga from what I know, BART is already a encoder-decoder model, with a BERT as a encoder and a GPT as a decoder. So you are encoding-decoding in encoder and encoding-decoding in decoder, which I don t think is a good idea. For the moment EncoderDecoderModel supports only BERT.",
"@iliemihai So can you refer me how to use BART in such cases like I have coded above?",
"@patrickvonplaten is Bert the only model that is supported as a decoder? I was hoping to train a universal model so wanted to use xlm-roberta (xlmr) as both encoder and decoder; Is this possible given the current EncoderDecoder framework? I know bert has a multilingual checkpoint but performance-wise an xlm-roberta model should be better. I noticed the notebook https://github.com/huggingface/transformers/blob/16e38940bd7d2345afc82df11706ee9b16aa9d28/model_cards/patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16/README.md does roberta2roberta; is this same code applicable to xlm-roberta?\r\nI tried following the same template with xlmr but I noticed that the output is the same regardless of the input - the is_decoder flag is properly set to True in the decoder but this issue persists.",
"Hey @spookypineapple - good question! Here is the PR that adds XLM-Roberta to the EncoderDecoder models: https://github.com/huggingface/transformers/pull/6878\r\n\r\nwill not make it to 3.1.0 but should be available on master in ~1,2 days",
"Im pulling from master so I should get at least the neccessary code artifacts to get bert2bert to work. However Im seeing (for a bert2bert setup using bert-base-multilingual-cased) that the output of the decoder remains unchanged regardless of the input to the encoder; this behavior seems to persist with training... The code im using to initialize the EncoderDecoder model is as follows:\r\n\r\n\r\n```\r\nimport torch\r\nfrom transformers import (\r\n MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,\r\n AdamW,\r\n get_linear_schedule_with_warmup,\r\n AutoConfig,\r\n AutoTokenizer,\r\n AutoModelForSeq2SeqLM,\r\n EncoderDecoderModel\r\n)\r\nmodel_type = 'bert'\r\nmodel_name = config_name = tokenizer_name = \"bert-base-multilingual-cased\"\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n tokenizer_name,\r\n do_lower_case=False,\r\n cache_dir=None,\r\n force_download=False\r\n)\r\nconfig = AutoConfig.from_pretrained(\r\n config_name,\r\n cache_dir=None,\r\n force_download=False\r\n)\r\nmodel = EncoderDecoderModel.from_encoder_decoder_pretrained(\r\n model_name, # encoder\r\n model_name, # decoder\r\n from_tf=bool(\".ckpt\" in model_name),\r\n config=config,\r\n cache_dir=None,\r\n)\r\nif model_type in ['bert']:\r\n tokenizer.bos_token = tokenizer.cls_token\r\n tokenizer.eos_token = tokenizer.sep_token\r\nmodel.config.decoder_start_token_id = tokenizer.bos_token_id\r\nmodel.config.eos_token_id = tokenizer.eos_token_id\r\nmodel.tie_weights()\r\nmodel.decoder.config.use_cache = False\r\n\r\ninput_str1 = \"this is the first example\"\r\ninput_str2 = \"and heres another example for you\"\r\ninput_encodings1 = tokenizer.encode_plus(input_str1,\r\n padding=\"max_length\",\r\n truncation=True,\r\n max_length=512,\r\n return_tensors=\"pt\")\r\ninput_encodings2 = tokenizer.encode_plus(input_str2,\r\n padding=\"max_length\",\r\n truncation=True,\r\n max_length=512,\r\n return_tensors=\"pt\")\r\ngen1 = model.generate(input_ids=input_encodings1.input_ids,\r\n attention_mask=input_encodings1.attention_mask,\r\n max_length=25,\r\n decoder_start_token_id=model.config.decoder_start_token_id\r\n )\r\ngen2 = model.generate(input_ids=input_encodings2.input_ids,\r\n attention_mask=input_encodings2.attention_mask,\r\n max_length=25,\r\n decoder_start_token_id=model.config.decoder_start_token_id\r\n )\r\ndec1 = [tokenizer.decode(ids, skip_special_tokens=True) for ids in gen1]\r\ndec2 = [tokenizer.decode(ids, skip_special_tokens=True) for ids in gen2]\r\nprint(dec1)\r\nprint(dec2)\r\n\r\n# the outputs are identical even though the inputs are different\r\n```\r\n",
"Hey @spookypineapple,\r\n\r\nA couple of things regarding your code:\r\n\r\n1) `.from_encoder_decoder_pretrained()` usually does not need a config. The way you use this function with a `conifg` inserted means that you are overwriting the encoder config, which is not recommended when loading an encoder decoder model from two pretrained \"bert-base-multilingual-cased\" checkpoints. Also `from_tf` will also only apply to the encoder. You would additionally have to pass `decoder_from_tf`.\r\n\r\n2) An encoder decoder model initialized from two pretrained \"bert-base-multilingual-cased\" checkpoints needs to be fine-tuned before any meaningful results can be seen.\r\n\r\n=> You might want to check these model cards of bert2bert which explain how to fine-tune such an encoder decoder model: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16\r\n\r\nHope this helps! \r\n",
"> Hey @spookypineapple,\r\n> \r\n> A couple of things regarding your code:\r\n> \r\n> 1. `.from_encoder_decoder_pretrained()` usually does not need a config. The way you use this function with a `conifg` inserted means that you are overwriting the encoder config, which is not recommended when loading an encoder decoder model from two pretrained \"bert-base-multilingual-cased\" checkpoints. Also `from_tf` will also only apply to the encoder. You would additionally have to pass `decoder_from_tf`.\r\n> 2. An encoder decoder model initialized from two pretrained \"bert-base-multilingual-cased\" checkpoints needs to be fine-tuned before any meaningful results can be seen.\r\n> \r\n> => You might want to check these model cards of bert2bert which explain how to fine-tune such an encoder decoder model: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16\r\n> \r\n> Hope this helps!\r\n\r\nIt does help indeed! Thankyou @patrickvonplaten ",
"@patrickvonplaten can you please share a tutorial/notebook on training the encoder-decoder model for machine translation? ",
"@patrickvonplaten can you create a notebook on how to use custom dataset to fine tune bert2bert models ? ",
"> Hey @Guitaricet :-) ,\r\n> \r\n> First, at the moment only Bert2Bert works with the encoder-decoder framework. Also, if you use Bert as a decoder you will always use a causal mask. At the moment I cannot think of an encoder-decoder in which the decoder does not use a causal mask, so I don't see a reason why one would want to disable it. Can you give me an example where the decoder should not have a causal mask?\r\n> Do you mean auto-regressive language generation by \"generating words one by one\"? Auto-regressive language modeling always requires a causal mask...\r\n> \r\n> 1. Currently, only Bert works as a decoder. We might add GPT2 in a couple of weeks. Note that no model has `cross-attention` layers if it is not already an encoder-decoder model (like Bart or T5) and in this case it does not make sense to use the encoder-decoder wrapper. The model is initialized with random weights for the cross attention layers which will have to be fine-tuned. I agree, that this should be made clearer in the documentation!\r\n\r\nI would like to disable causal masking to use it in [DETR](https://arxiv.org/abs/2005.12872), which uses parallel decoding... But this not seem possible at the moment. In my opinion, an option to disable causal masking in the decoder would be useful",
"> Yeah, the code is ready in this PR: https://github.com/huggingface/transformers/tree/more_general_trainer_metric . The script to train an Encoder-Decoder model can be assessed here: https://github.com/huggingface/transformers/blob/more_general_trainer_metric/src/transformers/bert_encoder_decoder_summary.py\r\n> \r\n> And in order for the script to work, you need to use this Trainer class: https://github.com/huggingface/transformers/blob/more_general_trainer_metric/src/transformers/trainer.py\r\n> \r\n> I'm currently training the model myself. When the results are decent, I will publish a little notebook.\r\n\r\n@patrickvonplaten , none of the links is working. Is it possible to fix them? ",
"For BERT2BERT you can just use the `EncoderDecoderModel` class as shown here: https://huggingface.co/docs/transformers/v4.21.3/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel.forward.example \r\n\r\nThis example shows how to instantiate a Bert2Bert model which you can then train on any seq2seq task you want, e.g. summarization: https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization (you just need to slighly adapt the example, or pre-create a BERT2BERT and use it as a checkpoint)",
"Thanks!\r\nBtw, I just submitted an issue and tagged you. There's some problem when using EncoderDecoderModel with the most recent transformers versions. "
] | 1,590 | 1,662 | 1,591 | NONE | null | How to train a custom seq2seq model with `BertModel`,
I would like to use some Chinese pretrained model base on `BertModel`
so I've tried using `Encoder-Decoder Model`, but it seems the`Encoder-Decoder Model` is not used for conditional text generation
and I saw that BartModel seems to be the model I need, but I cannot load pretrained BertModel weight with BartModel.
by the way, could I finetune a BartModel for seq2seq with custom data ?
any suggestion, thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4517/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4517/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4516/comments | https://api.github.com/repos/huggingface/transformers/issues/4516/events | https://github.com/huggingface/transformers/issues/4516 | 623,082,746 | MDU6SXNzdWU2MjMwODI3NDY= | 4,516 | sometimes loss starts with nan when running "Quick tour TF 2.0 training and PyTorch interoperability" script | {
"login": "catqaq",
"id": 42762740,
"node_id": "MDQ6VXNlcjQyNzYyNzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/42762740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/catqaq",
"html_url": "https://github.com/catqaq",
"followers_url": "https://api.github.com/users/catqaq/followers",
"following_url": "https://api.github.com/users/catqaq/following{/other_user}",
"gists_url": "https://api.github.com/users/catqaq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/catqaq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/catqaq/subscriptions",
"organizations_url": "https://api.github.com/users/catqaq/orgs",
"repos_url": "https://api.github.com/users/catqaq/repos",
"events_url": "https://api.github.com/users/catqaq/events{/privacy}",
"received_events_url": "https://api.github.com/users/catqaq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052574,
"node_id": "MDU6TGFiZWwxODM0MDUyNTc0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification",
"name": "Ex: Sequence Classification",
"color": "46FFCF",
"default": false,
"description": ""
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"I ran your script five times and I cannot reproduce this so it's very hard to debug. Can you clear the cache created by convert_examples and try again? And if that does not work, try updating tensorflow and tensorflow_datasets to the latest versions.",
"Thanks for your attention. As i said \"sometimes\", the script works well now. I did nothing except reboot my Spyder several times. I don’t know why, but it ’s back to normal",
"That's good to hear! It might have been a caching issue somewhere with Spyder. (Personally I can recommend PyCharm.)",
"Yes, I guess so. Thanks for your recommendation. I have been hesitating whether to abandon Spyder completely. PyCharm is powerful but a little too heavy..."
] | 1,590 | 1,590 | 1,590 | NONE | null | # 🐛 Bug
## Information
The problem arises when using:
* [x] the official example scripts: (give details below)
"Quick tour TF 2.0 training and PyTorch interoperability"
## To reproduce
Steps to reproduce the behavior:
1.run the code:
```
import tensorflow as tf
import tensorflow_datasets
from transformers import *
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')
# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(10)
valid_dataset = valid_dataset.batch(64)
# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=10, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
```
2. sometimes the loss is nan and sometimes it works well:
### nan case:
Train for 115 steps, validate for 7 steps
Epoch 1/10
115/115 [==============================] - 863s 8s/step - loss: nan - accuracy: 0.3255 - val_loss: nan - val_accuracy: 0.3162
Epoch 2/10
115/115 [==============================] - 854s 7s/step - loss: nan - accuracy: 0.3255 - val_loss: nan - val_accuracy: 0.3162
### normal case:
Train for 115 steps, validate for 7 steps
Epoch 1/10
27/115 [======>.......................] - ETA: 11:37 - loss: 0.6249 - accuracy: 0.6609
## Environment info
- `transformers` version: 2.9.1
- Platform:ubuntu 16.04
- Python version:3.6
- PyTorch version (GPU?):1.2.0 GPU
- Tensorflow version (GPU?):2.0.0 gpu
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4516/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4515/comments | https://api.github.com/repos/huggingface/transformers/issues/4515/events | https://github.com/huggingface/transformers/pull/4515 | 623,075,669 | MDExOlB1bGxSZXF1ZXN0NDIxODEyODg0 | 4,515 | Allow BatchEncoding to be pickled | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Issues seems related to pretokenized input. Checking with @n1t0 what changed in-between 👍 ",
"Indeed, I think we'll have to merge https://github.com/huggingface/transformers/pull/4510 before this.",
"Closing in favor of #5039"
] | 1,590 | 1,651 | 1,592 | MEMBER | null | Overrides the methods ___get_state()___ & ___set_state()___ to (respectively) export the content of the underlying `data` dictionnary and - if defined - the content of `encodings`.
Unittests added to covert the serialization & deserialization of all the exported properties.
Block until new release of **tokenizers** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4515/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4515",
"html_url": "https://github.com/huggingface/transformers/pull/4515",
"diff_url": "https://github.com/huggingface/transformers/pull/4515.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4515.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4514/comments | https://api.github.com/repos/huggingface/transformers/issues/4514/events | https://github.com/huggingface/transformers/issues/4514 | 623,022,855 | MDU6SXNzdWU2MjMwMjI4NTU= | 4,514 | ❓ How Linear layer difference between TF2 and PT are handled ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"[Here's an example when loading a TF BERT model in PyTorch.](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L122)"
] | 1,590 | 1,590 | 1,590 | CONTRIBUTOR | null | # ❓ Questions & Help
There is a difference between TF2 and Pytorch on how to store the weights of a Linear Layer.
As shown in [this Colab notebook](https://colab.research.google.com/drive/1zLWONO3wo09-PImo0kg1bwARsh2jKnpQ?usp=sharing), in order to get the same output for both TF2 and PT when using `torch.nn.Linear` and `tf.keras.layers.Dense`, we need to transpose the weights in PT.
**I couldn't find in this library where this is handled** (when loading a Pytorch checkpoint to TF2 for example).
Can someone point me out where and how this is handled ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4514/timeline | completed | null | null |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.