url (string, len 62-66) | repository_url (string, 1 class) | labels_url (string, len 76-80) | comments_url (string, len 71-75) | events_url (string, len 69-73) | html_url (string, len 50-56) | id (int64, 377M-2.15B) | node_id (string, len 18-32) | number (int64, 1-29.2k) | title (string, len 1-487) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, ⌀) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, len 0-234k, ⌀) | reactions (dict) | timeline_url (string, len 71-75) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/6020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6020/comments | https://api.github.com/repos/huggingface/transformers/issues/6020/events | https://github.com/huggingface/transformers/pull/6020 | 665,246,221 | MDExOlB1bGxSZXF1ZXN0NDU2MzUxMjkw | 6,020 | Create README.md | {
"login": "rdenadai",
"id": 917516,
"node_id": "MDQ6VXNlcjkxNzUxNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/917516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rdenadai",
"html_url": "https://github.com/rdenadai",
"followers_url": "https://api.github.com/users/rdenadai/followers",
"following_url": "https://api.github.com/users/rdenadai/following{/other_user}",
"gists_url": "https://api.github.com/users/rdenadai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rdenadai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rdenadai/subscriptions",
"organizations_url": "https://api.github.com/users/rdenadai/orgs",
"repos_url": "https://api.github.com/users/rdenadai/repos",
"events_url": "https://api.github.com/users/rdenadai/events{/privacy}",
"received_events_url": "https://api.github.com/users/rdenadai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"👍 \r\n\r\nWould you like to add example inputs for all models for `pt`? If you do, please open a PR against this file: https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts\r\n\r\nThanks:)"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | README.md for my model
https://huggingface.co/rdenadai/BR_BERTo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6020/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6020",
"html_url": "https://github.com/huggingface/transformers/pull/6020",
"diff_url": "https://github.com/huggingface/transformers/pull/6020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6020.patch",
"merged_at": 1595614274000
} |
https://api.github.com/repos/huggingface/transformers/issues/6019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6019/comments | https://api.github.com/repos/huggingface/transformers/issues/6019/events | https://github.com/huggingface/transformers/pull/6019 | 665,236,771 | MDExOlB1bGxSZXF1ZXN0NDU2MzQzNTA0 | 6,019 | Update the new model template | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=h1) Report\n> Merging [#6019](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/778e635fc900e5afc90e5f8f4ca74b5d5fc2976a&el=desc) will **increase** coverage by `1.37%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6019 +/- ##\n==========================================\n+ Coverage 77.31% 78.68% +1.37% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n+ Hits 20295 20655 +360 \n+ Misses 5954 5594 -360 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=continue).\n> **Legend** - [Click here to 
learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=footer). Last update [778e635...b79103c](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | The model template files were old and did not use any of the recent improvements (output attentions, output hidden states, docstrings refactored...)
This PR fixes them and adds a few more instructions.
Tagging @LysandreJik, @patrickvonplaten, @thomwolf and @julien-c : We have to make sure that whenever we make a substantial change that applies to all model, this template is also updated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6019/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6019/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6019",
"html_url": "https://github.com/huggingface/transformers/pull/6019",
"diff_url": "https://github.com/huggingface/transformers/pull/6019.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6019.patch",
"merged_at": 1595614538000
} |
https://api.github.com/repos/huggingface/transformers/issues/6018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6018/comments | https://api.github.com/repos/huggingface/transformers/issues/6018/events | https://github.com/huggingface/transformers/issues/6018 | 665,213,195 | MDU6SXNzdWU2NjUyMTMxOTU= | 6,018 | seq2seq/finetune.py can take config train_params through command line | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"@stas00 this might be up your alley.",
"Added to my todo queue - thank you, @sshleifer",
"https://github.com/huggingface/transformers/pull/6149"
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | - this might belong in lightning_base
- check how/if trainer does it.
Desired API:
```bash
python finetune.py --encoder_layerdrop 0.1 --decoder_layerdrop 0.1 --dropout 0.1 --attention_dropout 0.1
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6018/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6018/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6017/comments | https://api.github.com/repos/huggingface/transformers/issues/6017/events | https://github.com/huggingface/transformers/issues/6017 | 665,211,411 | MDU6SXNzdWU2NjUyMTE0MTE= | 6,017 | convert_bart script should support mbart through command line. | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | Desired command:
```convert_mbart_checkpoint.py --checkpoint_dir --pytorch_dump_path```
This should probably also convert the tokenizer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6017/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6016/comments | https://api.github.com/repos/huggingface/transformers/issues/6016/events | https://github.com/huggingface/transformers/pull/6016 | 665,209,091 | MDExOlB1bGxSZXF1ZXN0NDU2MzIwNTM1 | 6,016 | Create model card for RuPERTa-base | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Feel free to apply the change",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=h1) Report\n> Merging [#6016](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3996041d0ae23ce23dfb8a343e6344f2f8d54c16&el=desc) will **increase** coverage by `0.22%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6016 +/- ##\n==========================================\n+ Coverage 78.29% 78.51% +0.22% \n==========================================\n Files 146 146 \n Lines 26249 26249 \n==========================================\n+ Hits 20552 20610 +58 \n+ Misses 5697 5639 -58 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | 
`98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=footer). Last update [3996041...a80f84b](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6016/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6016",
"html_url": "https://github.com/huggingface/transformers/pull/6016",
"diff_url": "https://github.com/huggingface/transformers/pull/6016.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6016.patch",
"merged_at": 1595614349000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6015/comments | https://api.github.com/repos/huggingface/transformers/issues/6015/events | https://github.com/huggingface/transformers/pull/6015 | 665,188,820 | MDExOlB1bGxSZXF1ZXN0NDU2MzAzOTQx | 6,015 | Tf trainer cleanup | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=h1) Report\n> Merging [#6015](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3b44aa935a4d8f1b0e93a23070d97be6b9c9506b&el=desc) will **increase** coverage by `1.29%`.\n> The diff coverage is `22.22%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6015 +/- ##\n==========================================\n+ Coverage 77.32% 78.61% +1.29% \n==========================================\n Files 146 146 \n Lines 26253 26270 +17 \n==========================================\n+ Hits 20299 20652 +353 \n+ Misses 5954 5618 -336 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `40.87% <ø> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.24% <22.22%> (-0.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=footer). Last update [3b44aa9...5ff70d6](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"As long as you don't create too many merge conflicts, I can wait ;-)",
"PR available #6038 ",
"Merge conflicts were a mess so opened a new version."
] | 1,595 | 1,596 | 1,596 | COLLABORATOR | null | This follows up from #5982 and does the same thing for `TFTrainer`.
Basically no code is changed, just moved around and the customization points are made public and explicit to the user. Starting to make the docs better but I will do it properly (for both Trainer and TFTrainer) after this PR is merged. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6015/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6015",
"html_url": "https://github.com/huggingface/transformers/pull/6015",
"diff_url": "https://github.com/huggingface/transformers/pull/6015.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6015.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/6014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6014/comments | https://api.github.com/repos/huggingface/transformers/issues/6014/events | https://github.com/huggingface/transformers/pull/6014 | 665,159,409 | MDExOlB1bGxSZXF1ZXN0NDU2Mjc5NTI3 | 6,014 | Fix question template | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=h1) Report\n> Merging [#6014](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a5404052135a455f9b3ada6154dcfbe7c6ac3968&el=desc) will **increase** coverage by `1.32%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6014 +/- ##\n==========================================\n+ Coverage 77.18% 78.51% +1.32% \n==========================================\n Files 146 146 \n Lines 26252 26252 \n==========================================\n+ Hits 20263 20612 +349 \n+ Misses 5989 5640 -349 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> 
(+73.38%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=footer). Last update [a540405...78af589](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6014/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6014",
"html_url": "https://github.com/huggingface/transformers/pull/6014",
"diff_url": "https://github.com/huggingface/transformers/pull/6014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6014.patch",
"merged_at": 1595599466000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6013/comments | https://api.github.com/repos/huggingface/transformers/issues/6013/events | https://github.com/huggingface/transformers/issues/6013 | 665,048,404 | MDU6SXNzdWU2NjUwNDg0MDQ= | 6,013 | Cannot import DPRReader | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for flagging, the stable doc was building on a wrong commit hash, hence the DPR models being present.\r\nThis is fixed now."
] | 1,595 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
You have DPRReader included in your v3.0.2 documentation, but it's in the Master branch, it's not included in v3.0.2 tag; therefore it crashes and cannot import it. I think it would be convenient to either eliminate that piece of documentation until the models are added in the tag, or update the transformers version with the Master branch.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6013/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6012/comments | https://api.github.com/repos/huggingface/transformers/issues/6012/events | https://github.com/huggingface/transformers/issues/6012 | 665,012,321 | MDU6SXNzdWU2NjUwMTIzMjE= | 6,012 | When i train a GPT-2 which uses BPE, the computed perplexity is per sub-word right? | {
"login": "Nkonstan",
"id": 35643708,
"node_id": "MDQ6VXNlcjM1NjQzNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/35643708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nkonstan",
"html_url": "https://github.com/Nkonstan",
"followers_url": "https://api.github.com/users/Nkonstan/followers",
"following_url": "https://api.github.com/users/Nkonstan/following{/other_user}",
"gists_url": "https://api.github.com/users/Nkonstan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nkonstan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nkonstan/subscriptions",
"organizations_url": "https://api.github.com/users/Nkonstan/orgs",
"repos_url": "https://api.github.com/users/Nkonstan/repos",
"events_url": "https://api.github.com/users/Nkonstan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nkonstan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,595 | 1,614 | 1,614 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6012/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6011/comments | https://api.github.com/repos/huggingface/transformers/issues/6011/events | https://github.com/huggingface/transformers/issues/6011 | 664,899,103 | MDU6SXNzdWU2NjQ4OTkxMDM= | 6,011 | [Benchmark] | {
"login": "Rianley",
"id": 30228744,
"node_id": "MDQ6VXNlcjMwMjI4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/30228744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rianley",
"html_url": "https://github.com/Rianley",
"followers_url": "https://api.github.com/users/Rianley/followers",
"following_url": "https://api.github.com/users/Rianley/following{/other_user}",
"gists_url": "https://api.github.com/users/Rianley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rianley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rianley/subscriptions",
"organizations_url": "https://api.github.com/users/Rianley/orgs",
"repos_url": "https://api.github.com/users/Rianley/repos",
"events_url": "https://api.github.com/users/Rianley/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rianley/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,596 | 1,596 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6011/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6010/comments | https://api.github.com/repos/huggingface/transformers/issues/6010/events | https://github.com/huggingface/transformers/issues/6010 | 664,897,389 | MDU6SXNzdWU2NjQ4OTczODk= | 6,010 | install sentence-transformers on Linux by python error | {
"login": "kension0929",
"id": 15052580,
"node_id": "MDQ6VXNlcjE1MDUyNTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/15052580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kension0929",
"html_url": "https://github.com/kension0929",
"followers_url": "https://api.github.com/users/kension0929/followers",
"following_url": "https://api.github.com/users/kension0929/following{/other_user}",
"gists_url": "https://api.github.com/users/kension0929/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kension0929/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kension0929/subscriptions",
"organizations_url": "https://api.github.com/users/kension0929/orgs",
"repos_url": "https://api.github.com/users/kension0929/repos",
"events_url": "https://api.github.com/users/kension0929/events{/privacy}",
"received_events_url": "https://api.github.com/users/kension0929/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I am also facing same issue while trying to install transformers version 3.0.2...Did you find any solution?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"\r\nhow to fix this....pls help\r\n"
] | 1,595 | 1,605 | 1,603 | NONE | null | # ❓ Questions & Help
When I install sentence-transformers on Linux by python

, I got an error message:
ERROR: Could not find a version that satisfies the requirement transformers>=3.0.2 (from sentence-transformers) (fr
om versions: 0.1, 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.4.0, 2.4.1, 2.5.0, 2.5.1)
ERROR: No matching distribution found for transformers>=3.0.2 (from sentence-transformers)
system: Linux 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1+deb9u1 (2020-06-07) x86_64 on GCP VM instance.
Is there any suggestion?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6010/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6009/comments | https://api.github.com/repos/huggingface/transformers/issues/6009/events | https://github.com/huggingface/transformers/pull/6009 | 664,874,805 | MDExOlB1bGxSZXF1ZXN0NDU2MDQxMzc3 | 6,009 | [model_cards] roberta-base-finetuned-yelp-polarity | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=h1) Report\n> Merging [#6009](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f5b5c5bd7e213dea1645f07902b681f88e3cf954&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6009 +/- ##\n==========================================\n- Coverage 78.52% 78.51% -0.01% \n==========================================\n Files 146 146 \n Lines 26252 26252 \n==========================================\n- Hits 20614 20612 -2 \n- Misses 5638 5640 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=footer). Last update [f5b5c5b...fed314c](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | MEMBER | null | Model card for [roberta-base-finetuned-yelp-polarity](https://huggingface.co/VictorSanh/roberta-base-finetuned-yelp-polarity). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6009/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6009",
"html_url": "https://github.com/huggingface/transformers/pull/6009",
"diff_url": "https://github.com/huggingface/transformers/pull/6009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6009.patch",
"merged_at": 1595598322000
} |
https://api.github.com/repos/huggingface/transformers/issues/6008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6008/comments | https://api.github.com/repos/huggingface/transformers/issues/6008/events | https://github.com/huggingface/transformers/pull/6008 | 664,802,063 | MDExOlB1bGxSZXF1ZXN0NDU1OTgxOTE4 | 6,008 | fix: model card readme clutter | {
"login": "csarron",
"id": 8440740,
"node_id": "MDQ6VXNlcjg0NDA3NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8440740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/csarron",
"html_url": "https://github.com/csarron",
"followers_url": "https://api.github.com/users/csarron/followers",
"following_url": "https://api.github.com/users/csarron/following{/other_user}",
"gists_url": "https://api.github.com/users/csarron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/csarron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/csarron/subscriptions",
"organizations_url": "https://api.github.com/users/csarron/orgs",
"repos_url": "https://api.github.com/users/csarron/repos",
"events_url": "https://api.github.com/users/csarron/events{/privacy}",
"received_events_url": "https://api.github.com/users/csarron/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"looks good now 👍\r\n\r\n(we'll have a better system for model cards soon-ish)"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | this removes the clutter line in the readme.md of model card `csarron/roberta-base-squad-v1`. It also fixes the result table. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6008/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6008",
"html_url": "https://github.com/huggingface/transformers/pull/6008",
"diff_url": "https://github.com/huggingface/transformers/pull/6008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6008.patch",
"merged_at": 1595578673000
} |
https://api.github.com/repos/huggingface/transformers/issues/6007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6007/comments | https://api.github.com/repos/huggingface/transformers/issues/6007/events | https://github.com/huggingface/transformers/issues/6007 | 664,795,308 | MDU6SXNzdWU2NjQ3OTUzMDg= | 6,007 | Fine tune T5 for paraphrase generation | {
"login": "mengyahuUSTC-PU",
"id": 57302843,
"node_id": "MDQ6VXNlcjU3MzAyODQz",
"avatar_url": "https://avatars.githubusercontent.com/u/57302843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mengyahuUSTC-PU",
"html_url": "https://github.com/mengyahuUSTC-PU",
"followers_url": "https://api.github.com/users/mengyahuUSTC-PU/followers",
"following_url": "https://api.github.com/users/mengyahuUSTC-PU/following{/other_user}",
"gists_url": "https://api.github.com/users/mengyahuUSTC-PU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mengyahuUSTC-PU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mengyahuUSTC-PU/subscriptions",
"organizations_url": "https://api.github.com/users/mengyahuUSTC-PU/orgs",
"repos_url": "https://api.github.com/users/mengyahuUSTC-PU/repos",
"events_url": "https://api.github.com/users/mengyahuUSTC-PU/events{/privacy}",
"received_events_url": "https://api.github.com/users/mengyahuUSTC-PU/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Phew, won't be able answer all the questions in single comment, will try my best.\r\n\r\n2. IMO for generating paraphrases you probably won't need negative examples.\r\n\r\n3. You assumption is correct, knowledge learned from one task can be useful for other similar tasks. You can approach this as a multitask problem.\r\n 1. identify if two sentences are paraphrases of each other\r\n 2. given one sentence generate it's paraphrase\r\n\r\n With T5 you can use task prefixes for multitask learning, so for identification your example could look something like\r\n```\r\ninput_text: \"sent1: sentence 1 text sent2: sentence two text\"\r\ntarget_text: \"equivalent\" if sent2 is paraphrase of sent1 else \"not equivalent\"\r\n```\r\nand for generation\r\n```\r\ninput_text: \"paraphrase: sentence\"\r\ntaregt_tetx: \"paraphrase of sentence\"\r\n```\r\n\r\n4. Task prefixes are not required for T5 (required when doing multitask training), but if your task is similar to one of the tasks used in T5's pre-training mixture, then use the prefixes so you can exploit the knowledge already learned by the model. \r\nIn my notebook I didn't use task prefixes (even though I was doing sentiment classification) because I wanted to see if it makes any difference if we don't use prefixes. Again use task prefixes when doing multitask learning or if your task similar to one of the tasks used in T5's pre-training mixture.\r\n\r\n5. Check which dataset is closer to your own task and make decide on that.\r\n\r\n10) I used `model.model` because here the first `model` is an instance of lightening model and the HF model is initialized in the first model so `model.model`, but once you save using `.save_pretrained` then you can load using `.from_pretrained` and you can do `model.generate`.\r\n\r\nAnd for evaluation you can use BLUE, ROUGE and METEOR metrics. I usually use [nlg-eval ](https://github.com/Maluuba/nlg-eval)for calculating theses metrics. 
Generate predictions on your test data, then give your original reference file and generated file to nlg eval and it'll calculate the metrics.\r\n\r\nAnd yes `.from_pretrained` sets model in eval model by default.\r\n\r\nHope this helps.\r\n\r\n",
"Thank you, @patil-suraj ! I learn a lot from your answers!\r\n\r\n**4.** I thought T5 is already pretrained with several with several different tasks, which means T5 is multitasking model. We can use the prefix in the appendix of the paper to perform the corresponding task. Though we finetuned it for a new task (like in your notebook), the ability for pretrained task is not lost, right? If so, it is surprising for me T5 didn't mess thing up and knows what to do when you didn't give it any prefix.\r\n\r\n**5. & 3.** If I want to finetune on QQP, should I also use MRPC's data set (i.e. my own data+ MRPC)? On the other hand, if I train a new prefix, should I use QQP + MRPC + my own data? Will the finetuned T5 overfit a little for QQP and MRPC, as the model see them several times (though for the training of different prefix)? Similarly, if I use the QQP+MRPC+my dataset to finetune T5 paraphrase detection AND then use the positive examples in the QQP+MRPC+my data set to finetune T5 paraphrase generation, will this be information leakage? Should I avoid using same positive examples in two tasks?\r\n\r\n**10.** none of the example uses 'model.train()' to set the mode for training. Is this redundant? \r\n\r\n**11**. Thanks for suggestion nlg-eval ! However, metrics like BLUE can't really evaluate the quality of paraphrase generated. ( Like really good ones should have diverse phrases and structures but still same meaning).\r\n\r\nHope someone can address question 1. 6. 7. 8. 9. too.\r\n\r\n",
"**4.** Yes T5 is a multitask model and you can use the the prefixes to perform the corresponding task. But note that, the results reported on individual tasks in the paper are reported after again fine-tuning the model on that task specifically. \r\nAnd after fine-tuning the model can forget about other tasks\r\n\r\n**5.** To answer the first question, you probably won't need to use those datasets, by using task prefix the model can exploit already available knowledge. \r\n\r\n**10**. pytorch lightning automatically puts model in train mode for train loop and in eval mode for eval loop.\r\n\r\n**11**. Yes, BLUE won't make much sense, ROUGE seems like a good metric for this task. ",
"Thanks, @patil-suraj . \r\n\r\n**4.** Could you explain what do you mean by forget? Here is my understanding: model is first fine-tuned on task **A** with data set **a** using prefix **Aa**, so now the model has the set of parameters **aa** and we call it model **AA**; Then I use the resulting model to further fine tune on task **B** with data set **b** using **Bb**, so the model's parameters change to **bb** and we call it model **BB**. Thus, if we use the final model **BB** to perform task on task **A**, the model may/may not 'recoganize' prefix **Aa**, but the performance will be worse than model **AA**.\r\n\r\nIf what I say above is correct, then my original understanding that transfer learning 'using the same model (structure and set of parameters are the same) for different tasks' is wrong. If so, the transfer learning or learning from multiple tasks are just give better initialization using the current task result for the next task.\r\n\r\n **5.** If the understanding in 4 is correct, I think I may need to reuse the data set when training a new prefix.\r\n",
"**4.**, forget here is the context of multitask learning, if you take a multitask model and then only fine-tune it for one task then there's a chance that it can forget about other tasks.",
"Thanks,@patil-suraj!\r\n How about just one prefix/task? Will the model forget? \r\n\r\nFor example, I have paraphrase data set A and paraphrase data set B.\r\n**Fine tune 1:** I first fine tune t5-large on data set A using prefix 'para:' with 2 epoch. The resulting model is T5-A.\r\n I then fine tune t5-A on data set B using prefix 'para:' with 2 epoch. The resulting model is T5-B.\r\n**Fine tune 2:** I first combine data set A and data set B into one file. Then I fine tune t5-large on the combined data set using prefix 'para:' with 2 epoch. The resulting model is T5-2.\r\n\r\nWill T5-B forget about data set A? I tried two fine tunning methods and T5-2 seems worse than T5-B (T5-2 with more epoches seems worse than T5-B too). \r\n\r\nMy thought: If it is gradient descent and both methods have converged and only one optimal solution, they should have no difference. However, in real life, there are maybe a lot of local optimum, numerically, there is no guarantee which of the method is better andThe T5-2 should have a higher chance to be better as it has larger data set and prevent overfitting.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
Dear all,
I am new to NLP and has a lot of questions. Sorry to ask this long list here. I tried asking on huggingface's forum but as a new user, I can only put 2 lines there.
My goal is to fine-tuned t5-large for paraphrase generation. I found [this code](https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-/blob/master/t5-pretrained-question-paraphraser.ipynb) which is based on [this code](https://github.com/patil-suraj/exploring-T5). So I just modified to further fine tune on my dataset. My questions( I also asked some of them on [the github code mentioned above](https://github.com/patil-suraj/exploring-T5/issues/3) but I feel these question may be better address here):
1. I trained for 2 epochs and the generated paraphrases look good. When I trained for 11 epochs, the model seemed overfitted (the generated paraphrases are similar to the original sentences). Do you have any recommendation for further improving the performance besides decreasing the number of epochs?
2. For paraphrase generation using T5 as a text-to-text task, I don't know how to utilize the negative examples (pairs that are not paraphrases) directly here. Any recommendation?
3. One idea I have to include the negative examples is : I plan to first further fine tune T5-large's paraphrase identification with my data set (with positive and negative examples) and then used this fine tuned version to further fine tune on paraphrase generation. My assumption is the information learned in paraphrase identification task will help improve paraphrase generation. Is this correct?
4. I am also a little confused about the prefix.
On huggingface'T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g.: for translation: translate English to German: …, summarize: …. For more information about which prefix to use, it is easiest to look into Appendix D of the paper '.
Thus, I think the prefix notifies T5 which task it should be performing. [The answer of this thread](https://github.com/huggingface/transformers/issues/4092) agrees with my understanding. However, in the first two examples [here](https://github.com/patil-suraj/exploring-T5), the code seems to only add "</s>" at the end of the sentence but no prefix. Could you tell me why? Does that mean these fine-tuned models will not perform T5's pretraining tasks but only their specific trained task, so we don't need a prefix?
5. Also, MRPC and QQP are both paraphrase identification tasks. If I want to fine tune, should I use my data set to fine tune with both of their prefixes, fine tune with one of their prefixes, or create my own prefix?
6. The loss function of the code is cross entropy, which is not the best for this task. I am thinking if I can use the paraphrase identification result (like the probability of being paraphrase) as the target function. Is this OK? I feel it maybe suuuupppeer slow. And I am not really sure how to implement it.
7. I have 3 paraphrase datasets (let's call them A, B, C) from different sources. Previously, I first trained the model on A for 2 epochs, then loaded this model as the pretrained model to further train on B for 2 epochs, and then C.
I then combined A, B, and C into one dataset and directly trained for 2 epochs. The resulting two models have different results and the second one is worse. I used the same random seeds for them. Any idea?
8. I set early_stop_callback= True and set max_epochs=32, then it stops at epoch 11. But if I set max_epochs = 6, it stops at epoch 3. I don't understand, as I thought it will stop at epoch 6. I have the same random seed.
9. Another strange thing during training, I saw this on the screen:
Epoch 10: 100%............(time, loss et al)...
INFO:_main_:avg_train_loss = tensor(..)
INFO:_main_:epoch = 8
........
Why is the epoch number not the same?!
10. what is the correct way to evaluate on testing set? I saw several different examples.
[In this example, ](https://www.kaggle.com/enzoamp/t5-for-q-a-training-tutorial-pytorch)
t5 = T5ForConditionalGeneration.from_pretrained('output/')
input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True) # Batch size 1
**t5.eval()**
generated_ids = t5.generate(
input_ids=input_ids,
num_beams=1,
max_length=80,
#repetition_penalty=2.5
).squeeze()
predicted_span = tokenizer.decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
return predicted_span
[This code has two examples:](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb)
First directly:
outs = **model.model.generate**(input_ids=batch['source_ids'].cuda(),
attention_mask=batch['source_mask'].cuda(),
max_length=2)
He also has :
loader = DataLoader(dataset, batch_size=32, num_workers=4)
**model.model.eval()**
outputs = []
targets = []
for batch in tqdm(loader):
outs = model.model.generate(input_ids=batch['source_ids'].cuda(),
attention_mask=batch['source_mask'].cuda(),
max_length=2)
Is eval() necessary? From [Hugging Face's docs,](https://huggingface.co/transformers/model_doc/auto.html) it seems unnecessary when the model comes from `from_pretrained`:
"The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated) To train the model, you should first set it back in training mode with model.train()"
On the other hand, none of the examples above uses 'model.train()' to set the mode; they train the model directly. I am confused.
Is model.model necessary?
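For what it's worth, here is a minimal toy in plain Python (mimicking the train()/eval() toggle; not real transformers or PyTorch code) of why the mode matters: dropout only fires in training mode, so a forward pass in eval mode is deterministic:

```python
import random

class ToyDropoutModel:
    """Toy stand-in for a module with a training-mode flag.

    Mimics the PyTorch convention: dropout is active only in training mode,
    and loading a pretrained model leaves it in eval mode by default.
    """

    def __init__(self, p=0.5):
        self.p = p
        self.training = False  # from_pretrained-style loading defaults to eval mode

    def train(self):
        self.training = True
        return self

    def eval(self):
        self.training = False
        return self

    def forward(self, xs):
        if self.training:
            # Training mode: randomly zero out inputs with probability p.
            return [0.0 if random.random() < self.p else x for x in xs]
        # Eval mode: dropout disabled, output is deterministic.
        return list(xs)

model = ToyDropoutModel()
print(model.forward([1.0, 2.0, 3.0]))  # eval mode by default: always [1.0, 2.0, 3.0]
```

Under that convention, calling eval() again after from_pretrained is redundant but harmless, while fine-tuning code does need train() to re-enable dropout.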
Thanks!! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6007/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6007/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6006/comments | https://api.github.com/repos/huggingface/transformers/issues/6006/events | https://github.com/huggingface/transformers/pull/6006 | 664,754,862 | MDExOlB1bGxSZXF1ZXN0NDU1OTQzMTA3 | 6,006 | Model cards: add roberta-base-squad-v1 and bert-base-uncased-squad-v1 | {
"login": "csarron",
"id": 8440740,
"node_id": "MDQ6VXNlcjg0NDA3NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8440740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/csarron",
"html_url": "https://github.com/csarron",
"followers_url": "https://api.github.com/users/csarron/followers",
"following_url": "https://api.github.com/users/csarron/following{/other_user}",
"gists_url": "https://api.github.com/users/csarron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/csarron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/csarron/subscriptions",
"organizations_url": "https://api.github.com/users/csarron/orgs",
"repos_url": "https://api.github.com/users/csarron/repos",
"events_url": "https://api.github.com/users/csarron/events{/privacy}",
"received_events_url": "https://api.github.com/users/csarron/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks!",
"> Thanks!\r\n\r\nLove this amazing library! Happy to contribute! 😊"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6006/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6006",
"html_url": "https://github.com/huggingface/transformers/pull/6006",
"diff_url": "https://github.com/huggingface/transformers/pull/6006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6006.patch",
"merged_at": 1595541228000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/6005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6005/comments | https://api.github.com/repos/huggingface/transformers/issues/6005/events | https://github.com/huggingface/transformers/pull/6005 | 664,747,687 | MDExOlB1bGxSZXF1ZXN0NDU1OTM3MTQ0 | 6,005 | Model utils doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=h1) Report\n> Merging [#6005](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7e251ae0395bd3e558c633fe8664fbdf612cb4f4&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `95.83%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6005 +/- ##\n==========================================\n- Coverage 78.66% 78.66% -0.01% \n==========================================\n Files 146 146 \n Lines 26230 26231 +1 \n==========================================\n Hits 20634 20634 \n- Misses 5596 5597 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <90.90%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.20% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute 
<relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=footer). Last update [7e251ae...ead6b62](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,596 | 1,595 | COLLABORATOR | null | This PR documents all the model utilities and fixes a few things in the `PreTrainedModel`/`TFPreTrainedModel`.
It introduces a new category I called "internal" that comes at the very end of the package documentation, which is aimed at documenting all our public internal methods (i.e., public in their modules, but not in the project init). This should be useful for people who tweak our code and reuse those pieces.
Preview of the new file in the docs is [here](https://64241-155220641-gh.circle-artifacts.com/0/docs/_build/html/internal/modeling_utils.html) (and you can navigate on the other pages if you want to check the smaller changes). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6005/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6005",
"html_url": "https://github.com/huggingface/transformers/pull/6005",
"diff_url": "https://github.com/huggingface/transformers/pull/6005.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6005.patch",
"merged_at": 1595596589000
} |
https://api.github.com/repos/huggingface/transformers/issues/6004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6004/comments | https://api.github.com/repos/huggingface/transformers/issues/6004/events | https://github.com/huggingface/transformers/issues/6004 | 664,743,318 | MDU6SXNzdWU2NjQ3NDMzMTg= | 6,004 | CI/Examples: ModuleNotFoundError: No module named '_sqlite3' | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Very weird. Only the seq2seq examples have a dependency on nltk, which in turn has a dependency on sqlite3?\r\n\r\nAre we pinning a recent nltk version?",
"We are requiring rouge_scorer, which uses nltk for stemming. There is no pinning.",
"@mfuntowicz fixed, thanks!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | This is a common issue, all solutions I have found to address involve recompiling python.
like [1](https://stackoverflow.com/questions/1210664/no-module-named-sqlite3) [2](https://www.virtualizationteam.com/cloud/running-vcd-cli-fail-with-the-following-error-modulenotfounderror-no-module-named-_sqlite3.html)
Full Traceback:
```
2020-07-23T00:30:23.1433578Z ==================================== ERRORS ====================================
2020-07-23T00:30:23.1435598Z ____________ ERROR collecting examples/seq2seq/test_bash_script.py _____________
2020-07-23T00:30:23.1437122Z ImportError while importing test module '/home/hf/actions-runner_transformers/_work/transformers/transformers/examples/seq2seq/test_bash_script.py'.
2020-07-23T00:30:23.1437481Z Hint: make sure your test modules/packages have valid Python names.
2020-07-23T00:30:23.1437737Z Traceback:
2020-07-23T00:30:23.1438102Z examples/seq2seq/finetune.py:21: in <module>
2020-07-23T00:30:23.1438342Z from .utils import (
2020-07-23T00:30:23.1438573Z examples/seq2seq/utils.py:14: in <module>
2020-07-23T00:30:23.1438821Z from rouge_score import rouge_scorer, scoring
2020-07-23T00:30:23.1439560Z .env/lib/python3.7/site-packages/rouge_score/rouge_scorer.py:41: in <module>
2020-07-23T00:30:23.1440092Z from nltk.stem import porter
2020-07-23T00:30:23.1440789Z .env/lib/python3.7/site-packages/nltk/__init__.py:149: in <module>
2020-07-23T00:30:23.1441053Z from nltk.translate import *
2020-07-23T00:30:23.1441694Z .env/lib/python3.7/site-packages/nltk/translate/__init__.py:23: in <module>
2020-07-23T00:30:23.1441956Z from nltk.translate.meteor_score import meteor_score as meteor
2020-07-23T00:30:23.1442637Z .env/lib/python3.7/site-packages/nltk/translate/meteor_score.py:10: in <module>
2020-07-23T00:30:23.1442916Z from nltk.stem.porter import PorterStemmer
2020-07-23T00:30:23.1443523Z .env/lib/python3.7/site-packages/nltk/stem/__init__.py:29: in <module>
2020-07-23T00:30:23.1443794Z from nltk.stem.snowball import SnowballStemmer
2020-07-23T00:30:23.1444468Z .env/lib/python3.7/site-packages/nltk/stem/snowball.py:29: in <module>
2020-07-23T00:30:23.1444741Z from nltk.corpus import stopwords
2020-07-23T00:30:23.1445379Z .env/lib/python3.7/site-packages/nltk/corpus/__init__.py:66: in <module>
2020-07-23T00:30:23.1445750Z from nltk.corpus.reader import *
2020-07-23T00:30:23.1446438Z .env/lib/python3.7/site-packages/nltk/corpus/reader/__init__.py:105: in <module>
2020-07-23T00:30:23.1446711Z from nltk.corpus.reader.panlex_lite import *
2020-07-23T00:30:23.1447355Z .env/lib/python3.7/site-packages/nltk/corpus/reader/panlex_lite.py:15: in <module>
2020-07-23T00:30:23.1447591Z import sqlite3
2020-07-23T00:30:23.1447839Z /home/hf/.pyenv/versions/3.7.6/lib/python3.7/sqlite3/__init__.py:23: in <module>
2020-07-23T00:30:23.1448311Z from sqlite3.dbapi2 import *
2020-07-23T00:30:23.1448584Z /home/hf/.pyenv/versions/3.7.6/lib/python3.7/sqlite3/dbapi2.py:27: in <module>
2020-07-23T00:30:23.1448845Z from _sqlite3 import *
2020-07-23T00:30:23.1449485Z E ModuleNotFoundError: No module named '_sqlite3'
2020-07-23T00:30:23.1449638Z
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6004/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6003/comments | https://api.github.com/repos/huggingface/transformers/issues/6003/events | https://github.com/huggingface/transformers/pull/6003 | 664,737,218 | MDExOlB1bGxSZXF1ZXN0NDU1OTI4NTIz | 6,003 | MBART: support summarization tasks where max_src_len > max_tgt_len | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=h1) Report\n> Merging [#6003](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d1d15d6f2de9e2cde48ff3ea2072add3311ce2ac&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6003 +/- ##\n=======================================\n Coverage 78.52% 78.53% \n=======================================\n Files 146 146 \n Lines 26314 26316 +2 \n=======================================\n+ Hits 20664 20667 +3 \n+ Misses 5650 5649 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6003/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.71% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6003/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=footer). Last update [d1d15d6...6c95793](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"we have a good test for `MbartDataset` I'll add a test for just the tokenizer to `tokenization_mbart.py`"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Previously, MBartTokenizer.prepare_translation_batch always truncated `src_texts` and `tgt_texts` to the same length.
This PR exposes a `max_target_length` kwarg, which, if specified, will control truncation of target_texts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6003/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6003",
"html_url": "https://github.com/huggingface/transformers/pull/6003",
"diff_url": "https://github.com/huggingface/transformers/pull/6003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6003.patch",
"merged_at": 1595938692000
} |
https://api.github.com/repos/huggingface/transformers/issues/6002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6002/comments | https://api.github.com/repos/huggingface/transformers/issues/6002/events | https://github.com/huggingface/transformers/issues/6002 | 664,733,034 | MDU6SXNzdWU2NjQ3MzMwMzQ= | 6,002 | Deebert Examples test failure | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Ji-Xin would you mind taking a stab at this?\r\nCommand:\r\n```bash\r\nRUN_SLOW=1 pytest examples/deebert/ \r\n```"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | `DeeBertTests.test_glue_deebert`
Excerpt from [CI](https://pipelines.actions.githubusercontent.com/SFFqAjp6ciVZiZmfZfjie9y9Q96dfpUE8sJvWAtTDWoFlixGkf/_apis/pipelines/1/runs/4783/signedlogcontent/3?urlExpires=2020-07-23T20%3A02%3A20.7871420Z&urlSigningMethod=HMACV1&urlSignature=XL%2Fj%2F2NSBD3Q174ifxC4jTOQUloQMx3UFZ%2ByaM%2Beomk%3D)
```python
2020-07-23T00:30:23.1537872Z with patch.object(sys, "argv", train_args):
2020-07-23T00:30:23.1538109Z result = run_glue_deebert.main()
2020-07-23T00:30:23.1538324Z for value in result.values():
2020-07-23T00:30:23.1538573Z > self.assertGreaterEqual(value, 0.75)
2020-07-23T00:30:23.1538856Z E AssertionError: 0.6666666666666666 not greater than or equal to 0.75
2020-07-23T00:30:23.1539030Z
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6002/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6001/comments | https://api.github.com/repos/huggingface/transformers/issues/6001/events | https://github.com/huggingface/transformers/issues/6001 | 664,728,469 | MDU6SXNzdWU2NjQ3Mjg0Njk= | 6,001 | MbartDataset can support Summarization | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | need to allow max_source_length != max_seq_length | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6001/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/6000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/6000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/6000/comments | https://api.github.com/repos/huggingface/transformers/issues/6000/events | https://github.com/huggingface/transformers/pull/6000 | 664,720,406 | MDExOlB1bGxSZXF1ZXN0NDU1OTE0NjA5 | 6,000 | Bert german dbmdz uncased sentence stsb | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=h1) Report\n> Merging [#6000](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e161955105f7e012dba5d51842923fc25fc5cdf&el=desc) will **increase** coverage by `1.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6000 +/- ##\n==========================================\n+ Coverage 77.32% 78.51% +1.18% \n==========================================\n Files 146 146 \n Lines 26242 26242 \n==========================================\n+ Hits 20291 20603 +312 \n+ Misses 5951 5639 -312 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | 
:arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=footer). Last update [6e16195...e043625](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great work!\r\n\r\n@stefan-it maybe we need to implement a \"Fine-tune\" button similar to GitHub's \"Fork\" button at some point 😉\r\n\r\n@PhilipMay What's your feedback on Optuna?",
"> @PhilipMay What's your feedback on Optuna?\r\n\r\n@julien-c Optuna is awsome. I love it. Very good documentation, clean code, nice integrations and I like the pruning integration."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Model Card for bert-german-dbmdz-uncased-sentence-stsb | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/6000/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/6000/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/6000",
"html_url": "https://github.com/huggingface/transformers/pull/6000",
"diff_url": "https://github.com/huggingface/transformers/pull/6000.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/6000.patch",
"merged_at": 1595541405000
} |
https://api.github.com/repos/huggingface/transformers/issues/5999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5999/comments | https://api.github.com/repos/huggingface/transformers/issues/5999/events | https://github.com/huggingface/transformers/pull/5999 | 664,598,131 | MDExOlB1bGxSZXF1ZXN0NDU1ODEzNjYy | 5,999 | Fix #5974 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=h1) Report\n> Merging [#5999](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76f52324b1e2d2bb631c80895a5f16ddc303a099&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5999 +/- ##\n=======================================\n Coverage 78.66% 78.66% \n=======================================\n Files 146 146 \n Lines 26230 26229 -1 \n=======================================\n Hits 20633 20633 \n+ Misses 5597 5596 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.12% <0.00%> (-0.04%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=footer). Last update [76f5232...c6a5a2a](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | Small bug introduced by the model outputs PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5999/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5999",
"html_url": "https://github.com/huggingface/transformers/pull/5999",
"diff_url": "https://github.com/huggingface/transformers/pull/5999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5999.patch",
"merged_at": 1595526689000
} |
https://api.github.com/repos/huggingface/transformers/issues/5998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5998/comments | https://api.github.com/repos/huggingface/transformers/issues/5998/events | https://github.com/huggingface/transformers/pull/5998 | 664,588,884 | MDExOlB1bGxSZXF1ZXN0NDU1ODA2MDg5 | 5,998 | MbartTokenizer: do not hardcode vocab size | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=h1) Report\n> Merging [#5998](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d7506ea10ca92886fd1bb3b5306a1a720c58fe&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5998 +/- ##\n=======================================\n Coverage 78.48% 78.49% \n=======================================\n Files 146 146 \n Lines 26230 26232 +2 \n=======================================\n+ Hits 20587 20591 +4 \n+ Misses 5643 5641 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.58% <100.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=footer). Last update [33d7506...39fdead](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | language codes should start at the end of the standard vocab. If the standard vocab is smaller, this number is not 250,001. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5998/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5998",
"html_url": "https://github.com/huggingface/transformers/pull/5998",
"diff_url": "https://github.com/huggingface/transformers/pull/5998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5998.patch",
"merged_at": 1595533274000
} |
https://api.github.com/repos/huggingface/transformers/issues/5997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5997/comments | https://api.github.com/repos/huggingface/transformers/issues/5997/events | https://github.com/huggingface/transformers/issues/5997 | 664,582,983 | MDU6SXNzdWU2NjQ1ODI5ODM= | 5,997 | bug in trainer.py line297 | {
"login": "Jiaxin-Wen",
"id": 48146603,
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jiaxin-Wen",
"html_url": "https://github.com/Jiaxin-Wen",
"followers_url": "https://api.github.com/users/Jiaxin-Wen/followers",
"following_url": "https://api.github.com/users/Jiaxin-Wen/following{/other_user}",
"gists_url": "https://api.github.com/users/Jiaxin-Wen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jiaxin-Wen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jiaxin-Wen/subscriptions",
"organizations_url": "https://api.github.com/users/Jiaxin-Wen/orgs",
"repos_url": "https://api.github.com/users/Jiaxin-Wen/repos",
"events_url": "https://api.github.com/users/Jiaxin-Wen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jiaxin-Wen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Will be fixed by #5982",
"Fixed by #5982"
] | 1,595 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
I found this bug while running example/token_classification/run_ner.py.
I fixed it by deleting the 'self.' prefix.
## Reproduce
just execute `run_ner.py`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5997/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5996/comments | https://api.github.com/repos/huggingface/transformers/issues/5996/events | https://github.com/huggingface/transformers/issues/5996 | 664,454,377 | MDU6SXNzdWU2NjQ0NTQzNzc= | 5,996 | T5 pre training on different languages from scratch | {
"login": "ashispapu",
"id": 9023527,
"node_id": "MDQ6VXNlcjkwMjM1Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9023527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashispapu",
"html_url": "https://github.com/ashispapu",
"followers_url": "https://api.github.com/users/ashispapu/followers",
"following_url": "https://api.github.com/users/ashispapu/following{/other_user}",
"gists_url": "https://api.github.com/users/ashispapu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashispapu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashispapu/subscriptions",
"organizations_url": "https://api.github.com/users/ashispapu/orgs",
"repos_url": "https://api.github.com/users/ashispapu/repos",
"events_url": "https://api.github.com/users/ashispapu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashispapu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @ashispapu, T5 pre-training is not yet available in transformers, I'm working on it but might take some time. Feel free to take a stab.",
"Hi @patil-suraj Thanks for the response. Let me try it.",
"@patil-suraj, thanks for working on the different T5 ropes. Can you point to the branch you 're working for pre-training? I'd be interested to contribute.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm interested in T5 pre-training from scratch. Any update on this project?"
] | 1,595 | 1,631 | 1,606 | NONE | null | Hi Team,
I was exploring the pre-training procedure/documentation for T5 models (small, base, large) on different languages from scratch, but did not come across anything that could help me. Please share if there is any resource for this. If not, could you please consider including it?
Thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5996/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5995/comments | https://api.github.com/repos/huggingface/transformers/issues/5995/events | https://github.com/huggingface/transformers/pull/5995 | 664,435,922 | MDExOlB1bGxSZXF1ZXN0NDU1Njc2MDEy | 5,995 | [WIP] Trainer supports all Datasets as train_dataset, with/without __len__ #5990 | {
"login": "j-rossi-nl",
"id": 48321582,
"node_id": "MDQ6VXNlcjQ4MzIxNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/48321582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j-rossi-nl",
"html_url": "https://github.com/j-rossi-nl",
"followers_url": "https://api.github.com/users/j-rossi-nl/followers",
"following_url": "https://api.github.com/users/j-rossi-nl/following{/other_user}",
"gists_url": "https://api.github.com/users/j-rossi-nl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j-rossi-nl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j-rossi-nl/subscriptions",
"organizations_url": "https://api.github.com/users/j-rossi-nl/orgs",
"repos_url": "https://api.github.com/users/j-rossi-nl/repos",
"events_url": "https://api.github.com/users/j-rossi-nl/events{/privacy}",
"received_events_url": "https://api.github.com/users/j-rossi-nl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> Note that there is some moving around of the code in Trainer coming in #5982 (I'll probably merge it today) so you may need to adapt a bit the code.\r\n\r\nI'll wait for the merge of #5982 and introduce the fix for #5990 after.\r\n\r\n> Note that there is no test_steps field since the evaluation is supposed to be complete on the test set (users can always pass along a shorter dataset if they want).\r\n\r\nI see. \r\nAt the moment, the functionality is that it will refuse a dataset that does not implement `__len__`, whether it is the EVAL dataset or the TEST dataset.\r\n\r\n",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=h1) Report\n> Merging [#5995](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.40%`.\n> The diff coverage is `78.57%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5995 +/- ##\n==========================================\n+ Coverage 78.50% 78.90% +0.40% \n==========================================\n Files 146 146 \n Lines 26249 26264 +15 \n==========================================\n+ Hits 20606 20723 +117 \n+ Misses 5643 5541 -102 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `60.63% <78.57%> (+19.75%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.23% <0.00%> (+0.91%)` | :arrow_up: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `97.36% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `77.90% <0.00%> (+3.48%)` | :arrow_up: |\n| 
[src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `86.73% <0.00%> (+6.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=footer). Last update [c69ea5e...4066671](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi,\r\n\r\n- Merged to include all changes from #5982\r\n- Now accepts `IterableDataset` for any dataset (TRAIN, EVAL, TEST)\r\n- Displays information (how many steps, etc...)\r\n- `test_trainer.py` includes an end-to-end test of `train()` and `predict()`\r\n\r\nIt fixes the issue #5990 with my code.\r\n\r\nChecklist:\r\n- `make test` is positive (all passed, no failed)\r\n- `make style` and `make quality`\r\n\r\nLooking forward to review.\r\n\r\nI'm still unsure about how to test what the dataset is:\r\n- the type hinting says `train_dataset: Dataset`\r\n- `pytorch` indicates it is good practice to implement `__len__` on Map-Style dataset, but in the code there is no way this could be enforced\r\n- `pytorch` relies only on the logic: user will inherit `Dataset` and implement `__len__` or user will inherit `IterableDataset` and not implement `__len__`. In `Dataloader` every time there is a doubt it is checking if the object is an instance of `Dataset` or `IterableDataset`\r\n- in my code, I followed my first rule: either the object has `__len__` or it does not. \r\n\r\nStill wondering if I should code it `pytorch`-style and trust the given answer blindly, or keep it the way it is done, which is a bit more paranoid ?\r\nAny comment appreciated.",
"- the test is whether a `Dataset` object has `__len__` or not\r\n- iterable dataset is OK for training, __only__ if `max_steps` has a strictly positive value\r\n- iterable dataset is not acceptable for evaluation or prediction\r\n\r\nThe confusion about `eval_steps` has been purged.\r\n\r\nThe test `test_trainer_iterable_dataset` in `test_trainer.py` has been extended to check for corner cases and associated exceptions. Only the exception type is checked, not exception message.\r\n\r\nChecklist:\r\n- `make test` passed (no fail)\r\n- `make style`\r\n- `make quality`\r\n\r\nLooking forward to review.",
"Nitpicking is fine in my book.\r\n\r\n- removed redundant test on dataset (eval / test)\r\n\r\nChecklist:\r\n- `make test` passed (no fail)\r\n- `make style`\r\n- `make quality`\r\n\r\nUp for review again.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, is this PR still being reviewed? I would like to use `Trainer` with an `IterableDataset` and this looks like exactly what's needed to make that happen. If you have time, I would greatly appreciate this PR to get into the next version :) thank you!",
"I realize a part has been merged, but not everything.",
"And I can't seem to find a way to re-open this PR. So I guess, I should open a new one, and link to this one...",
"@carson-sestili The PR #7858 has been merged to master and fixes the bug.\r\nYou can already use it by installing from source.\r\n",
"@j-rossi-nl Thank you very much!"
] | 1,595 | 1,603 | 1,602 | CONTRIBUTOR | null | A first PR.
Passed:
- `make test`
- `make style`
- `make quality`
Modifies:
- trainer.py: fixes issue
- test_trainer.py: calls `Trainer.train()`
It fixes only the case where the TRAINING dataset does not have the `__len__` method.
The distinction is not between `Dataset` and `IterableDataset`, but between objects that are instances of a class where `__len__` is implemented or not. This is pointed out in the [pytorch source](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#Dataset): the implementation of `__len__` is up to the user.
The test is therefore: `isinstance(dataset, collections.Sized)`
NB: fixing this for the EVAL and TEST datasets will require more code refactoring: all of them get funneled to `Trainer._prediction_loop()` without keeping track of whether it is EVAL or TEST, which makes the usage of `TrainingArguments.eval_steps` impossible to assume (not to mention, there is no `test_steps` field in `TrainingArguments`).
Not all datasets have an implementation of the `__len__` method, therefore the trainer should not assume it is available.
The use case is:
- Dataset with `__len__`: use `num_train_epochs` or `max_steps` to specify how long should training run
- Dataset without `__len__`: use only `max_steps`
The limitation still applies to the EVAL / TEST dataset, which still has to implement `__len__` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5995",
"html_url": "https://github.com/huggingface/transformers/pull/5995",
"diff_url": "https://github.com/huggingface/transformers/pull/5995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5995.patch",
"merged_at": null
} |
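The `__len__`-based check described in the [WIP] Trainer PR above can be sketched as follows. This is an illustrative stand-in, not the actual `Trainer` code: the two dataset classes are toy substitutes for PyTorch's map-style and iterable-style datasets, and the check uses `collections.abc.Sized` (the modern location of the `collections.Sized` ABC mentioned in the PR body).

```python
import collections.abc


class MapStyleDataset:
    """Stand-in for a map-style dataset that implements __len__."""

    def __init__(self, examples):
        self.examples = examples

    def __getitem__(self, idx):
        return self.examples[idx]

    def __len__(self):
        return len(self.examples)


class StreamingDataset:
    """Stand-in for an iterable-style dataset without __len__."""

    def __iter__(self):
        return iter(range(100))


def has_length(dataset):
    # The PR's test: duck-type on the presence of __len__ instead of
    # checking for a concrete IterableDataset subclass.
    return isinstance(dataset, collections.abc.Sized)


print(has_length(MapStyleDataset([1, 2, 3])))  # True
print(has_length(StreamingDataset()))          # False
```

With this check, a trainer can require `max_steps` whenever `has_length` returns `False`, matching the corner cases the PR's tests cover.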
https://api.github.com/repos/huggingface/transformers/issues/5994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5994/comments | https://api.github.com/repos/huggingface/transformers/issues/5994/events | https://github.com/huggingface/transformers/pull/5994 | 664,369,262 | MDExOlB1bGxSZXF1ZXN0NDU1NjE5NjM2 | 5,994 | [examples (seq2seq)] fix preparing decoder_input_ids for T5 | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=h1) Report\n> Merging [#5994](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d7506ea10ca92886fd1bb3b5306a1a720c58fe&el=desc) will **decrease** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5994 +/- ##\n==========================================\n- Coverage 78.48% 78.25% -0.24% \n==========================================\n Files 146 146 \n Lines 26230 26230 \n==========================================\n- Hits 20587 20525 -62 \n- Misses 5643 5705 +62 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.71% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> 
(+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=footer). Last update [33d7506...fbf4b03](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I will let you know ASAP!",
"As far as I know, @mrm8488 hasn't used finetune.py for T5. He has linked his training colabs in model cards and they just pass `labels` directly and `decoder_input_ids` are created by the `T5ForConditionalGeneration`",
" I usually follow this one https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb",
"Also I think #5866 should be merged before this, because if `</s>` is not added then it might miss the last token ",
"> can we unittest this behavior somehow?\r\n\r\nif we move this logic to dataset or collate function then we can unit test `__getitem__` or `collat_fn`. Would this be a good idea ?",
"Yes, great idea. Whichever seems more natural.",
"Hi @sshleifer , does this PR needs any other changes before merging ?",
"sorry for being slow!"
] | 1,595 | 1,595 | 1,595 | MEMBER | null | possible fix for #5987
`</s>` should be added at the end of target text
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5994/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5994",
"html_url": "https://github.com/huggingface/transformers/pull/5994",
"diff_url": "https://github.com/huggingface/transformers/pull/5994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5994.patch",
"merged_at": 1595859044000
} |
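The target-side preparation discussed in the seq2seq PR above can be sketched as two steps: append `</s>` to the target ids (so the last real token is not dropped), then shift right to build `decoder_input_ids`. This is a schematic with toy token ids, not the library's implementation — the model performs the shift internally, and only the special-token ids (`<pad>` = 0, `</s>` = 1 for T5) are assumed here.

```python
def add_eos(target_ids, eos_token_id):
    # The fix discussed in the PR: make sure the target sequence ends
    # with </s>, otherwise the final real token can be missed.
    return target_ids + [eos_token_id]


def shift_right(label_ids, decoder_start_token_id):
    # Schematic of deriving decoder_input_ids from labels:
    # prepend the decoder start token and drop the final position.
    return [decoder_start_token_id] + label_ids[:-1]


EOS_ID, PAD_ID = 1, 0  # T5's conventional special-token ids
labels = add_eos([8774, 6, 149], EOS_ID)         # toy target token ids
decoder_input_ids = shift_right(labels, PAD_ID)
print(labels)             # [8774, 6, 149, 1]
print(decoder_input_ids)  # [0, 8774, 6, 149]
```

Moving this logic into the dataset or collate function, as suggested in the PR discussion, makes it straightforward to unit-test.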
https://api.github.com/repos/huggingface/transformers/issues/5993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5993/comments | https://api.github.com/repos/huggingface/transformers/issues/5993/events | https://github.com/huggingface/transformers/issues/5993 | 664,331,715 | MDU6SXNzdWU2NjQzMzE3MTU= | 5,993 | Store Predictions on CPU in Every Prediction Iteration (Trainer) | {
"login": "gonglinyuan",
"id": 9744170,
"node_id": "MDQ6VXNlcjk3NDQxNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9744170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gonglinyuan",
"html_url": "https://github.com/gonglinyuan",
"followers_url": "https://api.github.com/users/gonglinyuan/followers",
"following_url": "https://api.github.com/users/gonglinyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/gonglinyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gonglinyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gonglinyuan/subscriptions",
"organizations_url": "https://api.github.com/users/gonglinyuan/orgs",
"repos_url": "https://api.github.com/users/gonglinyuan/repos",
"events_url": "https://api.github.com/users/gonglinyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/gonglinyuan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | # 🚀 Feature request
Store Predictions on CPU in Every Prediction Iteration (Trainer)
## Motivation
Currently, in [`Trainer._prediction_loop`](https://github.com/huggingface/transformers/blob/33d7506ea10ca92886fd1bb3b5306a1a720c58fe/src/transformers/trainer.py#L785), the predictions (logits) of the model are stored on GPU/TPU in each iteration. After all iterations are finished, they are concatenated together and sent to the CPU. In this way, GPU/TPU memory usage increases linearly during prediction. If the test set is very large, there may be insufficient GPU/TPU memory to finish the prediction phase. To save GPU/TPU memory and allow larger-scale inference, the trainer should instead send each batch of predictions to the CPU after each iteration.
However, if we do inference on multiple devices, we would need a `distributed_concat` function to aggregate all predictions from all devices. If the predictions are already stored on CPU, we can no longer aggregate them using NCCL. Therefore, this issue is unsolvable unless we introduce another CPU-based distributed communication mechanism. Still, we can at least solve it in a single-GPU scenario.
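The single-device change amounts to moving each batch of logits off the accelerator inside the loop, rather than concatenating on-device at the end. A minimal stand-in sketch (plain Python lists play the role of GPU tensors and of `.cpu()`; `prediction_loop` is an illustrative name, not the actual Trainer API):

```python
def prediction_loop(batches):
    # Accumulate predictions on the "CPU" (a plain list) batch by batch,
    # so accelerator memory only ever holds one batch of logits.
    cpu_preds = []
    for logits in batches:        # each `logits` would live on GPU/TPU
        cpu_preds.extend(logits)  # the real code would append logits.cpu()
    return cpu_preds

prediction_loop([[0.1, 0.2], [0.3, 0.4]])  # → [0.1, 0.2, 0.3, 0.4]
```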
## Your contribution
If approved, I can write code to solve this problem for the case where inference is not running in distributed mode. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5993/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5993/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5992/comments | https://api.github.com/repos/huggingface/transformers/issues/5992/events | https://github.com/huggingface/transformers/pull/5992 | 664,324,158 | MDExOlB1bGxSZXF1ZXN0NDU1NTgxMTYw | 5,992 | ONNX documentation | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=h1) Report\n> Merging [#5992](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d32279438a73e71961f53baa4fb47d0f08c2984d&el=desc) will **increase** coverage by `0.25%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5992 +/- ##\n==========================================\n+ Coverage 78.25% 78.51% +0.25% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n+ Hits 20515 20581 +66 \n+ Misses 5699 5633 -66 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) 
| `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=footer). Last update [d322794...59c00a2](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Merging as the failures are not related to the changeset introduced in this PR"
] | 1,595 | 1,596 | 1,596 | MEMBER | null | - Rename the current **torchscript.rst** to **serialization.rst**
- Move torchscript documentation into a subsection of the above **serialization.rst**
- Introduce documentation for ONNX/ONNXRuntime in the above section
Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5992/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5992",
"html_url": "https://github.com/huggingface/transformers/pull/5992",
"diff_url": "https://github.com/huggingface/transformers/pull/5992.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5992.patch",
"merged_at": 1596013355000
} |
https://api.github.com/repos/huggingface/transformers/issues/5991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5991/comments | https://api.github.com/repos/huggingface/transformers/issues/5991/events | https://github.com/huggingface/transformers/issues/5991 | 664,301,935 | MDU6SXNzdWU2NjQzMDE5MzU= | 5,991 | T5 Tensorflow: _shift_right returns wrong result | {
"login": "maurice-g",
"id": 2892585,
"node_id": "MDQ6VXNlcjI4OTI1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2892585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maurice-g",
"html_url": "https://github.com/maurice-g",
"followers_url": "https://api.github.com/users/maurice-g/followers",
"following_url": "https://api.github.com/users/maurice-g/following{/other_user}",
"gists_url": "https://api.github.com/users/maurice-g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maurice-g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maurice-g/subscriptions",
"organizations_url": "https://api.github.com/users/maurice-g/orgs",
"repos_url": "https://api.github.com/users/maurice-g/repos",
"events_url": "https://api.github.com/users/maurice-g/events{/privacy}",
"received_events_url": "https://api.github.com/users/maurice-g/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for raising this issue! Indeed, this looks like an issue. Do you want to open a PR with the fix you propose?",
"Hey @maurice-g, \r\n\r\nThanks a lot for the fix!"
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | # 🐛 Bug
## Information
The [_shift_right](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_t5.py#L777) method of the `TFT5PreTrainedModel` returns all zeros instead of shifting the `input_ids`.
## To reproduce
```python
import tensorflow as tf
def shape_list(x):
"""Deal with dynamic shape in tensorflow cleanly."""
static = x.shape.as_list()
dynamic = tf.shape(x)
return [dynamic[i] if s is None else s for i, s in enumerate(static)]
def _shift_right(input_ids):
decoder_start_token_id = 0
pad_token_id = 0
assert (
decoder_start_token_id is not None
), "self.model.config.decoder_start_token_id has to be defined. In TF T5 it is usually set to the pad_token_id. See T5 docs for more information"
# shift inputs to the right
shifted_input_ids = tf.zeros_like(input_ids, dtype=tf.int32)
shifted_input_ids = tf.roll(shifted_input_ids, 1, axis=-1)
start_tokens = tf.fill((shape_list(shifted_input_ids)[0], 1), decoder_start_token_id)
shifted_input_ids = tf.concat([start_tokens, shifted_input_ids[:, 1:]], -1)
assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined."
# replace possible -100 values in labels by `pad_token_id`
shifted_input_ids = tf.where(
shifted_input_ids == -100, tf.fill(shape_list(shifted_input_ids), pad_token_id), shifted_input_ids
)
assert tf.math.reduce_any(
shifted_input_ids >= 0
).numpy(), "Verify that `labels` has only positive values and -100"
return shifted_input_ids
input_ids = tf.convert_to_tensor([[32000, 1, 2, 3, 0, 0, 0]])
print(_shift_right(input_ids))
```
## Expected behavior
Should shift the tensor to the right.
## Suggested solution
Replace line `shifted_input_ids = tf.zeros_like(input_ids, dtype=tf.int32)` with `shifted_input_ids = tf.cast(input_ids, tf.int32)`.
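With that one-line change applied, the intended shift-right behavior on the example above can be illustrated with plain Python lists (a stand-in for the TF tensors, assuming `decoder_start_token_id = 0` as in the snippet):

```python
def shift_right(input_ids, decoder_start_token_id=0):
    # Prepend the start token and drop the last position of each row,
    # mirroring tf.roll(input_ids, 1, axis=-1) followed by the
    # start-token concat in _shift_right.
    return [[decoder_start_token_id] + row[:-1] for row in input_ids]

shift_right([[32000, 1, 2, 3, 0, 0, 0]])
# → [[0, 32000, 1, 2, 3, 0, 0]]
```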
Further, I'd recommend removing the assertion for positive label values, as it depends on the `numpy()` method, which is not available in some cases (e.g. when using datasets loaded from tfrecord files) and will then throw an error.
Shall I open a PR directly?
## Environment info
- `transformers` version: Master (file commit `4dc6559`)
- Tensorflow version: 2.1.0
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5991/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5990/comments | https://api.github.com/repos/huggingface/transformers/issues/5990/events | https://github.com/huggingface/transformers/issues/5990 | 664,279,965 | MDU6SXNzdWU2NjQyNzk5NjU= | 5,990 | Trainer: exception raised when calling len() on IterableDataset | {
"login": "j-rossi-nl",
"id": 48321582,
"node_id": "MDQ6VXNlcjQ4MzIxNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/48321582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j-rossi-nl",
"html_url": "https://github.com/j-rossi-nl",
"followers_url": "https://api.github.com/users/j-rossi-nl/followers",
"following_url": "https://api.github.com/users/j-rossi-nl/following{/other_user}",
"gists_url": "https://api.github.com/users/j-rossi-nl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j-rossi-nl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j-rossi-nl/subscriptions",
"organizations_url": "https://api.github.com/users/j-rossi-nl/orgs",
"repos_url": "https://api.github.com/users/j-rossi-nl/repos",
"events_url": "https://api.github.com/users/j-rossi-nl/events{/privacy}",
"received_events_url": "https://api.github.com/users/j-rossi-nl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | # 🐛 Bug
## Information
While pre-training a Longformer model from scratch, I deliver the text through an `IterableDataset` object. The code called by `Trainer.train()` still calls `len()` on this object, which raises an exception.
#5829 addressed the proper creation of the Dataloader.
The problem arises when using:
* [x] my own modified scripts: see code
The task I am working on is:
* [x] my own task or dataset: pre-train a LM from scratch
## To reproduce
Here is my entire code, but it can be reproduced with any `PreTrainedModel` by using an `IterableDataset`.
```python
import logging
import random
from dataclasses import dataclass, field
from transformers import LongformerConfig, LongformerForMaskedLM, LongformerTokenizerFast
from transformers import Trainer, TrainingArguments
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import HfArgumentParser
from sklearn.model_selection import train_test_split
from pathlib import Path
from utils_pretrain import MultiTextDataset
logger = logging.getLogger(__name__)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
max_seq_len: int = field(
metadata={"help": "Input Sequence Length"}
)
num_hidden_layers: int = field(
metadata={'help': 'Number of transformer layers in Longformer'}
)
tok_dir: str = field(
metadata={
'help': 'Folder with tokenizer files'
}
)
txt_dir: str = field(
metadata={"help": "Folder with txt files for tokenizer training"}
)
filter_files: str = field(
default='[a-c]*.txt',
metadata={"help": "regex to select specific files"}
)
test_size: float = field(
default=0.05,
metadata={'help': 'proportion of the data that will be used for evaluation'}
)
def main():
parser = HfArgumentParser((ModelArguments, TrainingArguments))
model_args, train_args = parser.parse_args_into_dataclasses()
model_args: ModelArguments
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.WARN,
)
logger.warning(
"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
train_args.local_rank,
train_args.device,
train_args.n_gpu,
bool(train_args.local_rank != -1),
train_args.fp16,
)
logger.info("Training/evaluation parameters %s", train_args)
MODEL_NAME = 'allenai/longformer-base-4096'
tokenizer: LongformerTokenizerFast = LongformerTokenizerFast.from_pretrained(model_args.tok_dir)
# Customize an existing config rather than create from scratch
config: LongformerConfig = LongformerConfig.from_pretrained(MODEL_NAME)
config.max_position_embeddings = model_args.max_seq_len + 2
config.num_hidden_layers = model_args.num_hidden_layers
config.attention_window = [512] * model_args.num_hidden_layers
config.vocab_size = tokenizer.vocab_size
model = LongformerForMaskedLM(config)
data_files = list(Path(model_args.txt_dir).glob(model_args.filter_files))
shuffled_files = random.sample(data_files, len(data_files))
train_files, val_files = train_test_split(shuffled_files, test_size=model_args.test_size)
train_ds, val_ds = list(
map(
lambda x: MultiTextDataset(
files=x,
tokenizer=tokenizer,
block_size=model_args.max_seq_len
),
[train_files, val_files]
)
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=True,
mlm_probability=0.15
)
train_args: TrainingArguments
train_args.do_train = True
train_args.evaluate_during_training = True
trainer = Trainer(
model=model,
args=train_args,
data_collator=data_collator,
train_dataset=train_ds,
eval_dataset=val_ds,
)
trainer.train(train_args.output_dir)
```
The class `MultiTextDataset` inherits from `IterableDataset`. It has no `__len__` method, and determining the length would require parsing the whole dataset at once.
Here is the exception and stack trace:
```
Traceback (most recent call last):
File "longformer_pretrain.py", line 131, in <module>
main()
File "longformer_pretrain.py", line 122, in main
trainer.train(train_args.output_dir)
File "/home/jrossi/anaconda3/envs/COLIEE/lib/python3.7/site-packages/transformers/trainer.py", line 392, in train
self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1
File "/home/jrossi/anaconda3/envs/COLIEE/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 313, in __len__
length = self._IterableDataset_len_called = len(self.dataset)
TypeError: object of type 'MultiTextDataset' has no len()
```
## Expected behavior
The call to `Trainer.train()` starts the training. A case has to be made in the code to accommodate the use of an `IterableDataset`, which means not assuming that `len()` can be called on the dataset at any point.
- If a number of epochs is given, one epoch corresponds to consuming the iterable dataset until StopIteration
- If a number of steps is given, training stops after performing MAX_STEPS or catching a StopIteration, whichever comes first
- During training, the progress bar should be either a % of epochs performed, or a % of steps performed
- (optional) If a number of epochs is given, register how many steps it took to consume the iterator so a better progress bar can be shown for the next epochs (each epoch will consume the same iterator once)
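The stopping rules above can be sketched as a simplified loop (illustrative only, not the actual `Trainer` code; `train_step` is a hypothetical callback):

```python
def run_epoch(batch_iter, max_steps=None, train_step=None):
    # One epoch: consume the iterator until StopIteration, or stop
    # early once max_steps optimizer steps have been performed.
    steps = 0
    for batch in batch_iter:
        if train_step is not None:
            train_step(batch)
        steps += 1
        if max_steps is not None and steps >= max_steps:
            break
    return steps

run_epoch(iter(range(7)))               # → 7 (full pass over the iterator)
run_epoch(iter(range(7)), max_steps=3)  # → 3 (stopped early)
```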
With regard to the [PyTorch documentation](https://pytorch.org/docs/stable/data.html#), there is no guarantee that the `__len__` method will be implemented, even on `Dataset` objects.
The distinction should be made between objects that implement `__len__` and those that do not implement it.
The current code __assumes__ that the `Dataset` objects given when creating a `Trainer` implement `len()`, but there is no guarantee of this.
```python
from collections.abc import Sized
if isinstance(train_dataset, Sized): (...)
```
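A sketch of how such a guard could be used when planning the number of training steps (`num_update_steps` is a hypothetical helper, not the actual `Trainer` API):

```python
from collections.abc import Sized

def num_update_steps(dataset, max_steps, batch_size=1):
    # Use len() only when the dataset actually supports it; an
    # IterableDataset without __len__ falls back to max_steps.
    if isinstance(dataset, Sized):
        steps_per_epoch = max(len(dataset) // batch_size, 1)
        return min(max_steps, steps_per_epoch)
    return max_steps

num_update_steps([0] * 100, max_steps=1000, batch_size=10)  # → 10
num_update_steps(iter(range(100)), max_steps=1000)          # → 1000
```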
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.7.8-1.el7.elrepo.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO (for the moment)
## Fix
I can contribute. I will suggest a PR to fix this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5990/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5989/comments | https://api.github.com/repos/huggingface/transformers/issues/5989/events | https://github.com/huggingface/transformers/pull/5989 | 664,279,942 | MDExOlB1bGxSZXF1ZXN0NDU1NTQ0NzEz | 5,989 | Create README.md | {
"login": "gianfrancobarone",
"id": 18675023,
"node_id": "MDQ6VXNlcjE4Njc1MDIz",
"avatar_url": "https://avatars.githubusercontent.com/u/18675023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gianfrancobarone",
"html_url": "https://github.com/gianfrancobarone",
"followers_url": "https://api.github.com/users/gianfrancobarone/followers",
"following_url": "https://api.github.com/users/gianfrancobarone/following{/other_user}",
"gists_url": "https://api.github.com/users/gianfrancobarone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gianfrancobarone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gianfrancobarone/subscriptions",
"organizations_url": "https://api.github.com/users/gianfrancobarone/orgs",
"repos_url": "https://api.github.com/users/gianfrancobarone/repos",
"events_url": "https://api.github.com/users/gianfrancobarone/events{/privacy}",
"received_events_url": "https://api.github.com/users/gianfrancobarone/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=h1) Report\n> Merging [#5989](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d7506ea10ca92886fd1bb3b5306a1a720c58fe&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5989 +/- ##\n==========================================\n+ Coverage 78.48% 78.66% +0.17% \n==========================================\n Files 146 146 \n Lines 26230 26230 \n==========================================\n+ Hits 20587 20634 +47 \n+ Misses 5643 5596 -47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=footer). 
Last update [33d7506...ec916e3](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Hello, we are making this pull request to add a model card for our Italian sentiment political model. Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5989",
"html_url": "https://github.com/huggingface/transformers/pull/5989",
"diff_url": "https://github.com/huggingface/transformers/pull/5989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5989.patch",
"merged_at": 1595518894000
} |
https://api.github.com/repos/huggingface/transformers/issues/5988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5988/comments | https://api.github.com/repos/huggingface/transformers/issues/5988/events | https://github.com/huggingface/transformers/issues/5988 | 664,265,665 | MDU6SXNzdWU2NjQyNjU2NjU= | 5,988 | EncoderDecoderModel: weight can not be init from the checkpoint | {
"login": "nghuyong",
"id": 16462374,
"node_id": "MDQ6VXNlcjE2NDYyMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/16462374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nghuyong",
"html_url": "https://github.com/nghuyong",
"followers_url": "https://api.github.com/users/nghuyong/followers",
"following_url": "https://api.github.com/users/nghuyong/following{/other_user}",
"gists_url": "https://api.github.com/users/nghuyong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nghuyong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nghuyong/subscriptions",
"organizations_url": "https://api.github.com/users/nghuyong/orgs",
"repos_url": "https://api.github.com/users/nghuyong/repos",
"events_url": "https://api.github.com/users/nghuyong/events{/privacy}",
"received_events_url": "https://api.github.com/users/nghuyong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @nghuyong , you won't need to fix this warning, the reason for this warning is that cross-attention layer is added newly in the model as both of these models are encoder models and cross-attention is not available for encoder only models.\r\n\r\nThis warning will go away when you train the model, after training EncoderDecoder model you can load it using just `EncoderDecoderModel.from_pretrained`\r\n\r\nHope this helps.",
"@patil-suraj Thanks for your reply\r\nI still have a question, should I follow the instruction in the model card of [bert2bert-cnn_dailymail-fp16](https://github.com/huggingface/transformers/blob/master/model_cards/patrickvonplaten/bert2bert-cnn_dailymail-fp16/README.md#training-script):\r\n**make sure you checkout to the branch more_general_trainer_metric**\r\nto train a seq2seq model",
"yes, that branch has a change in `Trainer` class to make it work with `EncoderDecoder` models.",
"I will open a cleaner PR soon to integrate this branch into master.",
"@patrickvonplaten I'm also modifying `Trainer` to support generative metrics and other seq2seq functionalities like label smoothing loss etc in this PR #6769, it's for `examples/seq2seq` right now, but if you think it's useful then can try to move it into `Trainer`",
"I think it's fine to leave it separated for now! Eventually it would be nice to move everything to Trainer",
"That will be really COOL ! Thanks for your work, it will be very convenient to use~ @patrickvonplaten @patil-suraj ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,604 | 1,604 | CONTRIBUTOR | null | I try to use EncoderDecoderModel to train a Chinese summary model.
```Python
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
encoder_config = BertConfig.from_pretrained('bert-base-chinese')
decoder_config = BertConfig.from_pretrained('bert-base-chinese', is_decoder=True)
encoder_config.max_length = 512
decoder_config.max_length = 128
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-chinese', 'bert-base-chinese',
encoder_config=encoder_config,
decoder_config=decoder_config)
```
However, I get a warning, the whole encoder model doesn't init from checkpoint:
```
WARNING:transformers.modeling_utils:Some weights of the model checkpoint at bert-base-chinese were not used when initializing BertLMHeadModel: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertLMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertLMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
WARNING:transformers.modeling_utils:Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-chinese and are newly initialized: ['bert.encoder.layer.0.crossattention.self.query.weight', 'bert.encoder.layer.0.crossattention.self.query.bias',
'bert.encoder.layer.0.crossattention.self.key.weight', 'bert.encoder.layer.0.crossattention.self.key.bias',
'bert.encoder.layer.0.crossattention.self.value.weight', 'bert.encoder.layer.0.crossattention.self.value.bias', 'bert.encoder.layer.0.crossattention.output.dense.weight', 'bert.encoder.layer.0.crossattention.output.dense.bias', 'bert.encoder.layer.0.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.0.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.1.crossattention.self.query.weight', 'bert.encoder.layer.1.crossattention.self.query.bias', 'bert.encoder.layer.1.crossattention.self.key.weight', 'bert.encoder.layer.1.crossattention.self.key.bias', 'bert.encoder.layer.1.crossattention.self.value.weight', 'bert.encoder.layer.1.crossattention.self.value.bias', 'bert.encoder.layer.1.crossattention.output.dense.weight', 'bert.encoder.layer.1.crossattention.output.dense.bias', 'bert.encoder.layer.1.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.1.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.2.crossattention.self.query.weight', 'bert.encoder.layer.2.crossattention.self.query.bias', 'bert.encoder.layer.2.crossattention.self.key.weight', 'bert.encoder.layer.2.crossattention.self.key.bias', 'bert.encoder.layer.2.crossattention.self.value.weight', 'bert.encoder.layer.2.crossattention.self.value.bias', 'bert.encoder.layer.2.crossattention.output.dense.weight', 'bert.encoder.layer.2.crossattention.output.dense.bias', 'bert.encoder.layer.2.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.2.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.3.crossattention.self.query.weight', 'bert.encoder.layer.3.crossattention.self.query.bias', 'bert.encoder.layer.3.crossattention.self.key.weight', 'bert.encoder.layer.3.crossattention.self.key.bias', 'bert.encoder.layer.3.crossattention.self.value.weight', 'bert.encoder.layer.3.crossattention.self.value.bias', 'bert.encoder.layer.3.crossattention.output.dense.weight', 'bert.encoder.layer.3.crossattention.output.dense.bias', 
'bert.encoder.layer.3.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.3.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.4.crossattention.self.query.weight', 'bert.encoder.layer.4.crossattention.self.query.bias', 'bert.encoder.layer.4.crossattention.self.key.weight', 'bert.encoder.layer.4.crossattention.self.key.bias', 'bert.encoder.layer.4.crossattention.self.value.weight', 'bert.encoder.layer.4.crossattention.self.value.bias', 'bert.encoder.layer.4.crossattention.output.dense.weight', 'bert.encoder.layer.4.crossattention.output.dense.bias', 'bert.encoder.layer.4.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.4.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.5.crossattention.self.query.weight', 'bert.encoder.layer.5.crossattention.self.query.bias', 'bert.encoder.layer.5.crossattention.self.key.weight', 'bert.encoder.layer.5.crossattention.self.key.bias', 'bert.encoder.layer.5.crossattention.self.value.weight', 'bert.encoder.layer.5.crossattention.self.value.bias', 'bert.encoder.layer.5.crossattention.output.dense.weight', 'bert.encoder.layer.5.crossattention.output.dense.bias', 'bert.encoder.layer.5.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.5.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.6.crossattention.self.query.weight', 'bert.encoder.layer.6.crossattention.self.query.bias', 'bert.encoder.layer.6.crossattention.self.key.weight', 'bert.encoder.layer.6.crossattention.self.key.bias', 'bert.encoder.layer.6.crossattention.self.value.weight', 'bert.encoder.layer.6.crossattention.self.value.bias', 'bert.encoder.layer.6.crossattention.output.dense.weight', 'bert.encoder.layer.6.crossattention.output.dense.bias', 'bert.encoder.layer.6.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.6.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.7.crossattention.self.query.weight', 'bert.encoder.layer.7.crossattention.self.query.bias', 
'bert.encoder.layer.7.crossattention.self.key.weight', 'bert.encoder.layer.7.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.self.value.weight', 'bert.encoder.layer.7.crossattention.self.value.bias', 'bert.encoder.layer.7.crossattention.output.dense.weight', 'bert.encoder.layer.7.crossattention.output.dense.bias', 'bert.encoder.layer.7.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.7.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.8.crossattention.self.query.weight', 'bert.encoder.layer.8.crossattention.self.query.bias', 'bert.encoder.layer.8.crossattention.self.key.weight', 'bert.encoder.layer.8.crossattention.self.key.bias', 'bert.encoder.layer.8.crossattention.self.value.weight', 'bert.encoder.layer.8.crossattention.self.value.bias', 'bert.encoder.layer.8.crossattention.output.dense.weight', 'bert.encoder.layer.8.crossattention.output.dense.bias', 'bert.encoder.layer.8.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.8.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.9.crossattention.self.query.weight', 'bert.encoder.layer.9.crossattention.self.query.bias', 'bert.encoder.layer.9.crossattention.self.key.weight', 'bert.encoder.layer.9.crossattention.self.key.bias', 'bert.encoder.layer.9.crossattention.self.value.weight', 'bert.encoder.layer.9.crossattention.self.value.bias', 'bert.encoder.layer.9.crossattention.output.dense.weight', 'bert.encoder.layer.9.crossattention.output.dense.bias', 'bert.encoder.layer.9.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.9.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.10.crossattention.self.query.weight', 'bert.encoder.layer.10.crossattention.self.query.bias', 'bert.encoder.layer.10.crossattention.self.key.weight', 'bert.encoder.layer.10.crossattention.self.key.bias', 'bert.encoder.layer.10.crossattention.self.value.weight', 'bert.encoder.layer.10.crossattention.self.value.bias', 
'bert.encoder.layer.10.crossattention.output.dense.weight', 'bert.encoder.layer.10.crossattention.output.dense.bias', 'bert.encoder.layer.10.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.10.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.11.crossattention.self.query.weight', 'bert.encoder.layer.11.crossattention.self.query.bias', 'bert.encoder.layer.11.crossattention.self.key.weight', 'bert.encoder.layer.11.crossattention.self.key.bias', 'bert.encoder.layer.11.crossattention.self.value.weight', 'bert.encoder.layer.11.crossattention.self.value.bias', 'bert.encoder.layer.11.crossattention.output.dense.weight', 'bert.encoder.layer.11.crossattention.output.dense.bias', 'bert.encoder.layer.11.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.11.crossattention.output.LayerNorm.bias', 'cls.predictions.decoder.bias']
```
So, how do I fix this warning? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5988/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5988/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5987/comments | https://api.github.com/repos/huggingface/transformers/issues/5987/events | https://github.com/huggingface/transformers/issues/5987 | 664,216,578 | MDU6SXNzdWU2NjQyMTY1Nzg= | 5,987 | Possible bug in preparing decoder_input_ids for T5 in seq2seq finetune.py | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This could be one of the reasons for strange T5 behaviour.",
"Yes this is an excellent catch @patil-suraj !"
] | 1,595 | 1,595 | 1,595 | MEMBER | null | In finetune.py, in the `_step` method ([link](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L131)), `decoder_input_ids` are prepared like this:
```python3
source_ids, source_mask, target_ids = batch["input_ids"], batch["attention_mask"], batch["decoder_input_ids"]
decoder_input_ids = target_ids[:, :-1].contiguous() # Why this line?
lm_labels = target_ids[:, 1:].clone() # why clone?
```
The `T5ForConditionalGeneration` automatically prepares `decoder_input_ids` from `labels` when they are not passed. It uses the `_shift_right` [method](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L620).
```python3
decoder_start_token_id = self.config.decoder_start_token_id
pad_token_id = self.config.pad_token_id
# shift inputs to the right
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
shifted_input_ids[..., 0] = decoder_start_token_id
```
So finetune.py does not add the decoder start token id required by the T5 model. This works for BART because its input starts with `bos` (`<s>`), which the tokenizer adds automatically. For T5, however, this removes the first token from `lm_labels` and prepends no start token to `decoder_input_ids`.
### To reproduce
```python3
from transformers import T5ForConditionalGeneration, T5Tokenizer, T5Config
tokenizer = T5Tokenizer.from_pretrained("t5-base")
config = T5Config.from_pretrained("t5-base")
batch = tokenizer(["simple is better than complex </s>"], return_tensors="pt")
# from finetune.py
pad_token_id = tokenizer.pad_token_id
target_ids = batch["input_ids"]
decoder_input_ids = target_ids[:, :-1].contiguous() # Why this line?
lm_labels = target_ids[:, 1:].clone() # why clone?
print(decoder_input_ids[0])
# => tensor([ 650, 19, 394, 145, 1561])
print(tokenizer.convert_ids_to_tokens(decoder_input_ids[0]))
# => ['▁simple', '▁is', '▁better', '▁than', '▁complex']
print(tokenizer.decode(lm_labels[0]))
# => is better than complex
# from T5PreTrainedModel._shift_right
decoder_start_token_id = config.decoder_start_token_id
input_ids = batch["input_ids"]
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
shifted_input_ids[..., 0] = decoder_start_token_id
print(shifted_input_ids[0])
# => tensor([ 0, 650, 19, 394, 145, 1561])
print(tokenizer.convert_ids_to_tokens(shifted_input_ids[0]))
# => ['<pad>', '▁simple', '▁is', '▁better', '▁than', '▁complex']
print(tokenizer.decode(input_ids[0]))
# => simple is better than complex
```
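To make the discrepancy concrete, here is a dependency-free sketch (plain Python lists instead of tensors; the helper names are made up for illustration) contrasting the two preparations:

```python
# Plain-Python sketch of the two ways of building decoder inputs/labels.
# Token ids mirror the example above: "simple is better than complex </s>".

def finetune_style(target_ids):
    # finetune.py: slice off the last token for decoder inputs and the
    # first token for labels -- no decoder start token is prepended.
    return target_ids[:-1], target_ids[1:]

def shift_right_style(target_ids, decoder_start_token_id=0):
    # T5's _shift_right: prepend the start token (pad id 0 for T5) and
    # keep the full sequence as labels.
    return [decoder_start_token_id] + target_ids[:-1], target_ids

target = [650, 19, 394, 145, 1561, 1]  # ids incl. the trailing </s> (id 1)
print(finetune_style(target))
# ([650, 19, 394, 145, 1561], [19, 394, 145, 1561, 1])
print(shift_right_style(target))
# ([0, 650, 19, 394, 145, 1561], [650, 19, 394, 145, 1561, 1])
```

With the second form, position t of `decoder_input_ids` predicts position t of the labels, and no target token is silently dropped.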
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5987/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5986/comments | https://api.github.com/repos/huggingface/transformers/issues/5986/events | https://github.com/huggingface/transformers/issues/5986 | 664,146,960 | MDU6SXNzdWU2NjQxNDY5NjA= | 5,986 | how can I download T5-11B pretrained model? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I get an error when trying to use the hosted inference API too.\r\n\r\n\r\n> ⚠️ Error loading model Can't load weights for 't5-11b'. Make sure that: - 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models' - or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. OSError(\"Can't load weights for 't5-11b'. Make sure that:\\n\\n- 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models'\\n\\n- or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\\n\\n\")\r\n",
"This is a known issue, and is due to the fact that the [checkpoint size is 42GB](https://huggingface.co/t5-11b#list-files) while the max supported file size for Cloudfront is [20GB](https://aws.amazon.com/blogs/aws/amazon-cloudfront-support-for-20-gb-objects/).\r\n\r\nYou should use the `use_cdn=False` flag in `AutoModel.from_pretrained(use_cdn=False)` to work around this for now.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Forgot to update this issue at the time, but note that this now works transparently",
"Hi. I am having trouble when downloading t5-11b model too but with bit different issue. I am keep getting 'connection rest by peer' error and failing to download the pretrained weights. Is there other way to download the model parameters?",
"Hi @nobellant215 you could try downloading the weights file directly: `https://huggingface.co/t5-11b/resolve/main/pytorch_model.bin`"
] | 1,595 | 1,648 | 1,601 | CONTRIBUTOR | null | It gives me the following error.
**OSError: Can't load weights for 't5-11b'. Make sure that:
- 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models'
- or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.**
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5986/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5985/comments | https://api.github.com/repos/huggingface/transformers/issues/5985/events | https://github.com/huggingface/transformers/pull/5985 | 664,065,639 | MDExOlB1bGxSZXF1ZXN0NDU1MzY4MjUz | 5,985 | Update doc of the model page | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=h1) Report\n> Merging [#5985](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c3206eef44e9fbfca9ed4527f528107fcba31888&el=desc) will **decrease** coverage by `0.39%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5985 +/- ##\n==========================================\n- Coverage 78.65% 78.26% -0.40% \n==========================================\n Files 146 146 \n Lines 26227 26230 +3 \n==========================================\n- Hits 20630 20530 -100 \n- Misses 5597 5700 +103 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <100.00%> (+0.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.17% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% 
<0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=footer). Last update [c3206ee...bfd8d3a](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | Fixes docstring to conform to sphinx syntax and rephrase when needed. Also fixed bad copy-pastes from PyTorch to TF.
Preview is [here](https://64023-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/model.html). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5985/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5985",
"html_url": "https://github.com/huggingface/transformers/pull/5985",
"diff_url": "https://github.com/huggingface/transformers/pull/5985.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5985.patch",
"merged_at": 1595456098000
} |
https://api.github.com/repos/huggingface/transformers/issues/5984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5984/comments | https://api.github.com/repos/huggingface/transformers/issues/5984/events | https://github.com/huggingface/transformers/issues/5984 | 664,064,128 | MDU6SXNzdWU2NjQwNjQxMjg= | 5,984 | Albert pre-train from scratch convergence problem | {
"login": "yl-to",
"id": 23205976,
"node_id": "MDQ6VXNlcjIzMjA1OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23205976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yl-to",
"html_url": "https://github.com/yl-to",
"followers_url": "https://api.github.com/users/yl-to/followers",
"following_url": "https://api.github.com/users/yl-to/following{/other_user}",
"gists_url": "https://api.github.com/users/yl-to/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yl-to/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yl-to/subscriptions",
"organizations_url": "https://api.github.com/users/yl-to/orgs",
"repos_url": "https://api.github.com/users/yl-to/repos",
"events_url": "https://api.github.com/users/yl-to/events{/privacy}",
"received_events_url": "https://api.github.com/users/yl-to/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patil-suraj @julien-c @sgugger @sshleifer ",
"I tried different learning rate like 1.76e-3, 5e-5.(https://github.com/huggingface/transformers/issues/4727)\r\ndifferent dataset type: TextDataset, line_by_lineDataset.\r\nNone of them works.",
"I don't think your script will work like this, since the model requires two sets of labels (labels and sentence_order_label). The `DataCollatorForLanguageModeling` will generate the labels, but you will need to add something that generates the pair of sentences and adds the sentence_order_label.",
"> I don't think your script will work like this, since the model requires two sets of labels (labels and sentence_order_label). The `DataCollatorForLanguageModeling` will generate the labels, but you will need to add something that generates the pair of sentences and adds the sentence_order_label.\r\n\r\n@sgugger Thanks for the reply! The `AlbertForPreTraining` model did requires 2 different labels. I am planning to create a PR for that, before that,\r\n**Could I have any hints about why the `AlbertForMaskedLM` model didn't converge to 0?** Actually this is my major concern, I am planing to train the Albert using larger datasets like the whole wikipedia, if the MLM task even not converge on small dataset(say, wikiText-2), I think it impede my way to scale up the dataset.",
"@sgugger I used larger batch batch size the lower the learning rate did help the model converged to 0 in a very tiny text data(not wikiText-2), I repeated several articles several time to create that text dataset.",
"> I don't think your script will work like this, since the model requires two sets of labels (labels and sentence_order_label). The `DataCollatorForLanguageModeling` will generate the labels, but you will need to add something that generates the pair of sentences and adds the sentence_order_label.\r\n\r\n@sgugger hi,I want to train a albert tiny model. Could you tell me is there any methods or class that can be used to generate the pair of sentences and adds the sentence_order_label ? Thanks a lot.\r\n\r\nI have gotten it after read your commit. I think the class LineByLineWithSOPTextDataset in \"transformers/data/datasets/language_modeling.py\" can solve my problem. Thank yuo again,hh.",
"I also resolved with larger batch size (96) and lower learning_rate (5e-5). Thank you. "
] | 1,595 | 1,660 | 1,599 | CONTRIBUTOR | null | # 🐛 Bug
Albert pre-train convergence problem
- The model's training loss converged at 6.6 when using AlbertForMaskedLM as the model class
- The training loss went negative when using AlbertForPreTraining as the model class
Note: I deliberately set the eval dataset to be the same as the training set to check the training loss on the last run.
## Information
Using AlbertForMaskedLM as the model class, figure shown below:

Using AlbertForPreTraining as the model class, figure shown below:

Besides, when I used the official `run_language_modeling.py`, the training loss on WikiText-2 also did not converge to 0; it stayed at 6.6 for several epochs.
Model I am using (Bert, XLNet ...):
Albert
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official task: wikiText-2
* [ ] my own task or dataset: (give details below)
## To reproduce
```
from transformers import (
AlbertConfig,
AlbertTokenizer,
BertTokenizer,
AlbertForPreTraining,
AlbertForMaskedLM,
LineByLineTextDataset,
TextDataset,
DataCollatorForLanguageModeling,
Trainer,
TrainingArguments
)
import math
albert_base_configuration = AlbertConfig(
hidden_size=768,
num_attention_heads=12,
intermediate_size=3072,
)
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
# model = AlbertForPreTraining(config=albert_base_configuration)
model = AlbertForMaskedLM(config=albert_base_configuration)
train_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="/home/ubuntu/data_local/wikitext-2-raw/wiki.train.raw",
block_size=512,
)
eval_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="/home/ubuntu/data_local/wikitext-2-raw/wiki.test.raw",
block_size=512,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
training_args = TrainingArguments(
output_dir="./results/new",
overwrite_output_dir=True,
num_train_epochs=5,
per_gpu_train_batch_size=5,
save_steps=10_000,
save_total_limit=1,
logging_steps=100,
learning_rate=1.76e-3
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
prediction_loss_only=True,
)
trainer.train()
trainer.save_model("./results/new")
eval_output = trainer.evaluate()
perplexity = math.exp(eval_output["eval_loss"])
print({"loss": eval_output["eval_loss"]})
result = {"perplexity": perplexity}
print(result)
```
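Note that `AlbertForPreTraining` (the commented-out alternative in the script above) also expects a `sentence_order_label` alongside the MLM labels, which `DataCollatorForLanguageModeling` does not produce. A hypothetical, dependency-free sketch of how sentence-order pairs could be built (the function name and the 50/50 split are illustrative assumptions, not the transformers API):

```python
import random

def make_sop_pairs(sentences, rng):
    """For each pair of consecutive sentences, keep the original order
    (label 0) or swap it (label 1) with equal probability."""
    pairs = []
    for a, b in zip(sentences, sentences[1:]):
        if rng.random() < 0.5:
            pairs.append((a, b, 0))   # in order
        else:
            pairs.append((b, a, 1))   # swapped
    return pairs

sents = [
    "Simple is better than complex.",
    "Complex is better than complicated.",
    "Flat is better than nested.",
]
for first, second, label in make_sop_pairs(sents, random.Random(0)):
    print(label, "|", first, "->", second)
```

In real training, each pair would then be tokenized as a sentence pair and fed to the model together with the `sentence_order_label`.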
Steps to reproduce the behavior:
1. Download the WikiText-2 dataset here: https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/
2. Run the script
## Expected behavior
- The training loss should converge to 0 on tiny datasets.
- Cross entropy should always be positive and eventually converge to zero.
## Environment info
- `transformers` version: 3.0.2
- Platform: AWS instance
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 with GPU
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes, 8 GPU in total
- Using distributed or parallel set-up in script? Not sure.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5984/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5983/comments | https://api.github.com/repos/huggingface/transformers/issues/5983/events | https://github.com/huggingface/transformers/issues/5983 | 664,036,345 | MDU6SXNzdWU2NjQwMzYzNDU= | 5,983 | pipeline does not do truncation on long texts input, error message found | {
"login": "yuyongze",
"id": 41526760,
"node_id": "MDQ6VXNlcjQxNTI2NzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/41526760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuyongze",
"html_url": "https://github.com/yuyongze",
"followers_url": "https://api.github.com/users/yuyongze/followers",
"following_url": "https://api.github.com/users/yuyongze/following{/other_user}",
"gists_url": "https://api.github.com/users/yuyongze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuyongze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuyongze/subscriptions",
"organizations_url": "https://api.github.com/users/yuyongze/orgs",
"repos_url": "https://api.github.com/users/yuyongze/repos",
"events_url": "https://api.github.com/users/yuyongze/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuyongze/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #4224"
] | 1,595 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Not specified
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I have tried using the pipeline for my own purposes, but I realized it causes errors on some tasks when the input is a long sentence; it should truncate automatically, but it does not. And the pipeline call does not take extra arguments, so we cannot pass something like `truncation=True`. Here is an example on the sentiment-analysis task:
```python
from transformers import pipeline
nlp = pipeline('sentiment-analysis')
text = "This is an example"*300
nlp(text)
```
2. The error message is below:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-31-0a8ce849cc29> in <module>()
2 nlp = pipeline('sentiment-analysis')
3 text = "This is an example"*300
----> 4 nlp(text)
11 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1722 # remove once script supports set_grad_enabled
1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1725
1726
IndexError: index out of range in self
```
3. The same error occurs on the NER task, and maybe on some other tasks as well.
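Until the pipeline truncates internally, a workaround is to clamp the encoded sequence to the model's maximum length before the forward pass. The sketch below uses toy stand-ins (`fake_encode` and `fake_model` are made up for illustration; the real fix would pass the tokenizer's `max_length`/`truncation` options):

```python
def classify_truncated(text, encode, model, max_length=512):
    ids = encode(text)
    if len(ids) > max_length:
        ids = ids[:max_length]  # truncate instead of overflowing positions
    return model(ids)

# Toy stand-ins: a whitespace "tokenizer" and a "model" that fails on
# sequences longer than its position-embedding table, like the IndexError.
def fake_encode(text):
    return list(range(len(text.split())))

def fake_model(ids):
    if len(ids) > 512:
        raise IndexError("index out of range in self")
    return {"label": "POSITIVE", "n_tokens": len(ids)}

result = classify_truncated("This is an example " * 300, fake_encode, fake_model)
print(result["n_tokens"])  # 512 instead of an IndexError at 1200 tokens
```

The same pattern applies to NER and other token-level tasks, where long inputs are usually better split into overlapping windows than simply cut off.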
## Expected behavior
## Environment info
- `transformers` version: 3.0.2
- Platform: Google Colab
- Python version: Python 3.6.9
- PyTorch version (GPU?):1.5.1+cu101, no GPU
- Tensorflow version (GPU?): no
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5983/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5982/comments | https://api.github.com/repos/huggingface/transformers/issues/5982/events | https://github.com/huggingface/transformers/pull/5982 | 664,015,975 | MDExOlB1bGxSZXF1ZXN0NDU1MzI2ODY2 | 5,982 | Cleanup Trainer and expose customization points | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=h1) Report\n> Merging [#5982](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c0da7803a75f0fc6e6d484e23ca283faa32d785&el=desc) will **decrease** coverage by `0.15%`.\n> The diff coverage is `60.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5982 +/- ##\n==========================================\n- Coverage 78.66% 78.51% -0.16% \n==========================================\n Files 146 146 \n Lines 26227 26240 +13 \n==========================================\n- Hits 20632 20602 -30 \n- Misses 5595 5638 +43 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `40.87% <60.00%> (+2.41%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: 
|\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=footer). Last update [2c0da78...e583bf0](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | Clean up some parts of the code of `Trainer` and expose some functions as customization points (doc will follow if you agree on this). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5982/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5982",
"html_url": "https://github.com/huggingface/transformers/pull/5982",
"diff_url": "https://github.com/huggingface/transformers/pull/5982.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5982.patch",
"merged_at": 1595520342000
} |
https://api.github.com/repos/huggingface/transformers/issues/5981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5981/comments | https://api.github.com/repos/huggingface/transformers/issues/5981/events | https://github.com/huggingface/transformers/pull/5981 | 663,958,148 | MDExOlB1bGxSZXF1ZXN0NDU1Mjc5MTkz | 5,981 | [WIP] Proposal for TF model outputs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sadly changing the `__iter__` function does not work as it's used inside TensorFlow (specifically in `tensorflow/python/util/nest.py`).",
"Ok closing this prototype now that we have a way forward (see [the forum](https://discuss.huggingface.co/t/new-model-output-types/195/8))."
] | 1,595 | 1,596 | 1,596 | COLLABORATOR | null | This picks up on the work @thomwolf did on #5740 to have self-documented outputs in TensorFlow that are compatible with the AutoGraph system.
`TFModelOutput` subclasses `OrderedDict` while still being a dataclass, with some tweaks in the `post_init`:
- only the non-`None` attributes are set as values for the dictionary, because TensorFlow refuses `None` as outputs.
- a `TFModelOutput` can be instantiated with the regular keyword arguments but also with an iterator passed as the first argument (as a dict would) like @thomwolf suggested in #5740, with a fix to make sure the first input is not a tensor (because tensors are iterables).
This breaks two things for the TensorFlow side of the library:
1. when unpacking `outputs`, a slice needs to be used, otherwise the keys of the dictionary are returned, not the values:
```
loss, logits = outputs
```
will fail, it needs to be changed to
```
loss, logits = outputs[:2]
```
2. when loading and saving a model using `SavedModel`, the subclass is lost and the output becomes a regular dictionary (see the change in `test_keras_save_load`).
Apart from those, these model outputs are fully backward compatible (you can index with an int or a slice and get the same behavior as before).
If this is accepted, I would strongly recommend using the same base class for PyTorch model outputs, which would imply the breaking change number 1 but would have the added benefit of:
1. fixing the problem with `DataParallel`
2. having consistent outputs between TF and PyTorch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5981/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5981",
"html_url": "https://github.com/huggingface/transformers/pull/5981",
"diff_url": "https://github.com/huggingface/transformers/pull/5981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5981.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5980/comments | https://api.github.com/repos/huggingface/transformers/issues/5980/events | https://github.com/huggingface/transformers/pull/5980 | 663,944,825 | MDExOlB1bGxSZXF1ZXN0NDU1MjY4NDIy | 5,980 | add fine-tuned mobilebert squad v1 and squad v2 model cards | {
"login": "csarron",
"id": 8440740,
"node_id": "MDQ6VXNlcjg0NDA3NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8440740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/csarron",
"html_url": "https://github.com/csarron",
"followers_url": "https://api.github.com/users/csarron/followers",
"following_url": "https://api.github.com/users/csarron/following{/other_user}",
"gists_url": "https://api.github.com/users/csarron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/csarron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/csarron/subscriptions",
"organizations_url": "https://api.github.com/users/csarron/orgs",
"repos_url": "https://api.github.com/users/csarron/repos",
"events_url": "https://api.github.com/users/csarron/events{/privacy}",
"received_events_url": "https://api.github.com/users/csarron/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5980/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5980",
"html_url": "https://github.com/huggingface/transformers/pull/5980",
"diff_url": "https://github.com/huggingface/transformers/pull/5980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5980.patch",
"merged_at": 1595519850000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5979/comments | https://api.github.com/repos/huggingface/transformers/issues/5979/events | https://github.com/huggingface/transformers/issues/5979 | 663,924,651 | MDU6SXNzdWU2NjM5MjQ2NTE= | 5,979 | dynamic masking for RoBERTa model | {
"login": "mingyang3github",
"id": 8649789,
"node_id": "MDQ6VXNlcjg2NDk3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8649789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mingyang3github",
"html_url": "https://github.com/mingyang3github",
"followers_url": "https://api.github.com/users/mingyang3github/followers",
"following_url": "https://api.github.com/users/mingyang3github/following{/other_user}",
"gists_url": "https://api.github.com/users/mingyang3github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mingyang3github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mingyang3github/subscriptions",
"organizations_url": "https://api.github.com/users/mingyang3github/orgs",
"repos_url": "https://api.github.com/users/mingyang3github/repos",
"events_url": "https://api.github.com/users/mingyang3github/events{/privacy}",
"received_events_url": "https://api.github.com/users/mingyang3github/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @mingyang3github \r\nmasking is implemented in `DataCollatorForLanguageModeling`'s `mask_tokens` method, [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L103)\r\n\r\nFor pre-training you can use the language modeling script, it takes care of the dataset and masking.",
"> Hi @mingyang3github\r\n> masking is implemented in ` DataCollatorForLanguageModeling`'s `mask_tokens` method, [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L103)\r\n> \r\n> For pre-training you can use the language modeling script, it takes of the dataset and masking.\r\n\r\nHi @patil-suraj ,\r\nThank you for your reply. If I understand this correctly, DataCollatorForLanguageModeling's method is static masking, right?\r\n\r\nBased on RoBERTa model's paper, the dynamic masking refers to \"training data was duplicated 10 times so that each sequence is masked in 10 different ways over the 40 epochs of training.\"\r\nHas this method implemented been implemented in DataCollatorForLanguageModeling ?",
"It is dynamic masking: since the masking is handled in the collate function, the same examples get masked differently. Every batch first goes through the collate function before it is returned from the loader, so each time an example goes through collate it gets masked differently.",
"Hi @patil-suraj , does that mean we **always use dynamic masking** when using `DataCollatorForLanguageModeling`, no matter whether we pre-train BERT, RoBERTa, or another model?",
"@buaapengbo\r\nYes, the `DataCollatorForLanguageModeling` always does dynamic masking no matter the model",
"I thought I responded, but it turns out I didn't. Sorry.\r\n@patil-suraj Thank you very much! That makes sense! \r\n",
"Thanks for the answer. For the original roberta, they seem to use the same mask per sentence 4 times during training and 10 different masks for the same sentence during the whole training. But this function always returns different masks for the same sentence during training. This means the model always receives different masks for the same sentence during the whole training process right?"
] | 1,595 | 1,648 | 1,601 | NONE | null | # ❓ Questions & Help
I saw that "dynamic masking" was mentioned on the README file for language modeling:
"In accordance to the RoBERTa paper, we use dynamic masking rather than static masking. The model may, therefore, converge slightly slower (over-fitting takes more epochs)."
I couldn't find which class this method is implemented in and how to enable this feature during pre-training using the Trainer class. Could someone please help me?
Thank you very much in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5979/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5978/comments | https://api.github.com/repos/huggingface/transformers/issues/5978/events | https://github.com/huggingface/transformers/issues/5978 | 663,908,923 | MDU6SXNzdWU2NjM5MDg5MjM= | 5,978 | Training data format | {
"login": "vyaslkv",
"id": 33617789,
"node_id": "MDQ6VXNlcjMzNjE3Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/33617789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyaslkv",
"html_url": "https://github.com/vyaslkv",
"followers_url": "https://api.github.com/users/vyaslkv/followers",
"following_url": "https://api.github.com/users/vyaslkv/following{/other_user}",
"gists_url": "https://api.github.com/users/vyaslkv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vyaslkv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyaslkv/subscriptions",
"organizations_url": "https://api.github.com/users/vyaslkv/orgs",
"repos_url": "https://api.github.com/users/vyaslkv/repos",
"events_url": "https://api.github.com/users/vyaslkv/events{/privacy}",
"received_events_url": "https://api.github.com/users/vyaslkv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"```\r\nfrom transformers import AutoModelWithLMHead, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"language-modeling/output/\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"language-modeling/output/\")\r\n\r\n\r\ninput_text=\"organic che\"\r\nfeatures = tokenizer([input_text], return_tensors='pt')\r\n\r\noutput = model.generate(input_ids=features['input_ids'], \r\n attention_mask=features['attention_mask'])\r\n\r\ntokenizer.decode(output[0])\r\n\r\n```",
"I want to make a query auto complete these are the user queries separated by new line\r\n\r\n",
"@patil-suraj ",
"should I add some special token at the end and start of every search query\r\n",
"as far as I can see, your dataset is format is correct, also you don't need to add any special tokens, tokenizer adds that by default.",
"When I added --line_by_line, this error came up:\r\nYou are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one.",
"I want to make a text auto complete am I using correct model ? do I have sufficient training sentences? should I add --line_by_line while training? Please help!!\r\n@patil-suraj ",
"Hi @vyaslkv you can use GPT-2 for auto complete, as for training examples you will need to experiment.\r\n\r\npinging @sgugger for the error.",
"LineByLineDataset is not really suitable for GPT2: you should concatenate your texts with the separation token and feed chunks of the the model size (can't remember if it's 512 or 1024 at the top of my mind but it should be in the config of the model). Like the error message says, GPT2 does not know padding.",
"@sgugger can you explain me a bit which token to use and how the code will look like in that case so sorry if I am asking too much or can you give me some reference which I could use\r\n\r\nThanks for responding ",
"The separation token will automatically be added by the tokenizer. The rest is just standard python: concatenate all your lists of tokens in a big numpy array, then reshape it to `something x model_len`, something being the number of \"sequences\" (they'll actually span over several lines of your dataset) you can build with your dataset. You can then iterate through the rows of that array as a dataset.",
"In this what changes I need to do\r\n```\r\n[python run_language_modeling.py \\\r\n --output_dir=output \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE](url)\r\n```",
"The script will do this automatically for you if you don't add the line by line flag. (Except the sentences are separated by new lines and not the special token.) You can try to replace the new lines by \"<|endoftext|>\"",
"cool Thanks @sgugger Just to clarify If I add \"<|endoftext|>\" in place of new line I don't need to make any changes right?",
"Normally, no.",
"Thanks @sgugger Thanks a ton really for help so quick ",
"@sgugger @patil-suraj I trained with the format you shared but it is generating some irrelevant text not from the training data I gave. What I am missing in this case \r\n```\r\nfrom transformers import AutoModelWithLMHead, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"output1/\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"output1/\")\r\ninput_ids = tokenizer.encode('Vegetative reproduction of Agave', return_tensors='pt')\r\n# set return_num_sequences > 1\r\nbeam_outputs = model.generate(\r\n input_ids, \r\n max_length=50, \r\n num_beams=10, \r\n no_repeat_ngram_size=2, \r\n num_return_sequences=10, \r\n early_stopping=True\r\n)\r\n\r\n# now we have 3 output sequences\r\nprint(\"Output:\\n\" + 100 * '-')\r\nfor i, beam_output in enumerate(beam_outputs):\r\n print(\"{}: {}\".format(i, tokenizer.decode(beam_output, skip_special_tokens=False)))\r\n```",
"I want to generate text autocomplete from the training data text\r\n",
"@sgugger can you please help",
"Hi @vyaslkv , I think the best place to ask this question is [HF forums](https://discuss.huggingface.co/) someone who has already worked on similar task can answer it better. Although @sgugger might have some answers :)",
"@patil-suraj Thanks I will put my question there as well\r\n",
"https://discuss.huggingface.co/t/search-query-autocomplete-from-the-queries-i-have-in-my-data/546",
"@sgugger @patil-suraj no one has responded on the forum 😔",
"@patil-suraj I didn't get any response can you please help\r\n",
"Hi @vyaslkv , I'll see if anyone I know has worked on similar problem and get back to you.",
"@patil-suraj Thanks",
"@patil-suraj ?",
"Hello, @patil-suraj we found anything related to that?\r\n\r\nThanks!!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,604 | 1,604 | NONE | null | I have text on which I want to fine tune the gpt2 model for text autocompletion on my text the text sentences are separated by new line is there any format I should follow. When I trained on the data as it is it is not giving me proper results with the default training parameters. I have nearly after split 25k sentences for training. Please suggest. The training data looks like this
<img width="1220" alt="Screenshot 2020-07-22 at 10 24 01 PM" src="https://user-images.githubusercontent.com/33617789/88205241-18a09c00-cc6a-11ea-924e-a8df103c8b94.png">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5978/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5977/comments | https://api.github.com/repos/huggingface/transformers/issues/5977/events | https://github.com/huggingface/transformers/pull/5977 | 663,892,270 | MDExOlB1bGxSZXF1ZXN0NDU1MjI1NjE2 | 5,977 | [demo] Broken fp16 test | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,651 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5977/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5977",
"html_url": "https://github.com/huggingface/transformers/pull/5977",
"diff_url": "https://github.com/huggingface/transformers/pull/5977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5977.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5976/comments | https://api.github.com/repos/huggingface/transformers/issues/5976/events | https://github.com/huggingface/transformers/pull/5976 | 663,874,296 | MDExOlB1bGxSZXF1ZXN0NDU1MjEwOTcy | 5,976 | [test] partial coverage for train_mbart_enro_cc25.sh | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=h1) Report\n> Merging [#5976](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c0da7803a75f0fc6e6d484e23ca283faa32d785&el=desc) will **decrease** coverage by `1.38%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5976 +/- ##\n==========================================\n- Coverage 78.66% 77.28% -1.39% \n==========================================\n Files 146 146 \n Lines 26227 26227 \n==========================================\n- Hits 20632 20270 -362 \n- Misses 5595 5957 +362 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=footer). Last update [2c0da78...f83fdd3](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | CI covers all the examples logic that doesn't run on CUDA.
Adds coverage for:
- user-facing bash script train_mbart_cc25_enro.sh
- the idea that seq2seq/finetune.py should lead to models getting better (val BLEU increasing).
This only takes 10s to run on brutasse, so I gave it a minute for GitHub Actions CI, the only automated tester that will run this.
Will add coverage for `--do_predict` once [this](https://github.com/PyTorchLightning/pytorch-lightning/issues/2673) is fixed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5976/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5976",
"html_url": "https://github.com/huggingface/transformers/pull/5976",
"diff_url": "https://github.com/huggingface/transformers/pull/5976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5976.patch",
"merged_at": 1595442890000
} |
https://api.github.com/repos/huggingface/transformers/issues/5975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5975/comments | https://api.github.com/repos/huggingface/transformers/issues/5975/events | https://github.com/huggingface/transformers/pull/5975 | 663,817,799 | MDExOlB1bGxSZXF1ZXN0NDU1MTY0NDk4 | 5,975 | Transformer-XL: Fixed returned outputs when using `return_tuple=True` | {
"login": "RafaelWO",
"id": 38643099,
"node_id": "MDQ6VXNlcjM4NjQzMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafaelWO",
"html_url": "https://github.com/RafaelWO",
"followers_url": "https://api.github.com/users/RafaelWO/followers",
"following_url": "https://api.github.com/users/RafaelWO/following{/other_user}",
"gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions",
"organizations_url": "https://api.github.com/users/RafaelWO/orgs",
"repos_url": "https://api.github.com/users/RafaelWO/repos",
"events_url": "https://api.github.com/users/RafaelWO/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafaelWO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=h1) Report\n> Merging [#5975](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae67b2439fb15954bfd8f0fdf521cf1a650bafb9&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `0.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5975 +/- ##\n=======================================\n Coverage 78.51% 78.51% \n=======================================\n Files 146 146 \n Lines 26214 26214 \n=======================================\n+ Hits 20581 20582 +1 \n+ Misses 5633 5632 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.16% <0.00%> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=footer). Last update [ae67b24...ed8722d](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Was fixed in different PR: #5999 "
] | 1,595 | 1,596 | 1,595 | CONTRIBUTOR | null | Fixes #5974
`mems` are returned again when using `return_tuple=True`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5975/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5975",
"html_url": "https://github.com/huggingface/transformers/pull/5975",
"diff_url": "https://github.com/huggingface/transformers/pull/5975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5975.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5974/comments | https://api.github.com/repos/huggingface/transformers/issues/5974/events | https://github.com/huggingface/transformers/issues/5974 | 663,805,673 | MDU6SXNzdWU2NjM4MDU2NzM= | 5,974 | Transformer-XL: no mems are return when using 'return_tuple' | {
"login": "RafaelWO",
"id": 38643099,
"node_id": "MDQ6VXNlcjM4NjQzMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38643099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RafaelWO",
"html_url": "https://github.com/RafaelWO",
"followers_url": "https://api.github.com/users/RafaelWO/followers",
"following_url": "https://api.github.com/users/RafaelWO/following{/other_user}",
"gists_url": "https://api.github.com/users/RafaelWO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RafaelWO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RafaelWO/subscriptions",
"organizations_url": "https://api.github.com/users/RafaelWO/orgs",
"repos_url": "https://api.github.com/users/RafaelWO/repos",
"events_url": "https://api.github.com/users/RafaelWO/events{/privacy}",
"received_events_url": "https://api.github.com/users/RafaelWO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for flagging the issue, the fix is on its way to review.",
"Oh, I opened a PR for this, but it seems you fixed it yourself. I will close my PR then,"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
## Information
The forward pass of the `TransfoXLLMHeadModel` returns no `mems` when using `return_tuple=True`.
Model I am using: Transformer-XL
Language I am using the model on: English
The problem arises when using:
* [x] my own modified scripts: (give details below)
## To reproduce
```Python
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
model.train()
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
encoded = tokenizer("Max is walking the dog in the streets", return_tensors='pt')
outputs = model(input_ids=encoded['input_ids'], mems=None, labels=encoded['input_ids'], return_tuple=True)
loss, _, mems = outputs
print(loss.size())
print(len(mems)) # should be 18 due to the 18 layers
```
Output:
```
Traceback (most recent call last):
File "user/script.py", line 10, in <module>
loss, _, mems = outputs
ValueError: not enough values to unpack (expected 3, got 2)
```
## Expected behavior
Output:
```
torch.Size([1, 7])
18
```
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5974/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5973/comments | https://api.github.com/repos/huggingface/transformers/issues/5973/events | https://github.com/huggingface/transformers/issues/5973 | 663,798,325 | MDU6SXNzdWU2NjM3OTgzMjU= | 5,973 | [cleanup] much cruft in unittests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@stas00, this might be up your alley! ",
"will do, thank you!",
"a large part is done: https://github.com/huggingface/transformers/pull/6196\r\n",
"problem: not all `results` in tests are objects, some are plain dict and can't be called with `.key_name`. So there is a large chunk of tests that haven't been re-written because of that.\r\n\r\nSo we could add a wrapper to the test utils that will make the code consistent with those tests where `results` is an object.\r\n\r\n```\r\nclass DictAttr:\r\n def __init__(self, args):\r\n for k in args:\r\n setattr(self, k, args[k])\r\n\r\n def __getitem__(self, item):\r\n return getattr(self, item)\r\n```\r\nand then in the tests:\r\n```\r\n# import DictAttr and then\r\ndata = {\r\n \"loss_1\": 1,\r\n \"mems_1\": 2,\r\n} \r\nresult = DictAttr(data)\r\n```\r\nnow it works either way:\r\n```\r\nprint(result[\"loss_1\"]) # 1\r\nprint(result.loss_1) # 1\r\n```\r\n\r\nNot sure about the best class name for this, suggestions?\r\n\r\nSo practically, with this change the test code will look\r\n\r\n```\r\n--- a/tests/test_modeling_tf_albert.py\r\n+++ b/tests/test_modeling_tf_albert.py\r\n@@ -136,14 +136,12 @@ class TFAlbertModelTester:\r\n\r\n sequence_output, pooled_output = model(input_ids)\r\n\r\n- result = {\r\n+ result = DictAttr({\r\n \"sequence_output\": sequence_output.numpy(),\r\n \"pooled_output\": pooled_output.numpy(),\r\n- }\r\n- self.parent.assertListEqual(\r\n- list(result[\"sequence_output\"].shape), [self.batch_size, self.seq_length, self.hidden_size]\r\n- )\r\n- self.parent.assertListEqual(list(result[\"pooled_output\"].shape), [self.batch_size, self.hidden_size])\r\n+ })\r\n+ self.parent.assertEqual(result.sequence_output.shape, (self.batch_size, self.seq_length, self.hidden_size))\r\n+ self.parent.assertEqual(result.pooled_output.shape, (self.batch_size, self.hidden_size))\r\n```\r\nplus an extra import of whatever the final class name will be.",
"1) can we just never create a `result` dict? It just creates unneeded indirection.\r\n2) If we need to create a results dict, cant we just do key lookup with `[key]`\r\n3) checkout `collections.UserDict`\r\n",
"> * can we just never create a `result` dict? It just creates unneeded indirection.\r\n\r\nFrom looking at the existing tests, it sort of mimics the returns objects, but doesn't have the accessors for the keys.\r\n\r\nSo I'm not quite sure what you propose. A short code sample is usually most demonstrative.\r\n \r\n> * If we need to create a results dict, cant we just do key lookup with `[key]`\r\n\r\nI lost you, unless I am misunderstanding what you're suggesting, isn't the big part of this \"issue\" - replacing `[key]` with `.key`. otherwise nothing else needs to be done and this ticket can be closed - except now it's a mish-mash of results.key (most pt tests) and results[\"key\"] (most tf tests)\r\n\r\n> * checkout `collections.UserDict`\r\n\r\nI checked - it doesn't provide `[key]` and `.key` functionality.",
"ignore #1, I was confused.\r\nwhen I made this issue, sylvain hadn't merged #6155 , so I guess what remains of the issue is\r\nSorry for the miscommunication!\r\n",
"Your #6196 will completely close the issue.",
"So I just need to complete: \"delete all mentions of check_loss_output\" then - there is one remaining test there.\r\nedit: now done",
"I plan to do part 2 PR to make the rest of the tests consistent with this change, but I have to wait for this to be merged as it impacts too many files to proceed easily."
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | Anti patterns:
- making a result dict and then using each of its keys. Why use the dict?
- delete all mentions of `check_loss_output`
- use tuple equality: `self.assertEqual(tensor.shape, (bs, seq_len))` instead of
```python
self.assertListEqual(list(tensor.size()), [bs, seq_len])
```
This does not need to be done for all test files at once.
Fix `templates/testing_xxx` to reflect the new best practice.
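The preferred assertion style above can be sketched as a minimal, self-contained test. This is a hypothetical illustration (the class name `ShapeAssertionExample` and the plain tuple standing in for a `torch.Size` are assumptions, so the example does not depend on `torch`):

```python
import unittest

class ShapeAssertionExample(unittest.TestCase):
    def test_shape_comparison(self):
        bs, seq_len = 2, 7
        shape = (bs, seq_len)  # torch.Size compares equal to a plain tuple
        # Old pattern from the tests being cleaned up: convert to list first
        self.assertListEqual(list(shape), [bs, seq_len])
        # Preferred pattern: compare the shape tuple directly
        self.assertEqual(shape, (bs, seq_len))

# Run the single test case programmatically instead of via unittest.main()
suite = unittest.TestLoader().loadTestsFromTestCase(ShapeAssertionExample)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("passed" if result.wasSuccessful() else "failed")  # → passed
```

Both assertions pass on the same data; the tuple form is shorter and produces an equally readable failure message, which is why it is the suggested replacement.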
"url": "https://api.github.com/repos/huggingface/transformers/issues/5973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5973/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5972/comments | https://api.github.com/repos/huggingface/transformers/issues/5972/events | https://github.com/huggingface/transformers/pull/5972 | 663,780,274 | MDExOlB1bGxSZXF1ZXN0NDU1MTMzOTE3 | 5,972 | Update to match renamed attributes in fairseq master | {
"login": "LilianBordeau",
"id": 24193358,
"node_id": "MDQ6VXNlcjI0MTkzMzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/24193358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LilianBordeau",
"html_url": "https://github.com/LilianBordeau",
"followers_url": "https://api.github.com/users/LilianBordeau/followers",
"following_url": "https://api.github.com/users/LilianBordeau/following{/other_user}",
"gists_url": "https://api.github.com/users/LilianBordeau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LilianBordeau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LilianBordeau/subscriptions",
"organizations_url": "https://api.github.com/users/LilianBordeau/orgs",
"repos_url": "https://api.github.com/users/LilianBordeau/repos",
"events_url": "https://api.github.com/users/LilianBordeau/events{/privacy}",
"received_events_url": "https://api.github.com/users/LilianBordeau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you mind doing the code quality so that we can merge? You can do `pip install -e \".[quality]` followed by `make style && make quality`.\r\n\r\nIf you don't have access to your fork, I can push on it directly to update the code quality and merge after.",
"Hi @LysandreJik,\r\nI tried to do the code quality but couldn't pass `make quality` because `black` and `flake8` return with a non-zero exit code for some reason (i'm on windows). Here are their outputs:\r\n```shell\r\nblack --check --line-length 119 --target-version py35 examples templates tests src utils\r\nwould reformat C:\\Users\\u165983\\Documents\\transformers\\src\\transformers\\__init__.py\r\nwould reformat C:\\Users\\u165983\\Documents\\transformers\\templates\\adding_a_new_example_script\\run_xxx.py\r\nwould reformat C:\\Users\\u165983\\Documents\\transformers\\templates\\adding_a_new_example_script\\utils_xxx.py\r\nOh no! 💥 💔 💥\r\n3 files would be reformatted, 340 files would be left unchanged.\r\n\r\nflake8 examples templates tests src utils\r\ntests\\test_tokenization_common.py:31:5: F401 'transformers.PretrainedConfig' imported but unused\r\ntests\\test_tokenization_common.py:31:5: F401 'transformers.PreTrainedModel' imported but unused\r\ntests\\test_tokenization_common.py:31:5: F401 'transformers.TFPreTrainedModel' imported but unused\r\nsrc\\transformers\\pipelines.py:72:5: F401 '.modeling_utils.PreTrainedModel' imported but unused\r\nsrc\\transformers\\pipelines.py:73:5: F401 '.modeling_tf_utils.TFPreTrainedModel' imported but unused\r\n```\r\nI still commited and pushed on my fork. Tell me if there is anything more I can do.",
"`make style` checks `black` and `isort` and updates the files, while `make quality` checks `black`, `isort` and `flake8`, but doesn't update the files and instead tells you what fails.\r\n\r\nI ran `make style` on your repository and pushed directly on it, thanks for iterating!"
] | 1,595 | 1,596 | 1,596 | NONE | null | Fix #5917 : RobertaModel no longer have model.encoder and args.num_classes attributes as of 5/28/20. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5972/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5972",
"html_url": "https://github.com/huggingface/transformers/pull/5972",
"diff_url": "https://github.com/huggingface/transformers/pull/5972.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5972.patch",
"merged_at": 1596626636000
} |
https://api.github.com/repos/huggingface/transformers/issues/5971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5971/comments | https://api.github.com/repos/huggingface/transformers/issues/5971/events | https://github.com/huggingface/transformers/issues/5971 | 663,752,778 | MDU6SXNzdWU2NjM3NTI3Nzg= | 5,971 | ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' | {
"login": "ccoay",
"id": 20883154,
"node_id": "MDQ6VXNlcjIwODgzMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/20883154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ccoay",
"html_url": "https://github.com/ccoay",
"followers_url": "https://api.github.com/users/ccoay/followers",
"following_url": "https://api.github.com/users/ccoay/following{/other_user}",
"gists_url": "https://api.github.com/users/ccoay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ccoay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ccoay/subscriptions",
"organizations_url": "https://api.github.com/users/ccoay/orgs",
"repos_url": "https://api.github.com/users/ccoay/repos",
"events_url": "https://api.github.com/users/ccoay/events{/privacy}",
"received_events_url": "https://api.github.com/users/ccoay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | NONE | null | such error really annoys me. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5971/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5971/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5970/comments | https://api.github.com/repos/huggingface/transformers/issues/5970/events | https://github.com/huggingface/transformers/pull/5970 | 663,745,142 | MDExOlB1bGxSZXF1ZXN0NDU1MTA1MDUy | 5,970 | [WIP] Ner pipeline grouped_entities fixes | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm wondering should the B & I part maybe separated from the entity type part? In the sense that you average the entities (disregarding the B/I part) and vice-versa. I now have the feeling that only the first subtoken decides whether the complete word is a B or an I. ",
"I want to complete this but ran into another issue while working it:\r\n\r\nAll [UNK] tokens get mapped to [UNK] in the output, instead of the actual input token (because the code is getting from ids->tokens), Also [UNK]s gets lost when using skip_special_tokens (https://github.com/huggingface/transformers/issues/6863)\r\nWhile this is a simple token alignment issue and can be solved by using `offset_mappings`. `offset_mappings` is only available with fast tokenizers, I'm wondering what would be a more general approach to solving this?",
"Dear @cceyda,\r\n\r\nIn the last couple of days I started to work with Huggingface's transformers and especially NER-classification. I ran into issues that has been previously addressed in other issues you just mentioned at the beginning. Especially that subtokens that were classified with 'O' were not properly merged with the full token. \r\n\r\nFor example (Dutch):\r\nsentence = \"Als we Volkswagens OR-voorzitter **Bernd Osterloh** moeten geloven, dan moet dat binnen drie jaar het geval zijn.\"\r\n\r\nGives me as group entities: \r\n[{'entity_group': 'B-per', 'score': 0.9999980926513672, 'word': 'Bern'}, \r\n{'entity_group': 'I-per', 'score': 0.9999990463256836, 'word': 'Ost'}]\r\n\r\nI expect:\r\n[{'entity_group': 'B-per', 'score': 0.9999980926513672, 'word': 'Bernd'}, \r\n{'entity_group': 'I-per', 'score': 0.9999990463256836, 'word': 'Osterloh'}]\r\n\r\nHowever, the considered subtokens are classified as 'O':\r\n\r\n{'word': '[CLS]', 'score': 0.9999999403953552, 'entity': 'O', 'index': 0}\r\n{'word': 'Als', 'score': 0.9999999403953552, 'entity': 'O', 'index': 1}\r\n{'word': 'we', 'score': 0.9999999403953552, 'entity': 'O', 'index': 2}\r\n{'word': 'Volkswagen', 'score': 0.9999955296516418, 'entity': 'B-misc', 'index': 3}\r\n{'word': '##s', 'score': 0.9999999403953552, 'entity': 'O', 'index': 4}\r\n{'word': 'O', 'score': 0.9981945157051086, 'entity': 'I-misc', 'index': 5}\r\n{'word': '##R', 'score': 0.9999998807907104, 'entity': 'O', 'index': 6}\r\n{'word': '-', 'score': 0.9999999403953552, 'entity': 'O', 'index': 7}\r\n{'word': 'voorzitter', 'score': 0.9999998807907104, 'entity': 'O', 'index': 8}\r\n{'word': 'Bern', 'score': 0.9999980926513672, 'entity': 'B-per', 'index': 9}\r\n**{'word': '##d', 'score': 0.9999998807907104, 'entity': 'O', 'index': 10}**\r\n{'word': 'Ost', 'score': 0.9999990463256836, 'entity': 'I-per', 'index': 11}\r\n**{'word': '##er', 'score': 0.9999998807907104, 'entity': 'O', 'index': 12}\r\n{'word': '##lo', 'score': 0.9999997615814209, 
'entity': 'O', 'index': 13}\r\n{'word': '##h', 'score': 0.9999998807907104, 'entity': 'O', 'index': 14}**\r\n{'word': 'moeten', 'score': 0.9999999403953552, 'entity': 'O', 'index': 15}\r\n{'word': 'geloven', 'score': 0.9999998807907104, 'entity': 'O', 'index': 16}\r\n{'word': ',', 'score': 0.9999999403953552, 'entity': 'O', 'index': 17}\r\n{'word': 'dan', 'score': 0.9999999403953552, 'entity': 'O', 'index': 18}\r\n{'word': 'moet', 'score': 0.9999999403953552, 'entity': 'O', 'index': 19}\r\n{'word': 'dat', 'score': 0.9999999403953552, 'entity': 'O', 'index': 20}\r\n{'word': 'binnen', 'score': 0.9999999403953552, 'entity': 'O', 'index': 21}\r\n{'word': 'drie', 'score': 0.9999999403953552, 'entity': 'O', 'index': 22}\r\n{'word': 'jaar', 'score': 0.9999999403953552, 'entity': 'O', 'index': 23}\r\n{'word': 'het', 'score': 0.9999999403953552, 'entity': 'O', 'index': 24}\r\n{'word': 'geval', 'score': 0.9999999403953552, 'entity': 'O', 'index': 25}\r\n{'word': 'zijn', 'score': 0.9999999403953552, 'entity': 'O', 'index': 26}\r\n{'word': '.', 'score': 0.9999999403953552, 'entity': 'O', 'index': 27}\r\n{'word': '[SEP]', 'score': 0.9999999403953552, 'entity': 'O', 'index': 28}\r\n\r\nI believe your pull request addresses these issues properly. \r\nHowever, I saw the merge did not complete since it failed on some tasks.\r\n\r\nI was wondering if there is still the intention to solve these issues.\r\n\r\nDisclaimer: I am a total newbie to git (just set up an account), so please be mild, haha.\r\nAny help is much appreciated!\r\n\r\nThank you in advance,\r\n\r\nMonique",
"@cceyda I actually want this PR to move forward. Are you okay collaborating on your fork (can add me as collaborator)? I can help out with some of the issues failing so we can get this merged :smile:\r\n\r\n",
"@enzoampil I have added you as a collaborator.\r\nAlso pushed some additional changes addressing the [UNK] token mapping problem I mentioned before. \r\nStill there are some things I'm not very satisfied with:\r\n\r\n1. subword prefix was fixed to '##' before. with the latest change I added a check to see if the tokenizer has an `is_subword_fn `defined (still dont like handling it this way). I know some tokenizers have `subword_prefix` but most don't and this was the most flexible solution for now. \r\n2. `offset_mappings` is needed to resolve [UNK] tokens, but is only available with fast tokenizers. Fast tokenizers don't have `convert_ids_to_tokens` so had to implement a hacky solution for those aswell.\r\n3. `skip_special_tokens` also dropped [UNK] tokens so I had to change things and rely on `special_tokens_mask`.\r\n\r\nIt is not optimal but it worked for my use cases. \r\nHaven't had a chance to look at the failing tests yet :/\r\n",
"I have changed the `ignore_subwords` default to True which covers cases like\r\n```\r\n[\r\n{'word': 'Cons', 'score': 0.9994944930076599, 'entity': 'B-PER', 'index': 1},\r\n{'word': '##uelo', 'score': 0.802545428276062, 'entity': 'B-PER', 'index': 2}\r\n]\r\n```\r\nAnd honestly I don't know why subwords shouldn't be ignored for most cases. (Unless there is need for some custom logic that determines a words tag; ie by averaging the wordpieces etc etc. In which case grouped_entities shouldn't be used 🤔 )\r\nIMO Mid-word inconsistencies made by the model while `ignore_subwords = False` shouldn't effect pipelines output logic.\r\n\r\n[todo]\r\n- torch tests are passing for now but probably should add more cases? (I can't see why the tf tests are failing though, don't have dev env for that)\r\n- should add the new parameters to the doc strings.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=h1) Report\n> Merging [#5970](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7087d9b1c07781cc2eee45c97d3eadf6a1ba2b44?el=desc) will **increase** coverage by `26.30%`.\n> The diff coverage is `71.87%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5970 +/- ##\n===========================================\n+ Coverage 52.05% 78.36% +26.30% \n===========================================\n Files 236 168 -68 \n Lines 43336 32338 -10998 \n===========================================\n+ Hits 22560 25341 +2781 \n+ Misses 20776 6997 -13779 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.59% <71.87%> (+61.46%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-60.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-56.29%)` | :arrow_down: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `37.03% <0.00%> (-29.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | 
`71.84% <0.00%> (-11.00%)` | :arrow_down: |\n| [src/transformers/data/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL19faW5pdF9fLnB5) | `100.00% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `23.47% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_outputs.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vdXRwdXRzLnB5) | `100.00% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19wZWdhc3VzLnB5) | `100.00% <0.00%> (ø)` | |\n| ... and [217 more](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=footer). Last update [7087d9b...47797d1](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Dear @cceyda,\r\n\r\nLast two days I worked on your branch to see how it performs on my own input texts. \r\nHowever, I came accross the following issue I would like to point out to you:\r\n\r\nWhen I use the following line of code (as you suggest under 'Usage' above):\r\n\r\npipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[], grouped_entities=True, skip_special_tokens=True, ignore_subwords=True)\r\n\r\n\r\nI get the error:\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'skip_special_tokens'.\r\n\r\nWhen looking in the file transformer.pipelines and looking specifically for the tokenclassificationpipeline, it seems that it is not yet implemented. Or am I missing something?\r\n\r\nBest, \r\n\r\nMonique",
"@Monique497 sorry for the delay\r\nA couple of things have changed since I first wrote that example:\r\n \r\n- special tokens ([CLS][PAD][SEP]) are always skipped (per comments above) so you don't need that kwarg. This is also valid for `grouped_entities=False` \r\n\r\n``` py\r\nfrom transformers import (\r\n AutoModelForTokenClassification,\r\n AutoTokenizer,\r\n pipeline,\r\n)\r\n\r\nmodel = AutoModelForTokenClassification.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) # note the fast tokenizer use\r\n# ignore_subwords = True by default\r\nnlp = pipeline(\"ner\",model=model,tokenizer=tokenizer, grouped_entities=True)\r\ninputs=\"test sentence\"\r\noutput=nlp(inputs)\r\n```\r\n\r\n- Another important thing is you have to **use a fast tokenizer** OR pass `offset_mapping` as a parameter because the [UNK] token resolution depends on this. (maybe I should rename this to offset_mappings). This is also valid for `grouped_entities=False` \r\n\r\n```py\r\n# you can pass it like this\r\nnlp(inputs,offset_mapping=mappings_you_calculate)\r\n\r\n```\r\n- If you are using a custom tokenizer that treats subwords differently (ie not starting with '##'), you can pass a function implementing your custom logic through `tokenizer.is_subword_fn` and `tokenizer.convert_tokens_to_string` \r\nI don't know if this is the best way to handle non standard tokenizations, but I use some custom non-standard tokenizers for Korean and this solution gave me enough flexibility.\r\n\r\nsomething like this:\r\n\r\n```py\r\ndef sub_fn(token):\r\n if token.starts_with(\"%%\"): return True\r\ntokenizer.is_subword_fn=sub_fn\r\n\r\ndef convert_tokens_to_string(self, tokens):\r\n out_string = \" \".join(tokens).replace(\" %%\", \"\").strip()\r\n return out_string\r\ntokenizer.convert_tokens_to_string=convert_tokens_to_string\r\n```\r\n\r\n@enzoampil what are your thoughts on this?\r\n",
"@cceyda Sorry for taking a while, lemme do another review!",
"@LysandreJik @julien-c This looks good to me. Please advise if this is good to merge or if you think there's still anything missing before merging :grin:",
"Thanks for iterating! I'll check this today.",
"Merging this as soon as it's green, thank you for iterating on the PR! Sorry this took so long to merge.",
"Thanks @LysandreJik and congrats @cceyda !! :smile:",
"fix_bart_gpu",
"FYI this broke the NER pipeline:\r\n\r\n```py\r\nfrom transformers import pipeline\r\n\r\nnlp = pipeline(\"ner\")\r\n\r\nnlp(\"My name is Alex and I live in New York\")\r\n```\r\n\r\ncrashes with the following error:\r\n```\r\n raise Exception(\"To decode [UNK] tokens use a fast tokenizer or provide offset_mapping parameter\")\r\nException: To decode [UNK] tokens use a fast tokenizer or provide offset_mapping parameter\r\n```\r\n\r\nTrying to see if this can be quickly patched, otherwise we'll revert the PR while we patch this.",
"oops! although returning unk tokens with slow tokenizers are not the best, I agree not forcing a fast tokenizer with a default of ignore_subword=True looks better for keeping the compatibility. I saw a bit late the _args_parser line was mis-merged during this pr merge and I see it is fixed/improved on the patch. I wasn't sure on how to test for the offset_mapping argument with the new test structure (which looks to be good at the patch). Sorry for the trouble 😅 @LysandreJik ",
"No worries, thanks for taking a look at the patch!"
] | 1,595 | 1,604 | 1,604 | CONTRIBUTOR | null | There are many issues with the NER pipeline when using grouped_entities=True
https://github.com/huggingface/transformers/issues/5077
https://github.com/huggingface/transformers/issues/4816
https://github.com/huggingface/transformers/issues/5730
https://github.com/huggingface/transformers/issues/5609
https://github.com/huggingface/transformers/issues/6514
https://github.com/huggingface/transformers/issues/5541
- [x] [Bug Fix] add an option `ignore_subwords` to ignore subsequent ##wordpieces in predictions, because some models (the BERT NER default) train on only the first token of a word and not on the subsequent wordpieces, so it makes sense to do the same thing at inference time.
- The simplest fix is to just group the subwords with the first wordpiece.
- [TODO] how to handle ignored scores? Just set them to 0 and calculate a zero-invariant mean?
- [TODO] handle different wordpiece prefixes (##)? Possible approaches:
   get it from the tokenizer? (but currently most tokenizers don't have a wordpiece_prefix property)
   have an _is_subword(token) helper
- [x] [Feature add] added an option to `skip_special_tokens`, because it was harder to remove them after grouping.
- [x] [Additional Changes] remove B/I prefix on returned grouped_entities
Edit: Ignored subwords' scores are also ignored by setting them to nan and using nanmean
Edit: B entities of different type are separated (as per BIO tag definition)
Edit: skip_special_tokens is now the default behavior
Edit: ignore_subwords is now the default behavior
Edit: more flexibility for custom non-standard tokenizers through tokenizer.is_subword_fn, tokenizer.convert_tokens_to_string
Edit: [fix UNK token related bugs by mapping UNK tokens to the correct original string] Use fast tokenizer or pass offset_mapping
# Usage
`pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[], grouped_entities=True, ignore_subwords=True)`
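The subword-merging idea above can be sketched in a few lines of plain Python. This is a simplified illustration of the approach (merge `##` pieces into the preceding word and skip their scores via a NaN-invariant mean), not the pipeline's actual implementation; the tokens and scores below are made up:

```python
import math

def merge_subwords(tokens, scores):
    """Merge '##' wordpieces into the preceding word; ignored subword
    scores become NaN and are skipped by a NaN-invariant mean."""
    words, word_scores = [], []
    for tok, score in zip(tokens, scores):
        if tok.startswith("##") and words:
            words[-1] += tok[2:]                   # glue piece onto previous word
            word_scores[-1].append(float("nan"))   # score of ignored subword
        else:
            words.append(tok)
            word_scores.append([score])
    means = [
        sum(s for s in ss if not math.isnan(s))
        / sum(1 for s in ss if not math.isnan(s))
        for ss in word_scores
    ]
    return words, means

words, means = merge_subwords(
    ["Istan", "##bul", "is", "big"], [0.9, 0.1, 0.8, 0.7]
)
```

With this toy input, "##bul" is folded into "Istan" and only the first piece's score is kept for the merged word.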
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5970/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5970",
"html_url": "https://github.com/huggingface/transformers/pull/5970",
"diff_url": "https://github.com/huggingface/transformers/pull/5970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5970.patch",
"merged_at": 1604442065000
} |
https://api.github.com/repos/huggingface/transformers/issues/5969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5969/comments | https://api.github.com/repos/huggingface/transformers/issues/5969/events | https://github.com/huggingface/transformers/issues/5969 | 663,744,142 | MDU6SXNzdWU2NjM3NDQxNDI= | 5,969 | run_squad example doesn't work with XLM model | {
"login": "sshearing",
"id": 19912805,
"node_id": "MDQ6VXNlcjE5OTEyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/19912805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshearing",
"html_url": "https://github.com/sshearing",
"followers_url": "https://api.github.com/users/sshearing/followers",
"following_url": "https://api.github.com/users/sshearing/following{/other_user}",
"gists_url": "https://api.github.com/users/sshearing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshearing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshearing/subscriptions",
"organizations_url": "https://api.github.com/users/sshearing/orgs",
"repos_url": "https://api.github.com/users/sshearing/repos",
"events_url": "https://api.github.com/users/sshearing/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshearing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"see also #3535",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,602 | 1,602 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLM
Language I am using the model on (English, Chinese ...): Korean
The problem arises when using:
* [ ] the official example scripts: run_squad.py
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: KorQuAD v1.0
## To reproduce
Steps to reproduce the behavior:
1. Download the KorQuAD v1.0 data (it is formatted exactly as SQuAD v1.0)
2. Run the run_squad script with KorQuAD instead of SQuAD
3. Use the XLM model type (with specific model: xlm-mlm-100-1280_384)
## Expected behavior
Since the data is formatted exactly the same as the English version, except that it's in Korean now, I expect the script to be able to fully train and evaluate on the new data. In fact, the script successfully does this when you use multi-lingual BERT instead of XLM. However, when you use XLM, you get invalid input in the forward call:
"cls_index" and "p_mask" are unexpected inputs.
I tried hacking in a fix for this, specifically just deleting cls_index and p_mask from the inputs when using XLM instead of Bert. This allowed the model to train correctly, but when we try to evaluate, we end up crashing on a new error. I don't know if it's related or not, but this is in the squad metrics, not within run_squad.py, so I was less excited to start messing with that and instead decided to put up this issue.
It specifically crashes here:
transformers/data/metrics/squad_metrics.py", line 629, in compute_predictions_log_probs
cur_null_score = result.cls_logits
AttributeError: 'SquadResult' object has no attribute 'cls_logits'
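For reference, the "hack" mentioned above (deleting cls_index and p_mask before the forward call) boils down to filtering the batch dict. A minimal sketch with a made-up batch, not the actual run_squad.py code:

```python
def strip_unused_inputs(inputs, unused=("cls_index", "p_mask")):
    """Drop keys that the model's forward() does not accept."""
    return {k: v for k, v in inputs.items() if k not in unused}

batch = {
    "input_ids": [[1, 2, 3]],
    "attention_mask": [[1, 1, 1]],
    "cls_index": [0],
    "p_mask": [[0, 0, 0]],
}
clean = strip_unused_inputs(batch)
```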
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.7.3
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No (Just 1 GPU)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5969/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5968/comments | https://api.github.com/repos/huggingface/transformers/issues/5968/events | https://github.com/huggingface/transformers/issues/5968 | 663,708,806 | MDU6SXNzdWU2NjM3MDg4MDY= | 5,968 | Loss becoming nearly zero in first 5K steps when training LM from scratch | {
"login": "008karan",
"id": 18630864,
"node_id": "MDQ6VXNlcjE4NjMwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/008karan",
"html_url": "https://github.com/008karan",
"followers_url": "https://api.github.com/users/008karan/followers",
"following_url": "https://api.github.com/users/008karan/following{/other_user}",
"gists_url": "https://api.github.com/users/008karan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/008karan/subscriptions",
"organizations_url": "https://api.github.com/users/008karan/orgs",
"repos_url": "https://api.github.com/users/008karan/repos",
"events_url": "https://api.github.com/users/008karan/events{/privacy}",
"received_events_url": "https://api.github.com/users/008karan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! This is an interesting question, have you tried asking it over on the forums at https://discuss.huggingface.co ? You'll probably get more answers over there.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,602 | 1,602 | NONE | null | I am training the ALBERT LM model from scratch.
I have already trained it for Hindi and Bangla and it was working fine but when I am training on Gujarati, the loss is becoming zero in 5K steps.
What could be the reason for the sudden drop in the loss? Can anyone suggest what could be the cause or how to debug such issue?
Any suggestions?
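One way to start debugging is to check vocabulary coverage: if most Gujarati text maps to the unknown token, the LM objective becomes trivial and the loss collapses. A quick sanity check one could run on a sample of encoded ids (a sketch only; `unk_id` and the ids below are placeholders for whatever the tokenizer actually produces):

```python
def unk_fraction(token_ids, unk_id):
    """Fraction of tokens equal to the unknown token; a value near 1.0
    means the vocabulary barely covers the training text."""
    if not token_ids:
        return 0.0
    return sum(1 for t in token_ids if t == unk_id) / len(token_ids)

# e.g. ids from tokenizer.encode(...) on a sample of the corpus,
# assuming the unknown token has id 1
sample_ids = [5, 1, 1, 1, 9, 1, 1, 1]
ratio = unk_fraction(sample_ids, unk_id=1)
```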
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5968/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5967/comments | https://api.github.com/repos/huggingface/transformers/issues/5967/events | https://github.com/huggingface/transformers/pull/5967 | 663,695,546 | MDExOlB1bGxSZXF1ZXN0NDU1MDYzNTU5 | 5,967 | Actually the extra_id are from 0-99 and not from 1-100 | {
"login": "orena1",
"id": 8983713,
"node_id": "MDQ6VXNlcjg5ODM3MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8983713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orena1",
"html_url": "https://github.com/orena1",
"followers_url": "https://api.github.com/users/orena1/followers",
"following_url": "https://api.github.com/users/orena1/following{/other_user}",
"gists_url": "https://api.github.com/users/orena1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orena1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orena1/subscriptions",
"organizations_url": "https://api.github.com/users/orena1/orgs",
"repos_url": "https://api.github.com/users/orena1/repos",
"events_url": "https://api.github.com/users/orena1/events{/privacy}",
"received_events_url": "https://api.github.com/users/orena1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=h1) Report\n> Merging [#5967](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae67b2439fb15954bfd8f0fdf521cf1a650bafb9&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5967 +/- ##\n=======================================\n Coverage 78.51% 78.51% \n=======================================\n Files 146 146 \n Lines 26214 26214 \n=======================================\n Hits 20581 20581 \n Misses 5633 5633 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=footer). Last update [ae67b24...e8e003f](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @patrickvonplaten, can you have a look?",
"Thanks @orena1!",
"pinging @patrickvonplaten for notification"
] | 1,595 | 1,596 | 1,596 | CONTRIBUTOR | null | ```
a = tokenizer.encode("we got a <extra_id_99>", return_tensors='pt',add_special_tokens=True)
print(a)
>tensor([[ 62, 530, 3, 9, 32000]])
a = tokenizer.encode("we got a <extra_id_100>", return_tensors='pt',add_special_tokens=True)
print(a)
>tensor([[ 62, 530, 3, 9, 3, 2, 25666, 834, 23, 26,
834, 2915, 3155]])
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5967/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5967",
"html_url": "https://github.com/huggingface/transformers/pull/5967",
"diff_url": "https://github.com/huggingface/transformers/pull/5967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5967.patch",
"merged_at": 1596104009000
} |
https://api.github.com/repos/huggingface/transformers/issues/5966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5966/comments | https://api.github.com/repos/huggingface/transformers/issues/5966/events | https://github.com/huggingface/transformers/pull/5966 | 663,682,179 | MDExOlB1bGxSZXF1ZXN0NDU1MDUyMTg3 | 5,966 | Bug fix: NER pipeline shouldn't group separate entities of same type | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | ## Effects
nlp=pipeline('ner', ... , grouped_entities=True)
## Fixes
Separate entities of same type shouldn't be grouped together even if they are same type
( B-type1 B-type1 ) != ( B-type1 I-type1 )
## Example
"something something Istanbul Los Angeles something something"
Current output: [ (O O) (B-type1 B-type1 I-type1) (O O) ]
Fixed output: [ (O O) (B-type1) (B-type1 I-type1) (O O) ]
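The fixed rule can be illustrated with a short BIO-grouping sketch in plain Python (a simplified stand-in for the grouping logic, not the actual patch):

```python
def group_bio(tags):
    """Group BIO tags: a 'B-' tag always starts a new entity, so two
    adjacent 'B-' tags of the same type yield two separate entities."""
    groups = []
    for tag in tags:
        if tag == "O":
            continue
        prefix, etype = tag.split("-", 1)
        if prefix == "B" or not groups or groups[-1][0] != etype:
            groups.append((etype, 1))                # start a new entity
        else:                                        # 'I-' extends the last one
            groups[-1] = (etype, groups[-1][1] + 1)
    return groups

# "Istanbul Los Angeles" -> two separate LOC entities, not one
groups = group_bio(["O", "B-LOC", "B-LOC", "I-LOC", "O"])
```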
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5966",
"html_url": "https://github.com/huggingface/transformers/pull/5966",
"diff_url": "https://github.com/huggingface/transformers/pull/5966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5966.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5965/comments | https://api.github.com/repos/huggingface/transformers/issues/5965/events | https://github.com/huggingface/transformers/issues/5965 | 663,591,586 | MDU6SXNzdWU2NjM1OTE1ODY= | 5,965 | BerTweet tokenizer issue | {
"login": "Shiro-LK",
"id": 26505641,
"node_id": "MDQ6VXNlcjI2NTA1NjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26505641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shiro-LK",
"html_url": "https://github.com/Shiro-LK",
"followers_url": "https://api.github.com/users/Shiro-LK/followers",
"following_url": "https://api.github.com/users/Shiro-LK/following{/other_user}",
"gists_url": "https://api.github.com/users/Shiro-LK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shiro-LK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shiro-LK/subscriptions",
"organizations_url": "https://api.github.com/users/Shiro-LK/orgs",
"repos_url": "https://api.github.com/users/Shiro-LK/repos",
"events_url": "https://api.github.com/users/Shiro-LK/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shiro-LK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"This model is defined as a `roberta` model but its tokenizer seems to be a Wordpiece tokenizer (based on the `vocab.txt` file), whereas Roberta uses a Byte-level BPE.\r\n\r\nThis is not currently supported out of the box by our `AutoTokenizer/AutoModel` features (model type ≠ tokenizer type) nor by our Pipelines but I'd like to support this in the future.",
"For now, you'll have to initialize this tokenizer + model independently.\r\n\r\n```\r\nBertTokenizer.from_pretrained(\"...\")\r\nAutoModel.from_pretrained(\"...\")\r\n```\r\n\r\nAlso cc'ing model author @datquocnguyen",
"I am working on it (I just have uploaded the model to huggingface yesterday). \r\nI will create pull requests soon, so that users can make use of the following scripts:\r\n\r\n tokenizer = BertweetTokenizer.from_pretrained(\"vinai/bertweet-base\")\r\n bertweet = BertweetModel.from_pretrained(\"vinai/bertweet-base\")\r\n\r\nPlease stay tuned!\r\n",
"@julien-c @datquocnguyen Thanks for your answer.\r\nI just tried the AutoModel, I had some weird \"CUDA illegal memory access error\" after 2 steps. It works fine with other models such as electra or roberta. I do not know if it is related to some wrong encoding with the tokenizer (I am using the fairseq tokenizer as the tokenizer from huggingface is not working even with BertTokenizer) or something else.\r\n\r\nupdate: I may have found the issue. It may come from the max length which seems to be 130, contrary to regular Bert Base model. I was using a longer length sequence.",
"> I am working on it (I just have uploaded the model to huggingface yesterday).\r\n> I will create pull requests soon, so that users can make use of the following scripts:\r\n> \r\n> ```\r\n> tokenizer = BertweetTokenizer.from_pretrained(\"vinai/bertweet-base\")\r\n> bertweet = BertweetModel.from_pretrained(\"vinai/bertweet-base\")\r\n> ```\r\n> \r\n> Please stay tuned!\r\n\r\nLooking forward to it !",
"@nightlessbaron @Shiro-LK @julien-c FYI, I have just created a pull request #6129 for adding BERTweet and PhoBERT into transformers\r\n@nightlessbaron In case you want to use BERTweet right away, you might have a look at this fork https://github.com/datquocnguyen/transformers \r\nCheers,\r\nDat.",
"@datquocnguyen Looks like the error still exists. From https://huggingface.co/vinai/bertweet-base, I run \r\ntokenizer = AutoTokenizer.from_pretrained(\"vinai/bertweet-base\") \r\nIt gives:\r\nOSError: Model name 'vinai/bertweet-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'vinai/bertweet-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\r\n\r\nAlso, https://huggingface.co/vinai/bertweet-base?text=Paris+is+the+%3Cmask%3E+of+France gives an error",
"I had the same error @steveguang had. Is there any solution?",
"@steveguang @SergioBarretoJr your issue has now been solved.\r\nAlso to @Shiro-LK @nightlessbaron Please check https://github.com/datquocnguyen/transformers\r\n@julien-c Please help review this pull request #6129 BERTweet now works in Auto mode and without an additional dependency fastBPE.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This was solved by @datquocnguyen "
] | 1,595 | 1,604 | 1,604 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Bertweet
## To reproduce
Steps to reproduce the behavior:
1. tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
OSError: Model name 'vinai/bertweet-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'vinai/bertweet-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
## Expected behavior
The tokenizer should be loaded correctly.
https://huggingface.co/vinai/bertweet-base?text=Paris+is+the+%3Cmask%3E+of+France.
## Environment info
- `transformers` version: 2.10.0
- Platform: ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.5
- Tensorflow version (GPU?):
- Using GPU in script?: v100
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5965/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5964/comments | https://api.github.com/repos/huggingface/transformers/issues/5964/events | https://github.com/huggingface/transformers/issues/5964 | 663,582,779 | MDU6SXNzdWU2NjM1ODI3Nzk= | 5,964 | text classification reuse without classifier | {
"login": "YuBeomGon",
"id": 44599580,
"node_id": "MDQ6VXNlcjQ0NTk5NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/44599580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YuBeomGon",
"html_url": "https://github.com/YuBeomGon",
"followers_url": "https://api.github.com/users/YuBeomGon/followers",
"following_url": "https://api.github.com/users/YuBeomGon/following{/other_user}",
"gists_url": "https://api.github.com/users/YuBeomGon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YuBeomGon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YuBeomGon/subscriptions",
"organizations_url": "https://api.github.com/users/YuBeomGon/orgs",
"repos_url": "https://api.github.com/users/YuBeomGon/repos",
"events_url": "https://api.github.com/users/YuBeomGon/events{/privacy}",
"received_events_url": "https://api.github.com/users/YuBeomGon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This doesn't currently work, because it's trying to instantiate a classification layer with 10 labels, while the checkpoint's classification layer has 19 labels. \r\n\r\nWhat you want is to keep the base layers, but ignore the classification layers. In order to do so, you can load your checkpoint in a base `RobertaModel`. This will only load the base model. You can then save that model to a checkpoint, and load that checkpoint from a `RobertaForSequenceClassification` model.\r\n\r\nHere's how to do this:\r\n\r\n```py\r\nfrom transformers import RobertaForSequenceClassification, RobertaConfig, RobertaModel\r\n\r\nmodel = RobertaModel.from_pretrained(pretrained_path)\r\nmodel.save_pretrained(f\"{pretrained_path}-base-model\")\r\n\r\nconfig = RobertaConfig(num_labels=10)\r\nmodel = RobertaForSequenceClassification.from_pretrained(f\"{pretrained_path}-base-model\", config=config)\r\n```\r\n\r\nYou should see the output:\r\n\r\n```\r\nSome weights of RobertaForSequenceClassification were not initialized from the model checkpoint at here-base-model and are newly initialized: ['classifier.dense.weight', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nThis means that it has correctly loaded the entire base model, and has randomly initialized the classifier weights. Hope that helps!"
] | 1,595 | 1,596 | 1,596 | NONE | null | thanks in advance.
I want to first train on more data, which means more labels
(it's like pretraining).
After that I choose only some labels and train again.
But an error occurred with the code below:
tokenizer = RobertaTokenizer.from_pretrained(pretrained_path, do_lower_case=False)
model = RobertaForSequenceClassification.from_pretrained(pretrained_path, num_labels=10)
The error message is:
Error(s) in loading state_dict for RobertaForSequenceClassification:
size mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([19, 768]) from checkpoint, the shape in current model is torch.Size([10, 768]).
size mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([19]) from checkpoint, the shape in current model is torch.Size([10]).
how can I do it?? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5964/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5963/comments | https://api.github.com/repos/huggingface/transformers/issues/5963/events | https://github.com/huggingface/transformers/issues/5963 | 663,494,106 | MDU6SXNzdWU2NjM0OTQxMDY= | 5,963 | Bert forward reports error on GPU; but runs fine on CPU | {
"login": "RayLei",
"id": 1709968,
"node_id": "MDQ6VXNlcjE3MDk5Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1709968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RayLei",
"html_url": "https://github.com/RayLei",
"followers_url": "https://api.github.com/users/RayLei/followers",
"following_url": "https://api.github.com/users/RayLei/following{/other_user}",
"gists_url": "https://api.github.com/users/RayLei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RayLei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RayLei/subscriptions",
"organizations_url": "https://api.github.com/users/RayLei/orgs",
"repos_url": "https://api.github.com/users/RayLei/repos",
"events_url": "https://api.github.com/users/RayLei/events{/privacy}",
"received_events_url": "https://api.github.com/users/RayLei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"CUDA errors are always very cryptic 😕.\r\n\r\nDo you mind giving us an example that we can reproduce (e.g. with an article you're trying to encode that fails), so that we can see what's going on?",
"Thank you for the comment. I split my code into 2 script files: one is tokenize; the other is transform. Somehow it works. Probably it was caused by some issue in the remote server, or my custom data class. I will close the topic. Thank you for your time. "
] | 1,595 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
## Details
I would like to encode articles using BERT. My code runs fine on CPU but fails on GPU. The GPU is on a remote server.
```
def assign_gpu(token):
token_tensor = token['input_ids'].to('cuda')
token_typeid = token['token_type_ids'].to('cuda')
attention_mask = token['attention_mask'].to('cuda')
output = {'input_ids': token_tensor,
'token_type_ids': token_typeid,
'attention_mask': attention_mask}
return output
bs = 16
data_dl = DataLoader(PatentDataset(df), batch_size=bs, shuffle=False)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased/')
model = BertModel.from_pretrained('bert-base-uncased/')
inputs_list = []
for a in data_dl:
inputs_list.append(tokenizer.batch_encode_plus(a, pad_to_max_length=True, return_tensors='pt'))
# GPU part
model.cuda()
model.eval()
out_list = []
with torch.no_grad():
for i, inputs in enumerate(inputs_list):
inputs = assign_gpu(inputs)
output = model(**inputs)[0][:, 0, :]
out_list.append(output)
```
The error message is:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-f6979d077f60> in <module>
6 for i, inputs in enumerate(inputs_list):
7 inputs = assign_gpu(inputs)
----> 8 output = model(**inputs)[0][:, 0, :]
9 out_list.append(output)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/volta_pypkg/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states)
751
752 embedding_output = self.embeddings(
--> 753 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
754 )
755 encoder_outputs = self.encoder(
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/volta_pypkg/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
180 token_type_embeddings = self.token_type_embeddings(token_type_ids)
181
--> 182 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
183 embeddings = self.LayerNorm(embeddings)
184 embeddings = self.dropout(embeddings)
RuntimeError: CUDA error: device-side assert triggered
```
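As a hedged aside (not confirmed as the cause in this issue, where the author later traced it to the environment or dataset class): the most common trigger for a device-side assert at this exact line is a token id outside the embedding table's range, which CPU reports as a plain `IndexError` instead. A minimal, self-contained check one could run before moving batches to CUDA — the vocab size and ids below are illustrative stand-ins:

```python
def out_of_range_ids(input_ids, vocab_size):
    """Return token ids that would overflow the embedding table --
    the typical trigger for 'CUDA error: device-side assert triggered'
    inside BertEmbeddings, while the same lookup on CPU raises a
    clearer IndexError."""
    return [i for i in input_ids if i < 0 or i >= vocab_size]

# bert-base-uncased has vocab_size == 30522, so valid ids are 0..30521.
print(out_of_range_ids([101, 7592, 30522, 102], vocab_size=30522))  # [30522]
```

Running this over each tokenized batch (e.g. on `batch["input_ids"].tolist()`) is a cheap first step before debugging on GPU.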
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/63026701/bert-forward-reports-error-on-gpu-but-runs-fine-on-cpu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5963/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5962/comments | https://api.github.com/repos/huggingface/transformers/issues/5962/events | https://github.com/huggingface/transformers/issues/5962 | 663,451,578 | MDU6SXNzdWU2NjM0NTE1Nzg= | 5,962 | Where are the two files for converting TensorFlow to PyTorch? | {
"login": "ChrisChaw",
"id": 41299010,
"node_id": "MDQ6VXNlcjQxMjk5MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/41299010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChrisChaw",
"html_url": "https://github.com/ChrisChaw",
"followers_url": "https://api.github.com/users/ChrisChaw/followers",
"following_url": "https://api.github.com/users/ChrisChaw/following{/other_user}",
"gists_url": "https://api.github.com/users/ChrisChaw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChrisChaw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChrisChaw/subscriptions",
"organizations_url": "https://api.github.com/users/ChrisChaw/orgs",
"repos_url": "https://api.github.com/users/ChrisChaw/repos",
"events_url": "https://api.github.com/users/ChrisChaw/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChrisChaw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If I understand correctly and you're asking what/where are those two files:\r\n\r\n- `discriminator.json` is the configuration file for the model you want to convert\r\n- `model.bin` is the location where the converted checkpoint will be saved."
] | 1,595 | 1,596 | 1,596 | NONE | null | 
Hello,
Where can I find these two files: discriminator.json and model.bin? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5962/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5961/comments | https://api.github.com/repos/huggingface/transformers/issues/5961/events | https://github.com/huggingface/transformers/pull/5961 | 663,443,839 | MDExOlB1bGxSZXF1ZXN0NDU0ODU2MDA3 | 5,961 | [docs] Add integration test example to copy pasta template | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=h1) Report\n> Merging [#5961](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e714412fe6b38346a1f73525b701e030857b2f21&el=desc) will **decrease** coverage by `1.22%`.\n> The diff coverage is `25.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5961 +/- ##\n==========================================\n- Coverage 78.50% 77.27% -1.23% \n==========================================\n Files 146 146 \n Lines 26214 26218 +4 \n==========================================\n- Hits 20578 20259 -319 \n- Misses 5636 5959 +323 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `72.72% <25.00%> (-3.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | 
`93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=footer). Last update [e714412...a57f645](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Encourage testing practices that have been encouraged since last update, namely:
- @slow tests that run on cuda and fp16 if possible and show that your model produces good outputs.
- more tests ~= better
- unindent the ModelTester
- call get_extended_attention_mask and delete massive comment.
My code is probably broken because this thing isn't tested!
Add:
"""
Try to make this test take a string and check that the resultant string == desired_result using your tokenizer's encode and decode functions
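A minimal sketch of that roundtrip check — `encode`/`decode` below are toy stand-ins for a real tokenizer's methods, not the transformers API:

```python
def roundtrip_ok(encode, decode, text):
    """Check that decode(encode(text)) reproduces the input string."""
    return decode(encode(text)) == text

# Toy "tokenizer": one token per character code point.
toy_encode = lambda s: [ord(c) for c in s]
toy_decode = lambda ids: "".join(chr(i) for i in ids)
print(roundtrip_ok(toy_encode, toy_decode, "Hello world"))  # True
```

A real test would pass the model's tokenizer methods and a representative input string instead of the toy lambdas.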
"" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5961/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5961",
"html_url": "https://github.com/huggingface/transformers/pull/5961",
"diff_url": "https://github.com/huggingface/transformers/pull/5961.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5961.patch",
"merged_at": 1595436519000
} |
https://api.github.com/repos/huggingface/transformers/issues/5960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5960/comments | https://api.github.com/repos/huggingface/transformers/issues/5960/events | https://github.com/huggingface/transformers/pull/5960 | 663,424,873 | MDExOlB1bGxSZXF1ZXN0NDU0ODQxMDA5 | 5,960 | Adding Minimal Reproducible Usage Example For TPU support on examples/seq2seq | {
"login": "AdityaSoni19031997",
"id": 22738086,
"node_id": "MDQ6VXNlcjIyNzM4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22738086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdityaSoni19031997",
"html_url": "https://github.com/AdityaSoni19031997",
"followers_url": "https://api.github.com/users/AdityaSoni19031997/followers",
"following_url": "https://api.github.com/users/AdityaSoni19031997/following{/other_user}",
"gists_url": "https://api.github.com/users/AdityaSoni19031997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdityaSoni19031997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdityaSoni19031997/subscriptions",
"organizations_url": "https://api.github.com/users/AdityaSoni19031997/orgs",
"repos_url": "https://api.github.com/users/AdityaSoni19031997/repos",
"events_url": "https://api.github.com/users/AdityaSoni19031997/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdityaSoni19031997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The test will obviously break, right?",
"Any progress on merging this?",
"they won't accept it as the checks have failed? \nBut it's expected for the checks to fail as I have modified `modeling_bart` and it's gonna have one more Param count.",
"Is that so @sshleifer?",
"Thanks for the contribution, this looks awesome!\r\n\r\nWe can't merge with failing tests, but I think the tests can pass.\r\n\r\nCould you also check\r\n```\r\nRUN_SLOW=1 pytest tests/test_modeling_bart.py\r\nRUN_SLOW=1 pytest tests/test_modeling_marian.py\r\nRUN_SLOW=1 pytest test_modeling_mbart.py\r\n```\r\nadd the `USE_CUDA=1` prefix to make them run faster on GPU.",
"Actually can we add a `support_tpu` flag to BartConfig, init it to False, and only allocate `lm_head` if it's set to True. I'm concerned that we are wasting RAM when we train on GPU. (I would happily change my mind if I encountered evidence that this change doesn't change GPU RAM consumption.)",
"I tried this version and it seems to work but it stucks at \"Validation sanity check\". Working colab [here](https://colab.research.google.com/drive/1NA9_EPEBNmo7feQ60iiznsLP_XbWwQmC?usp=sharing)",
"Well I removed the validation check altogether by passing in the concerned\nflag to 0. Tried debugging to find out what's causing it, but I couldn't\nfigure it out.\n\nIf you will train and validate, it will work.\n\nOn Mon, 27 Jul 2020, 17:46 marton-avrios, <[email protected]> wrote:\n\n> I tried this version and it seems to work but it stucks at \"Validation\n> sanity check\".\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/5960#issuecomment-664360705>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFNPJJQYIPQDG5ZPYXDTTT3R5VV23ANCNFSM4PEHWJYA>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This is now supported by `Seq2SeqTrainer`. Use that if you want TPU support!"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | Attempt to resolve https://github.com/huggingface/transformers/issues/5895.
Minimal Working [Colab Example](https://colab.research.google.com/drive/16q2GWrnZ0Tjg1OxJQUcaWKCWwn3Jh5z0?usp=sharing).
To use more than a single core, one needs to ensure that enough RAM is available, or else wait for PyTorch-XLA to release a stable version. They also [released](https://github.com/pytorch/xla/issues/1870#issuecomment-623603323) a fix a while back that prevents excessive memory usage on the nightly build. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5960/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5960/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5960",
"html_url": "https://github.com/huggingface/transformers/pull/5960",
"diff_url": "https://github.com/huggingface/transformers/pull/5960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5960.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5959/comments | https://api.github.com/repos/huggingface/transformers/issues/5959/events | https://github.com/huggingface/transformers/issues/5959 | 663,423,234 | MDU6SXNzdWU2NjM0MjMyMzQ= | 5,959 | Can't load weights of models | {
"login": "jhqian0909",
"id": 47420814,
"node_id": "MDQ6VXNlcjQ3NDIwODE0",
"avatar_url": "https://avatars.githubusercontent.com/u/47420814?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhqian0909",
"html_url": "https://github.com/jhqian0909",
"followers_url": "https://api.github.com/users/jhqian0909/followers",
"following_url": "https://api.github.com/users/jhqian0909/following{/other_user}",
"gists_url": "https://api.github.com/users/jhqian0909/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jhqian0909/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhqian0909/subscriptions",
"organizations_url": "https://api.github.com/users/jhqian0909/orgs",
"repos_url": "https://api.github.com/users/jhqian0909/repos",
"events_url": "https://api.github.com/users/jhqian0909/events{/privacy}",
"received_events_url": "https://api.github.com/users/jhqian0909/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you can try code :\r\n```\r\nfrom transformers import BertTokenizer, BertModel\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\n```",
"> I think you can try code :\r\n> \r\n> ```\r\n> from transformers import BertTokenizer, BertModel\r\n> tokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n> model = BertModel.from_pretrained(\"bert-base-uncased\")\r\n> ```\r\nstill succeed in running tokenizer but fail with modeling",
"could due to the website limitaition of my company",
"I have been banging my head on the same issue for a few days now. \r\nJust observed, it downloads these weights from AWS links ( check configuration_bert.py or configuration_distilbert.py, you'll find these files in /Anaconda3/envs/envName/Lib/site-packages/transformers/ ) and my company blocks AWS and GCP links.\r\nThis, most likely, seems to be the issue.",
"> When I run below codes, I can successfully load the tokenizer but fail with loading the models.\r\n> from transformers import AutoTokenizer, AutoModelWithLMHead\r\n> tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n> model = AutoModelWithLMHead.from_pretrained(\"bert-base-uncased\")\r\n> \r\n> Here is the error:\r\n> OSError: Can't load weights for 'bert-base-uncased'. Make sure that:\r\n> \r\n> * 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'\r\n> * or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\r\n> \r\n> I can load models after manually downloading them but I do want to directly load them via transformers.\r\n\r\nhello, I meet the same problem ,and if I download the model manually, and where I should put the file in ?",
"> > When I run below codes, I can successfully load the tokenizer but fail with loading the models.\r\n> > from transformers import AutoTokenizer, AutoModelWithLMHead\r\n> > tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n> > model = AutoModelWithLMHead.from_pretrained(\"bert-base-uncased\")\r\n> > Here is the error:\r\n> > OSError: Can't load weights for 'bert-base-uncased'. Make sure that:\r\n> > \r\n> > * 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'\r\n> > * or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\r\n> > \r\n> > I can load models after manually downloading them but I do want to directly load them via transformers.\r\n> \r\n> hello, I meet the same problem ,and if I download the model manually, and where I should put the file in ?\r\n\r\n\r\nanywhere is ok, just put your file location in BertTokenizer.from_pretrained(location)",
"> > When I run below codes, I can successfully load the tokenizer but fail with loading the models.\r\n> > from transformers import AutoTokenizer, AutoModelWithLMHead\r\n> > tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n> > model = AutoModelWithLMHead.from_pretrained(\"bert-base-uncased\")\r\n> > Here is the error:\r\n> > OSError: Can't load weights for 'bert-base-uncased'. Make sure that:\r\n> > \r\n> > * 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'\r\n> > * or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\r\n> > \r\n> > I can load models after manually downloading them but I do want to directly load them via transformers.\r\n> \r\n> hello, I meet the same problem ,and if I download the model manually, and where I should put the file in ?\r\n\r\nWhere did u download the model manually from?",
"You can browse to https://huggingface.co/bert-base-uncased/tree/main for example and download pretrained models."
] | 1,595 | 1,613 | 1,595 | NONE | null | When I run the code below, I can successfully load the tokenizer but fail to load the model.
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithLMHead.from_pretrained("bert-base-uncased")
Here is the error:
OSError: Can't load weights for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
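For what it's worth (several commenters suspected blocked AWS download links), the workaround is pointing `from_pretrained` at a manually downloaded directory. A small sanity check on such a directory's listing, using the file names the error message itself lists, might look like this:

```python
# File names taken from the error message above: a usable local
# checkpoint directory needs the config plus one of these weight files.
WEIGHT_FILES = ("pytorch_model.bin", "tf_model.h5", "model.ckpt")

def looks_like_checkpoint(filenames):
    """True if a directory listing contains config.json and a weight file."""
    names = set(filenames)
    return "config.json" in names and any(w in names for w in WEIGHT_FILES)

print(looks_like_checkpoint(["config.json", "pytorch_model.bin"]))  # True
print(looks_like_checkpoint(["vocab.txt"]))                         # False
```

For example, `looks_like_checkpoint(os.listdir("./bert-base-uncased"))` before calling `from_pretrained` on that (hypothetical) local path.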
I can load models after manually downloading them but I do want to directly load them via transformers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5959/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5958/comments | https://api.github.com/repos/huggingface/transformers/issues/5958/events | https://github.com/huggingface/transformers/pull/5958 | 663,416,135 | MDExOlB1bGxSZXF1ZXN0NDU0ODM0MzA3 | 5,958 | Add functioning early stopping (patience) and weighted random sampling | {
"login": "Breakend",
"id": 1609857,
"node_id": "MDQ6VXNlcjE2MDk4NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Breakend",
"html_url": "https://github.com/Breakend",
"followers_url": "https://api.github.com/users/Breakend/followers",
"following_url": "https://api.github.com/users/Breakend/following{/other_user}",
"gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Breakend/subscriptions",
"organizations_url": "https://api.github.com/users/Breakend/orgs",
"repos_url": "https://api.github.com/users/Breakend/repos",
"events_url": "https://api.github.com/users/Breakend/events{/privacy}",
"received_events_url": "https://api.github.com/users/Breakend/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=h1) Report\n> Merging [#5958](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/09a2f40684f77e62d0fd8485fe9d2d610390453f&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `13.04%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5958 +/- ##\n==========================================\n- Coverage 78.49% 78.39% -0.10% \n==========================================\n Files 146 146 \n Lines 26210 26252 +42 \n==========================================\n+ Hits 20573 20580 +7 \n- Misses 5637 5672 +35 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `35.50% <9.09%> (-2.34%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `78.00% <100.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=footer). 
Last update [09a2f40...cbc7c63](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"when this will be added to library?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,607 | 1,607 | NONE | null | This is to fix the issues in #4186 (hopefully) to get it merged in. Also adds weighted random sampling for imbalanced classes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5958/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5958",
"html_url": "https://github.com/huggingface/transformers/pull/5958",
"diff_url": "https://github.com/huggingface/transformers/pull/5958.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5958.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5957/comments | https://api.github.com/repos/huggingface/transformers/issues/5957/events | https://github.com/huggingface/transformers/issues/5957 | 663,413,769 | MDU6SXNzdWU2NjM0MTM3Njk= | 5,957 | NoneType error when using Trainer | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@aclifton314 just wondering if your custom dataset `__getitem__` is configured correctly? \r\ni.e I'm thinking it should be returning `return {'data': x, 'target': y}` or `return x, y`",
"@danieldiamond Thank you for the response. That's a good point about `__getitem__`. The data in my csv file is structured in the following way:\r\n```\r\nTitle, Raw\r\nDistillBERT a distilled version of BERT smaller faster cheaper and lighter, As transfer learning from large-scale pretrained models becomes more prevalent in Natural Language Processing (NLP) blah blah blah.\r\n``` \r\nI'm using the `Raw` columns in the csv as training data for my model, which inherits from `GPT2LMHeadModel`. I had assumed that there wouldn't be any need for targets for this model. But maybe there's something in `Trainer` that expects this particular type of format? In which case, I presume I could always set `y` to `None`?\r\n\r\nI was reading [this post](https://discuss.huggingface.co/t/dataset-expected-by-trainer/148/2) on the HF forum and there is the following mention:\r\n```\r\nMake sure the dataloader returns the dict with same key values forward method expects.\r\nInside _training_step, you’ll pass inputs to the function, and then after the inputs are passed kept on gpu, the function does:\r\noutput = model(**inputs)\r\n```\r\n\r\nI also looked through the debugger again. `inputs` for `tr_loss += self._training_step(model, inputs, optimizer)` looks like it is created in `for step, inputs in enumerate(epoch_iterator):` (this is in `trainer.py`). Looking at `epoch_iterator`, the `iterable` attribute contains the data from `sd_dataset` (i.e. `epoch_iterator -> iterable -> dataset`).\r\n\r\nIt looks like the data is being carried through `trainer.py`, but I can't seem to figure out why `inputs` would be empty.",
"Your dataset returns keys that are not known to the model. Also, it doesn't seem to tokenize your texts?\r\nIt should return a dict with the expected argument of your models, including the labels (that way the model will return the loss for the trainer). I have no idea what the code of `GPT2FinetunedWithNgrams` looks like, so that will depend on that.\r\n\r\nNote that `Trainer` is not an abstract training loop for all DL problems, it's customized to work with the transformers models, so your model should behave the same way as HF models if you want to use it.",
"@sgugger incorporating your changes and upgrading to Transformers 3.0.2 solved the problem for me. I've got a long thread about it [here](https://discuss.huggingface.co/t/finetuning-gpt2-with-user-defined-loss/163/30).\r\n\r\nThanks for the help!"
] | 1,595 | 1,595 | 1,595 | NONE | null | ## System Info
Pop!_OS 20.04
Pytorch: 1.5.1
Transformers: 2.11.0
Python: 3.7.6
## Background Info
I wasn't sure what the `training_dataset` parameter of `Trainer` was so I opted to create a custom Pytorch `DataSet` using this [tutorial](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
```python
from torch.utils.data import Dataset
import pandas as pd
import torch
class SDAbstractsDataset(Dataset):
def __init__(self, csv_file):
self.sd_abstracts_df = pd.read_csv(csv_file, encoding='ISO-8859-1')
def __len__(self):
return len(self.sd_abstracts_df)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
sample = {'abstract_text': self.sd_abstracts_df.iloc[idx, 1]}
return sample
```
I instantiate the `SDAbstractsDataset` object, create the `TrainingArguments` object, create an object based of a customized model, then instantiate the `Trainer` object with `sd_dataset`.
```python
from text_gen_w_transformers.finetune_gpt2 import GPT2FinetunedWithNgrams
from text_gen_w_transformers.custom_dataset import SDAbstractsDataset
from transformers import TrainingArguments, Trainer
sd_dataset = SDAbstractsDataset('/path/to/samples_64.csv')
training_args = TrainingArguments(
output_dir='/path/to/output/dir',
do_train=True,
per_device_train_batch_size=4,
learning_rate=1e-3,
num_train_epochs=1
)
model = GPT2FinetunedWithNgrams.from_pretrained('gpt2')
trainer = Trainer(
model=model,
args=training_args,
train_dataset=sd_dataset
)
trainer.train()
```
Whenever I run the `trainer.train()` command, I get the following error:
```python
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/16 [00:00<?, ?it/s]Traceback (most recent call last):
File "/path/to/project/finetune_test.py", line 37, in <module>
trainer.train()
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 499, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 622, in _training_step
outputs = model(**inputs)
File "/path/to/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/path/to/project/finetune_gpt2.py", line 42, in forward
orig_input_str = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
TypeError: 'NoneType' object is not subscriptable
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/16 [00:00<?, ?it/s]
```
I did a little debugging and found that `inputs` in the line `tr_loss += self._training_step(model, inputs, optimizer)` was empty. Any thoughts on how to fix this?
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5957/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5956/comments | https://api.github.com/repos/huggingface/transformers/issues/5956/events | https://github.com/huggingface/transformers/pull/5956 | 663,397,624 | MDExOlB1bGxSZXF1ZXN0NDU0ODE5NzI1 | 5,956 | [CI] Install examples/requirements.txt | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=h1) Report\n> Merging [#5956](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e714412fe6b38346a1f73525b701e030857b2f21&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5956 +/- ##\n==========================================\n+ Coverage 78.50% 78.51% +0.01% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n+ Hits 20578 20581 +3 \n+ Misses 5636 5633 -3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=footer). Last update [e714412...ca424ca](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"merging to rerun."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5956/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5956",
"html_url": "https://github.com/huggingface/transformers/pull/5956",
"diff_url": "https://github.com/huggingface/transformers/pull/5956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5956.patch",
"merged_at": 1595380068000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5955/comments | https://api.github.com/repos/huggingface/transformers/issues/5955/events | https://github.com/huggingface/transformers/issues/5955 | 663,376,572 | MDU6SXNzdWU2NjMzNzY1NzI= | 5,955 | module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices' | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | # 🐛 Bug
A bunch of tf tests fail with:
```module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'```
```
ERROR tests/test_benchmark_tf.py - AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
ERROR tests/test_benchmark_tf.py - AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
ERROR tests/test_benchmark_tf.py - AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
ERROR tests/test_benchmark_tf.py - AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
```
`tf.config.list_physical_devices` seems to be added in tf-2.1, so unless `transformers` starts to require tf >= 2.1, this breaks for tf < 2.1.
Depending on what you decide I can send a PR to fix this with either:
a. require tf-2.1+ (simplest)
b. write a wrapper `list_physical_devices` that uses `tf.config.experimental.list_physical_devices` for tf < 2.1, and `tf.config.list_physical_devices` otherwise, and switch to using it.
c. do nothing?
## Environment info
```
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.0.1 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5955/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5954/comments | https://api.github.com/repos/huggingface/transformers/issues/5954/events | https://github.com/huggingface/transformers/pull/5954 | 663,307,198 | MDExOlB1bGxSZXF1ZXN0NDU0NzQ1MjA0 | 5,954 | [pack_dataset] don't sort before packing, only pack train | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=h1) Report\n> Merging [#5954](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d1d15d6f2de9e2cde48ff3ea2072add3311ce2ac&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5954 +/- ##\n==========================================\n+ Coverage 78.52% 78.56% +0.03% \n==========================================\n Files 146 146 \n Lines 26314 26314 \n==========================================\n+ Hits 20664 20674 +10 \n+ Misses 5650 5640 -10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+2.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=footer). Last update [d1d15d6...97c473a](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | this is better than sorting.
But for best metrics, don't pack. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5954/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5954",
"html_url": "https://github.com/huggingface/transformers/pull/5954",
"diff_url": "https://github.com/huggingface/transformers/pull/5954.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5954.patch",
"merged_at": 1595866463000
} |
https://api.github.com/repos/huggingface/transformers/issues/5953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5953/comments | https://api.github.com/repos/huggingface/transformers/issues/5953/events | https://github.com/huggingface/transformers/pull/5953 | 663,281,157 | MDExOlB1bGxSZXF1ZXN0NDU0NzIzNTU3 | 5,953 | CL util to convert models to fp16 before upload | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=h1) Report\n> Merging [#5953](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d1d15d6f2de9e2cde48ff3ea2072add3311ce2ac&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5953 +/- ##\n==========================================\n+ Coverage 78.52% 78.56% +0.03% \n==========================================\n Files 146 146 \n Lines 26314 26314 \n==========================================\n+ Hits 20664 20674 +10 \n+ Misses 5650 5640 -10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+2.25%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=footer). Last update [d1d15d6...055e604](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | This should probably go to `commands/` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5953/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5953",
"html_url": "https://github.com/huggingface/transformers/pull/5953",
"diff_url": "https://github.com/huggingface/transformers/pull/5953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5953.patch",
"merged_at": 1595866885000
} |
https://api.github.com/repos/huggingface/transformers/issues/5952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5952/comments | https://api.github.com/repos/huggingface/transformers/issues/5952/events | https://github.com/huggingface/transformers/pull/5952 | 663,262,770 | MDExOlB1bGxSZXF1ZXN0NDU0NzA4MjY0 | 5,952 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5952/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5952",
"html_url": "https://github.com/huggingface/transformers/pull/5952",
"diff_url": "https://github.com/huggingface/transformers/pull/5952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5952.patch",
"merged_at": 1595614335000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5951/comments | https://api.github.com/repos/huggingface/transformers/issues/5951/events | https://github.com/huggingface/transformers/pull/5951 | 663,257,921 | MDExOlB1bGxSZXF1ZXN0NDU0NzA0MjQx | 5,951 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=h1) Report\n> Merging [#5951](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/95d1962b9c8460b4cec5a88eb9915e8e25f5bc1e&el=desc) will **decrease** coverage by `1.41%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5951 +/- ##\n==========================================\n- Coverage 78.69% 77.27% -1.42% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20628 20258 -370 \n- Misses 5586 5956 +370 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| 
[src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=footer). Last update [95d1962...89670f7](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5951/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5951",
"html_url": "https://github.com/huggingface/transformers/pull/5951",
"diff_url": "https://github.com/huggingface/transformers/pull/5951.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5951.patch",
"merged_at": 1595614330000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5950/comments | https://api.github.com/repos/huggingface/transformers/issues/5950/events | https://github.com/huggingface/transformers/issues/5950 | 663,252,101 | MDU6SXNzdWU2NjMyNTIxMDE= | 5,950 | seq2seq: checkpoint callback seems messed up | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"wrong thats just the directory. contents make more sense."
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | 
^^ lightning checkpoint saved much later than `best_tfmr` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5950/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5949/comments | https://api.github.com/repos/huggingface/transformers/issues/5949/events | https://github.com/huggingface/transformers/pull/5949 | 663,244,917 | MDExOlB1bGxSZXF1ZXN0NDU0NjkzNjUw | 5,949 | seq2seq/run_eval.py can take decoder_start_token_id | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=h1) Report\n> Merging [#5949](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/95d1962b9c8460b4cec5a88eb9915e8e25f5bc1e&el=desc) will **decrease** coverage by `0.17%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5949 +/- ##\n==========================================\n- Coverage 78.69% 78.51% -0.18% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20628 20581 -47 \n- Misses 5586 5633 +47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <50.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=footer). Last update [95d1962...450d753](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | Also: batch_decode: document kwargs for autocomplete | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5949/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5949",
"html_url": "https://github.com/huggingface/transformers/pull/5949",
"diff_url": "https://github.com/huggingface/transformers/pull/5949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5949.patch",
"merged_at": 1595365125000
} |
https://api.github.com/repos/huggingface/transformers/issues/5948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5948/comments | https://api.github.com/repos/huggingface/transformers/issues/5948/events | https://github.com/huggingface/transformers/issues/5948 | 663,242,736 | MDU6SXNzdWU2NjMyNDI3MzY= | 5,948 | Exporting T5 to ONNX | {
"login": "jdsirota",
"id": 25303500,
"node_id": "MDQ6VXNlcjI1MzAzNTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/25303500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jdsirota",
"html_url": "https://github.com/jdsirota",
"followers_url": "https://api.github.com/users/jdsirota/followers",
"following_url": "https://api.github.com/users/jdsirota/following{/other_user}",
"gists_url": "https://api.github.com/users/jdsirota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jdsirota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jdsirota/subscriptions",
"organizations_url": "https://api.github.com/users/jdsirota/orgs",
"repos_url": "https://api.github.com/users/jdsirota/repos",
"events_url": "https://api.github.com/users/jdsirota/events{/privacy}",
"received_events_url": "https://api.github.com/users/jdsirota/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Could you please provide your modified script?",
"fwiw, I'm seeing the same error when trying to export a T5 model for translation with both PyTorch and TensorFlow and the latest version of Transformers (3.3.1).\r\n\r\n```txt\r\npython3 path/to/convert_graph_to_onnx.py --model t5-base translation_en_to_de.onnx --pipeline translation_en_to_de --framework pt\r\n\r\n====== Converting model to ONNX ======\r\nONNX opset version set to: 11\r\nLoading pipeline (model: t5-base, tokenizer: t5-base)\r\nUsing framework PyTorch: 1.6.0\r\nError while converting the model: You have to specify either decoder_input_ids or decoder_inputs_embeds\r\n```\r\n\r\n```txt\r\npython3 path/to/convert_graph_to_onnx.py --model t5-base translation_en_to_de.onnx --pipeline translation_en_to_de --framework tf\r\n\r\n====== Converting model to ONNX ======\r\nONNX opset version set to: 11\r\nLoading pipeline (model: t5-base, tokenizer: t5-base)\r\n/usr/local/lib/python3.8/site-packages/transformers/modeling_tf_auto.py:690: FutureWarning: The class `TFAutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `TFAutoModelForCausalLM` for causal language models, `TFAutoModelForMaskedLM` for masked language models and `TFAutoModelForSeq2SeqLM` for encoder-decoder models.\r\n warnings.warn(\r\n2020-09-30 16:29:58.866842: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2020-09-30 16:29:58.883705: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe05e6c8780 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:\r\n2020-09-30 16:29:58.883726: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\r\n2020-09-30 16:29:58.893052: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.\r\nAll model checkpoint weights were used when initializing TFT5ForConditionalGeneration.\r\n\r\nAll the weights of TFT5ForConditionalGeneration were initialized from the model checkpoint at t5-base.\r\nIf your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.\r\n/!\\ Please note TensorFlow doesn't support exporting model > 2Gb /!\\\r\nUsing framework TensorFlow: 2.3.1, keras2onnx: 1.7.0\r\nError while converting the model: You have to specify either inputs or inputs_embeds\r\n```",
"Has anyone solved the issue??\r\n",
"Having this issue too, has anyone found a workaround?",
"I had the same issue but[ this](https://stackoverflow.com/a/66117248/13273054) post gave an insight on what was causing the error.",
"I had the same issue. Has anyone solved the issue?\r\nThanks!",
"@suyuzhang @Anku5hk @howlinghuffy @ankane @LowinLi please have a look at the [fastT5 ](https://github.com/Ki6an/fastT5) library. it **exports** any t5 model to onnx, quantized it, runs it on onnxruntime. you can speed up the t5 models up to 5x and can reduce the model size to 3x. for more info check out the repo [here ](https://github.com/Ki6an/fastT5).",
"Looks appropriate, Thanks!",
"I had the same issue. Has anyone solved the issue?\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,595 | 1,622 | 1,622 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): t5-small (T5ForConditionalGeneration)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The problem arises when running `convert_graph_to_onnx.py`
The tasks I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [ x ] my own task or dataset: (give details below)
I am using a T5ForConditionalGeneration for machine translation.
## To reproduce
Steps to reproduce the behavior:
1. Run `python transformers/convert_graph_to_onnx.py --framework pt --model t5-small --tokenizer t5-small --opset 12 t5-small.onnx`
```
ONNX opset version set to: 12
Loading pipeline (model: t5-small, tokenizer: t5-small)
/Users/joshuasirota/onnx_env/lib/python3.6/site-packages/transformers/modeling_auto.py:798: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 242M/242M [01:09<00:00, 3.50MB/s]
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Using framework PyTorch: 1.5.1
Error while converting the model: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
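For context, the error arises because T5 is an encoder-decoder model: its `forward()` needs `decoder_input_ids` (or `decoder_inputs_embeds`) in addition to the encoder inputs, and the generic conversion script only feeds encoder tensors. One common workaround is to wrap the model so the tracer always supplies both. Below is a minimal stand-in sketch of that wrapper pattern using plain Python objects — it is not the real `transformers` API, just an illustration of the shape of the fix:

```python
# Stand-in sketch (illustration only, not the transformers API):
# FakeT5 mimics the failure mode, ExportWrapper shows the workaround shape.

class FakeT5:
    """Stand-in for an encoder-decoder model like T5ForConditionalGeneration."""

    def forward(self, input_ids=None, decoder_input_ids=None):
        if decoder_input_ids is None:
            # Same failure mode as the real model when traced with
            # encoder inputs only.
            raise ValueError(
                "You have to specify either decoder_input_ids "
                "or decoder_inputs_embeds"
            )
        return ("logits", input_ids, decoder_input_ids)


class ExportWrapper:
    """Exposes a forward() that always passes decoder inputs to the model."""

    def __init__(self, model):
        self.model = model

    def forward(self, input_ids, decoder_input_ids):
        return self.model.forward(
            input_ids=input_ids, decoder_input_ids=decoder_input_ids
        )


model = FakeT5()
wrapped = ExportWrapper(model)
out = wrapped.forward([0, 1, 2], [0])
print(out[0])  # logits
```

For a real export the same idea applies: give the tracer a callable whose signature includes both encoder and decoder tensors, rather than relying on the single-input path the script assumes.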
## Expected behavior
An ONNX export should be created
## Environment info
- `transformers` version: 3.0.2
- Platform: Darwin-18.6.0-x86_64-i386-64bit
- Python version: 3.6.5
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5948/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5948/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5947/comments | https://api.github.com/repos/huggingface/transformers/issues/5947/events | https://github.com/huggingface/transformers/issues/5947 | 663,238,256 | MDU6SXNzdWU2NjMyMzgyNTY= | 5,947 | Test on a new string of gpt2 fine tuned | {
"login": "vyaslkv",
"id": 33617789,
"node_id": "MDQ6VXNlcjMzNjE3Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/33617789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyaslkv",
"html_url": "https://github.com/vyaslkv",
"followers_url": "https://api.github.com/users/vyaslkv/followers",
"following_url": "https://api.github.com/users/vyaslkv/following{/other_user}",
"gists_url": "https://api.github.com/users/vyaslkv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vyaslkv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyaslkv/subscriptions",
"organizations_url": "https://api.github.com/users/vyaslkv/orgs",
"repos_url": "https://api.github.com/users/vyaslkv/repos",
"events_url": "https://api.github.com/users/vyaslkv/events{/privacy}",
"received_events_url": "https://api.github.com/users/vyaslkv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have these files generated\r\n<img width=\"1389\" alt=\"Screenshot 2020-07-22 at 12 39 19 AM\" src=\"https://user-images.githubusercontent.com/33617789/88096141-d1f06a80-cbb3-11ea-8468-9284fb074bef.png\">\r\n",
"can you please provide a code snippet",
"Not sure what you mean by test, do you want to generate using the model or calculate loss and perplexity for the test data ?\r\n\r\nFor generation you can use the `.generate` method. This [blog post ](https://huggingface.co/blog/how-to-generate) explains generate very well.",
"got it thanks :)",
"Closing this as the issue is solved. Feel free to re-open if you still face issues."
] | 1,595 | 1,596 | 1,596 | NONE | null | I fine-tuned GPT-2 on my own dataset and now have the fine-tuned model. How can I test it on a new string?
GPT-2/GPT and causal language modeling
The following example fine-tunes GPT-2 on WikiText-2. We're using the raw WikiText-2 (no tokens were replaced before the tokenization). The loss here is that of causal language modeling.
```
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
``` | {
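To test the fine-tuned checkpoint on a new string, the usual route (as noted in the comments) is to load the model from `output/` and call `model.generate()`. As a conceptual illustration of what greedy generation does under the hood, here is a toy loop in which a made-up `next_token` function stands in for the real model — this is a sketch of the decoding idea only, not the `transformers` API:

```python
# Toy sketch of greedy decoding; next_token is a stand-in for a real
# language model's argmax over the next-token distribution.

def next_token(context):
    # Fake "model": returns the last token id plus one (illustration only;
    # 50257 is GPT-2's vocabulary size).
    return (context[-1] + 1) % 50257


def greedy_generate(prompt_ids, max_new_tokens=5):
    """Append one greedily chosen token at a time, like generate() does."""
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        ids.append(next_token(ids))
    return ids


print(greedy_generate([10, 20], max_new_tokens=3))  # [10, 20, 21, 22, 23]
```

With the real model, the equivalent step is tokenizing the new string to input ids, calling `generate()` on them, and decoding the returned ids back to text; the blog post linked in the comments covers the sampling options in detail.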
"url": "https://api.github.com/repos/huggingface/transformers/issues/5947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5947/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5946/comments | https://api.github.com/repos/huggingface/transformers/issues/5946/events | https://github.com/huggingface/transformers/pull/5946 | 663,226,550 | MDExOlB1bGxSZXF1ZXN0NDU0Njc4NDk4 | 5,946 | Update doc to new model outputs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The other problem is that it downloads all models since it tests all examples. The PR fixes some docstrings so I know some of them were broken.\r\n\r\nSide note, the correct command is:\r\n```\r\nRUN_SLOW=yes pytest tests/test_doc_samples.py\r\n```\r\nsince they are all marked as slow.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=h1) Report\n> Merging [#5946](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/604a2355dc61f2888d68aab3adb0c5b648a4f42d&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5946 +/- ##\n==========================================\n+ Coverage 78.47% 78.51% +0.03% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n+ Hits 20572 20582 +10 \n+ Misses 5642 5632 -10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <ø> (ø)` | |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <ø> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <ø> (ø)` | |\n| 
[src/transformers/modeling\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <ø> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <ø> (ø)` | |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.88% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <ø> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <ø> (ø)` | |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=footer). Last update [604a235...766911a](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Quite a few of those tests are failing but it appears TensorFlow outputs are unreliable: they are printed at full precision and my guess is that depending on your GPU, you get some different digits starting at 1e-6. Will discuss with @LysandreJik on how to make this more reliable when he's back, in the meantime, merging this one."
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | This is a follow-up from #5438, adapting doc pages and examples. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5946/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5946",
"html_url": "https://github.com/huggingface/transformers/pull/5946",
"diff_url": "https://github.com/huggingface/transformers/pull/5946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5946.patch",
"merged_at": 1595369636000
} |
https://api.github.com/repos/huggingface/transformers/issues/5945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5945/comments | https://api.github.com/repos/huggingface/transformers/issues/5945/events | https://github.com/huggingface/transformers/pull/5945 | 663,217,174 | MDExOlB1bGxSZXF1ZXN0NDU0NjcwODE3 | 5,945 | consistently use True/False for `return_tuple` | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Mmm, it should be `None` all the time except if the model doesn't have an inner config. The current behavior for all other flags (`output_hidden_states` and `output_attentions`) is that passing an argument supersedes the config. So for instance, passing `return_tuple=False` to the model even if `config.return_tuple = True` should end up with no `return_tuple`.",
"But why is there a need for a 3rd state, why not just default to `False`?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=h1) Report\n> Merging [#5945](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/604a2355dc61f2888d68aab3adb0c5b648a4f42d&el=desc) will **decrease** coverage by `0.17%`.\n> The diff coverage is `98.85%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5945 +/- ##\n==========================================\n- Coverage 78.47% 78.29% -0.18% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20572 20525 -47 \n- Misses 5642 5689 +47 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `24.10% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <100.00%> (ø)` | |\n| 
[src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.37% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <100.00%> (ø)` | |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=footer). Last update [604a235...f5be8a4](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That is the way those flags are set and consistency with the others is less likely to surprise users:\r\n`None` -> use the value in config (defaulting to `False`)\r\n`False` -> force-use `return_tuple=False` even if the config says `True`\r\n`True` -> force-use `return_tuple=True` even if the config says `False`\r\n\r\nOne can imagine the user has set the config a certain way but needs to bypass it. Returning a tuple for a punctual conversion to ONNX for instance, or not returning one on a model set for jit-tracing/ONNX just to get the documented output while doing a punctual test.",
"I agree that nullable boolean are a good API design 💯 ",
"Thank you for your explanation, @sgugger!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | …
consistently use True/False for `return_tuple` (currently it's sometimes None, sometimes False) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5945/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5945",
"html_url": "https://github.com/huggingface/transformers/pull/5945",
"diff_url": "https://github.com/huggingface/transformers/pull/5945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5945.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5944/comments | https://api.github.com/repos/huggingface/transformers/issues/5944/events | https://github.com/huggingface/transformers/issues/5944 | 663,172,864 | MDU6SXNzdWU2NjMxNzI4NjQ= | 5,944 | process stuck at LineByLineTextDataset. training not starting | {
"login": "arvikumar83",
"id": 68608459,
"node_id": "MDQ6VXNlcjY4NjA4NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/68608459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arvikumar83",
"html_url": "https://github.com/arvikumar83",
"followers_url": "https://api.github.com/users/arvikumar83/followers",
"following_url": "https://api.github.com/users/arvikumar83/following{/other_user}",
"gists_url": "https://api.github.com/users/arvikumar83/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arvikumar83/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arvikumar83/subscriptions",
"organizations_url": "https://api.github.com/users/arvikumar83/orgs",
"repos_url": "https://api.github.com/users/arvikumar83/repos",
"events_url": "https://api.github.com/users/arvikumar83/events{/privacy}",
"received_events_url": "https://api.github.com/users/arvikumar83/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, this is because it's tokenizing the entire dataset in a single thread, so it's bound to be a bit slow. This class caches the result though, so you will only have to do this step once!"
] | 1,595 | 1,596 | 1,596 | NONE | null | # ❓ Questions & Help
I am using the Python code below.
BASE_MODEL = "/data/nlp/bert-base-uncased"
CACHE_DIR = "/data/nlp/cache"
model = AutoModelWithLMHead.from_pretrained(BASE_MODEL,cache_dir=CACHE_DIR)
t_tokenizer = ByteLevelBPETokenizer()
t_tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
new_vocab = t_tokenizer.get_vocab()
tokenizer = BertTokenizerFast.from_pretrained(BASE_MODEL)
num_added_toks = tokenizer.add_tokens(list(new_vocab.keys()))
model.resize_token_embeddings(len(tokenizer))
dataset = LineByLineTextDataset(tokenizer=tokenizer,file_path=DATA_FILE,block_size=64)
It works fine for small files, but for a file > 600 MB the process gets stuck at `dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=DATA_FILE, block_size=64)`.
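While the single-thread tokenization and its cache are being built, one hedged workaround is to stream the file in bounded line batches instead of materializing all of it at once. This is an illustrative sketch, not a transformers API (`iter_line_batches` is a made-up helper):

```python
def iter_line_batches(path, batch_size=1000):
    """Yield lists of up to batch_size non-empty lines, never holding the whole file."""
    batch = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                batch.append(line)
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:  # flush the trailing partial batch
        yield batch
```

Each yielded batch can then be passed to the tokenizer, keeping peak memory proportional to the batch size rather than the file size.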
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5944/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5943/comments | https://api.github.com/repos/huggingface/transformers/issues/5943/events | https://github.com/huggingface/transformers/pull/5943 | 663,145,104 | MDExOlB1bGxSZXF1ZXN0NDU0NjEyOTg4 | 5,943 | [Doc] explaining romanian postprocessing for MBART BLEU hacking | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=h1) Report\n> Merging [#5943](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ccbf74a685ae24bd1a0ba1325e4e9a9d62bbb2fa&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5943 +/- ##\n==========================================\n- Coverage 77.31% 77.31% -0.01% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20268 20267 -1 \n- Misses 5946 5947 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5943/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=footer). Last update [ccbf74a...163ba92](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer just a quick comment on this, from here: https://github.com/pytorch/fairseq/issues/1758#issuecomment-625214318\r\n\r\nfor EN to RO, there is no point of maximising BLEU by removing diacritics when in real world (WMT human evaluation + SMT Matrix) it is clearly compared with a reference which HAS diacritics. But lot of papers do not do the things right.",
"That makes sense @vince62s! This is mostly so I can have a link to paste into a github issue when people ask me why their BLEU score is 27 :) \r\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5943/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5943",
"html_url": "https://github.com/huggingface/transformers/pull/5943",
"diff_url": "https://github.com/huggingface/transformers/pull/5943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5943.patch",
"merged_at": 1595355169000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5942/comments | https://api.github.com/repos/huggingface/transformers/issues/5942/events | https://github.com/huggingface/transformers/issues/5942 | 663,128,288 | MDU6SXNzdWU2NjMxMjgyODg= | 5,942 | Converting GPT2 logits to token ids directly | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This is a very interesting question, I think you would have additional answers if you asked it over on the forums: https://discuss.huggingface.co"
] | 1,595 | 1,597 | 1,597 | NONE | null | ## System Info
Pop!_OS 20.04
Pytorch: 1.5.1
Transformers: 2.11.0
Python: 3.7.6
## Background Info
Here is the constructor and forward method for the model I am trying to build. Ultimately it will finetune GPT2 with a loss from a ngrams model I made. The loss isn't implemented yet because I want to test the GPT2 generation part first.
```python
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
def __init__(self, config):
super().__init__(config)
self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=True,
):
temperature = 0.85
tmp_input_ids = input_ids
max_gen_length = 30
counter = 0
orig_input_str = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
strs_to_join = orig_input_str.split()
while counter < max_gen_length:
transformer_outputs = self.transformer(
tmp_input_ids,
past=past,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states) / (temperature)
last_token = lm_logits[:, -1]
last_token_softmax = torch.softmax(last_token, dim=-1).squeeze()
next_token = torch.argmax(last_token_softmax).tolist()
next_gen_token_str = self.tokenizer.decode(next_token, clean_up_tokenization_spaces=True).strip()
strs_to_join.append(next_gen_token_str)
new_str_input = ' '.join(strs_to_join)
tmp_input_ids = self.tokenizer.encode(new_str_input, return_tensors='pt')
counter += 1
return new_str_input
```
Right now the code will take the `lm_logits`, calculate the softmax, and then get the next token predicted by GPT2. I then add that next token to the original input sequence and feed that combination back into GPT2, until the `max_gen_length` is reached. Finally it returns the original input sequence with the generated sequence appended to it.
## Question 1
Is there a way to directly go from logits to token ids that I am missing in HF Transformers? Or better yet, is there a better, more efficient way of doing this?
## Question 2
Given that I am using things like a `max_length` and `temperature`, I feel like the `generate()` method would be useful here. However, I am not exactly sure how to implement that. Since ultimately I want my model to finetune GPT2 based off a ngrams loss, I don't know if calling `generate()` will call that method from the GPT2 model that is involved in the finetuning or some fixed GPT2 model coming elsewhere. Part of this is my nonfamiliarity with how pytorch trains (which I'm working on understanding better).
Said differently, if I can call `generate()` within the above forward method (I imagine something like `super().generate()`), will that generate a sequence using the GPT2 model whose weights are currently being modified based on my dataset and finetuning, or will it generate a sequence from some static version of GPT2 whose weights are not being modified?
I hope those questions aren't too convoluted. I can elaborate more if needed. Thanks in advance for your help!
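On Question 1: since softmax is strictly monotonic, the argmax of the (temperature-scaled) logits already identifies the greedy token id, so the softmax call can be dropped entirely. A framework-free sketch of that shortcut, with plain lists standing in for tensors (`greedy_next_token` is a hypothetical helper):

```python
def greedy_next_token(logits, temperature=0.85):
    # Dividing by a positive temperature never changes which entry is largest,
    # so for greedy decoding both the softmax and the scaling are redundant.
    scaled = [x / temperature for x in logits]
    return max(range(len(scaled)), key=lambda i: scaled[i])

print(greedy_next_token([0.1, 2.3, -1.0, 0.7]))  # 1
```

Temperature only matters once you sample from the distribution instead of taking the argmax.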
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5942/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5941/comments | https://api.github.com/repos/huggingface/transformers/issues/5941/events | https://github.com/huggingface/transformers/pull/5941 | 663,125,548 | MDExOlB1bGxSZXF1ZXN0NDU0NTk3MTMw | 5,941 | a transparent solution for DataParallel.gather not supporting ModelOutput (dataclass) | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This solves the problems encountered after the model outputs PR and doesn't break anything in existing PyTorch behavior. I don't know how the rest of the team feels about patching over libraries methods though.\r\n\r\nOther possible fixes are:\r\n- breaking changes and just use dicts everywhere (supported by DataParallel and supported by JIT in PyTorch 1.6.0)\r\n- adding a hook at init of `PretrainedModel` to check at the first forward pass if the model has been wrapped in a DataParallel container and setting return_tuple in that case (but that's kind of ugly).\r\n- manually testing in every model during the forward pass if it has been wrapped in a DataParallel container and setting return_tuple in that case (by changing the test that sets `return_tuple`).",
"One extra note to @sgugger's comments is that even if pytorch fixes `gather` to support `dataclasses` - converting them to dicts, it still won't be sufficient, since we have models where the output dataclass has optional members (w/o defaults) followed by required members, e.g. `MaskedLMOutput`, so such possibly modified by pytorch `gather` would fail here, unless the optional members are looked up and `None` is passed to the constructor - I guess this could be done in pytorch's `gather` as well.\r\n\r\nPerhaps coming up with a pure `dataclasses` and not `ModelOutput`-specific implementation, that can be given to the pytorch team? and then this workaround will be needed only for older pytorch versions. ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=h1) Report\n> Merging [#5941](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32883b310ba30d72e67bb2ebb5847888f03a90a8&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `19.23%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5941 +/- ##\n==========================================\n- Coverage 78.51% 78.46% -0.06% \n==========================================\n Files 146 146 \n Lines 26214 26236 +22 \n==========================================\n+ Hits 20583 20585 +2 \n- Misses 5631 5651 +20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.58% <ø> (+0.12%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.73% <19.23%> (-4.77%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=footer). 
Last update [32883b3...528758e](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thinking of it a little bit more, if PyTorch starts to support dataclasses in `gather`, we can then drop the patching by having `= None` for every attribute of every type of `ModelOuput` (as is done in #5740).",
"Since the model outputs had the other problem of not working with TensorFlow, we are going on a different road (see [the forum](https://discuss.huggingface.co/t/new-model-output-types/195/8))."
] | 1,595 | 1,603 | 1,596 | CONTRIBUTOR | null | 1. Modify torch/nn/parallel/scatter_gather.gather function to support ModelOutput (dataclass) outputs. We override the torch.nn.DataParallel.gather method with this custom method.
2. Remove previously committed workarounds.
implementation: @sgugger
integration/testing: me
This should transparently solve https://github.com/huggingface/transformers/issues/5693
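The essence of the fix can be illustrated with a standalone helper that flattens a dataclass output into a plain dict before gathering — a hedged sketch of the idea, not the patched scatter_gather code itself:

```python
from dataclasses import dataclass, fields, is_dataclass

def to_gatherable(output):
    # Flatten a dataclass instance into a dict so container utilities that
    # only understand dicts/tuples can recurse into it; pass anything else through.
    if is_dataclass(output) and not isinstance(output, type):
        return {f.name: getattr(output, f.name) for f in fields(output)}
    return output

@dataclass
class ToyOutput:
    loss: float
    logits: list

print(to_gatherable(ToyOutput(0.5, [1, 2])))  # {'loss': 0.5, 'logits': [1, 2]}
print(to_gatherable(3))                       # 3
```

Reconstructing the dataclass afterwards is the tricky part when some fields are optional without defaults, which is why the real patch has to handle construction explicitly.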
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5941/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5941",
"html_url": "https://github.com/huggingface/transformers/pull/5941",
"diff_url": "https://github.com/huggingface/transformers/pull/5941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5941.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5940/comments | https://api.github.com/repos/huggingface/transformers/issues/5940/events | https://github.com/huggingface/transformers/issues/5940 | 663,107,210 | MDU6SXNzdWU2NjMxMDcyMTA= | 5,940 | What is the difference between the function of add_tokens() and add_special_tokens() in tokenizer | {
"login": "kugwzk",
"id": 15382517,
"node_id": "MDQ6VXNlcjE1MzgyNTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/15382517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kugwzk",
"html_url": "https://github.com/kugwzk",
"followers_url": "https://api.github.com/users/kugwzk/followers",
"following_url": "https://api.github.com/users/kugwzk/following{/other_user}",
"gists_url": "https://api.github.com/users/kugwzk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kugwzk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kugwzk/subscriptions",
"organizations_url": "https://api.github.com/users/kugwzk/orgs",
"repos_url": "https://api.github.com/users/kugwzk/repos",
"events_url": "https://api.github.com/users/kugwzk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kugwzk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For some reasons those functions do not appear in the documentation, will have a look at why. The docstrings state of `add_special_tokens` states:\r\n\r\n> Add a dictionary of special tokens (eos, pad, cls...) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).\r\n> Using `add_special_tokens` will ensure your special tokens can be used in several ways:\r\n> - special tokens are carefully handled by the tokenizer (they are never split)\r\n> - you can easily refer to special tokens using tokenizer class attributes like `tokenizer.cls_token`. This makes it easy to develop model-agnostic training and fine-tuning scripts.\r\n\r\nThough the second point would not apply in your case. The docstring of `add_tokens` states:\r\n\r\n> Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from length of the current vocabulary.\r\n When possible, special tokens are already registered for provided pretrained models (ex: BertTokenizer cls_token is already registered to be '[CLS]' and XLM's one is also registered to be '</s>')\r\n\r\nHope that helps!",
"Thank a lot for your help. Therefore, if I use the add_tokens(), the tokens may still split?",
"I would also appreciate some clarification on the difference between the function and when to use which one. "
] | 1,595 | 1,626 | 1,596 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
When I read the tokenizer code, I had a question: if I want to use a pretrained model for an NMT task, I need to add some tag tokens, such as '2English' or '2French'. I think these tokens are special tokens, so which function should I use: add_tokens() or add_special_tokens()? What is the difference between them?
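The practical difference can be mimicked with a toy vocabulary: both calls grow the vocab, but special tokens are additionally registered so the tokenizer will never split or normalize them. This is an illustrative stand-in, not the real tokenizer internals:

```python
class ToyTokenizer:
    def __init__(self, vocab):
        self.vocab = dict(vocab)
        self.special_tokens = set()

    def add_tokens(self, tokens):
        # Plain vocabulary growth; the tokens get new ids but no protection.
        for t in tokens:
            self.vocab.setdefault(t, len(self.vocab))

    def add_special_tokens(self, tokens):
        # Same vocab growth, plus a "never split / never normalize" registry.
        self.add_tokens(tokens)
        self.special_tokens.update(tokens)

tok = ToyTokenizer({"hello": 0})
tok.add_tokens(["2English"])
tok.add_special_tokens(["<2fr>"])
print("<2fr>" in tok.special_tokens)     # True
print("2English" in tok.special_tokens)  # False
```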
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5940/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5940/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5939/comments | https://api.github.com/repos/huggingface/transformers/issues/5939/events | https://github.com/huggingface/transformers/issues/5939 | 663,101,014 | MDU6SXNzdWU2NjMxMDEwMTQ= | 5,939 | Can't use BatchEncoding in the fit function | {
"login": "konstantin-doncov",
"id": 6806786,
"node_id": "MDQ6VXNlcjY4MDY3ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6806786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/konstantin-doncov",
"html_url": "https://github.com/konstantin-doncov",
"followers_url": "https://api.github.com/users/konstantin-doncov/followers",
"following_url": "https://api.github.com/users/konstantin-doncov/following{/other_user}",
"gists_url": "https://api.github.com/users/konstantin-doncov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/konstantin-doncov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/konstantin-doncov/subscriptions",
"organizations_url": "https://api.github.com/users/konstantin-doncov/orgs",
"repos_url": "https://api.github.com/users/konstantin-doncov/repos",
"events_url": "https://api.github.com/users/konstantin-doncov/events{/privacy}",
"received_events_url": "https://api.github.com/users/konstantin-doncov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | I had a [few problems with the `transformers`](https://github.com/huggingface/transformers/issues/5555) and the related [`tensorflow` functionality](https://github.com/tensorflow/tensorflow/issues/41204), but now I've made some progress using trial and error method.
You can see the issue in this [gist](https://colab.research.google.com/drive/125jJ0qrXGIe6goNrH_Ja7XPZtYp7nMXU?usp=sharing); I found out that the problem was due to the `model.fit(...)` function.
Here is the first version which causes the error:
```
model.fit(
x=X_train, #transformers.tokenization_utils_base.BatchEncoding
y=targets,
epochs=3
)
```
Output:
```
ValueError: too many values to unpack (expected 2)
```
Second version which at least works:
```
model.fit(
x=X_train.values(), #dict_values
y=targets,
epochs=3
)
```
**So, now the main question is why does it work and is my code correct?** I'm asking because I have seen many code fragments which used `BatchEncoding` as `x` in the `fit`(e.g. [this](https://www.kaggle.com/definedennis/pretrained-bert-with-huggingface-tensorflow-2-1/) and [this](https://github.com/thakursc1/NLPKernels/blob/3bb1fcac60ab8cdc6f2f68a2d9b5b7a477873811/DisasterTweetClassification/Transformer.py)), but I just can't do the same thing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5939/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5938/comments | https://api.github.com/repos/huggingface/transformers/issues/5938/events | https://github.com/huggingface/transformers/issues/5938 | 663,092,524 | MDU6SXNzdWU2NjMwOTI1MjQ= | 5,938 | How does AdamW weight_decay works for L2 regularization? | {
"login": "AsmaTidafi",
"id": 36989932,
"node_id": "MDQ6VXNlcjM2OTg5OTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/36989932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AsmaTidafi",
"html_url": "https://github.com/AsmaTidafi",
"followers_url": "https://api.github.com/users/AsmaTidafi/followers",
"following_url": "https://api.github.com/users/AsmaTidafi/following{/other_user}",
"gists_url": "https://api.github.com/users/AsmaTidafi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AsmaTidafi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AsmaTidafi/subscriptions",
"organizations_url": "https://api.github.com/users/AsmaTidafi/orgs",
"repos_url": "https://api.github.com/users/AsmaTidafi/repos",
"events_url": "https://api.github.com/users/AsmaTidafi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AsmaTidafi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there. General questions like this are probably better asked on the [forum](https://discuss.huggingface.co/). There is a research category that is exactly fitted for this. The `weight_decay` does correspond to the lambda you mention, though it's applied directly to the gradient, to avoid wasting compute with this huge some of all the weights scared.\r\n\r\nYou can also look at the [AdamW paper](https://arxiv.org/abs/1711.05101) for more information. ",
"@sgugger thank you for your answer. I will check out the paper."
] | 1,595 | 1,595 | 1,595 | NONE | null | Hello I have some questions about weight regularization in Adam.
Apparently the `weight_decay` argument of the AdamW function https://huggingface.co/transformers/main_classes/optimizer_schedules.html#adamw-pytorch has the same impact as `L2 regularization`.
My questions are: is that parameter the same as the `lambda` we have in the regularization term?

How exactly does it work? And what is its impact on model complexity? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5937/comments | https://api.github.com/repos/huggingface/transformers/issues/5937/events | https://github.com/huggingface/transformers/pull/5937 | 662,995,460 | MDExOlB1bGxSZXF1ZXN0NDU0NDg5MzQz | 5,937 | typos in seq2seq/readme | {
"login": "AdityaSoni19031997",
"id": 22738086,
"node_id": "MDQ6VXNlcjIyNzM4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22738086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdityaSoni19031997",
"html_url": "https://github.com/AdityaSoni19031997",
"followers_url": "https://api.github.com/users/AdityaSoni19031997/followers",
"following_url": "https://api.github.com/users/AdityaSoni19031997/following{/other_user}",
"gists_url": "https://api.github.com/users/AdityaSoni19031997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdityaSoni19031997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdityaSoni19031997/subscriptions",
"organizations_url": "https://api.github.com/users/AdityaSoni19031997/orgs",
"repos_url": "https://api.github.com/users/AdityaSoni19031997/repos",
"events_url": "https://api.github.com/users/AdityaSoni19031997/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdityaSoni19031997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=h1) Report\n> Merging [#5937](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d32279438a73e71961f53baa4fb47d0f08c2984d&el=desc) will **decrease** coverage by `0.94%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5937 +/- ##\n==========================================\n- Coverage 78.25% 77.31% -0.95% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20515 20268 -247 \n- Misses 5699 5946 +247 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | 
`93.60% <0.00%> (+1.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=footer). Last update [d322794...979055f](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! 🎉"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | same-as-title. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5937/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5937",
"html_url": "https://github.com/huggingface/transformers/pull/5937",
"diff_url": "https://github.com/huggingface/transformers/pull/5937.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5937.patch",
"merged_at": 1595339099000
} |
https://api.github.com/repos/huggingface/transformers/issues/5936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5936/comments | https://api.github.com/repos/huggingface/transformers/issues/5936/events | https://github.com/huggingface/transformers/issues/5936 | 662,993,827 | MDU6SXNzdWU2NjI5OTM4Mjc= | 5,936 | Easy selection of a learning rate scheduler when initializing a Trainer | {
"login": "freeIsa",
"id": 61782307,
"node_id": "MDQ6VXNlcjYxNzgyMzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/61782307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freeIsa",
"html_url": "https://github.com/freeIsa",
"followers_url": "https://api.github.com/users/freeIsa/followers",
"following_url": "https://api.github.com/users/freeIsa/following{/other_user}",
"gists_url": "https://api.github.com/users/freeIsa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freeIsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freeIsa/subscriptions",
"organizations_url": "https://api.github.com/users/freeIsa/orgs",
"repos_url": "https://api.github.com/users/freeIsa/repos",
"events_url": "https://api.github.com/users/freeIsa/events{/privacy}",
"received_events_url": "https://api.github.com/users/freeIsa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # 🚀 Feature request
Please consider adding a new argument to the Trainer constructor to specify which among the available learning rate schedulers should be used during training.
## Motivation
Even though Trainer already has the option to specify a given optimizer and learning rate scheduler, you need to explicitly initialize both (even when you only want to change the scheduler) with parameters already available to the Trainer itself via TrainingArguments. It would be nicer and smoother to just provide Trainer with a string specifying which scheduler to use (e.g. 'constant_schedule', 'cosine_schedule_with_warmup', ...) and have `get_optimizers` implement the choice.
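A minimal sketch of what the requested string-based selection could look like (the scheduler names, the `get_lr_multiplier` helper, and the pure-Python multiplier functions below are illustrative assumptions, not the actual `Trainer` API; transformers' real schedulers wrap `torch.optim.lr_scheduler.LambdaLR` around equivalent multiplier functions):

```python
# Illustrative dispatch: map scheduler-name strings to learning-rate
# multiplier functions, mirroring what LambdaLR-style schedulers compute.

def constant_schedule(step, num_warmup_steps=0, num_training_steps=None):
    # Multiplier of 1.0 after an (optional) linear warmup.
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    return 1.0

def linear_schedule_with_warmup(step, num_warmup_steps, num_training_steps):
    # Linear warmup to 1.0, then linear decay to 0.0.
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)
    return max(0.0, (num_training_steps - step)
               / max(1, num_training_steps - num_warmup_steps))

SCHEDULES = {
    "constant_schedule": constant_schedule,
    "linear_schedule_with_warmup": linear_schedule_with_warmup,
}

def get_lr_multiplier(name, step, num_warmup_steps=0, num_training_steps=None):
    # A Trainer could accept `name` as a plain string and dispatch like this.
    if name not in SCHEDULES:
        raise ValueError(f"Unknown scheduler {name!r}, expected one of {sorted(SCHEDULES)}")
    return SCHEDULES[name](step, num_warmup_steps, num_training_steps)
```

With this pattern, `Trainer` would only need one extra string argument, since the warmup and total-step values are already derivable from `TrainingArguments`.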
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5936/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5936/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5935/comments | https://api.github.com/repos/huggingface/transformers/issues/5935/events | https://github.com/huggingface/transformers/issues/5935 | 662,990,175 | MDU6SXNzdWU2NjI5OTAxNzU= | 5,935 | Getting "AttributeError: 'Tensor' object has no attribute 'numpy'" while fine-tuning BERT for NER | {
"login": "mittalpatel",
"id": 200955,
"node_id": "MDQ6VXNlcjIwMDk1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/200955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mittalpatel",
"html_url": "https://github.com/mittalpatel",
"followers_url": "https://api.github.com/users/mittalpatel/followers",
"following_url": "https://api.github.com/users/mittalpatel/following{/other_user}",
"gists_url": "https://api.github.com/users/mittalpatel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mittalpatel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mittalpatel/subscriptions",
"organizations_url": "https://api.github.com/users/mittalpatel/orgs",
"repos_url": "https://api.github.com/users/mittalpatel/repos",
"events_url": "https://api.github.com/users/mittalpatel/events{/privacy}",
"received_events_url": "https://api.github.com/users/mittalpatel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I run into the same error, installing Transformers with pip fix this (not a preferred solution but it works)\r\n`!pip install --upgrade --no-deps --force-reinstall transformers`\r\n\r\nfine-tuning BERT for NER also fail using `run_ner.py`. The error is \r\n```\r\nTraceback (most recent call last):\r\n File \"transformers/examples/token-classification/run_ner.py\", line 304, in <module>\r\n main()\r\n File \"transformers/examples/token-classification/run_ner.py\", line 266, in main\r\n predictions, label_ids, metrics = trainer.predict(test_dataset)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 781, in predict\r\n test_dataloader = self.get_test_dataloader(test_dataset)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 297, in get_test_dataloader\r\n if isinstance(self.test_dataset, torch.utils.data.IterableDataset):\r\nAttributeError: 'Trainer' object has no attribute 'test_dataset'\r\n```",
"Thanks for the input @kevin-yauris \r\n\r\n> I run into the same error, installing Transformers with pip fix this (not a preferred solution but it works)\r\n> !pip install --upgrade --no-deps --force-reinstall transformers\r\n\r\nThis fixed the issue for me too. "
] | 1,595 | 1,595 | 1,595 | NONE | null | As per https://github.com/huggingface/transformers/tree/master/examples/token-classification after doing the required setup and installing required libraries, when I run
```
python3 run_tf_ner.py --data_dir ./ \
--labels ./labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_device_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict
```
it stops at one point with the error
```
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:488 _accumulate_next *
return self._accumulate_gradients(per_replica_features, per_replica_labels)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:498 _accumulate_gradients *
per_replica_loss = self.args.strategy.experimental_run_v2(
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:511 _forward *
per_example_loss, _ = self._run_model(features, labels, True)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:534 _run_model *
outputs = self.model(features, labels=labels, training=training)[:2]
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py:879 call *
loss = self.compute_loss(labels, logits)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py:142 compute_loss *
if tf.math.reduce_any(labels == -1).numpy() is True:
AttributeError: 'Tensor' object has no attribute 'numpy'
```
Tensorflow version: 2.2.0
Numpy version: 1.19.0
CUDA version: 10.2
Following some suggested solutions, I have checked that tf.executing_eagerly() is True.
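Even when eager execution is enabled at the top level, the loss computation here runs inside a compiled `tf.function`, where tensors are symbolic and have no `.numpy()` method. A graph-friendly pattern avoids the Python-level branch entirely and masks out the ignored labels directly. The sketch below uses NumPy only to illustrate the shapes and logic (an assumption for illustration; in TensorFlow the same idea would use `tf.not_equal` / `tf.boolean_mask` instead of a `.numpy()` check):

```python
import numpy as np

def active_loss_inputs(labels, logits, ignore_index=-1):
    # Keep only the positions whose label is not the ignore marker,
    # instead of branching on a Python bool derived from .numpy().
    labels = np.asarray(labels)
    logits = np.asarray(logits)
    mask = labels != ignore_index  # elementwise boolean mask, no eager-only ops
    return labels[mask], logits[mask]

# Toy batch: 2 sequences of 3 tokens, binary logits per token.
labels = [[2, -1, 5], [-1, 1, -1]]
logits = [[[0.1, 0.9], [0.5, 0.5], [0.2, 0.8]],
          [[0.3, 0.7], [0.6, 0.4], [0.9, 0.1]]]
active_labels, active_logits = active_loss_inputs(labels, logits)
```

The loss is then computed on `active_labels`/`active_logits` only, which works identically in graph and eager mode.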
Tried on my own computer and on Colab; both end at the same point with the same error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5935/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5934/comments | https://api.github.com/repos/huggingface/transformers/issues/5934/events | https://github.com/huggingface/transformers/issues/5934 | 662,857,729 | MDU6SXNzdWU2NjI4NTc3Mjk= | 5,934 | InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array. | {
"login": "Douboo",
"id": 32014271,
"node_id": "MDQ6VXNlcjMyMDE0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/32014271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Douboo",
"html_url": "https://github.com/Douboo",
"followers_url": "https://api.github.com/users/Douboo/followers",
"following_url": "https://api.github.com/users/Douboo/following{/other_user}",
"gists_url": "https://api.github.com/users/Douboo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Douboo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Douboo/subscriptions",
"organizations_url": "https://api.github.com/users/Douboo/orgs",
"repos_url": "https://api.github.com/users/Douboo/repos",
"events_url": "https://api.github.com/users/Douboo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Douboo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,600 | 1,600 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from transformers import BertConfig, TFBertMainLayer

def build_model(item_dim, num_layers, num_heads, max_len):
    config = BertConfig(hidden_size=item_dim, num_hidden_layers=num_layers,
                        num_attention_heads=num_heads, intermediate_size=item_dim*4,
                        max_position_embeddings=max_len)
    bert = TFBertMainLayer(config=config)
    inputs = Input(shape=(max_len, item_dim), dtype=tf.float32, name='inputs')
    # feed pre-trained vectors directly to BERT via inputs_embeds
seq_emb = bert(inputs=None, inputs_embeds=inputs)[0]
print(seq_emb)
print(seq_emb[:, -1, :])
last_token_emb = seq_emb[:, -1, :]
outputs = Dense(1, activation='sigmoid')(last_token_emb)
model = Model(inputs=inputs, outputs=outputs)
return model
```
This fails with
`InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
* Run the code successfully when feeding pre-trained vectors to BERT.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
https://colab.research.google.com/gist/Douboo/adf6e136e45f8406b1070d88f4041a49/untitled2.ipynb
- `transformers` version: 3.0.2
- Platform: google colab
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5934/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5933/comments | https://api.github.com/repos/huggingface/transformers/issues/5933/events | https://github.com/huggingface/transformers/issues/5933 | 662,792,766 | MDU6SXNzdWU2NjI3OTI3NjY= | 5,933 | How to get a language model score in BertModel? | {
"login": "oshindow",
"id": 49552492,
"node_id": "MDQ6VXNlcjQ5NTUyNDky",
"avatar_url": "https://avatars.githubusercontent.com/u/49552492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oshindow",
"html_url": "https://github.com/oshindow",
"followers_url": "https://api.github.com/users/oshindow/followers",
"following_url": "https://api.github.com/users/oshindow/following{/other_user}",
"gists_url": "https://api.github.com/users/oshindow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oshindow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oshindow/subscriptions",
"organizations_url": "https://api.github.com/users/oshindow/orgs",
"repos_url": "https://api.github.com/users/oshindow/repos",
"events_url": "https://api.github.com/users/oshindow/events{/privacy}",
"received_events_url": "https://api.github.com/users/oshindow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hi, thanks for your awesome work!
How can I get a language-model score from BertModel that judges whether a sentence conforms to grammatical rules?
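One common approach (an answer sketch, not an official transformers API) is the masked-LM pseudo-log-likelihood: mask each token in turn, ask the masked-LM head for the probability of the original token at that position, and sum the log-probabilities. Formally, for a sentence \(W = (w_1, \dots, w_n)\):

```latex
\mathrm{PLL}(W) \;=\; \sum_{i=1}^{n} \log P\!\left(w_i \mid w_1, \dots, w_{i-1}, [\mathrm{MASK}], w_{i+1}, \dots, w_n\right)
```

A higher (less negative) PLL, or equivalently a lower pseudo-perplexity \(\exp(-\mathrm{PLL}(W)/n)\), suggests a more grammatical sentence. Note that `BertModel` alone has no language-modeling head, so the masked-LM variant of the checkpoint is needed for this.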
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5933/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5932/comments | https://api.github.com/repos/huggingface/transformers/issues/5932/events | https://github.com/huggingface/transformers/pull/5932 | 662,787,434 | MDExOlB1bGxSZXF1ZXN0NDU0MzAzMzQw | 5,932 | Expose padding_strategy on squad processor to fix QA pipeline performance regression | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=h1) Report\n> Merging [#5932](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d32279438a73e71961f53baa4fb47d0f08c2984d&el=desc) will **increase** coverage by `0.23%`.\n> The diff coverage is `30.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5932 +/- ##\n==========================================\n+ Coverage 78.25% 78.49% +0.23% \n==========================================\n Files 146 146 \n Lines 26214 26223 +9 \n==========================================\n+ Hits 20515 20584 +69 \n+ Misses 5699 5639 -60 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.13% <22.22%> (-0.40%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `77.00% <100.00%> (+0.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% 
<0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=footer). Last update [d322794...30d41ea](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Would be awesome to introduce perf regression testing in this repo too at some point (we have some crude timeouts but maybe something more fine-grained)",
"Linked issue https://github.com/huggingface/transformers/issues/6144"
] | 1,595 | 1,597 | 1,595 | MEMBER | null | **Before this PR**:
The squad processor was padding the sequence up to the provided `max_length` parameter which results in a tensor with 512 tokens, mostly padding, making the QA pipeline very slow.
- QA Pipeline (`model="distilbert-base-uncased-distilled-squad"`) = 4.8secs
**After this PR**:
The processor will not be padding at all when coming from the QA pipeline.
- QA Pipeline (`model="distilbert-base-uncased-distilled-squad"`) = 1.29secs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5932/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5932/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5932",
"html_url": "https://github.com/huggingface/transformers/pull/5932",
"diff_url": "https://github.com/huggingface/transformers/pull/5932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5932.patch",
"merged_at": 1595427118000
} |
https://api.github.com/repos/huggingface/transformers/issues/5931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5931/comments | https://api.github.com/repos/huggingface/transformers/issues/5931/events | https://github.com/huggingface/transformers/issues/5931 | 662,665,311 | MDU6SXNzdWU2NjI2NjUzMTE= | 5,931 | ALBERT tokenizer is not callable | {
"login": "guoxuxu",
"id": 29363464,
"node_id": "MDQ6VXNlcjI5MzYzNDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29363464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoxuxu",
"html_url": "https://github.com/guoxuxu",
"followers_url": "https://api.github.com/users/guoxuxu/followers",
"following_url": "https://api.github.com/users/guoxuxu/following{/other_user}",
"gists_url": "https://api.github.com/users/guoxuxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoxuxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoxuxu/subscriptions",
"organizations_url": "https://api.github.com/users/guoxuxu/orgs",
"repos_url": "https://api.github.com/users/guoxuxu/repos",
"events_url": "https://api.github.com/users/guoxuxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoxuxu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there, you should upgrade your transformers library to v3. The version you have does not have callable tokenizers.\r\n\r\nAlternatively, the docs for v2.3.0 are [here](https://huggingface.co/transformers/v2.3.0/). You can use the dropdown menu on the left (just under the hugging face) to change the version of the documentation you are using.",
"Thanks very much",
"Alternatively, I found using tokenizer.encode is ok. \r\nBut how to view the old doc details for this version?\r\nhttps://huggingface.co/transformers/v2.3.0/model_doc/albert.html#\r\nThis link seems to be archived documentation. No examples provided here.",
"This is the documentation of the version 2.3.0. For better documentation, you should really consider upgrading your library :-)",
"Understand. Thanks very much."
] | 1,595 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
when running the follwoing given example:
from transformers import AlbertTokenizer, AlbertModel
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained('albert-base-v2')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
it fails with "*** TypeError: 'AlbertTokenizer' object is not callable"
examples link: https://huggingface.co/transformers/model_doc/albert.html#alberttokenizer
Model I am using (Bert, XLNet ...):
ALBERT
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [X ] the official example scripts: (give details below)
from transformers import AlbertTokenizer, AlbertModel
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained('albert-base-v2')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
link: https://huggingface.co/transformers/model_doc/albert.html#alberttokenizer
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
*** TypeError: 'AlbertTokenizer' object is not callable
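Callable tokenizers were introduced in transformers v3; on v2.x the equivalent call is `tokenizer.encode_plus(...)`. A small compatibility wrapper can support both APIs; the helper name and the stub class below are illustrative (the stub only stands in for a v2.x tokenizer so the fallback path can be shown without downloading a checkpoint):

```python
def encode_compat(tokenizer, text, **kwargs):
    # v3+ tokenizer instances implement __call__; v2.x instances do not.
    if callable(tokenizer):
        return tokenizer(text, **kwargs)
    return tokenizer.encode_plus(text, **kwargs)

# Hypothetical stand-in for a v2.x tokenizer: exposes encode_plus only.
class LegacyTokenizerStub:
    def encode_plus(self, text, **kwargs):
        return {"input_ids": list(range(len(text.split())))}

batch = encode_compat(LegacyTokenizerStub(), "Hello, my dog is cute")
```

With a real v2.3.0 `AlbertTokenizer`, the same wrapper would route through `encode_plus`, while on v3 it would use the callable interface directly.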
## Expected behavior
Shouldn't the docs give a workable example of using ALBERT?
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Platform: Linux
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5931/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5930/comments | https://api.github.com/repos/huggingface/transformers/issues/5930/events | https://github.com/huggingface/transformers/issues/5930 | 662,586,183 | MDU6SXNzdWU2NjI1ODYxODM= | 5,930 | code copy button on the website doesn't copy `...` lines | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"this seems to have been fixed in sphinx - I re-checked and it works now."
] | 1,595 | 1,602 | 1,601 | CONTRIBUTOR | null | # 🐛 Bug
## Information
When the copy button is clicked on a code like [this](https://huggingface.co/transformers/quicktour.html#using-the-tokenizer):
```
>>> pt_batch = tokenizer(
... ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... return_tensors="pt"
... )
```
only the first line is copied.
All lines starting with `>>>` get copied, but `...` lines are ignored. So for example, [this](https://huggingface.co/transformers/quicktour.html#customizing-the-model) gets fully copied:
```
>>> from transformers import DistilBertConfig, DistilBertTokenizer, DistilBertForSequenceClassification
>>> config = DistilBertConfig(n_heads=8, dim=512, hidden_dim=4*512)
>>> tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
>>> model = DistilBertForSequenceClassification(config)
```
I was told that this is the issue with `sphinx_copybutton` and I found an already opened issue there:
https://github.com/executablebooks/sphinx-copybutton/issues/65
So hopefully it gets fixed over there, and then the website can be updated to include the fix.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5930/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5929/comments | https://api.github.com/repos/huggingface/transformers/issues/5929/events | https://github.com/huggingface/transformers/pull/5929 | 662,582,143 | MDExOlB1bGxSZXF1ZXN0NDU0MTIzMTA0 | 5,929 | Add DeBERTa model | {
"login": "BigBird01",
"id": 38195654,
"node_id": "MDQ6VXNlcjM4MTk1NjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/38195654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigBird01",
"html_url": "https://github.com/BigBird01",
"followers_url": "https://api.github.com/users/BigBird01/followers",
"following_url": "https://api.github.com/users/BigBird01/following{/other_user}",
"gists_url": "https://api.github.com/users/BigBird01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigBird01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigBird01/subscriptions",
"organizations_url": "https://api.github.com/users/BigBird01/orgs",
"repos_url": "https://api.github.com/users/BigBird01/repos",
"events_url": "https://api.github.com/users/BigBird01/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigBird01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=h1) Report\n> Merging [#5929](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1fc4de69ed024e18b88cb6f040021630599de2f7?el=desc) will **decrease** coverage by `0.30%`.\n> The diff coverage is `73.13%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5929 +/- ##\n==========================================\n- Coverage 79.35% 79.05% -0.31% \n==========================================\n Files 181 184 +3 \n Lines 35800 36660 +860 \n==========================================\n+ Hits 28410 28982 +572 \n- Misses 7390 7678 +288 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `76.92% <50.00%> (-6.42%)` | :arrow_down: |\n| [src/transformers/tokenization\\_deberta.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGViZXJ0YS5weQ==) | `69.76% <69.76%> (ø)` | |\n| [src/transformers/modeling\\_deberta.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kZWJlcnRhLnB5) | `73.26% <73.26%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.39% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.34% <100.00%> (+0.04%)` | :arrow_up: |\n| 
[src/transformers/configuration\\_deberta.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2RlYmVydGEucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.11% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.64% <100.00%> (+0.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |\n| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=footer). Last update [1fc4de6...0a08565](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Related issue #4858 ",
"Someone is waiting for fine-tuning a new model :)",
"Very cool, looking forward to that model!!",
"Hello, May I know when will the PR be merged?\r\n\r\n",
"@BigBird01 it will probably take between 1 and 3 weeks to merge. 2,500 lines is a lot to review :) \r\nI made some comments, and can take another pass on this when they're addressed.\r\n",
"> @BigBird01 it will probably take between 1 and 3 weeks to merge. 2,500 lines is a lot to review :)\r\n> I made some comments, and can take another pass on this when they're addressed.\r\n\r\nThanks!",
"Thanks for addressing the comments! Will take a look in a few days.",
"> Great, it's nearly done! Thanks a lot for your work on it.\r\n> \r\n> What's left to do is:\r\n> \r\n> * Ensure that the documentation is in the correct format\r\n> * Enable the remaining tests\r\n> \r\n> If you don't have time to work on it right now, let me know and I'll finish the implementation and merge it. Thanks!\r\n\r\n@LysandreJik Thanks for the comments. It will be great if you can work on the rest:) Feel free to let me know if you have any questions on the implementation. \r\n",
"Okay @BigBird01, I have the PyTorch version ready, passing all tests and the docs cleaned up as well. Should I push directly on your branch, or do you want me to open a PR on your fork so that you can check my changes before applying them?",
"Before we merge there will be one final item we'll have to take care of, that's integration tests! It's necessary to ensure we don't diverge from the original implementation. [Here's an example with RoBERTa.](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_roberta.py#L397-L448)\r\n\r\nGiven that you have the original implementation, could you implement such tests? Thanks a lot!",
"> Okay @BigBird01, I have the PyTorch version ready, passing all tests and the docs cleaned up as well. Should I push directly on your branch, or do you want me to open a PR on your fork so that you can check my changes before applying them?\r\n\r\n@LysandreJik I just added you to the repo contributor. Please open a PR on it and I will merge your change into it then add integration tests. Thanks!\r\n",
"Hi @BigBird01, I just opened the pull request [here](https://github.com/BigBird01/transformers/pull/1).",
"Hi @BigBird01, did you get a chance to take a look at the PR?",
"> Hi @BigBird01, did you get a chance to take a look at the PR?\r\n\r\nSorry for the late reply. I just merged your changes and will try to add final test today. ",
"> Hi @BigBird01, did you get a chance to take a look at the PR?\r\n\r\n@LysandreJik I just finished the final test. But I hit a isort error after I pushed the code to the repo, but the tests get passed on my local node. Could you help to take a look at it?",
"> Hi, left a last few comments and questions. Let me know if you do not have time to implement/answer these last changes and I'll do the last updates and merge `DeBERTa`.\r\n\r\n@LysandreJik Thanks for the following up. I just replied most of your comments. Please feel free to make the last changes to merge the PR. Thanks in advance:)",
"I don't really understand the tracing changes, as the model does not pass the TorchScript tests. I'm removing this part, feel free to open a PR to add it back and set the `test_torchscript` flag to `True` in `test_modeling_deberta.py`. FYI, the error is the following:\r\n\r\n```\r\n def save(self, *args, **kwargs):\r\n r\"\"\"\r\n save(f, _extra_files=ExtraFilesMap{})\r\n \r\n See :func:`torch.jit.save <torch.jit.save>` for details.\r\n \"\"\"\r\n> return self._c.save(*args, **kwargs)\r\nE RuntimeError: \r\nE Could not export Python function call 'XSoftmax'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:\r\n\r\n```",
"There is an issue with the checkpoints uploaded on `huggingface.co`, as the base model identifier is `bert`, whereas this has been changed to `deberta`. This means that no weights are loaded on the model, as the base prefix doesn't fit.\r\n\r\nDo you mind if I update the weights on S3 with so that the state dict changes from this:\r\n\r\n```\r\n[...]\r\n'bert.embeddings.word_embeddings.weight', \r\n'bert.embeddings.position_embeddings.weight', \r\n'bert.embeddings.LayerNorm.weight',\r\n'bert.embeddings.LayerNorm.bias', \r\n'bert.encoder.layer.0.attention.self.q_bias', \r\n'bert.encoder.layer.0.attention.self.v_bias'\r\n[...]\r\n```\r\nto this?\r\n```\r\n[...]\r\n'deberta.embeddings.word_embeddings.weight', \r\n'deberta.embeddings.position_embeddings.weight', \r\n'deberta.embeddings.LayerNorm.weight',\r\n'deberta.embeddings.LayerNorm.bias', \r\n'deberta.encoder.layer.0.attention.self.q_bias', \r\n'deberta.encoder.layer.0.attention.self.v_bias'\r\n[...]\r\n```",
"> I don't really understand the tracing changes, as the model does not pass the TorchScript tests. I'm removing this part, feel free to open a PR to add it back and set the `test_torchscript` flag to `True` in `test_modeling_deberta.py`. FYI, the error is the following:\r\n> \r\n> ```\r\n> def save(self, *args, **kwargs):\r\n> r\"\"\"\r\n> save(f, _extra_files=ExtraFilesMap{})\r\n> \r\n> See :func:`torch.jit.save <torch.jit.save>` for details.\r\n> \"\"\"\r\n> > return self._c.save(*args, **kwargs)\r\n> E RuntimeError: \r\n> E Could not export Python function call 'XSoftmax'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:\r\n> ```\r\n\r\nSure. Let's remove it first and I can try to fix it later in a separate PR.",
"> There is an issue with the checkpoints uploaded on `huggingface.co`, as the base model identifier is `bert`, whereas this has been changed to `deberta`. This means that no weights are loaded on the model, as the base prefix doesn't fit.\r\n> \r\n> Do you mind if I update the weights on S3 with so that the state dict changes from this:\r\n> \r\n> ```\r\n> [...]\r\n> 'bert.embeddings.word_embeddings.weight', \r\n> 'bert.embeddings.position_embeddings.weight', \r\n> 'bert.embeddings.LayerNorm.weight',\r\n> 'bert.embeddings.LayerNorm.bias', \r\n> 'bert.encoder.layer.0.attention.self.q_bias', \r\n> 'bert.encoder.layer.0.attention.self.v_bias'\r\n> [...]\r\n> ```\r\n> \r\n> to this?\r\n> \r\n> ```\r\n> [...]\r\n> 'deberta.embeddings.word_embeddings.weight', \r\n> 'deberta.embeddings.position_embeddings.weight', \r\n> 'deberta.embeddings.LayerNorm.weight',\r\n> 'deberta.embeddings.LayerNorm.bias', \r\n> 'deberta.encoder.layer.0.attention.self.q_bias', \r\n> 'deberta.encoder.layer.0.attention.self.v_bias'\r\n> [...]\r\n> ```\r\n\r\nSure.",
"Okay @BigBird01, everything seems good to go, this should be merged very soon :)\r\n\r\nJust one last question, for your models on the hub (`microsoft/deberta-base` and `microsoft/deberta-large`) in your configuration there is `position_biased_input` set to `False`, which means that in the embedding layer, the `position_embeddings` will be set to `None`:\r\n\r\n```py\r\n if not self.position_biased_input:\r\n self.position_embeddings = None\r\n```\r\n\r\nHowever, in the model state dicts in the `pytorch_model.bin`, there is a layer containing the `position_embeddings`. What is correct here? Should there be `position_biased_input = True` in the configurations, or should this layer be removed from the state dicts? \r\n\r\nThanks!",
"> Okay @BigBird01, everything seems good to go, this should be merged very soon :)\r\n> \r\n> Just one last question, for your models on the hub (`microsoft/deberta-base` and `microsoft/deberta-large`) in your configuration there is `position_biased_input` set to `False`, which means that in the embedding layer, the `position_embeddings` will be set to `None`:\r\n> \r\n> ```python\r\n> if not self.position_biased_input:\r\n> self.position_embeddings = None\r\n> ```\r\n> \r\n> However, in the model state dicts in the `pytorch_model.bin`, there is a layer containing the `position_embeddings`. What is correct here? Should there be `position_biased_input = True` in the configurations, or should this layer be removed from the state dicts?\r\n> \r\n> Thanks!\r\n\r\nYes. It's used in the mask decoding part(EMD). We are still polishing that part and will release it once ready.",
"Thanks @patrickvonplaten, @sgugger for the reviews. Will implement the changes tonight.",
"Thanks for your work on this @BigBird01 :)",
"> Thanks for your work on this @BigBird01 :)\r\n\r\nThank you all to merge the code into master @LysandreJik @patrickvonplaten \r\nOne question, why after the merge we can't find the document of deberta model at https://huggingface.co/transformers/\r\nCould you help to check that?",
"The documentation is online, you just have to click on `master` on the top left right under the hugging face logo as is done here: https://huggingface.co/transformers/master/. The next release will then show deberta docs as a default :-) ",
"@BigBird01, two slow tests are failing with the DeBERTa models. Could you show how you implemented the integration tests so that I may investigate?",
"> @BigBird01, two slow tests are failing with the DeBERTa models. Could you show how you implemented the integration tests so that I may investigate?\r\n\r\n@LysandreJik In the integration tests, I just feed the model with a fake input data and verify the output of the model. It's similar to RoBERTa tests. I may take a took at it today. ",
"Thanks! The DeBERTa may not be working correctly right now, knowing the source of the issue would be great. "
] | 1,595 | 1,612 | 1,601 | CONTRIBUTOR | null | Add DeBERTa model to hf transformers. DeBERTa applies two techniques to improve RoBERTa, one is disentangled attention, the other is enhanced mask decoder. With 80GB training data, DeBERTa outperform RoBERTa on a majority of NLU tasks, e.g. SQUAD, MNLI and RACE. Paper link: https://arxiv.org/abs/2006.03654 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5929/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5929",
"html_url": "https://github.com/huggingface/transformers/pull/5929",
"diff_url": "https://github.com/huggingface/transformers/pull/5929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5929.patch",
"merged_at": 1601464050000
} |
https://api.github.com/repos/huggingface/transformers/issues/5928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5928/comments | https://api.github.com/repos/huggingface/transformers/issues/5928/events | https://github.com/huggingface/transformers/issues/5928 | 662,451,121 | MDU6SXNzdWU2NjI0NTExMjE= | 5,928 | Feed forward chunking for all pretrained models | {
"login": "Pradhy729",
"id": 49659913,
"node_id": "MDQ6VXNlcjQ5NjU5OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/49659913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pradhy729",
"html_url": "https://github.com/Pradhy729",
"followers_url": "https://api.github.com/users/Pradhy729/followers",
"following_url": "https://api.github.com/users/Pradhy729/following{/other_user}",
"gists_url": "https://api.github.com/users/Pradhy729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pradhy729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pradhy729/subscriptions",
"organizations_url": "https://api.github.com/users/Pradhy729/orgs",
"repos_url": "https://api.github.com/users/Pradhy729/repos",
"events_url": "https://api.github.com/users/Pradhy729/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pradhy729/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Any opinions here? I will create a PR if there is interest and would like to get your ideas and suggestions. @patrickvonplaten @sshleifer ",
"@patrickvonplaten would be the point person and he is on Vacation until August 3.\r\nIn the interim, if you want to start working on this go right ahead. Make sure it's actually faster/needed before you start though. I don't really know.",
"Hey @Pradhy729, \r\n\r\nYes it would be great to start a PR to add feed forward chunking to other models. Maybe you can start with BERT in your PR and ping us to get Feedback :-) \r\n\r\nA couple of things to consider:\r\n\r\n1) You should probably move the config param `config.chunk_size_feed_forward` to the general `configuration_utils.py` file.\r\n\r\n2) As @sshleifer said it would be good to benchmark the gains in a very similar way to this Notebook:\r\nhttps://github.com/patrickvonplaten/notebooks/blob/master/Reformer_2_4.ipynb\r\n\r\n3) as said earlier we should start with BERT and `config.chunk_size_feed_forward`.",
"Awesome! I will start with BERT and share with you for feedback.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | CONTRIBUTOR | null | Based on this card: [Feed forward chunking](https://github.com/huggingface/transformers/projects/9#card-39483681)
@patrickvonplaten
I'd like to contribute and implement this for all the other models, if this is still pending.
"url": "https://api.github.com/repos/huggingface/transformers/issues/5928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5928/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5927/comments | https://api.github.com/repos/huggingface/transformers/issues/5927/events | https://github.com/huggingface/transformers/pull/5927 | 662,428,082 | MDExOlB1bGxSZXF1ZXN0NDUzOTg1Mzgw | 5,927 | [CI] self-scheduled runner tests examples/ | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=h1) Report\n> Merging [#5927](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4781afd045b4722e7f28347f1c4f42a56a4550e8&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5927 +/- ##\n==========================================\n- Coverage 78.69% 78.59% -0.10% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20628 20603 -25 \n- Misses 5586 5611 +25 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) 
| `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=footer). Last update [4781afd...a25be30](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Gunna merge this and make sure that it runs!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5927/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5927",
"html_url": "https://github.com/huggingface/transformers/pull/5927",
"diff_url": "https://github.com/huggingface/transformers/pull/5927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5927.patch",
"merged_at": 1595365268000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5926/comments | https://api.github.com/repos/huggingface/transformers/issues/5926/events | https://github.com/huggingface/transformers/pull/5926 | 662,243,987 | MDExOlB1bGxSZXF1ZXN0NDUzODIwMzU2 | 5,926 | DataParallel fix: multi gpu evaluation | {
"login": "csarron",
"id": 8440740,
"node_id": "MDQ6VXNlcjg0NDA3NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8440740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/csarron",
"html_url": "https://github.com/csarron",
"followers_url": "https://api.github.com/users/csarron/followers",
"following_url": "https://api.github.com/users/csarron/following{/other_user}",
"gists_url": "https://api.github.com/users/csarron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/csarron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/csarron/subscriptions",
"organizations_url": "https://api.github.com/users/csarron/orgs",
"repos_url": "https://api.github.com/users/csarron/repos",
"events_url": "https://api.github.com/users/csarron/events{/privacy}",
"received_events_url": "https://api.github.com/users/csarron/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this was missing in #5733, thanks for adding it!"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | The DataParallel training was fixed in https://github.com/huggingface/transformers/pull/5733, this commit also fixes the evaluation. It's more convenient when the user enables both `do_train` and `do_eval`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5926/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5926",
"html_url": "https://github.com/huggingface/transformers/pull/5926",
"diff_url": "https://github.com/huggingface/transformers/pull/5926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5926.patch",
"merged_at": 1595282049000
} |
https://api.github.com/repos/huggingface/transformers/issues/5925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5925/comments | https://api.github.com/repos/huggingface/transformers/issues/5925/events | https://github.com/huggingface/transformers/pull/5925 | 662,164,692 | MDExOlB1bGxSZXF1ZXN0NDUzNzQ5MjUw | 5,925 | Allow user to see actual error if a download has failed | {
"login": "festeh",
"id": 6877858,
"node_id": "MDQ6VXNlcjY4Nzc4NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6877858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/festeh",
"html_url": "https://github.com/festeh",
"followers_url": "https://api.github.com/users/festeh/followers",
"following_url": "https://api.github.com/users/festeh/following{/other_user}",
"gists_url": "https://api.github.com/users/festeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/festeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/festeh/subscriptions",
"organizations_url": "https://api.github.com/users/festeh/orgs",
"repos_url": "https://api.github.com/users/festeh/repos",
"events_url": "https://api.github.com/users/festeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/festeh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=h1) Report\n> Merging [#5925](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32883b310ba30d72e67bb2ebb5847888f03a90a8&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5925 +/- ##\n==========================================\n- Coverage 78.51% 78.51% -0.01% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20583 20582 -1 \n- Misses 5631 5632 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <ø> (-0.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.14% <ø> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=footer). Last update [32883b3...80220ae](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,595 | 1,601 | 1,601 | NONE | null | Fixes #5869. Adds new logic of exception handling.
* If there was a caught exception in `cached_path` during download, and `force_download` is True, then raise it
* If `force_download` is False, save this exception and raise it later, if file is not in local cache
* If download exception in `cached_path` was raised, then re-raise it in calling function. If there's no exception, but resolved file is None, keep old message ("...is a correct model identifier...")
It would be great to know if there's a better way of dealing with this issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5925/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5925",
"html_url": "https://github.com/huggingface/transformers/pull/5925",
"diff_url": "https://github.com/huggingface/transformers/pull/5925.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5925.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5924/comments | https://api.github.com/repos/huggingface/transformers/issues/5924/events | https://github.com/huggingface/transformers/pull/5924 | 662,133,341 | MDExOlB1bGxSZXF1ZXN0NDUzNzIxMzk1 | 5,924 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5924",
"html_url": "https://github.com/huggingface/transformers/pull/5924",
"diff_url": "https://github.com/huggingface/transformers/pull/5924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5924.patch",
"merged_at": 1595317298000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5923/comments | https://api.github.com/repos/huggingface/transformers/issues/5923/events | https://github.com/huggingface/transformers/issues/5923 | 662,124,693 | MDU6SXNzdWU2NjIxMjQ2OTM= | 5,923 | how (if at all) are those models related... | {
"login": "codingbutstillalive",
"id": 10086172,
"node_id": "MDQ6VXNlcjEwMDg2MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/10086172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codingbutstillalive",
"html_url": "https://github.com/codingbutstillalive",
"followers_url": "https://api.github.com/users/codingbutstillalive/followers",
"following_url": "https://api.github.com/users/codingbutstillalive/following{/other_user}",
"gists_url": "https://api.github.com/users/codingbutstillalive/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codingbutstillalive/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingbutstillalive/subscriptions",
"organizations_url": "https://api.github.com/users/codingbutstillalive/orgs",
"repos_url": "https://api.github.com/users/codingbutstillalive/repos",
"events_url": "https://api.github.com/users/codingbutstillalive/events{/privacy}",
"received_events_url": "https://api.github.com/users/codingbutstillalive/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think they are not related at all.\r\nWhich models are you talking about?",
"Hi! :)\n\nI am talking about the implementation that is used for the text summarization example under transformers/examples and a BERT-based implementation that is described here:\n\n\nhttps://paperswithcode.com/paper/text-summarization-with-pretrained-encoders\n\nI wondered if this might be the same models or even implementations.",
"That is at `examples/seq2seq/bertabs`, but not actively maintained.\r\nMore recent models can be finetuned with the code at `examples/seq2seq/finetune.py`.",
"Could you please describe what is meant by \"finetuning more recent models\"? Does it mean that it is a generic script that can work with arbitrary NLP models which are implemented with Pytorch?",
"it can work with the following classes of models in our model hub:\r\n```\r\nBartForConditonalGeneration\r\nT5ForConditonalGeneration\r\nMarianMTModel\r\n```\r\n\r\nwhich in total is over 1100 checkpoints!\r\n",
"Oh, I see. The field is progressing so quickly. So I understand that BART is an alternative architecture to the one from the paper \"Text Summarization with Pretrained Encoders\". I will investigate this. Thank you!"
] | 1,595 | 1,595 | 1,595 | NONE | null | https://github.com/huggingface/transformers/tree/master/examples/seq2seq
and...
https://github.com/nlpyang/PreSumm
are they the same? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5923/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5922/comments | https://api.github.com/repos/huggingface/transformers/issues/5922/events | https://github.com/huggingface/transformers/pull/5922 | 662,107,907 | MDExOlB1bGxSZXF1ZXN0NDUzNjk4OTQy | 5,922 | Avoid unnecessary warnings when loading pretrained model | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=h1) Report\n> Merging [#5922](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9827d666ebdf959aa9dfe3627ccb80592b378b77&el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5922 +/- ##\n==========================================\n- Coverage 78.64% 78.51% -0.13% \n==========================================\n Files 146 146 \n Lines 26244 26252 +8 \n==========================================\n- Hits 20639 20612 -27 \n- Misses 5605 5640 +35 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.75% <100.00%> (+<0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.92% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.14% <100.00%> (+0.03%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.30% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| 
[src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+3.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=footer). Last update [9827d66...56688d2](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | COLLABORATOR | null | Currently, the following commands will produce a lot of warnings, because some weights are not saved with the model by design.
GPT-2 (see #5800, #5814)
```
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained('gpt2')
```
T5 (see #3553, #5348)
```
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained('t5-small')
```
Bart
```
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
```
This PR introduces a new class attributes that you can tune per model to ignore some keys during loading and avoid those warnings (that scare users something went wrong for no reason). It fixes the above issues and gives us an API to fix any similar issues on other models.
Pinging @patrickvonplaten since you mentioned this recently. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5922/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5922",
"html_url": "https://github.com/huggingface/transformers/pull/5922",
"diff_url": "https://github.com/huggingface/transformers/pull/5922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5922.patch",
"merged_at": 1595542416000
} |
https://api.github.com/repos/huggingface/transformers/issues/5921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5921/comments | https://api.github.com/repos/huggingface/transformers/issues/5921/events | https://github.com/huggingface/transformers/pull/5921 | 662,107,312 | MDExOlB1bGxSZXF1ZXN0NDUzNjk4NDEy | 5,921 | Create README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=h1) Report\n> Merging [#5921](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32883b310ba30d72e67bb2ebb5847888f03a90a8&el=desc) will **decrease** coverage by `1.20%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5921 +/- ##\n==========================================\n- Coverage 78.51% 77.31% -1.21% \n==========================================\n Files 146 146 \n Lines 26214 26214 \n==========================================\n- Hits 20583 20268 -315 \n- Misses 5631 5946 +315 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=continue).\n> **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=footer). Last update [32883b3...f002de2](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,595 | 1,595 | 1,595 | CONTRIBUTOR | null | - Maybe the result of this query answers the question You did some days ago @julien-c ;-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5921/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5921",
"html_url": "https://github.com/huggingface/transformers/pull/5921",
"diff_url": "https://github.com/huggingface/transformers/pull/5921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5921.patch",
"merged_at": 1595317973000
} |