url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/4112 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4112/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4112/comments | https://api.github.com/repos/huggingface/transformers/issues/4112/events | https://github.com/huggingface/transformers/pull/4112 | 611,101,246 | MDExOlB1bGxSZXF1ZXN0NDEyMzk1Mjcz | 4,112 | Albert large QA model pretrained from baidu webqa and baidu dureader datasets. | {
"login": "wptoux",
"id": 6761483,
"node_id": "MDQ6VXNlcjY3NjE0ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6761483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wptoux",
"html_url": "https://github.com/wptoux",
"followers_url": "https://api.github.com/users/wptoux/followers",
"following_url": "https://api.github.com/users/wptoux/following{/other_user}",
"gists_url": "https://api.github.com/users/wptoux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wptoux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wptoux/subscriptions",
"organizations_url": "https://api.github.com/users/wptoux/orgs",
"repos_url": "https://api.github.com/users/wptoux/repos",
"events_url": "https://api.github.com/users/wptoux/events{/privacy}",
"received_events_url": "https://api.github.com/users/wptoux/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=h1) Report\n> Merging [#4112](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d713cfc5ebfb1ed83de1fce55dd7279f9db30672&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4112 +/- ##\n==========================================\n- Coverage 78.84% 78.84% -0.01% \n==========================================\n Files 114 114 \n Lines 18688 18688 \n==========================================\n- Hits 14735 14734 -1 \n- Misses 3953 3954 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4112/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=footer). Last update [d713cfc...a442791](https://codecov.io/gh/huggingface/transformers/pull/4112?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Really cool. [Model page](https://huggingface.co/wptoux/albert-chinese-large-qa)"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4112/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4112/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4112",
"html_url": "https://github.com/huggingface/transformers/pull/4112",
"diff_url": "https://github.com/huggingface/transformers/pull/4112.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4112.patch",
"merged_at": 1588431132000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4111 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4111/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4111/comments | https://api.github.com/repos/huggingface/transformers/issues/4111/events | https://github.com/huggingface/transformers/issues/4111 | 611,075,427 | MDU6SXNzdWU2MTEwNzU0Mjc= | 4,111 | BERT as encoder and a transformer as a decoder. | {
"login": "ayanamongol",
"id": 6296279,
"node_id": "MDQ6VXNlcjYyOTYyNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6296279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayanamongol",
"html_url": "https://github.com/ayanamongol",
"followers_url": "https://api.github.com/users/ayanamongol/followers",
"following_url": "https://api.github.com/users/ayanamongol/following{/other_user}",
"gists_url": "https://api.github.com/users/ayanamongol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayanamongol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayanamongol/subscriptions",
"organizations_url": "https://api.github.com/users/ayanamongol/orgs",
"repos_url": "https://api.github.com/users/ayanamongol/repos",
"events_url": "https://api.github.com/users/ayanamongol/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayanamongol/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The encoder-decoder or bart may be what you want.",
"I think you should take a look into the ``encoder-decoder`` framework: https://huggingface.co/transformers/model_doc/encoderdecoder.html",
"Note that currently only Bert2Bert is possible."
] | 1,588 | 1,591 | 1,591 | NONE | null | Hi, there
Is there probability to build a model of the BERT as the encoder and the transformer as the decoder?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4111/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4110 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4110/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4110/comments | https://api.github.com/repos/huggingface/transformers/issues/4110/events | https://github.com/huggingface/transformers/pull/4110 | 611,019,876 | MDExOlB1bGxSZXF1ZXN0NDEyMzMxMDg2 | 4,110 | NER: parse args from .args file or JSON | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | COLLABORATOR | null | Hi,
thanks to @julien-c , args parsing from `.args` or json-based configuration files were introduced in #3934 into the internal argparser class.
This PR adds support for it in the `run_ner.py` script.
It also extends the NER documentation and shows how to use a json-based configuration with `run_ner.py`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4110/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4110",
"html_url": "https://github.com/huggingface/transformers/pull/4110",
"diff_url": "https://github.com/huggingface/transformers/pull/4110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4110.patch",
"merged_at": 1588429758000
} |
https://api.github.com/repos/huggingface/transformers/issues/4109 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4109/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4109/comments | https://api.github.com/repos/huggingface/transformers/issues/4109/events | https://github.com/huggingface/transformers/pull/4109 | 610,984,723 | MDExOlB1bGxSZXF1ZXN0NDEyMzEzODUy | 4,109 | Fix #2941 | {
"login": "xxbidiao",
"id": 1439638,
"node_id": "MDQ6VXNlcjE0Mzk2Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1439638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xxbidiao",
"html_url": "https://github.com/xxbidiao",
"followers_url": "https://api.github.com/users/xxbidiao/followers",
"following_url": "https://api.github.com/users/xxbidiao/following{/other_user}",
"gists_url": "https://api.github.com/users/xxbidiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xxbidiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xxbidiao/subscriptions",
"organizations_url": "https://api.github.com/users/xxbidiao/orgs",
"repos_url": "https://api.github.com/users/xxbidiao/repos",
"events_url": "https://api.github.com/users/xxbidiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xxbidiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Uh, I guess it should be `reshape(-1, 1)` instead of `reshape(-1,1)` regarding code quality issues, but I'm not sure whether it's the correct fix.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=h1) Report\n> Merging [#4109](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d713cfc5ebfb1ed83de1fce55dd7279f9db30672&el=desc) will **not change** coverage.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4109 +/- ##\n=======================================\n Coverage 78.84% 78.84% \n=======================================\n Files 114 114 \n Lines 18688 18688 \n=======================================\n Hits 14735 14735 \n Misses 3953 3953 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `74.94% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.14% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=footer). Last update [d713cfc...e12b3d5](https://codecov.io/gh/huggingface/transformers/pull/4109?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | Reshaped score array to avoid `numpy` ValueError. This should allow the sentiment analyzer pipeline to run. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4109/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4109",
"html_url": "https://github.com/huggingface/transformers/pull/4109",
"diff_url": "https://github.com/huggingface/transformers/pull/4109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4109.patch",
"merged_at": 1588432831000
} |
https://api.github.com/repos/huggingface/transformers/issues/4108 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4108/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4108/comments | https://api.github.com/repos/huggingface/transformers/issues/4108/events | https://github.com/huggingface/transformers/pull/4108 | 610,956,973 | MDExOlB1bGxSZXF1ZXN0NDEyMjkxODI3 | 4,108 | Feature/torchserve interface [WIP] | {
"login": "MFreidank",
"id": 6368040,
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MFreidank",
"html_url": "https://github.com/MFreidank",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | Work in progress PR to add a CLI interface for easy packaging of transformers models
for serving in [pytorch/serve](https://github.com/pytorch/serve).
Intended usage example:
`transformers-cli torchserve --checkpoint="distilbert-base-uncased-finetuned-sst-2-english" --tokenizer="distilbert-base-uncased" --model-name="distilbert" --task="sentiment-analysis"`
The call above produces a MAR (model archive) file that can be served directly by the `torchserve` binary. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4108/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4108/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4108",
"html_url": "https://github.com/huggingface/transformers/pull/4108",
"diff_url": "https://github.com/huggingface/transformers/pull/4108.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4108.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4107 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4107/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4107/comments | https://api.github.com/repos/huggingface/transformers/issues/4107/events | https://github.com/huggingface/transformers/pull/4107 | 610,947,472 | MDExOlB1bGxSZXF1ZXN0NDEyMjg0Mjgz | 4,107 | Fix `RobertaClassificationHead` style consistency. | {
"login": "ranamihir",
"id": 8270471,
"node_id": "MDQ6VXNlcjgyNzA0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8270471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranamihir",
"html_url": "https://github.com/ranamihir",
"followers_url": "https://api.github.com/users/ranamihir/followers",
"following_url": "https://api.github.com/users/ranamihir/following{/other_user}",
"gists_url": "https://api.github.com/users/ranamihir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranamihir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranamihir/subscriptions",
"organizations_url": "https://api.github.com/users/ranamihir/orgs",
"repos_url": "https://api.github.com/users/ranamihir/repos",
"events_url": "https://api.github.com/users/ranamihir/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranamihir/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=h1) Report\n> Merging [#4107](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.21%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4107 +/- ##\n==========================================\n- Coverage 77.83% 76.62% -1.22% \n==========================================\n Files 141 141 \n Lines 24634 24634 \n==========================================\n- Hits 19175 18876 -299 \n- Misses 5459 5758 +299 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.04% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: 
|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.82% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4107/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=footer). Last update [58cca47...21fae42](https://codecov.io/gh/huggingface/transformers/pull/4107?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Creating new one with all checks passed."
] | 1,588 | 1,593 | 1,593 | NONE | null | There's a slight inconsistency in `RobertaClassificationHead` in that it takes in the whole sequence output from the `RobertaModel`, and extracts the pooled output inside its own forward method, seen [here](https://github.com/huggingface/transformers/blob/d713cfc5ebfb1ed83de1fce55dd7279f9db30672/src/transformers/modeling_roberta.py#L573).
This is different from other models, where the pooled output is computed beforehand and directly passed on to the classifier. E.g. in [`BertForSequenceClassification`](https://github.com/huggingface/transformers/blob/d713cfc5ebfb1ed83de1fce55dd7279f9db30672/src/transformers/modeling_bert.py#L1147), [`DistilBertForSequenceClassification`](https://github.com/huggingface/transformers/blob/d713cfc5ebfb1ed83de1fce55dd7279f9db30672/src/transformers/modeling_distilbert.py#L614), [`BartForSequenceClassification`](https://github.com/huggingface/transformers/blob/d713cfc5ebfb1ed83de1fce55dd7279f9db30672/src/transformers/modeling_bart.py#L1097), etc. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4107/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4107",
"html_url": "https://github.com/huggingface/transformers/pull/4107",
"diff_url": "https://github.com/huggingface/transformers/pull/4107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4107.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4106 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4106/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4106/comments | https://api.github.com/repos/huggingface/transformers/issues/4106/events | https://github.com/huggingface/transformers/pull/4106 | 610,935,997 | MDExOlB1bGxSZXF1ZXN0NDEyMjc1MDky | 4,106 | FIXME(Actually test multi input pipelines) | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing in favor of #4154 "
] | 1,588 | 1,588 | 1,588 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4106/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4106",
"html_url": "https://github.com/huggingface/transformers/pull/4106",
"diff_url": "https://github.com/huggingface/transformers/pull/4106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4106.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4105 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4105/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4105/comments | https://api.github.com/repos/huggingface/transformers/issues/4105/events | https://github.com/huggingface/transformers/issues/4105 | 610,935,631 | MDU6SXNzdWU2MTA5MzU2MzE= | 4,105 | model from path 16-bits training:True but float16 false | {
"login": "mahdirezaey",
"id": 34715488,
"node_id": "MDQ6VXNlcjM0NzE1NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/34715488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahdirezaey",
"html_url": "https://github.com/mahdirezaey",
"followers_url": "https://api.github.com/users/mahdirezaey/followers",
"following_url": "https://api.github.com/users/mahdirezaey/following{/other_user}",
"gists_url": "https://api.github.com/users/mahdirezaey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahdirezaey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahdirezaey/subscriptions",
"organizations_url": "https://api.github.com/users/mahdirezaey/orgs",
"repos_url": "https://api.github.com/users/mahdirezaey/repos",
"events_url": "https://api.github.com/users/mahdirezaey/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahdirezaey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi all\r\n\r\nI am gonna run \"run_language_modeling.py\" on 1 GPU 1080 Ti \r\nand loading distill bert from directory \r\n\r\nand using fp16 , apex\r\n\r\nit is run OK \r\nand i am getting trace back of \r\n\r\n\r\n05/01/2020 22:24:38 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: store_true\r\n05/01/2020 22:24:38 - INFO - transformers.configuration_utils - loading configuration file ./save_pre_trn_model/config.json\r\n05/01/2020 22:24:38 - INFO - transformers.configuration_utils - Model config DistilBertConfig {\r\n \"_num_labels\": 2,\r\n \"activation\": \"gelu\",\r\n \"architectures\": [\r\n \"DistilBertModel\"\r\n ],\r\n \"attention_dropout\": 0.1,\r\n \"bad_words_ids\": null,\r\n \"bos_token_id\": null,\r\n \"decoder_start_token_id\": null,\r\n \"dim\": 768,\r\n \"do_sample\": false,\r\n \"dropout\": 0.1,\r\n \"early_stopping\": false,\r\n \"eos_token_id\": null,\r\n \"finetuning_task\": null,\r\n \"hidden_dim\": 3072,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"is_decoder\": false,\r\n \"is_encoder_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"max_position_embeddings\": 512,\r\n \"min_length\": 0,\r\n \"model_type\": \"distilbert\",\r\n \"n_heads\": 12,\r\n \"n_layers\": 6,\r\n \"no_repeat_ngram_size\": 0,\r\n \"num_beams\": 1,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"prefix\": null,\r\n \"pruned_heads\": {},\r\n \"qa_dropout\": 0.1,\r\n \"repetition_penalty\": 1.0,\r\n \"seq_classif_dropout\": 0.2,\r\n \"sinusoidal_pos_embds\": false,\r\n \"task_specific_params\": null,\r\n \"temperature\": 1.0,\r\n \"tie_weights_\": true,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torchscript\": false,\r\n \"use_bfloat16\": 
false,\r\n \"vocab_size\": 30000\r\n}\r\n",
"Is it OK to see \r\n16-bits training:True (i have written \"store_true\" in the code of it instead of True but those are doing the same thing)\r\nand\r\n\"use_bfloat16\": false\r\n\r\nwith each other ?\r\n\r\nwhat does \"use_bfloat16\": false means ?",
"[As far as I can see](https://github.com/huggingface/transformers/search?q=use_bfloat16&type=Code) `bfloat16` is only relevant for Tensorflow XLNet, so no need to worry.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4105/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4104 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4104/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4104/comments | https://api.github.com/repos/huggingface/transformers/issues/4104/events | https://github.com/huggingface/transformers/pull/4104 | 610,888,536 | MDExOlB1bGxSZXF1ZXN0NDEyMjM3MTky | 4,104 | Fix pytorch lighting examples | {
"login": "simonepri",
"id": 3505087,
"node_id": "MDQ6VXNlcjM1MDUwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3505087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonepri",
"html_url": "https://github.com/simonepri",
"followers_url": "https://api.github.com/users/simonepri/followers",
"following_url": "https://api.github.com/users/simonepri/following{/other_user}",
"gists_url": "https://api.github.com/users/simonepri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonepri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonepri/subscriptions",
"organizations_url": "https://api.github.com/users/simonepri/orgs",
"repos_url": "https://api.github.com/users/simonepri/repos",
"events_url": "https://api.github.com/users/simonepri/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonepri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@simonepri can you point to the examples that need fixing?",
"https://github.com/huggingface/transformers/blob/master/examples/glue/run_pl_glue.py\r\nhttps://github.com/huggingface/transformers/blob/master/examples/ner/run_pl_ner.py\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | It throws the exception 'Trainer' object has no attribute 'avg_loss' because since version 0.7.2 they removed the avg_loss field from the Trainer class.
Also `get_tqdm_dict` is deprecated since 0.7.3.
See https://github.com/huggingface/transformers/pull/2890#issuecomment-613066707 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4104/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4104",
"html_url": "https://github.com/huggingface/transformers/pull/4104",
"diff_url": "https://github.com/huggingface/transformers/pull/4104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4104.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4103 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4103/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4103/comments | https://api.github.com/repos/huggingface/transformers/issues/4103/events | https://github.com/huggingface/transformers/issues/4103 | 610,871,472 | MDU6SXNzdWU2MTA4NzE0NzI= | 4,103 | AttributeError: 'NoneType' object has no attribute 'abs' when run example/run_bertology.py | {
"login": "ThomasSYT",
"id": 41875489,
"node_id": "MDQ6VXNlcjQxODc1NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/41875489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThomasSYT",
"html_url": "https://github.com/ThomasSYT",
"followers_url": "https://api.github.com/users/ThomasSYT/followers",
"following_url": "https://api.github.com/users/ThomasSYT/following{/other_user}",
"gists_url": "https://api.github.com/users/ThomasSYT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThomasSYT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThomasSYT/subscriptions",
"organizations_url": "https://api.github.com/users/ThomasSYT/orgs",
"repos_url": "https://api.github.com/users/ThomasSYT/repos",
"events_url": "https://api.github.com/users/ThomasSYT/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThomasSYT/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also have this error, anyone has the solution?"
] | 1,588 | 1,618 | 1,588 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Bert
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
example/run_bertology.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
GLUE mnli
## To reproduce
Steps to reproduce the behavior:
export TASK_NAME=mnli
python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME
--model_name bert-base-uncased
--task_name $TASK_NAME
--max_seq_length 128
--output_dir ./tmp/$TASK_NAME/
--try_masking
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
After 2 Iterations:
Traceback (most recent call last):
File "run_bertology.py", line 427, in <module>
main()
File "run_bertology.py", line 422, in main
head_mask = mask_heads(args, model, eval_dataloader)
File "run_bertology.py", line 180, in mask_heads
args, model, eval_dataloader, compute_entropy=False, head_mask=new_head_mask
File "run_bertology.py", line 105, in compute_heads_importance
head_importance += head_mask.grad.abs().detach()
AttributeError: 'NoneType' object has no attribute 'abs'
I printed the 'head_mask' when the error occurs:
head_mask: tensor([[1., 0., 1., 1., 1., 1., 0., 0., 1., 0., 1., 1.],
[1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 0., 0.],
[0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 0., 1., 0., 1., 1., 0., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 0., 1., 1., 1., 0., 0., 1.],
[1., 0., 1., 1., 0., 1., 0., 1., 1., 0., 1., 1.],
[1., 0., 0., 1., 0., 0., 0., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0.]], device='cuda:0',
grad_fn=<ViewBackward>)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
At the 3 positions of tensor, it should be "requires_grad=True".
head_mask: tensor([[1., 1., 1., 1., 1., 1., 0., 0., 1., 0., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1.],
[1., 1., 1., 1., 1., 1., 0., 1., 1., 0., 1., 1.],
[1., 0., 1., 1., 1., 0., 0., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0.]], device='cuda:0',
requires_grad=True)
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:2.8.0
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):1.15
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4103/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4102 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4102/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4102/comments | https://api.github.com/repos/huggingface/transformers/issues/4102/events | https://github.com/huggingface/transformers/pull/4102 | 610,833,785 | MDExOlB1bGxSZXF1ZXN0NDEyMTk0Mzc1 | 4,102 | Added huseinzol05/gpt2-345M-bahasa-cased | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4102",
"html_url": "https://github.com/huggingface/transformers/pull/4102",
"diff_url": "https://github.com/huggingface/transformers/pull/4102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4102.patch",
"merged_at": 1588431075000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4101 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4101/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4101/comments | https://api.github.com/repos/huggingface/transformers/issues/4101/events | https://github.com/huggingface/transformers/pull/4101 | 610,793,968 | MDExOlB1bGxSZXF1ZXN0NDEyMTYzNzA1 | 4,101 | Docs: add XLM-RoBERTa to multi-lingual section | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | COLLABORATOR | null | Hi,
this PR adds a short description of available XLM-R models to the multi-lingual documentation :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4101/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4101",
"html_url": "https://github.com/huggingface/transformers/pull/4101",
"diff_url": "https://github.com/huggingface/transformers/pull/4101.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4101.patch",
"merged_at": 1588345618000
} |
https://api.github.com/repos/huggingface/transformers/issues/4100 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4100/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4100/comments | https://api.github.com/repos/huggingface/transformers/issues/4100/events | https://github.com/huggingface/transformers/issues/4100 | 610,781,655 | MDU6SXNzdWU2MTA3ODE2NTU= | 4,100 | Masking in Bert | {
"login": "shashankMadan-designEsthetics",
"id": 45225143,
"node_id": "MDQ6VXNlcjQ1MjI1MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45225143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashankMadan-designEsthetics",
"html_url": "https://github.com/shashankMadan-designEsthetics",
"followers_url": "https://api.github.com/users/shashankMadan-designEsthetics/followers",
"following_url": "https://api.github.com/users/shashankMadan-designEsthetics/following{/other_user}",
"gists_url": "https://api.github.com/users/shashankMadan-designEsthetics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashankMadan-designEsthetics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashankMadan-designEsthetics/subscriptions",
"organizations_url": "https://api.github.com/users/shashankMadan-designEsthetics/orgs",
"repos_url": "https://api.github.com/users/shashankMadan-designEsthetics/repos",
"events_url": "https://api.github.com/users/shashankMadan-designEsthetics/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashankMadan-designEsthetics/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't fully understand the question, but love the color theme :)",
"> I don't fully understand the question, but love the color theme :)\r\n\r\nHey thanks😄its cobalt on vscode. Have edited the question with tldr pls let me know on it.",
"That being said, please don't use screenshots. They are hard to read (especially on phones) and make it impossible to copy-and-paste your code. Use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks) instead. I also don't quite understand your question. Do you have a general question about how attention works? Please use [Stack Overflow](https://stackoverflow.com/) for this, as the template clearly mentions.",
"> That being said, please don't use screenshots. They are hard to read (especially on phones) and make it impossible to copy-and-paste your code. Use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks) instead. I also don't quite understand your question. Do you have a general question about how attention works? Please use [Stack Overflow](https://stackoverflow.com/) for this, as the template clearly mentions.\r\n\r\nSure will follow the guidelines, closing the issue."
] | 1,588 | 1,588 | 1,588 | NONE | null | I am not able to grasp the concept of attention masking in bert or the other transformers.
<img width="790" alt="Screenshot 2020-05-01 at 7 56 06 PM" src="https://user-images.githubusercontent.com/45225143/80812561-d37a4a00-8be5-11ea-9da6-469b0d2d3f8f.png">
According to the documentation I tried to experiment it out.
Here it is clearly mentioned that the 1's in position are going to be masked and 0's are not.
So i tried to experiment it out and got this result.
<img width="510" alt="Screenshot 2020-05-01 at 8 00 02 PM" src="https://user-images.githubusercontent.com/45225143/80813016-d1fd5180-8be6-11ea-8fcc-c86943fc003f.png">
Here are my results the numbers on the left are 0 and right non zero.
I still dont get the whole picture of how this code fits into the whole picture.
TLDR
In general terms, if any one could explain me attention masking in the self attention part of the code in transformers and its variants(possibly with code) it would be great. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4100/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4099 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4099/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4099/comments | https://api.github.com/repos/huggingface/transformers/issues/4099/events | https://github.com/huggingface/transformers/pull/4099 | 610,768,360 | MDExOlB1bGxSZXF1ZXN0NDEyMTQ0MDM2 | 4,099 | Added GePpeTto card | {
"login": "LoreDema",
"id": 7656158,
"node_id": "MDQ6VXNlcjc2NTYxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7656158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoreDema",
"html_url": "https://github.com/LoreDema",
"followers_url": "https://api.github.com/users/LoreDema/followers",
"following_url": "https://api.github.com/users/LoreDema/following{/other_user}",
"gists_url": "https://api.github.com/users/LoreDema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoreDema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoreDema/subscriptions",
"organizations_url": "https://api.github.com/users/LoreDema/orgs",
"repos_url": "https://api.github.com/users/LoreDema/repos",
"events_url": "https://api.github.com/users/LoreDema/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoreDema/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Awesome :-) "
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4099/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4099",
"html_url": "https://github.com/huggingface/transformers/pull/4099",
"diff_url": "https://github.com/huggingface/transformers/pull/4099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4099.patch",
"merged_at": 1588348002000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4098 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4098/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4098/comments | https://api.github.com/repos/huggingface/transformers/issues/4098/events | https://github.com/huggingface/transformers/pull/4098 | 610,739,286 | MDExOlB1bGxSZXF1ZXN0NDEyMTIxNTY3 | 4,098 | [Fix #3963] GPT2 FP16 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Gunna merge this at 7pm EST barring objections @thomwolf @julien-c ",
"Pinging @thomwolf especially as he'd want to review this imo"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | fix #3963 GPT2 failing (through run_language_modeling.py) in fp16 mode. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4098/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4098/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4098",
"html_url": "https://github.com/huggingface/transformers/pull/4098",
"diff_url": "https://github.com/huggingface/transformers/pull/4098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4098.patch",
"merged_at": 1588954018000
} |
https://api.github.com/repos/huggingface/transformers/issues/4097 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4097/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4097/comments | https://api.github.com/repos/huggingface/transformers/issues/4097/events | https://github.com/huggingface/transformers/pull/4097 | 610,738,482 | MDExOlB1bGxSZXF1ZXN0NDEyMTIwOTgw | 4,097 | Fix gpt2 fp16 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | fix #3963 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4097/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4097",
"html_url": "https://github.com/huggingface/transformers/pull/4097",
"diff_url": "https://github.com/huggingface/transformers/pull/4097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4097.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4096 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4096/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4096/comments | https://api.github.com/repos/huggingface/transformers/issues/4096/events | https://github.com/huggingface/transformers/issues/4096 | 610,677,489 | MDU6SXNzdWU2MTA2Nzc0ODk= | 4,096 | Defaults models for different pipelines | {
"login": "p-christ",
"id": 26346243,
"node_id": "MDQ6VXNlcjI2MzQ2MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/26346243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/p-christ",
"html_url": "https://github.com/p-christ",
"followers_url": "https://api.github.com/users/p-christ/followers",
"following_url": "https://api.github.com/users/p-christ/following{/other_user}",
"gists_url": "https://api.github.com/users/p-christ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/p-christ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/p-christ/subscriptions",
"organizations_url": "https://api.github.com/users/p-christ/orgs",
"repos_url": "https://api.github.com/users/p-christ/repos",
"events_url": "https://api.github.com/users/p-christ/events{/privacy}",
"received_events_url": "https://api.github.com/users/p-christ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Defaults are at https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L1459 . There is often a different default for tf and pt (pytorch).",
"Closing this as i think the question is resolved (also it's probably a better match for Stack Overflow)",
"What is the default model for the 'fill-mask' pipeline? I'm not able to tell from the previous answer in this thread. \r\n\r\nAny assistance much appreciated.",
"The defaults are defined [here](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L2987-L3087). The `fill-mask` pipeline uses the `distilroberta-base` checkpoint.",
"The above answers are out of date.\r\nThe defaults are now defined in [pipelines/__init__.py](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/__init__.py) (in the values of the SUPPORTED_TASKS dictionary)."
] | 1,588 | 1,643 | 1,588 | NONE | null | Hey,
Where can I find the default models that are used for the different pipelines? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4096/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4095 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4095/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4095/comments | https://api.github.com/repos/huggingface/transformers/issues/4095/events | https://github.com/huggingface/transformers/pull/4095 | 610,672,677 | MDExOlB1bGxSZXF1ZXN0NDEyMDY5NTkz | 4,095 | Fix object is not subscriptable error in BertEncoder (#1188) | {
"login": "mklimasz",
"id": 16540593,
"node_id": "MDQ6VXNlcjE2NTQwNTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/16540593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mklimasz",
"html_url": "https://github.com/mklimasz",
"followers_url": "https://api.github.com/users/mklimasz/followers",
"following_url": "https://api.github.com/users/mklimasz/following{/other_user}",
"gists_url": "https://api.github.com/users/mklimasz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mklimasz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mklimasz/subscriptions",
"organizations_url": "https://api.github.com/users/mklimasz/orgs",
"repos_url": "https://api.github.com/users/mklimasz/repos",
"events_url": "https://api.github.com/users/mklimasz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mklimasz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=h1) Report\n> Merging [#4095](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8686174be75220d2c26a961597a39ef4921b616&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4095 +/- ##\n=======================================\n Coverage 78.84% 78.84% \n=======================================\n Files 114 114 \n Lines 18691 18693 +2 \n=======================================\n+ Hits 14737 14739 +2 \n Misses 3954 3954 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.68% <50.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.34%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4095/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.55% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=footer). Last update [b868617...d506143](https://codecov.io/gh/huggingface/transformers/pull/4095?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | Fix object is not subscriptable error in BertEncoder when head mask is None.
Issue #1188 describes problem.
BertLayer accepts head_mask as None, however if BertEncoder gets head_mask as None - it tries to index None. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4095/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4095",
"html_url": "https://github.com/huggingface/transformers/pull/4095",
"diff_url": "https://github.com/huggingface/transformers/pull/4095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4095.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4094 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4094/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4094/comments | https://api.github.com/repos/huggingface/transformers/issues/4094/events | https://github.com/huggingface/transformers/issues/4094 | 610,665,902 | MDU6SXNzdWU2MTA2NjU5MDI= | 4,094 | Negative dimension when initialising the XLNetModel | {
"login": "yxu132",
"id": 20841873,
"node_id": "MDQ6VXNlcjIwODQxODcz",
"avatar_url": "https://avatars.githubusercontent.com/u/20841873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yxu132",
"html_url": "https://github.com/yxu132",
"followers_url": "https://api.github.com/users/yxu132/followers",
"following_url": "https://api.github.com/users/yxu132/following{/other_user}",
"gists_url": "https://api.github.com/users/yxu132/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yxu132/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yxu132/subscriptions",
"organizations_url": "https://api.github.com/users/yxu132/orgs",
"repos_url": "https://api.github.com/users/yxu132/repos",
"events_url": "https://api.github.com/users/yxu132/events{/privacy}",
"received_events_url": "https://api.github.com/users/yxu132/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I can't reproduce this on `master`, could you upgrade your library to the latest version and let me know if you face the same issue?",
"I think torch-1.5.0 and pytorch-transformers-1.2.0 are the latest versions, no?\r\nI upgraded to python3.7 and tried the above again and still get the same issue. ",
"That's because `pytorch-transformers` became `transformers` in September!",
"Thanks! It works now. Sorry for asking such a dum question...\r\n",
"No worries, gald you could make it work!",
"Cool! Problem solved. Will close this issue. \r\nNice weekend!",
"> That's because `pytorch-transformers` became `transformers` in September!\r\n\r\nwhat does that mean? Sorry, I didn't get your point and I have the same issue.",
"> > That's because `pytorch-transformers` became `transformers` in September!\r\n> \r\n> what does that mean? Sorry, I didn't get your point and I have the same issue.\r\n\r\nInstead of install and use pytorch-transformers, install and use transformers. For examples,\r\n\r\n> from transformers import *"
] | 1,588 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
XLNetModel:
## To reproduce
Steps to reproduce the behavior:
```
import torch
from pytorch_transformers import *
# PyTorch-Transformers has a unified API
# for 7 transformer architectures and 30 pretrained weights.
# Model | Tokenizer | Pretrained weights shortcut
MODELS = [(XLNetModel, XLNetTokenizer, 'xlnet-base-cased')]
# Let's encode some text in a sequence of hidden-states using each model:
for model_class, tokenizer_class, pretrained_weights in MODELS:
# Load pretrained model/tokenizer
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
```
with the code above I got following errors:
```
Traceback (most recent call last):
File "/Users/xx/xxx/xxx/test.py", line 424, in <module>
model = model_class.from_pretrained(pretrained_weights)
File "/Users/yxu132/pyflow3.6/lib/python3.6/site-packages/pytorch_transformers/modeling_utils.py", line 536, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/Users/yxu132/pyflow3.6/lib/python3.6/site-packages/pytorch_transformers/modeling_xlnet.py", line 731, in __init__
self.word_embedding = nn.Embedding(config.n_token, config.d_model)
File "/Users/xx/pyflow3.6/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 97, in __init__
self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
RuntimeError: Trying to create tensor with negative dimension -1: [-1, 768]
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 1.2.0
- Platform: MacOS
- Python version: Python3.6
- PyTorch version (GPU?): Torch 1.5 (with and without GPU)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4094/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4093 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4093/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4093/comments | https://api.github.com/repos/huggingface/transformers/issues/4093/events | https://github.com/huggingface/transformers/pull/4093 | 610,643,223 | MDExOlB1bGxSZXF1ZXN0NDEyMDQ2NjEx | 4,093 | Fix overwrite_cache behaviour for pytorch lightning examples | {
"login": "simonepri",
"id": 3505087,
"node_id": "MDQ6VXNlcjM1MDUwODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3505087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonepri",
"html_url": "https://github.com/simonepri",
"followers_url": "https://api.github.com/users/simonepri/followers",
"following_url": "https://api.github.com/users/simonepri/following{/other_user}",
"gists_url": "https://api.github.com/users/simonepri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonepri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonepri/subscriptions",
"organizations_url": "https://api.github.com/users/simonepri/orgs",
"repos_url": "https://api.github.com/users/simonepri/repos",
"events_url": "https://api.github.com/users/simonepri/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonepri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=h1) Report\n> Merging [#4093](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8686174be75220d2c26a961597a39ef4921b616&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4093 +/- ##\n=======================================\n Coverage 78.84% 78.85% \n=======================================\n Files 114 114 \n Lines 18691 18691 \n=======================================\n+ Hits 14737 14738 +1 \n+ Misses 3954 3953 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.34%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=footer). Last update [b868617...d0f7228](https://codecov.io/gh/huggingface/transformers/pull/4093?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Not sure when it got broken, but this looks ok at first glance to me. I'll pull it down tonight as a sanity check to run it. \r\n\r\nThank you! ",
"LGTM too 😄 Thank you again, @simonepri "
] | 1,588 | 1,588 | 1,588 | NONE | null | cc: @nateraw
Ref: #3290 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4093/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4093",
"html_url": "https://github.com/huggingface/transformers/pull/4093",
"diff_url": "https://github.com/huggingface/transformers/pull/4093.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4093.patch",
"merged_at": 1588782289000
} |
https://api.github.com/repos/huggingface/transformers/issues/4092 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4092/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4092/comments | https://api.github.com/repos/huggingface/transformers/issues/4092/events | https://github.com/huggingface/transformers/issues/4092 | 610,441,952 | MDU6SXNzdWU2MTA0NDE5NTI= | 4,092 | Finue-tuning T5 model | {
"login": "Palipoor",
"id": 16380397,
"node_id": "MDQ6VXNlcjE2MzgwMzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/16380397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Palipoor",
"html_url": "https://github.com/Palipoor",
"followers_url": "https://api.github.com/users/Palipoor/followers",
"following_url": "https://api.github.com/users/Palipoor/following{/other_user}",
"gists_url": "https://api.github.com/users/Palipoor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Palipoor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Palipoor/subscriptions",
"organizations_url": "https://api.github.com/users/Palipoor/orgs",
"repos_url": "https://api.github.com/users/Palipoor/repos",
"events_url": "https://api.github.com/users/Palipoor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Palipoor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052847,
"node_id": "MDU6TGFiZWwxODM0MDUyODQ3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Finetuning)",
"name": "Ex: LM (Finetuning)",
"color": "26FFF8",
"default": false,
"description": "Related to language modeling fine-tuning"
},
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"+1. I'm also confused on how to structure the lm_labels and the decoder_input_ids.",
"Given T5's universal text-to-text objective, I'm under the impression that the [T5 summarization](https://github.com/huggingface/transformers/tree/master/examples/summarization/t5) example should be applicable for all T5 tasks, as long as the input and target sequences are correctly structured for the specified task. Hope this can be confirmed!\r\n\r\nSample input and target structures for specific tasks can be found at Appendix D in the T5 [paper](https://arxiv.org/pdf/1910.10683.pdf).",
"To correctly train T5 one should follow the instructions at https://huggingface.co/transformers/model_doc/t5.html#training . \r\n\r\nFor training, there is no need to provide the `decoder_input_ids` - they are created automatically. One only has to provide the `lm_labels`.\r\n\r\nAs @enzoampil, Appendix D of the paper gives good input/output examples. \r\n\r\n",
"@patrickvonplaten What exactly would be the `lm_labels` for something like summarization?\r\n\r\n**Example Usecase**\r\nText: \"ABC\" with maximum length 500\r\nSummary: \"XYZ\" with maximum length 50\r\n\r\nI understand that we can prepare `input_ids` and `attention_mask` like this for the document.\r\n```python\r\nx = tokenizer.encode_plus(sentence, \r\n max_length=500, \r\n pad_to_max_length=True, \r\n return_tensors='pt')\r\n```\r\nNow for the lm_labels i.e. summary, **is simply doing this enough**?\r\n```python\r\nlm_labels = tokenizer.encode(summary, \r\n return_tensors='pt', \r\n max_length=50, \r\n pad_to_max_length=True)\r\n```\r\n\r\nAnd the model as \r\n```python\r\nmodel = T5ForConditionalGeneration.from_pretrained('t5-small')\r\nmodel(input_ids=..., lm_labels=lm_labels, attention_mask=...)\r\n```\r\n\r\nIn your examples folder for summarization, I've seen some preprocessing like this for lm_labels. I didn't understand why this is being done.\r\n```python\r\ny_ids = y[:, :-1].contiguous()\r\nlm_labels = y[:, 1:].clone()\r\nlm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100\r\n```",
"Hi @amitness,\r\n\r\nFor T5 summarization you will have to append the prefix \"summarize: \" to every input data. But you are more or less right. All you have to do is:\r\n1. Prepare input data\r\n```python\r\nx = tokenizer.encode_plus(\"summarize: \" + sentence, \r\n max_length=500, \r\n pad_to_max_length=True, \r\n return_tensors='pt')\r\n```\r\n2. Prepare labels\r\n```python\r\nlm_labels = tokenizer.encode_plus(summary, \r\n return_tensors='pt', \r\n max_length=50, \r\n pad_to_max_length=True)\r\n```\r\n3. For tokens that are padded (which is only relevant if you train with batch_size > 1) you need to make sure that no loss is calculated on those tokens, so\r\n```python\r\nlm_labels[lm_labels == tokenizer.pad_token_id] = -100\r\n```\r\n\r\nThere is no need to shift the tokens as you show at the end of your comment because T5 does that automatically - see https://github.com/huggingface/transformers/blob/6af3306a1da0322f58861b1fbb62ce5223d97b8a/src/transformers/modeling_t5.py#L1063.\r\n\r\nThis is also explained in https://huggingface.co/transformers/model_doc/t5.html#training .",
"Thanks for this clarification @patrickvonplaten ! Finally got it to work from my side 😄 \r\n\r\nGotcha for me was that the `decoder_input_ids` at inference should be prepended by the padding token as stated in the [docs](https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration) for `T5ForConditionalGeneration`.",
"@enzoampil Can you give an example code of what you meant by prepending padding token at inference time?",
"@patrickvonplaten Thank you.\r\n\r\nBesides the inbuilt prefix like summarize:, translate: etc, can I train with my own prefix? Let's say there is a prefix called \"simplify:\" and I have pairs of datasets. Is adding the prefix and preparing data in the format you mentioned above enough?",
"@amitness \r\n\r\nE.g. in your summarization case, it would look something like:\r\n\r\n```\r\nfrom transformers import T5Tokenizer, T5Model\r\n\r\ntokenizer = T5Tokenizer.from_pretrained('t5-small')\r\nmodel = T5Model.from_pretrained('t5-small')\r\ninput_ids = tokenizer.encode(\"summarize: Hello, my dog is cute\", return_tensors=\"pt\")\r\ndecoder_input_ids = tokenizer.encode(\"<pad>\", return_tensors=\"pt\") \r\noutputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)\r\noutputs[0]\r\n```\r\n\r\nDo note that `T5ForConditionalGeneration` already prepends the padding by default. Above is only necessary if you're doing a forward pass straight from `T5Model`.\r\n\r\nRegarding your question about making your own prefix, yes, you should be able to train on your own prefix. This is the whole point of T5's text-to-text approach. You should be able to specify any problem through this kind of approach (e.g. Appendix D in the T5 paper).",
"@enzoampil Makes sense. Thank you so much.",
"> @patrickvonplaten Thank you.\r\n> \r\n> Besides the inbuilt prefix like summarize:, translate: etc, can I train with my own prefix? Let's say there is a prefix called \"simplify:\" and I have pairs of datasets. Is adding the prefix and preparing data in the format you mentioned above enough?\r\n\r\nSure, you can train with your own prefix.",
"> Thanks for this clarification @patrickvonplaten ! Finally got it to work from my side \r\n> \r\n> Gotcha for me was that the `decoder_input_ids` at inference should be prepended by the padding token as stated in the [docs](https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration) for `T5ForConditionalGeneration`.\r\n\r\nYeah that's actually a bit hidden in the code. So to clarify: \r\nDuring training, there is no need to prepend the padding token since this is done automatically in T5 when `lm_labels` is provided. \r\nDuring evaluation, one has to prepend the PAD token as you stated in your example. \r\n\r\nAfter training, the mode can be used with the `generate()` method (which actually powers the `summarization`, `translation` and `text-generation` pipeline).\r\nIn the `generate()` method, the padding token is automatically prepended.",
"@patrickvonplaten One thing I've noticed is the discrepancy between huggingface's and the original google-research tokenization. \r\n\r\nIn the official colab by the paper authors, they seem to add `</s>` when tokenizing to the end of each text. But, when we use tokenizers from hugging face, it is not added. Not sure if it is a problem or not. \r\nHere is an excerpt from their official [colab](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb#scrollTo=I64TqHGxbOJ2)\r\n```python\r\n'inputs_plaintext': b'trivia question: what is the population of fayetteville north carolina?', 'inputs': array([22377, 822, 10, 125, 19, 8, 2074, 13, 3,\r\n 89, 9, 63, 1954, 1420, 3457, 443, 12057, 9,\r\n 58, 1])\r\n```\r\nYou can see 1 added at the end of the token_ids. But if we tokenize this same sentence with huggingface tokenizer, we don't get 1 at end.\r\n```python\r\ntokenizer.encode('trivia question: what is the population of fayetteville north carolina?')\r\n# [22377, 822, 10, 125, 19, 8, 2074, 13, 3, 89, 9, 63, 1954, 1420, 3457, 443, 12057, 9, 58]\r\n```\r\n\r\nWhen I was prototyping with the models, I tried preparing data like this to solve it. This adds 1 to the end. Not sure if we need to do this or not.\r\n```\r\ntokenizer.encode(\"summarize: Hello world</s>\", return_tensors=\"pt\")\r\n```\r\n",
"Yes you are right, you should add the `</s>` token to the end of a sentence. I think this is also shown in the docs: https://huggingface.co/transformers/model_doc/t5.html#training. ",
"Thanks to @patrickvonplaten for all clarification and others for their further questions that led to more details on the subject. ",
"Hello everyone, \r\n\r\nI am currently working on finetuning the TFT5ForConditionalGeneration model on a parallel dataset.\r\nQuestions:\r\n\r\n1. Can I call model.fit like this - `model.fit([x,y])` where x is input_ids and y is lm_labels?\r\n\r\nIf not, how do I pass in lm_labels and train with the model. \r\n\r\nThanks. ",
"@patrickvonplaten ",
"For the tensorflow version you have to input `input_ids`, `decoder_input_ids` and `lm_labels` yourself. The model should work fine with the `keras` framework!",
"I will soon add more documentation for T5 for tensorflow. It's true that there is not enough documentation for TF at the moment.",
"Okay, I would appreciate that. So, do I add the `input_ids`, `decoder_input_ids `and `lm_labels` as keywords when calling `model.fit`(which I doubt) or when do I do that?\r\n\r\n",
"I have not looked trained tensorflow using keras `model.fit` function yet. The forward pass in tensorflow's T5 implementation needs both `input_ids` and `decoder_input_ids` as you can see when going through this function: \r\nhttps://github.com/huggingface/transformers/blob/fd2174664c8879c747ada3e6e0a2486858808421/src/transformers/modeling_tf_t5.py#L980\r\n\r\nSo, depending on your code you will have to create `input_ids`, `decoder_input_ids` and `lm_labels` yourself. Feel free to share your code here if you have a working training pipeline for TFT5 :-) ",
"Hi Patrick. Got it to work with Pytorch. However, I have a question:\r\n\r\nIs it possible to use a different vocab size with this pretrained model? I have a trained sentence piece model and it only works with this pretrained t5 when I use a beam size of 1. I have manually changed the vocab size by setting `model.config.vocab_size = tokenizer.vocab_size` . However, the beam size problem still persists and it returns a shape mismatch error. \r\n\r\nPlease let me know if this is possible, thanks. \r\n\r\n\r\n",
"@patrickvonplaten ",
"I think it will work in case the targets pieces from the new vocab is the same in the old one.\r\nBesides, what is the benefit from the pretrained T5 if the sentence piece targets changed ?!!",
"Created a little repo for NMT finetuning https://github.com/keleog/finetune_huggingace_t5",
"> I have not looked trained tensorflow using keras `model.fit` function yet. The forward pass in tensorflow's T5 implementation needs both `input_ids` and `decoder_input_ids` as you can see when going through this function:\r\n> https://github.com/huggingface/transformers/blob/fd2174664c8879c747ada3e6e0a2486858808421/src/transformers/modeling_tf_t5.py#L980\r\n> \r\n> So, depending on your code you will have to create `input_ids`, `decoder_input_ids` and `lm_labels` yourself. Feel free to share your code here if you have a working training pipeline for TFT5 :-)\r\n\r\nHi @patrickvonplaten, I was able to create a data source with the input data and labels as you described.\r\nNow I'm trying to use that data for keras fit with the loss function `tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)`\r\nThe shape of the labels is (batch_size, seq_len), and I would expect that the model `TFT5ForConditionalGeneration` would return the logits of shape (batch_size, seq_len, vocab_size). However its call method returns this:\r\n` return decoder_outputs + encoder_outputs\r\n`\r\nso I get an error:\r\n `ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 27 array(s), for inputs ['output_1', 'output_2', 'output_3', 'output_4', 'output_5', 'output_6', 'output_7', 'output_8', 'output_9', 'output_10', 'output_11', 'output_12', 'output_13', 'output_14', 'output_15', 'output_16', 'output_17', 'output_18', 'output_19', 'output_20', 'output_21', 'output_22', 'output_23', 'output_24', 'output_25', 'output_26', 'output_27'] but instead got the following list of 1 arrays: [<tf.Tensor 'args_4:0' shape=(32, 128) dtype=int32>]...`\r\n\r\nI can think of two solutions, neither sounds good:\r\n1. override `call` method in a subclass and return only the decoder outputs\r\n2. use a custom loss function that extracts the decoder outputs from the model output\r\n\r\nWhat would you advice?",
"Hi @patrickvonplaten ,\r\nI am working on one question answering task using TFT5. I have done a text encoding step.\r\nMy raw input is question and target is the answer shown in the below image\r\n\r\n\r\nHow should I configure input so that I can pass it in model.fit() method\r\nlike this way!! I am able to get input_id and input mask.\r\n\r\n```\r\nmodel = TFT5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\noptimizer = keras.optimizers.Adam(lr=5e-5)\r\nmodel.compile(optimizer=optimizer)\r\nmodel.fit(\r\n x_train,\r\n y_train,\r\n epochs=1, \r\n verbose=2,\r\n batch_size=2,\r\n)\r\n```\r\n\r\nHere is the [Colab Notebook](https://github.com/bhadreshpsavani/EfficientQAExperiments/blob/master/NaturalQAT5TF.ipynb)",
"I'll start working on a TFT5 notebook this week. Related issues: \r\nhttps://discuss.huggingface.co/t/how-to-train-tft5forconditionalgeneration-model/888\r\nhttps://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641/6\r\nhttps://github.com/huggingface/transformers/issues/6876",
"Hello @patrickvonplaten \r\n\r\nI am working with T5 for paraphrase generation, but I wanted to know if there is a way to use my own custom defined loss function for training? \r\n\r\n",
"Sure! You can just output the language head logits with T5 and build your own loss with it :-) "
] | 1,588 | 1,682 | 1,588 | NONE | null | Hi,
I want to fine-tune T5 for a seq2seq task and I'm using the T5ForConditionalGeneration as it seems to have an LM decoder on top.
As there's no code example for this, I have lots of questions:
1. Am I doing the right thing?
2. I'm using the Adam optimizer. Is it ok?
3. I'm a bit confused about the `forward` inputs in the training phase. I read [this](https://huggingface.co/transformers/model_doc/t5.html#training) explanation over and over again and I don't understand whether I should just use `input_ids` and `lm_labels` for the training or not. Also somewhere in [this issue ](https://github.com/huggingface/transformers/issues/2213#issuecomment-567090553) someone's mentioned that:
> T5 input sequence should be formatted with [CLS] and [SEP] tokens
So which one is right? I'm super confused. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4092/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4092/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4091 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4091/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4091/comments | https://api.github.com/repos/huggingface/transformers/issues/4091/events | https://github.com/huggingface/transformers/pull/4091 | 610,385,338 | MDExOlB1bGxSZXF1ZXN0NDExODM1MTI5 | 4,091 | Add ForMultipleChoice for Electra and Albert [WIP] | {
"login": "ViktorAlm",
"id": 1090762,
"node_id": "MDQ6VXNlcjEwOTA3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1090762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ViktorAlm",
"html_url": "https://github.com/ViktorAlm",
"followers_url": "https://api.github.com/users/ViktorAlm/followers",
"following_url": "https://api.github.com/users/ViktorAlm/following{/other_user}",
"gists_url": "https://api.github.com/users/ViktorAlm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ViktorAlm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ViktorAlm/subscriptions",
"organizations_url": "https://api.github.com/users/ViktorAlm/orgs",
"repos_url": "https://api.github.com/users/ViktorAlm/repos",
"events_url": "https://api.github.com/users/ViktorAlm/events{/privacy}",
"received_events_url": "https://api.github.com/users/ViktorAlm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | Accidentaly hit enter while reading the guide, sorry | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4091/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4091",
"html_url": "https://github.com/huggingface/transformers/pull/4091",
"diff_url": "https://github.com/huggingface/transformers/pull/4091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4091.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4090 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4090/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4090/comments | https://api.github.com/repos/huggingface/transformers/issues/4090/events | https://github.com/huggingface/transformers/issues/4090 | 610,351,708 | MDU6SXNzdWU2MTAzNTE3MDg= | 4,090 | Character level models? | {
"login": "KosayJabre",
"id": 20927595,
"node_id": "MDQ6VXNlcjIwOTI3NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/20927595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KosayJabre",
"html_url": "https://github.com/KosayJabre",
"followers_url": "https://api.github.com/users/KosayJabre/followers",
"following_url": "https://api.github.com/users/KosayJabre/following{/other_user}",
"gists_url": "https://api.github.com/users/KosayJabre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KosayJabre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KosayJabre/subscriptions",
"organizations_url": "https://api.github.com/users/KosayJabre/orgs",
"repos_url": "https://api.github.com/users/KosayJabre/repos",
"events_url": "https://api.github.com/users/KosayJabre/events{/privacy}",
"received_events_url": "https://api.github.com/users/KosayJabre/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I don't think we have any character-level language models available on [huggingface.co/models](https://huggingface.co/models), but they should be pretty straightforward to train.\r\n\r\nBTW I'm sure you saw it already but [PyTorch's nn.Transformer tuto](https://pytorch.org/tutorials/beginner/transformer_tutorial.html) is a char-level language model.",
"Hi, that's actually is word-level model.\r\nDo you know of any pretrained bidirectional transformer-based character level language models that I can use?",
"No I'm not aware of any pretrained one.",
"We will soon have a pretrained ReformerLM model on character level ",
"@patrickvonplaten just for my curiosity would this be the model trained on the \"Crime and Punishment\" from their Colab (but the vocab is not char-only), or do you have an own trained model 🤔 ",
"Yeah that was my bad definition of char-only I guess :D. The vocab has 320 tokens, so it's more like on \"very\" small word units level. \r\n\r\nExample: \r\n```python \r\ntok = ReformerTokenizer.from_pretrained(\"google/reformer-crime-and-punishment\")\r\ntokens = tok.encode(\"This is a test sentence\") # [108, 265, 24, 111, 4, 3, 249, 7, 76, 25, 69]\r\nprint([tok.decode(token) for token in tokens]) # ['T', 'h', 'is', 'is', 'a', 't', 'est', 's', 'ent', 'en', 'ce']\r\n```",
"\"True\" char-level is cool because obviously the tokenizer is then pretty trivial :)",
"Btw, we now have a \"True\" char-level reformer model here: https://huggingface.co/google/reformer-enwik8 :-) ",
"Any chance of having a TF version of the Reformer model?",
"Yes in ~2 month I would guess",
"@patrickvonplaten Is there any notebook/doc on how to fine tune the char level model using reformer-enwiki8 model? Having a doc showing how to pass training data to fine tune it would be helpful.",
"You should be able to leverage the code shown on the model card of reformer-enwik8 here: https://huggingface.co/google/reformer-enwik8#reformer-language-model-on-character-level-and-trained-on-enwik8 . It shows how data is passed to the model.",
"I don't understand how to use that code in place of a Tokenizer object. For example, to train a masked language model in [this example script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) the tokenizer is used in the data collator to do a lot more than just mapping to `input_ids`. I've tried loading tokenizers from character level models like CANINE but that doesn't work either. I could try instantiating an untrained tokenizer and passing all the characters in the corpus as special characters [like this person did](https://discuss.huggingface.co/t/character-level-tokenizer/12450) but it seems like I shouldn't have to do that to achieve something so basic."
] | 1,588 | 1,662 | 1,589 | NONE | null | Hi, are any character-level language models available? Transformer-XL mentions in their paper that they did both word level and character level stuff, yet here it seems only the word level one is available? Is that correct? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4090/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4090/timeline | completed | null | null |
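As the thread above notes, a "true" char-level model makes the tokenizer pretty trivial. A purely hypothetical toy illustration of that point — not the tokenizer class shipped with any checkpoint such as `google/reformer-enwik8`:

```python
class CharTokenizer:
    """Toy character-level tokenizer: the vocabulary is just the set of
    characters seen at construction time, with id 0 reserved for <unk>."""

    def __init__(self, corpus):
        chars = sorted(set(corpus))
        self.char_to_id = {c: i + 1 for i, c in enumerate(chars)}  # 0 = <unk>
        self.id_to_char = {i: c for c, i in self.char_to_id.items()}

    def encode(self, text):
        # Unknown characters fall back to the <unk> id.
        return [self.char_to_id.get(c, 0) for c in text]

    def decode(self, ids):
        # Unknown ids are rendered as '?'.
        return "".join(self.id_to_char.get(i, "?") for i in ids)

tok = CharTokenizer("abcdefghijklmnopqrstuvwxyz ")
ids = tok.encode("hello world")
text = tok.decode(ids)
```

Everything else (masking, batching, the model itself) stays exactly as in the word-level case; only the vocabulary shrinks to a few hundred symbols or fewer.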
https://api.github.com/repos/huggingface/transformers/issues/4089 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4089/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4089/comments | https://api.github.com/repos/huggingface/transformers/issues/4089/events | https://github.com/huggingface/transformers/issues/4089 | 610,219,099 | MDU6SXNzdWU2MTAyMTkwOTk= | 4,089 | GePpeTto! | {
"login": "LoreDema",
"id": 7656158,
"node_id": "MDQ6VXNlcjc2NTYxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7656158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoreDema",
"html_url": "https://github.com/LoreDema",
"followers_url": "https://api.github.com/users/LoreDema/followers",
"following_url": "https://api.github.com/users/LoreDema/following{/other_user}",
"gists_url": "https://api.github.com/users/LoreDema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoreDema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoreDema/subscriptions",
"organizations_url": "https://api.github.com/users/LoreDema/orgs",
"repos_url": "https://api.github.com/users/LoreDema/repos",
"events_url": "https://api.github.com/users/LoreDema/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoreDema/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Awesome! These instructions might help: \r\n\r\nhttps://github.com/huggingface/transformers#Quick-tour-of-model-sharing",
"Hi @LoreDema, thanks for sharing GePpeTto, it's really cool!\r\n\r\nYes you can juste create a user account and an organization on huggingface.co and share the weights there. (and add a model card here if you can)",
"Awesome, just pushed the model and created a pull request for the card https://github.com/huggingface/transformers/pull/4099"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | # 🌟 New model addition
## Model description
GePpeTto is a GPT-2 model for Italian. For further details, check out the paper: https://arxiv.org/abs/2004.14253
## Open source status
* [x] the model implementation is available: we used Hugging Face's GPT-2 implementation.
* [x] the model weights are available: you can find the model here: https://github.com/LoreDema/GePpeTto
* [x] who are the authors: @LoreDema @michelecafagna26 @malvinanissim @Falice1977 @marcoguerini
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4089/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4088 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4088/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4088/comments | https://api.github.com/repos/huggingface/transformers/issues/4088/events | https://github.com/huggingface/transformers/issues/4088 | 610,085,173 | MDU6SXNzdWU2MTAwODUxNzM= | 4,088 | InvalidArgumentError: Incompatible shapes: [5,20] vs. [5,18] [Op:Less] | {
"login": "zirlman",
"id": 24474083,
"node_id": "MDQ6VXNlcjI0NDc0MDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/24474083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zirlman",
"html_url": "https://github.com/zirlman",
"followers_url": "https://api.github.com/users/zirlman/followers",
"following_url": "https://api.github.com/users/zirlman/following{/other_user}",
"gists_url": "https://api.github.com/users/zirlman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zirlman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zirlman/subscriptions",
"organizations_url": "https://api.github.com/users/zirlman/orgs",
"repos_url": "https://api.github.com/users/zirlman/repos",
"events_url": "https://api.github.com/users/zirlman/events{/privacy}",
"received_events_url": "https://api.github.com/users/zirlman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @zirlman,\r\n\r\nThanks a lot for catching the error and the detailed error description. The PR that will fix the error is linked to the issue :-) ",
"@patrickvonplaten that's great. Thank you 😁"
] | 1,588 | 1,589 | 1,589 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): T5 - TFT5ForConditionalGeneration
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: Question-Answering
## To reproduce
Steps to reproduce the behavior:
1. Download pre-trained T5 model & T5 tokenizer
2. Encode this sentence: `question: What is coronavirus? context: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in December 2019 in Wuhan, the capital of China's Hubei province, and has since spread globally, resulting in the ongoing 2019–20 coronavirus pandemic. As of 30 April 2020,[update] more than 3.19 million cases have been reported across 185 countries and territories, resulting in more than 227,000 deaths. More than 972,000 people have recovered.`
3. Generate N answers (the number doesn't matter; in my case it was 5/7/10)
Code:
```
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
model_str = "t5-base"
hyperparams = dict(
top_k = 50,
top_p = 0.95,
max_length = None,
temperature = 0.7,
num_return_sequences = 5,
do_sample=True,
use_cache=False)
tokenizer = T5Tokenizer.from_pretrained(model_str)
model = TFT5ForConditionalGeneration.from_pretrained(model_str)
def generate(input_ids):
outputs = model.generate(input_ids, **hyperparams)
all_outputs = []
if outputs is not None and outputs.shape[0] == 1:
outputs = tokenizer.decode(tf.squeeze(outputs), skip_special_tokens=True)
all_outputs.append(outputs)
elif outputs is not None:
all_outputs.extend([tokenizer.decode(o, skip_special_tokens=True) for o in outputs])
return all_outputs
sentence = """question: What is coronavirus? context: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in December 2019 in Wuhan, the capital of China's Hubei province, and has since spread globally, resulting in the ongoing 2019–20 coronavirus pandemic. As of 30 April 2020,[update] more than 3.19 million cases have been reported across 185 countries and territories, resulting in more than 227,000 deaths. More than 972,000 people have recovered.
""".replace("\n"," ")
input_ids = tokenizer.encode(sentence,return_tensors="tf")
generate(input_ids)
```
## Expected behavior
This error happens only for some questions. If you remove the question mark from the question, you'll get an output. At first I thought the question mark was the problem, but in other examples both versions, with and without a question mark, resulted in the same error.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic (Google Colab)
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4088/timeline | completed | null | null |
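Independent of the library-side fix linked to the issue above, the `generate` helper in the report branches on the output shape before decoding. That branching can be collapsed into one code path; the sketch below shows the idea on plain nested lists, where `fake_decode` merely stands in for `tokenizer.decode`:

```python
def decode_all(outputs, decode):
    """Decode either a single sequence (a flat list of token ids) or a
    batch (a list of id lists) through one code path, instead of
    branching on the tensor shape."""
    if outputs and isinstance(outputs[0], int):
        outputs = [outputs]  # promote a single sequence to a batch of one
    return [decode(seq) for seq in outputs]

fake_decode = lambda ids: " ".join(str(i) for i in ids)

single = decode_all([1, 2, 3], fake_decode)       # one sequence
batch = decode_all([[1, 2], [3, 4]], fake_decode)  # a batch of two
```

With real tensors the equivalent check would look at `outputs.dim()` rather than the element type; the structure of the helper is the same.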
https://api.github.com/repos/huggingface/transformers/issues/4087 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4087/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4087/comments | https://api.github.com/repos/huggingface/transformers/issues/4087/events | https://github.com/huggingface/transformers/pull/4087 | 609,930,495 | MDExOlB1bGxSZXF1ZXN0NDExNDQyMzMw | 4,087 | Added huseinzol05/gpt2-117M-bahasa-cased README.md | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=h1) Report\n> Merging [#4087](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e73595bd649a149879816a59b56f57c8b37c73d0&el=desc) will **increase** coverage by `0.92%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4087 +/- ##\n==========================================\n+ Coverage 77.99% 78.92% +0.92% \n==========================================\n Files 114 114 \n Lines 18667 18667 \n==========================================\n+ Hits 14559 14732 +173 \n+ Misses 4108 3935 -173 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.69%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (+0.82%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: |\n| 
[src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4087/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=footer). Last update [e73595b...a4a673a](https://codecov.io/gh/huggingface/transformers/pull/4087?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"You should rebase instead of merging `upstream/master`, this would make merging this easier :)\r\n\r\nAnyways, cherrypicked in 8829ace4aac6b2ebef0eb8753e6b58d7eafc7734, thanks!"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4087/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4087",
"html_url": "https://github.com/huggingface/transformers/pull/4087",
"diff_url": "https://github.com/huggingface/transformers/pull/4087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4087.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4086 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4086/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4086/comments | https://api.github.com/repos/huggingface/transformers/issues/4086/events | https://github.com/huggingface/transformers/issues/4086 | 609,799,601 | MDU6SXNzdWU2MDk3OTk2MDE= | 4,086 | Not able to reproduce same CoLA result as huggingface defualt | {
"login": "leo-liuzy",
"id": 11146950,
"node_id": "MDQ6VXNlcjExMTQ2OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11146950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leo-liuzy",
"html_url": "https://github.com/leo-liuzy",
"followers_url": "https://api.github.com/users/leo-liuzy/followers",
"following_url": "https://api.github.com/users/leo-liuzy/following{/other_user}",
"gists_url": "https://api.github.com/users/leo-liuzy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leo-liuzy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leo-liuzy/subscriptions",
"organizations_url": "https://api.github.com/users/leo-liuzy/orgs",
"repos_url": "https://api.github.com/users/leo-liuzy/repos",
"events_url": "https://api.github.com/users/leo-liuzy/events{/privacy}",
"received_events_url": "https://api.github.com/users/leo-liuzy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you have a link to a TensorBoard or any other experiment tracking for your training?\r\n\r\nYou can also add the `--evaluate_during_training` flag to see the evolution of the eval metric during training.",
"Have you tried with fewer epochs? (e.g ```--num_train_epochs 3.0``` as stated in the repo's example)\r\n\r\nAlso what you need to consider is seed hyper-parameter.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
When I run your official `run_glue.py` script on CoLA, the performance is significantly lower than the one reported on your website. Mine: ~10, yours: ~50.
## Information
Model I am using (Bert, XLNet ...):
bert-base-uncased
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run my provided command line
2.
3.
```
export GLUE_DIR=downstream_datasets/glue
export TASK_NAME=CoLA
SEED=43
export CUDA_VISIBLE_DEVICES=0
python run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--save_steps 200 \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 30 \
--seed $SEED \
--output_dir tmp/"$TASK_NAME"_seed"$SEED"/
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 1.17.2
- Platform: CentOS 7.7 64-bit
- Python version: python 3.7
- PyTorch version (GPU?): 1.5.0+cuda9.2
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No, single machine
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4086/timeline | completed | null | null |
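For context on the numbers in the report above: CoLA is scored with the Matthews correlation coefficient, so "~10 vs. ~50" refers to MCC × 100. A minimal stdlib implementation of the metric for binary 0/1 labels, returning 0.0 on a degenerate confusion matrix as scikit-learn's `matthews_corrcoef` does:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # degenerate case: a marginal is empty
    return (tp * tn - fp * fn) / denom

score = matthews_corrcoef([0, 1, 1, 0], [0, 1, 1, 0])
```

An MCC near 0 (e.g. the ~10 reported) is roughly what a classifier that collapses to a single class produces, which is a common failure mode on CoLA when training diverges — one reason fewer epochs and different seeds are suggested in the comments.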
https://api.github.com/repos/huggingface/transformers/issues/4085 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4085/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4085/comments | https://api.github.com/repos/huggingface/transformers/issues/4085/events | https://github.com/huggingface/transformers/issues/4085 | 609,761,124 | MDU6SXNzdWU2MDk3NjExMjQ= | 4,085 | Use roberta-med or scibert for fillmask | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten ",
"Can you install transformers from source?",
"Thanks worked for biomed-roberta\r\n\r\nHowever \r\n\r\n```\r\nnlp1 = pipeline(\"fill-mask\", model=\"allenai/scibert_scivocab_cased\", tokenizer=\"allenai/scibert_scivocab_cased\")\r\n\r\nnlp1(\"coinfection by multiple quasispecies is not uncommon in human <mask>, and that passage to Vero cells may either generate new mutations at a low rate, or titrates out one quasispecies in the transition.\")\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/nas/home/tuhinc/miniconda3/envs/robertamed/lib/python3.7/site-packages/transformers/pipelines.py\", line 743, in __call__\r\n masked_index = (input_ids == self.tokenizer.mask_token_id).nonzero().item()\r\nValueError: only one element tensors can be converted to Python scalars\r\n```",
"It’s not using the same mask token :)\r\n\r\nTry [MASK] here"
] | 1,588 | 1,619 | 1,588 | NONE | null | Hi
I have been trying to use roberta-med for fill-mask:

`nlp = pipeline("fill-mask", model="allenai/biomed_roberta_base", tokenizer="allenai/biomed_roberta_base")`
However , I get this
**Model name 'allenai/biomed_roberta_base' was not found in model name list** (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased, openai-gpt, transfo-xl-wt103, gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2, ctrl, xlnet-base-cased, xlnet-large-cased, xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-17-1280, xlm-mlm-100-1280, roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector, distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased, distilbert-base-uncased-finetuned-sst-2-english, albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2, camembert-base, umberto-commoncrawl-cased-v1, umberto-wikipedia-uncased-v1, t5-small, t5-base, t5-large, t5-3b, t5-11b, xlm-roberta-base, xlm-roberta-large, xlm-roberta-large-finetuned-conll02-dutch, xlm-roberta-large-finetuned-conll02-spanish, xlm-roberta-large-finetuned-conll03-english, xlm-roberta-large-finetuned-conll03-german, flaubert-small-cased, flaubert-base-uncased, flaubert-base-cased, flaubert-large-cased). 
We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/allenai/biomed_roberta_base/modelcard.json' was a path or url to a model card file named modelcard.json or a directory containing such a file but couldn't find any such file at this path or url.
Is there any way I could use it for fill-mask? Facing the same for scibert too. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4085/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4084 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4084/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4084/comments | https://api.github.com/repos/huggingface/transformers/issues/4084/events | https://github.com/huggingface/transformers/issues/4084 | 609,657,514 | MDU6SXNzdWU2MDk2NTc1MTQ= | 4,084 | feature-extraction pipeline is second last layer better representation than the last hidden layer | {
"login": "kaushaltrivedi",
"id": 3465437,
"node_id": "MDQ6VXNlcjM0NjU0Mzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3465437?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaushaltrivedi",
"html_url": "https://github.com/kaushaltrivedi",
"followers_url": "https://api.github.com/users/kaushaltrivedi/followers",
"following_url": "https://api.github.com/users/kaushaltrivedi/following{/other_user}",
"gists_url": "https://api.github.com/users/kaushaltrivedi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaushaltrivedi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaushaltrivedi/subscriptions",
"organizations_url": "https://api.github.com/users/kaushaltrivedi/orgs",
"repos_url": "https://api.github.com/users/kaushaltrivedi/repos",
"events_url": "https://api.github.com/users/kaushaltrivedi/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaushaltrivedi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is quite a general question, and I fear there is no one right answer. You can see the \"best layer to use\" as another hyperparameter to tune in your task: the right value will depend on your task, dataset, and query. Oftentimes paper do some kind of evaluation of this. For instance, in the original [BERT paper](https://arxiv.org/pdf/1810.04805.pdf) they use BERT for feature extraction and evaluate which layer combinations worked best (Table 7). They find that the second-to-last layer performs better than the last layer, but that a concatenation of the last four layers is best. But again, that is using BERT, on a specific task, with a specific dataset. Your results may differ, so it's best to test and tune to your problem.",
"@BramVanroy @kaushaltrivedi How do you customize what layers go into the feature representation (via the pipeline)?"
] | 1,588 | 1,623 | 1,588 | NONE | null | # 🐛 Bug
I see that for the feature-extraction pipeline, you output the last hidden layer of the transformer. Another approach I have seen is people using the second-to-last layer as the output (Bert-as-a-service). The rationale is that the last layer is too close to the pre-training task, such as the masked language model objective. I am a bit confused as to what would be an ideal representation layer for sentence embeddings.
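For concreteness, the "pick a layer and pool it" idea can be sketched in plain Python, with no library calls. Everything here is a toy assumption: `hidden_states` stands in for a model's per-layer outputs (one list of token vectors per layer).

```python
# Mean-pool the token vectors of one chosen layer into a sentence embedding.
# hidden_states: list of layers; each layer is a list of token vectors.
def sentence_embedding(hidden_states, layer=-2):
    tokens = hidden_states[layer]  # -1 = last layer, -2 = second-to-last
    dim = len(tokens[0])
    return [sum(tok[d] for tok in tokens) / len(tokens) for d in range(dim)]

# Toy example: 3 layers, 2 tokens, 2-dim vectors.
hs = [
    [[0.0, 0.0], [0.0, 0.0]],  # layer 0
    [[1.0, 3.0], [3.0, 5.0]],  # layer 1 (second-to-last)
    [[9.0, 9.0], [9.0, 9.0]],  # layer 2 (last)
]
print(sentence_embedding(hs, layer=-2))  # -> [2.0, 4.0]
```

Which layer (or concatenation of layers) works best is task- and dataset-dependent, so it is worth treating the layer choice as a hyperparameter rather than fixing it up front.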
Model I am using (Bert, XLNet ...): Bert
## Environment info
- `transformers` version: 2.8.0
- Platform: CPU
- Python version: 3.7
- PyTorch version (GPU?): 1.5.0
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4084/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4083 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4083/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4083/comments | https://api.github.com/repos/huggingface/transformers/issues/4083/events | https://github.com/huggingface/transformers/issues/4083 | 609,495,993 | MDU6SXNzdWU2MDk0OTU5OTM= | 4,083 | Why is masking ID optional? | {
"login": "tqdo",
"id": 53948469,
"node_id": "MDQ6VXNlcjUzOTQ4NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/53948469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tqdo",
"html_url": "https://github.com/tqdo",
"followers_url": "https://api.github.com/users/tqdo/followers",
"following_url": "https://api.github.com/users/tqdo/following{/other_user}",
"gists_url": "https://api.github.com/users/tqdo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tqdo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tqdo/subscriptions",
"organizations_url": "https://api.github.com/users/tqdo/orgs",
"repos_url": "https://api.github.com/users/tqdo/repos",
"events_url": "https://api.github.com/users/tqdo/events{/privacy}",
"received_events_url": "https://api.github.com/users/tqdo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In my past experience, I noticed that masked tokens would get very low attention scores(that's what self attention's supposed to do) and would not contribute to the model's accuracy. I wonder how would it be in a transformer architecture.",
"Not all models requires this feature, mainly Roberta so it includes XLM-R, Camembert, etc.",
"I think your question is why are the `attention_mask` model inputs optional?\r\nThis is because you need attention masks only when padding. If you're not batching your inputs, you do not need to pad and therefore do not need to specify an attention mask.",
"@LysandreJik I wonder if there is padding and the user does not provide `attention_mask`, does the model think there is no padding or there is a mechanism to figure it out uder the hood (maybe treat token ID 0 as padded token). I am just trying to fully understand the model's behavior. Thanks",
"No, if you don't provide any `attention_mask` then the model will attend to every token the same way. You absolutely should use the `attention_mask` as soon as you're padding, or the model will output flawed results!",
"Got it, thanks a lot."
] | 1,588 | 1,588 | 1,588 | NONE | null | I see many models have masking ID as an optional input. To my understanding, the models need to know which tokens are padded to avoid doing attention with those tokens. How can the models figure this out if we do not provide masking ID? Do these models consider tokens with id 0 to be the masked tokens? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4083/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4082 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4082/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4082/comments | https://api.github.com/repos/huggingface/transformers/issues/4082/events | https://github.com/huggingface/transformers/issues/4082 | 609,444,778 | MDU6SXNzdWU2MDk0NDQ3Nzg= | 4,082 | Zero-shot PPLs vary wildly across gpt2 model sizes | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,593 | 1,593 | NONE | null | # ❓ Questions & Help
## Details
We've been experimenting with multiple datasets for fine-tuning both OpenAI GPT and GPT-2.
What we've noticed is that the zero-shot PPLs of GPT-2 small and medium are on the order of `3e+40`. However, the zero-shot PPLs of GPT-2 large and XL on the same datasets are much more reasonable, around ~30.
The zero-shot PPL of OpenAI GPT on the same datasets is also reasonable, on the order of 200-300.
Has anyone else also noticed the same in their experiments? What might the cause for this be?
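For anyone comparing such numbers: perplexity is just the exponential of the average per-token negative log-likelihood, so a PPL near `3e+40` implies an average NLL of roughly 93 nats per token, far beyond anything a working model produces, which usually points at an evaluation bug (tokenization, loss masking, label shifting) rather than a genuinely bad checkpoint. A minimal sketch of the relationship:

```python
import math

# Perplexity from per-token negative log-likelihoods (in nats).
def perplexity(nlls):
    return math.exp(sum(nlls) / len(nlls))

print(perplexity([3.4, 3.4, 3.4]))  # ~30, a plausible zero-shot LM perplexity
print(math.log(3e40))               # ~93.2 nats/token implied by a PPL of 3e+40
```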
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4082/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4081 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4081/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4081/comments | https://api.github.com/repos/huggingface/transformers/issues/4081/events | https://github.com/huggingface/transformers/issues/4081 | 609,424,983 | MDU6SXNzdWU2MDk0MjQ5ODM= | 4,081 | Examples link to the master branch which seems to be ahead of the pip install version? | {
"login": "shenkev",
"id": 5405172,
"node_id": "MDQ6VXNlcjU0MDUxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5405172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shenkev",
"html_url": "https://github.com/shenkev",
"followers_url": "https://api.github.com/users/shenkev/followers",
"following_url": "https://api.github.com/users/shenkev/following{/other_user}",
"gists_url": "https://api.github.com/users/shenkev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shenkev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shenkev/subscriptions",
"organizations_url": "https://api.github.com/users/shenkev/orgs",
"repos_url": "https://api.github.com/users/shenkev/repos",
"events_url": "https://api.github.com/users/shenkev/events{/privacy}",
"received_events_url": "https://api.github.com/users/shenkev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I would use the version installed from source but training a tokenizer is substantially slower than the pip install version. When I run the code snippet below, the pip install version finishesin ~5 minutes but the install-from-source version takes 20+ (I didn't wait for it to finish). For some reason there's a new \"Reading files\" stage which takes forever to finish.\r\n\r\n```import os\r\nfrom tokenizers import ByteLevelBPETokenizer, BertWordPieceTokenizer\r\n\r\n\r\ntokenizer = ByteLevelBPETokenizer(lowercase=False)\r\n\r\npaths = [\"./data.txt\"]\r\n\r\ntokenizer.train(files=paths, vocab_size=52000, min_frequency=2, special_tokens=[\r\n \"<s>\",\r\n \"<pad>\",\r\n \"</s>\",\r\n \"<unk>\",\r\n \"<mask>\",\r\n])\r\n\r\nout_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), \"token\")\r\nos.makedirs(out_path, exist_ok=True)\r\n\r\ntokenizer.save(out_path)```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,593 | 1,593 | NONE | null | Documentation for training a language model points to https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py
The import
```python
from transformers import (
    CONFIG_MAPPING,
    MODEL_WITH_LM_HEAD_MAPPING,
    AutoConfig,
    AutoModelWithLMHead,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    HfArgumentParser,
    LineByLineTextDataset,
    PreTrainedTokenizer,
    TextDataset,
    Trainer,
    TrainingArguments,
    set_seed,
)
```
leads to "No name ___ in module transformers" errors.
I installed transformers using pip.
`pip install -U transformers`
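A quick sanity check when the pip release lags the master branch is to confirm which version is actually installed before running the examples. This is a minimal sketch using only the standard library (Python 3.8+); the package names are illustrative:

```python
from importlib import metadata

def installed_version(package):
    """Return the installed distribution's version string, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

print(installed_version("transformers"))        # e.g. '2.8.0', or None
print(installed_version("not-a-real-package"))  # None
```

If the installed release predates the APIs that the master-branch example imports, the fix is either installing from source or checking out the example script from the matching release tag.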
The import problems seem to be gone when I install from source. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4081/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4080 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4080/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4080/comments | https://api.github.com/repos/huggingface/transformers/issues/4080/events | https://github.com/huggingface/transformers/issues/4080 | 609,400,575 | MDU6SXNzdWU2MDk0MDA1NzU= | 4,080 | tokenizer.encode_plus stopped returning `attention_mask` | {
"login": "wasiahmad",
"id": 17520413,
"node_id": "MDQ6VXNlcjE3NTIwNDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/17520413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wasiahmad",
"html_url": "https://github.com/wasiahmad",
"followers_url": "https://api.github.com/users/wasiahmad/followers",
"following_url": "https://api.github.com/users/wasiahmad/following{/other_user}",
"gists_url": "https://api.github.com/users/wasiahmad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wasiahmad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasiahmad/subscriptions",
"organizations_url": "https://api.github.com/users/wasiahmad/orgs",
"repos_url": "https://api.github.com/users/wasiahmad/repos",
"events_url": "https://api.github.com/users/wasiahmad/events{/privacy}",
"received_events_url": "https://api.github.com/users/wasiahmad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"How did you resolve this? I am facing this issue as well.",
"@shikharsingla, what is your exact bug? Which tokenizer are you using, with which code?",
"Just saw your issue https://github.com/huggingface/transformers/issues/4868, looking into it."
] | 1,588 | 1,591 | 1,588 | NONE | null | I have a codebase which was working fine but today when I was trying to run, I observed that `tokenizer.encode_plus` stopped returning `attention_mask`. Is it removed in the latest release? Or, I need to do something?
The following piece of code was working for me.
```
encoded_dict = tokenizer.encode_plus(
truncated_query,
span_doc_tokens,
max_length=max_seq_length,
return_overflowing_tokens=True,
pad_to_max_length=True,
stride=max_seq_length - doc_stride - len(truncated_query) - sequence_pair_added_tokens,
truncation_strategy="only_second",
return_token_type_ids=True,
return_attention_mask=True
)
```
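(Side note for anyone blocked by this: while the API behavior is sorted out, padding and the attention mask can always be rebuilt by hand from the returned `input_ids`. A minimal sketch; the pad id of 0 is an assumption that matches BERT's `[PAD]`, so check your tokenizer's `pad_token_id`:)

```python
# Pad input_ids to max_length and build the matching attention mask by hand.
def pad_and_mask(input_ids, max_length, pad_id=0):
    n_pad = max_length - len(input_ids)
    padded = input_ids + [pad_id] * n_pad
    mask = [1] * len(input_ids) + [0] * n_pad
    return padded, mask

ids, mask = pad_and_mask([101, 7592, 102], max_length=6)
print(ids)   # [101, 7592, 102, 0, 0, 0]
print(mask)  # [1, 1, 1, 0, 0, 0]
```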
But now, I get only `dict_keys(['input_ids', 'token_type_ids'])` from `encode_plus`. Also, I realized that the returned `input_ids` are not padded to `max_length`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4080/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4079 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4079/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4079/comments | https://api.github.com/repos/huggingface/transformers/issues/4079/events | https://github.com/huggingface/transformers/issues/4079 | 609,384,312 | MDU6SXNzdWU2MDkzODQzMTI= | 4,079 | Load of pre-trained t5 model from Tf to HuggingFace | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @antoniomastro1996, \r\n\r\nCan you post your environment information here as written in the \"Issue\" template. Also are you sure that the path: `/Users/antonio/Downloads/tf_path/model.ckpt-306400.data-00000-of-00002` exist? \r\n\r\nAlso can you add the link the Raffel example script? ",
"Hi @patrickvonplaten this is my environment:\r\n\r\n<b>\r\n- `transformers` version: 2.8.0<br>\r\n- Platform: Darwin-19.4.0-x86_64-i386-64bit<br>\r\n- Python version: 3.7.6<br>\r\n- PyTorch version (GPU?): 1.4.0 (False)<br>\r\n- Tensorflow version (GPU?): 2.2.0-rc2 (False)<br>\r\n- Using GPU in script?: no<br>\r\n- Using distributed or parallel set-up in script?: no<br>\r\n</b>\r\n<br>\r\n\r\nAbout the training process, I have adapted the following Colab link: https://tiny.cc/t5-colab\r\nThis is the one provided by Raffel et all on their GitHub page.\r\n\r\nAbout the path, yes is fine, it exists. To me seem strange the error manifest itself on this global_step param. Furthermore I have seen the t5_modeling code and this global_step alongside with other variables is in a sort of black list if we perform fine-tuning.",
"Hi @patrickvonplaten\r\nI've just solved, I changed the pre-trained model and worked fine.\r\n\r\nHowever, if I try to load the model with the following command:\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"/content/drive/My Drive/t5_model.bin\")\r\n\r\nI get the following error: \r\n<b>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte</b>\r\n\r\nLike I said in the previous comment, I've used this config file\r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/t5-small-config.json\r\nwhere I have just removed the part related to the task in such a way the final output appears like this:\r\n\r\n{\r\n \"architectures\": [\r\n \"T5WithLMHeadModel\"\r\n ],\r\n \"d_ff\": 2048,\r\n \"d_kv\": 64,\r\n \"d_model\": 512,\r\n \"decoder_start_token_id\": 0,\r\n \"dropout_rate\": 0.1,\r\n \"eos_token_id\": 1,\r\n \"initializer_factor\": 1.0,\r\n \"is_encoder_decoder\": true,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"model_type\": \"t5\",\r\n \"n_positions\": 512,\r\n \"num_heads\": 8,\r\n \"num_layers\": 6,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"relative_attention_num_buckets\": 32,\r\n \"vocab_size\": 32128\r\n}\r\n\r\nWhere am I going wrong?",
"Sorry to answer this late! Loading the model via:\r\n```python\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"/content/drive/My Drive/\")\r\n```\r\n\r\nshould work :-) "
] | 1,588 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi everybody,
I have the following problem:
I want to convert my custom pre-trained t5 model (trained by following the example provided by Raffel et al. on the t5 GitHub page). However, whenever I try to convert the pre-trained model from TF to PyTorch with the convert_t5_original_tf_checkpoint_to_pytorch.py script, I get the following error after instantiation:
```
INFO:transformers.modeling_t5:Loading TF weight encoder/block_011/layer_001/layer_norm/scale_slot_v with shape [768]
INFO:transformers.modeling_t5:Loading TF weight encoder/final_layer_norm/scale with shape [768]
INFO:transformers.modeling_t5:Loading TF weight encoder/final_layer_norm/scale_slot_v with shape [768]
INFO:transformers.modeling_t5:Loading TF weight global_step with shape []
File "/Users/antonio/opt/anaconda3/lib/python3.7/site-packages/tensorflow-2.2.0rc2-py3.7-macosx-10.9-x86_64.egg/tensorflow/python/training/py_checkpoint_reader.py", line 70, in get_tensor
  self, compat.as_bytes(tensor_str))
RuntimeError: /Users/antonio/Downloads/tf_path/model.ckpt-306400.data-00000-of-00002; No such file or directory
```
From the log, the error seems to be related to the `global_step` variable.
One more thing: since I have trained a t5-base model, is it fine to use your t5-base JSON config with it, or should I create one from scratch for my specific case?
Thanks in advance and have a good day!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4079/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4078 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4078/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4078/comments | https://api.github.com/repos/huggingface/transformers/issues/4078/events | https://github.com/huggingface/transformers/issues/4078 | 609,322,877 | MDU6SXNzdWU2MDkzMjI4Nzc= | 4,078 | Using 'ner' task in pipeline with a non default model gives me entities as "LABEL-6" , "LABEL-8" instead of "I-ORG" and "I-LOC" | {
"login": "goutham794",
"id": 16307702,
"node_id": "MDQ6VXNlcjE2MzA3NzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/16307702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goutham794",
"html_url": "https://github.com/goutham794",
"followers_url": "https://api.github.com/users/goutham794/followers",
"following_url": "https://api.github.com/users/goutham794/following{/other_user}",
"gists_url": "https://api.github.com/users/goutham794/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goutham794/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goutham794/subscriptions",
"organizations_url": "https://api.github.com/users/goutham794/orgs",
"repos_url": "https://api.github.com/users/goutham794/repos",
"events_url": "https://api.github.com/users/goutham794/events{/privacy}",
"received_events_url": "https://api.github.com/users/goutham794/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
},
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"Oh no, there's something wrong with the label alignment (can also be seen in the `config.json`):\r\n\r\n```json\r\n\"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\",\r\n \"2\": \"LABEL_2\",\r\n \"3\": \"LABEL_3\",\r\n \"4\": \"LABEL_4\",\r\n \"5\": \"LABEL_5\",\r\n \"6\": \"LABEL_6\",\r\n \"7\": \"LABEL_7\",\r\n \"8\": \"LABEL_8\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1,\r\n \"LABEL_2\": 2,\r\n \"LABEL_3\": 3,\r\n \"LABEL_4\": 4,\r\n \"LABEL_5\": 5,\r\n \"LABEL_6\": 6,\r\n \"LABEL_7\": 7,\r\n \"LABEL_8\": 8\r\n },\r\n```\r\n\r\nThanks for reporting, I'll try to fix it :)\r\n\r\nIn the meantime, you could use our large model `dbmdz/bert-large-cased-finetuned-conll03-english` 😅",
"Also I forgot to add, I see the [CLS], [SEP] tokens and also words like \"is\" , \"a\" in the NER results which are not present when I use the default dbmdz/bert-large-cased-finetuned-conll03-english.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/bert-base-cased-finetuned-conll03-english/config.json\r\nThis file needs fixing.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm having the same problem with the model: dbmdz/bert-base-multilingual-cased-finetuned-conll03-spanish",
"> I'm having the same problem with the model: dbmdz/bert-base-multilingual-cased-finetuned-conll03-spanish\r\n\r\nOne fix is to download the model and config.json to your local and then just manually edit the \"config.json\" file.\r\nThis is what it should actually look like :- \r\nhttps://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/bert-large-cased-finetuned-conll03-english/config.json",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,605 | 1,605 | NONE | null | model - dbmdz/bert-base-cased-finetuned-conll03-english
Language - english
The problem arises when using:
* [x] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: ner
## To reproduce
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-cased-finetuned-conll03-english")
model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-base-cased-finetuned-conll03-english")
ner_task = pipeline(task='ner', model=model, tokenizer=tokenizer)
ner_task('Hugging Face is a French company based in New-York.')
```
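For context, a 9-label CoNLL-2003 checkpoint should carry real entity tags in its label mapping rather than the generic `LABEL_0` … `LABEL_8` placeholders the pipeline currently returns. The index order below is illustrative only — the authoritative order has to come from the checkpoint's fixed `config.json`:

```json
"id2label": {
  "0": "O",
  "1": "B-MISC",
  "2": "I-MISC",
  "3": "B-PER",
  "4": "I-PER",
  "5": "B-ORG",
  "6": "I-ORG",
  "7": "B-LOC",
  "8": "I-LOC"
}
```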
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4078/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4077 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4077/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4077/comments | https://api.github.com/repos/huggingface/transformers/issues/4077/events | https://github.com/huggingface/transformers/issues/4077 | 609,320,732 | MDU6SXNzdWU2MDkzMjA3MzI= | 4,077 | [docs] bad RST in PretrainedModel.generate docstring | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"fixed in Marian PR"
] | 1,588 | 1,589 | 1,589 | CONTRIBUTOR | null | 
https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L857 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4077/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4076 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4076/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4076/comments | https://api.github.com/repos/huggingface/transformers/issues/4076/events | https://github.com/huggingface/transformers/pull/4076 | 609,284,120 | MDExOlB1bGxSZXF1ZXN0NDEwODkzNDcz | 4,076 | Remove double bias in TF Albert | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also remove the now-unused bias variable on https://github.com/huggingface/transformers/blob/50c9c3e98235d447a7a2d0efebc9ff4e426fce2d/src/transformers/modeling_tf_albert.py#L467",
"You're absolutely right, thanks @jarednielsen !"
] | 1,588 | 1,591 | 1,591 | MEMBER | null | closes #3386 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4076/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4076",
"html_url": "https://github.com/huggingface/transformers/pull/4076",
"diff_url": "https://github.com/huggingface/transformers/pull/4076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4076.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4075 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4075/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4075/comments | https://api.github.com/repos/huggingface/transformers/issues/4075/events | https://github.com/huggingface/transformers/issues/4075 | 609,240,898 | MDU6SXNzdWU2MDkyNDA4OTg= | 4,075 | How to using vocab other language in run_ner.py? | {
"login": "TorRient",
"id": 39651857,
"node_id": "MDQ6VXNlcjM5NjUxODU3",
"avatar_url": "https://avatars.githubusercontent.com/u/39651857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TorRient",
"html_url": "https://github.com/TorRient",
"followers_url": "https://api.github.com/users/TorRient/followers",
"following_url": "https://api.github.com/users/TorRient/following{/other_user}",
"gists_url": "https://api.github.com/users/TorRient/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TorRient/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TorRient/subscriptions",
"organizations_url": "https://api.github.com/users/TorRient/orgs",
"repos_url": "https://api.github.com/users/TorRient/repos",
"events_url": "https://api.github.com/users/TorRient/events{/privacy}",
"received_events_url": "https://api.github.com/users/TorRient/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Which language do you want to use?\r\nYou can load a model that pre-trained with a certain language or you can use mBERT / XLM instead.",
"> Which language do you want to use?\r\n> You can load a model that pre-trained with a certain language or you can use mBERT / XLM instead.\r\n\r\nThank for reply\r\ni want to use BERT for Vietnamese language",
"Unfortunately, there is no BERT for Vietnamese available for now.\r\nYou will have to wait for somebody to pre-trained the Vietnamese model or you can use Multilingual-BERT (mBERT) instead:\r\n```bert-base-multilingual-cased``` or ```bert-base-multilingual-uncased```\r\n\r\nThese models were pre-trained with [top 100 languages in Wikipedia](https://meta.wikimedia.org/wiki/List_of_Wikipedias)."
] | 1,588 | 1,588 | 1,588 | NONE | null | Help me, thank you!!! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4075/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4074 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4074/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4074/comments | https://api.github.com/repos/huggingface/transformers/issues/4074/events | https://github.com/huggingface/transformers/issues/4074 | 609,147,554 | MDU6SXNzdWU2MDkxNDc1NTQ= | 4,074 | Issue with non-text files and bertabs example | {
"login": "db1981",
"id": 64541288,
"node_id": "MDQ6VXNlcjY0NTQxMjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/64541288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/db1981",
"html_url": "https://github.com/db1981",
"followers_url": "https://api.github.com/users/db1981/followers",
"following_url": "https://api.github.com/users/db1981/following{/other_user}",
"gists_url": "https://api.github.com/users/db1981/gists{/gist_id}",
"starred_url": "https://api.github.com/users/db1981/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/db1981/subscriptions",
"organizations_url": "https://api.github.com/users/db1981/orgs",
"repos_url": "https://api.github.com/users/db1981/repos",
"events_url": "https://api.github.com/users/db1981/events{/privacy}",
"received_events_url": "https://api.github.com/users/db1981/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"So the issue is fixed for you then? "
] | 1,588 | 1,591 | 1,591 | NONE | null | Hello team,
I've been running the bertabs example on some plain txt files using the latest version available from the repository (pulled on 4/28/2020).
The folder where I placed the txt files for processing contains the configuration file automatically generated by the filesystem (".DS_Store", as I'm running macOS, but I suspect Windows would have similar issues).
Because the configuration file is binary, the following piece of code in util_summarization.py (lines 56-59) fails with a UnicodeDecodeError exception:
```python
with open(document_path, encoding="utf-8") as source:
    raw_story = source.read()
story_lines, summary_lines = process_story(raw_story)
return document_name, story_lines, summary_lines
```
I added a "try ... except" block to handle the UnicodeDecodeError exception and return an empty raw_story string if the exception is raised. This fixed the issue for me.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4074/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4073 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4073/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4073/comments | https://api.github.com/repos/huggingface/transformers/issues/4073/events | https://github.com/huggingface/transformers/issues/4073 | 609,064,800 | MDU6SXNzdWU2MDkwNjQ4MDA= | 4,073 | Save T5 model as H5 and convert it to model.json to be used in TensorflowJS | {
"login": "zirlman",
"id": 24474083,
"node_id": "MDQ6VXNlcjI0NDc0MDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/24474083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zirlman",
"html_url": "https://github.com/zirlman",
"followers_url": "https://api.github.com/users/zirlman/followers",
"following_url": "https://api.github.com/users/zirlman/following{/other_user}",
"gists_url": "https://api.github.com/users/zirlman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zirlman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zirlman/subscriptions",
"organizations_url": "https://api.github.com/users/zirlman/orgs",
"repos_url": "https://api.github.com/users/zirlman/repos",
"events_url": "https://api.github.com/users/zirlman/events{/privacy}",
"received_events_url": "https://api.github.com/users/zirlman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, our API has its own saving function: `model.save_pretrained(\"directory\")`, which saves the model alongside its configuration file. Can you let me know if this method fits your use case?",
"Hi, using that method I've managed to save the model. Unfortunately, when I convert the model into `model.json` load it later in my JS file I receive the following error: \r\n```\r\nTypeError: Cannot read property 'model_config' of null\r\n at models.ts:287\r\n at common.ts:14\r\n at Object.next (common.ts:14)\r\n at a (common.ts:14)\r\n```\r\nBellow you can steps I've followed (to try) to load the model into the JS file\r\n\r\nModel saving:\r\n```\r\nmodel_path = \"/content/t5_base_model\"\r\ntfjs_path = \"/content/t5_base_model_js\"\r\n\r\nmodel.save_pretrained(model_path)\r\nmodel_path += \"/tf_model.h5\"\r\n```\r\nConversion:\r\n```\r\n!tensorflowjs_converter --input_format keras \\\r\n $model_path \\\r\n $tfjs_path \r\n```\r\n\r\nJS file:\r\n```\r\nvar model;\r\nconst model_path = \"../libraries/models/t5_base_model_js/model.json\";\r\n\r\ndocument.querySelector(\"#msg\").innerText = \"[LOADING TF MODEL]\";\r\ntf.loadLayersModel(model_path)\r\n.then((loadedModel) => {\r\n model = loadedModel;\r\n document.querySelector(\"#msg\").innerText = \"[MODEL LOADED]\";\r\n}).catch(err => console.log(err));\r\n```\r\n\r\nI assume the problem occurs because `model.save_pretrained(dir)` saves the config in a separate file. I've tried to figure out if maybe `tf-converter` has a parameter for additional files to bundle into model.json, but with no success. @LysandreJik any idea what I could do to fix this error?\r\n\r\nP.S. I've tried to load mobilenet from TF-Hub to make sure my script works fine and it successfully loaded it.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any updates on this? Stuck on saving mobilebert as a tf layers model"
] | 1,588 | 1,601 | 1,593 | NONE | null | # ❓ Questions & Help
How can I save a T5 model as an HDF5 file?
In the end, I want to load it in the browser via `tensorflow-converter` and `tensorflowjs`.
Code:
```
model_str = "t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_str)
model = TFT5ForConditionalGeneration.from_pretrained(model_str)
tokenizer_path = "/content/t5_tokenizer"
model_path = "/content/t5_base_model"
tfjs_path = "/content/t5_base_model_js"
model.save(model_path,save_format="h5")
```
When I run that code I receive the following error:
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-53-8343a8512f91> in <module>()
----> 1 model.save(model_path,save_format="h5")
1 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)
103 not isinstance(model, sequential.Sequential)):
104 raise NotImplementedError(
--> 105 'Saving the model to HDF5 format requires the model to be a '
106 'Functional model or a Sequential model. It does not work for '
107 'subclassed models, because such models are defined via the body of '
NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using `save_weights`.
```
The error says that the model needs to be a Functional/Sequential model, but as far as I know, the TF Transformers models are based on TF 2.0, so this shouldn't be a problem. Also, when I run `isinstance(model, tf.keras.models.Model)`, it returns `True`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4073/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4072 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4072/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4072/comments | https://api.github.com/repos/huggingface/transformers/issues/4072/events | https://github.com/huggingface/transformers/issues/4072 | 609,055,914 | MDU6SXNzdWU2MDkwNTU5MTQ= | 4,072 | How can I continue finetuning from checkpoint using the NER script? | {
"login": "ChessMateK",
"id": 48825535,
"node_id": "MDQ6VXNlcjQ4ODI1NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/48825535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChessMateK",
"html_url": "https://github.com/ChessMateK",
"followers_url": "https://api.github.com/users/ChessMateK/followers",
"following_url": "https://api.github.com/users/ChessMateK/following{/other_user}",
"gists_url": "https://api.github.com/users/ChessMateK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChessMateK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChessMateK/subscriptions",
"organizations_url": "https://api.github.com/users/ChessMateK/orgs",
"repos_url": "https://api.github.com/users/ChessMateK/repos",
"events_url": "https://api.github.com/users/ChessMateK/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChessMateK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! You're pointing the script to a file or folder `bert-base-256/checkpoint-10000`. Is that a folder containing model file, or is that a file in itself?",
"> Hello! You're pointing the script to a file or folder `bert-base-256/checkpoint-10000`. Is that a folder containing model file, or is that a file in itself?\r\n\r\nIt is the folder created by the script containing these files:\r\n- optimizer.pt and scheduler.pt\r\n- pytorch_model.bin and training_args.bin\r\n- config.json\r\n\r\n",
"Ah, I think I know where the issue stems from. We've changed the paradigm for our examples, which now rely on a `Trainer` abstraction. Until c811526 five days ago, the tokenizer was unfortunately not saved, which is fixed now.\r\n\r\nI would recommend you use the lastest script so that it doesn't happen anymore. \r\n\r\nTo fix this error, you could manually save your tokenizer in that folder as so:\r\n\r\n```py\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\ntokenizer.save_pretrained(\"/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000\")\r\n```\r\n\r\nThis will save the tokenizer file in the appropriate folder. Please keep in mind that this is to reload the original `bert-base-cased` tokenizer. If you have modified your tokenizer in any way, you should save that tokenizer in the aforementioned folder. Please let me know if I can be of further help!",
"> Ah, I think I know where the issue stems from. We've changed the paradigm for our examples, which now rely on a `Trainer` abstraction. Until [c811526](https://github.com/huggingface/transformers/commit/c81152600452ad1bec4ab705356788d29a3573ee) five days ago, the tokenizer was unfortunately not saved, which is fixed now.\r\n> \r\n> I would recommend you use the lastest script so that it doesn't happen anymore.\r\n> \r\n> To fix this error, you could manually save your tokenizer in that folder as so:\r\n> \r\n> ```python\r\n> from transformers import BertTokenizer\r\n> \r\n> tokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\")\r\n> tokenizer.save_pretrained(\"/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000\")\r\n> ```\r\n> \r\n> This will save the tokenizer file in the appropriate folder. Please keep in mind that this is to reload the original `bert-base-cased` tokenizer. If you have modified your tokenizer in any way, you should save that tokenizer in the aforementioned folder. Please let me know if I can be of further help!\r\n\r\nI downloaded transformers library on 27 April, two days ago, after [c811526](https://github.com/huggingface/transformers/commit/c81152600452ad1bec4ab705356788d29a3573ee). So, does this problem still persist?",
"@ChessMateK \r\nThis might sound a bit silly but did you check if the drive was mounted in colab when you ran that command ?",
"@patil-suraj Yes, I checked. Furthermore, if I insert as output path the folder where all the checkpoints are saved, even if they are not loaded, the \"startup\" time of the script increases considerably.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | # ❓ Questions & Help
## Details
I'm trying to execute [this script](https://github.com/huggingface/transformers/tree/master/examples/ner) using `run_ner.py`, but everything I tried in order to continue fine-tuning from a checkpoint has failed. Any ideas?
I run it in Google Colab. Below is the content of the cell I run:
```
%cd "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27"
%pip install .
%pip install --upgrade .
%pip install seqeval

from fastai import *
from transformers import *

%cd "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner"

!python "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/run_ner.py" --data_dir ./ \
  --model_type bert \
  --labels ./labels.txt \
  --model_name_or_path "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000" \
  --output_dir "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/check" \
  --max_seq_length "256" \
  --num_train_epochs "5" \
  --per_gpu_train_batch_size "4" \
  --save_steps "10000" \
  --seed "1" \
  --do_train --do_eval --do_predict
```
As you can see, I already tried to substitute the model_name_or_path parameter value (previously "bert-base-cased") with the checkpoint directory, but several errors occurred, complaining about the model name and missing files:
```
04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Model name '/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). Assuming '/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000' is a path, a model identifier, or url to a directory containing tokenizer files.
04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Didn't find file /content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000/vocab.txt. We won't load it.
04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Didn't find file /content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000/added_tokens.json. We won't load it.
04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Didn't find file /content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000/special_tokens_map.json. We won't load it.
04/28/2020 15:16:36 - INFO - transformers.tokenization_utils - Didn't find file /content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000/tokenizer_config.json. We won't load it.
Traceback (most recent call last):
  File "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/run_ner.py", line 290, in <module>
    main()
  File "/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/run_ner.py", line 149, in main
    use_fast=model_args.use_fast,
  File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py", line 197, in from_pretrained
    return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 868, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 971, in _from_pretrained
    list(cls.vocab_files_names.values()),
OSError: Model name '/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed '/content/drive/My Drive/Colab Notebooks/NER/Batteria/transformers-master_2020_04_27/examples/ner/bert-base-256/checkpoint-10000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
Thank you in advance.
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/61482518/how-can-i-continue-finetuning-from-checkpoint-using-the-ner-script
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4072/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4071 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4071/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4071/comments | https://api.github.com/repos/huggingface/transformers/issues/4071/events | https://github.com/huggingface/transformers/issues/4071 | 609,046,777 | MDU6SXNzdWU2MDkwNDY3Nzc= | 4,071 | Native integration with pytorch/serve | {
"login": "MFreidank",
"id": 6368040,
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MFreidank",
"html_url": "https://github.com/MFreidank",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"That would be awesome – and might be of interest to @mfuntowicz among others.\r\n\r\nWe'll help you if needed.",
"Awesome! Thanks for the upvotes and interest, very appreciated\r\n@julien-c, thanks for your support of the idea. \r\n\r\nI will create a PR and get started on a basic first version and cross-reference it here. \r\n@all: Feel free to chime in on the PR to help shape the interface :)",
"@MFreidank I was curious if you had an update here! I've implemented a version of a PyTorch serve handler that operates on the https://huggingface.co/transformers/main_classes/pipelines.html abstraction for a currently unreleased project, as opposed to a distinct model or tokenizer. I think it's a nice abstraction for this use-case, what do you think? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,599 | 1,599 | NONE | null | First off, many thanks to @julien-c for sharing my [blog post](https://medium.com/analytics-vidhya/deploy-huggingface-s-bert-to-production-with-pytorch-serve-27b068026d18) for an example custom handler integration of a BERT sequence classifier model with [pytorch/serve](https://github.com/pytorch/serve/).
# 🚀 Feature request
Provide scripts to automatically generate a [pytorch/serve](https://github.com/pytorch/serve/) custom handler for transformers models to be able to serve models after training.
Example usage could be:
generate_serving_handler --output-filename transformers_ner_handler.py --handler-type "ner" /path/to/checkpoint_directory
## Motivation
There is already an increasing need to serve transformers models with pytorch/serve, see for example:
https://github.com/pytorch/serve/issues/249
https://github.com/pytorch/serve/issues/267
My understanding of the goal of transformers is to support NLP users from model prototyping, through model training and evaluation, all the way into their production system. Native integration with a serving framework seems like a logical step given this understanding.
There is ongoing work in `pytorch/serve` to showcase how users can write custom handlers for their transformers models.
However, handler code will have to differ quite significantly for different transformers applications (NER, QA, summarization, classification), which would put a heavy burden on the user.
Technically, this feature could also be integrated on the `pytorch/serve` side, but I'm wondering if it would not be easier to maintain here as one could more easily test that new integrated models can still be served with the handler code.
## Your contribution
I would be happy to support and submit a PR - for simple sentence classification I have handler code ready, other applications would have to be designed and integrated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4071/reactions",
"total_count": 6,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4071/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4070 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4070/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4070/comments | https://api.github.com/repos/huggingface/transformers/issues/4070/events | https://github.com/huggingface/transformers/issues/4070 | 608,905,411 | MDU6SXNzdWU2MDg5MDU0MTE= | 4,070 | No CORS Policy for Write With Transformer Endpoints | {
"login": "ivokun",
"id": 13001265,
"node_id": "MDQ6VXNlcjEzMDAxMjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/13001265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivokun",
"html_url": "https://github.com/ivokun",
"followers_url": "https://api.github.com/users/ivokun/followers",
"following_url": "https://api.github.com/users/ivokun/following{/other_user}",
"gists_url": "https://api.github.com/users/ivokun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivokun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivokun/subscriptions",
"organizations_url": "https://api.github.com/users/ivokun/orgs",
"repos_url": "https://api.github.com/users/ivokun/repos",
"events_url": "https://api.github.com/users/ivokun/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivokun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1565794707,
"node_id": "MDU6TGFiZWwxNTY1Nzk0NzA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Write%20With%20Transformer",
"name": "Write With Transformer",
"color": "a84bf4",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @ivokun – this was actually on purpose as someone was using it as an API\r\n\r\nWe'll secure it down the line if it gets abused."
] | 1,588 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
Write With Transformer is a very good place for trying available models, but there is no CORS policy to secure it. This could potentially be exploited by unauthorized third parties or attackers to DDoS the service.
## To reproduce
Steps to reproduce the behavior:
POST request to `/autocomplete/distilgpt2/small`
```
curl -d '{"context":"See how a modern neural network auto-completes your text 🤗","model_size":"distilgpt2/small","top_p":0.9,"temperature":1,"max_time":1}' -H 'Content-Type: application/json' https://transformer.huggingface.co/autocomplete/distilgpt2/small
```
Potential endpoints:
1. `/autocomplete/distilgpt2/small`
1. `/autocomplete/gpt2/small`
1. `/autocomplete/gpt2/medium`
1. `/autocomplete/gpt2/large`
1. `/autocomplete/gpt2/xl` <- 504 error
1. `/autocomplete/pplm` <- 504 error
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The endpoints could be used by unauthorized third parties, with a risk of DDoS. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4070/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4069 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4069/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4069/comments | https://api.github.com/repos/huggingface/transformers/issues/4069/events | https://github.com/huggingface/transformers/pull/4069 | 608,831,142 | MDExOlB1bGxSZXF1ZXN0NDEwNTMyMzUw | 4,069 | Delete batch = tuple(t.to(args.device) for t in batch) for it perform… | {
"login": "DrJZhou",
"id": 17310840,
"node_id": "MDQ6VXNlcjE3MzEwODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/17310840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrJZhou",
"html_url": "https://github.com/DrJZhou",
"followers_url": "https://api.github.com/users/DrJZhou/followers",
"following_url": "https://api.github.com/users/DrJZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/DrJZhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrJZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrJZhou/subscriptions",
"organizations_url": "https://api.github.com/users/DrJZhou/orgs",
"repos_url": "https://api.github.com/users/DrJZhou/repos",
"events_url": "https://api.github.com/users/DrJZhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrJZhou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This seems reasonable, what do you think @suvrat96?",
"(never mind the failing CI test)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | …s two times | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4069/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4069",
"html_url": "https://github.com/huggingface/transformers/pull/4069",
"diff_url": "https://github.com/huggingface/transformers/pull/4069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4069.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4068 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4068/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4068/comments | https://api.github.com/repos/huggingface/transformers/issues/4068/events | https://github.com/huggingface/transformers/issues/4068 | 608,830,799 | MDU6SXNzdWU2MDg4MzA3OTk= | 4,068 | Is there pre-train bert or xlnet from scratch code ? | {
"login": "xealml",
"id": 12672103,
"node_id": "MDQ6VXNlcjEyNjcyMTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12672103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xealml",
"html_url": "https://github.com/xealml",
"followers_url": "https://api.github.com/users/xealml/followers",
"following_url": "https://api.github.com/users/xealml/following{/other_user}",
"gists_url": "https://api.github.com/users/xealml/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xealml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xealml/subscriptions",
"organizations_url": "https://api.github.com/users/xealml/orgs",
"repos_url": "https://api.github.com/users/xealml/repos",
"events_url": "https://api.github.com/users/xealml/events{/privacy}",
"received_events_url": "https://api.github.com/users/xealml/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
}
] | closed | false | null | [] | [
"Yes, you can use the `run_language_modeling.py` script for pre-training e.g. BERT from scratch:\r\n\r\nhttps://huggingface.co/transformers/examples.html#language-model-training\r\n\r\n(Just leave the `model_name_or_path` parameter empty for pre-training from scratch)",
"@stefan-it thank u.",
"@xealml Curious whether you managed to train XLNet? If so, any pointers you could share? ",
"@jbmaxwell @xealml I am getting lost on this as well. How do you train XLNet from scratch? I can't find any field for inputting raw data to it"
] | 1,588 | 1,650 | 1,588 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4068/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4067 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4067/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4067/comments | https://api.github.com/repos/huggingface/transformers/issues/4067/events | https://github.com/huggingface/transformers/issues/4067 | 608,762,503 | MDU6SXNzdWU2MDg3NjI1MDM= | 4,067 | Why GPT2LMHeadModel's loss mismatches accuracy? | {
"login": "lx-kika",
"id": 62126666,
"node_id": "MDQ6VXNlcjYyMTI2NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/62126666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lx-kika",
"html_url": "https://github.com/lx-kika",
"followers_url": "https://api.github.com/users/lx-kika/followers",
"following_url": "https://api.github.com/users/lx-kika/following{/other_user}",
"gists_url": "https://api.github.com/users/lx-kika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lx-kika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lx-kika/subscriptions",
"organizations_url": "https://api.github.com/users/lx-kika/orgs",
"repos_url": "https://api.github.com/users/lx-kika/repos",
"events_url": "https://api.github.com/users/lx-kika/events{/privacy}",
"received_events_url": "https://api.github.com/users/lx-kika/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @lx-kika, \r\n\r\nCan you post a code snippet that calculates the different loss types you mention? \r\nAlso, I think it should be fine to just use the first method to calculate the loss - I don't really see a reason as why to use the second method, no? "
] | 1,588 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
Hello,
I'm pretraining GPT2LMHeadModel on my own data from scratch; however, I find that the loss decreases sharply and the evaluation no longer matches the loss.
## Details
I used two different ways to evaluate the LM model:
1. input [1,2,3...], label [2,3,4...]: this compares the output of the whole sentence with the label at once
2. input [1], label [2]
   input [1,2], label [2,3], but compare only the output at 2's position with label 3
   input [1,2,3], label [2,3,4], but compare only the output at 3's position with label 4
   and so on (each time counting only the last position's accuracy)
   finally, sum them up
In my opinion, these two ways should give the same result; however, when the loss decreases sharply, the two results no longer match. This happens only when the learning rate is high, like 0.001 or even bigger.
Could you help me understand how this happens?

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4067/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4066 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4066/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4066/comments | https://api.github.com/repos/huggingface/transformers/issues/4066/events | https://github.com/huggingface/transformers/pull/4066 | 608,756,918 | MDExOlB1bGxSZXF1ZXN0NDEwNDcyNTA1 | 4,066 | create model_card camembert-base-wikipedia-4gb | {
"login": "benjamin-mlr",
"id": 17753315,
"node_id": "MDQ6VXNlcjE3NzUzMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/17753315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjamin-mlr",
"html_url": "https://github.com/benjamin-mlr",
"followers_url": "https://api.github.com/users/benjamin-mlr/followers",
"following_url": "https://api.github.com/users/benjamin-mlr/following{/other_user}",
"gists_url": "https://api.github.com/users/benjamin-mlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjamin-mlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjamin-mlr/subscriptions",
"organizations_url": "https://api.github.com/users/benjamin-mlr/orgs",
"repos_url": "https://api.github.com/users/benjamin-mlr/repos",
"events_url": "https://api.github.com/users/benjamin-mlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjamin-mlr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=h1) Report\n> Merging [#4066](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4066 +/- ##\n=======================================\n Coverage 78.91% 78.92% \n=======================================\n Files 114 114 \n Lines 18670 18670 \n=======================================\n+ Hits 14734 14735 +1 \n+ Misses 3936 3935 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4066/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=footer). Last update [6faca88...2af7f81](https://codecov.io/gh/huggingface/transformers/pull/4066?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4066/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4066",
"html_url": "https://github.com/huggingface/transformers/pull/4066",
"diff_url": "https://github.com/huggingface/transformers/pull/4066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4066.patch",
"merged_at": 1588299373000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4065 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4065/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4065/comments | https://api.github.com/repos/huggingface/transformers/issues/4065/events | https://github.com/huggingface/transformers/pull/4065 | 608,756,609 | MDExOlB1bGxSZXF1ZXN0NDEwNDcyMjY0 | 4,065 | Create model_card camembert-base-ccnet | {
"login": "benjamin-mlr",
"id": 17753315,
"node_id": "MDQ6VXNlcjE3NzUzMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/17753315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjamin-mlr",
"html_url": "https://github.com/benjamin-mlr",
"followers_url": "https://api.github.com/users/benjamin-mlr/followers",
"following_url": "https://api.github.com/users/benjamin-mlr/following{/other_user}",
"gists_url": "https://api.github.com/users/benjamin-mlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjamin-mlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjamin-mlr/subscriptions",
"organizations_url": "https://api.github.com/users/benjamin-mlr/orgs",
"repos_url": "https://api.github.com/users/benjamin-mlr/repos",
"events_url": "https://api.github.com/users/benjamin-mlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjamin-mlr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=h1) Report\n> Merging [#4065](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4065 +/- ##\n=======================================\n Coverage 78.91% 78.91% \n=======================================\n Files 114 114 \n Lines 18670 18670 \n=======================================\n Hits 14734 14734 \n Misses 3936 3936 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=footer). Last update [6faca88...5f06af6](https://codecov.io/gh/huggingface/transformers/pull/4065?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4065/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4065",
"html_url": "https://github.com/huggingface/transformers/pull/4065",
"diff_url": "https://github.com/huggingface/transformers/pull/4065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4065.patch",
"merged_at": 1588299361000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4064 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4064/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4064/comments | https://api.github.com/repos/huggingface/transformers/issues/4064/events | https://github.com/huggingface/transformers/pull/4064 | 608,756,171 | MDExOlB1bGxSZXF1ZXN0NDEwNDcxOTI3 | 4,064 | Create model_card camembert-base-ccnet-4gb | {
"login": "benjamin-mlr",
"id": 17753315,
"node_id": "MDQ6VXNlcjE3NzUzMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/17753315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjamin-mlr",
"html_url": "https://github.com/benjamin-mlr",
"followers_url": "https://api.github.com/users/benjamin-mlr/followers",
"following_url": "https://api.github.com/users/benjamin-mlr/following{/other_user}",
"gists_url": "https://api.github.com/users/benjamin-mlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjamin-mlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjamin-mlr/subscriptions",
"organizations_url": "https://api.github.com/users/benjamin-mlr/orgs",
"repos_url": "https://api.github.com/users/benjamin-mlr/repos",
"events_url": "https://api.github.com/users/benjamin-mlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjamin-mlr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=h1) Report\n> Merging [#4064](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4064 +/- ##\n=======================================\n Coverage 78.91% 78.92% \n=======================================\n Files 114 114 \n Lines 18670 18670 \n=======================================\n+ Hits 14734 14735 +1 \n+ Misses 3936 3935 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.55% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=footer). Last update [6faca88...0c16603](https://codecov.io/gh/huggingface/transformers/pull/4064?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4064/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4064",
"html_url": "https://github.com/huggingface/transformers/pull/4064",
"diff_url": "https://github.com/huggingface/transformers/pull/4064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4064.patch",
"merged_at": 1588299348000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4063 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4063/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4063/comments | https://api.github.com/repos/huggingface/transformers/issues/4063/events | https://github.com/huggingface/transformers/pull/4063 | 608,755,576 | MDExOlB1bGxSZXF1ZXN0NDEwNDcxNDkw | 4,063 | Create README.md model_card camembert/camembert-base-oscar-4gb | {
"login": "benjamin-mlr",
"id": 17753315,
"node_id": "MDQ6VXNlcjE3NzUzMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/17753315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjamin-mlr",
"html_url": "https://github.com/benjamin-mlr",
"followers_url": "https://api.github.com/users/benjamin-mlr/followers",
"following_url": "https://api.github.com/users/benjamin-mlr/following{/other_user}",
"gists_url": "https://api.github.com/users/benjamin-mlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjamin-mlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjamin-mlr/subscriptions",
"organizations_url": "https://api.github.com/users/benjamin-mlr/orgs",
"repos_url": "https://api.github.com/users/benjamin-mlr/repos",
"events_url": "https://api.github.com/users/benjamin-mlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjamin-mlr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=h1) Report\n> Merging [#4063](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4063 +/- ##\n=======================================\n Coverage 78.91% 78.91% \n=======================================\n Files 114 114 \n Lines 18670 18670 \n=======================================\n Hits 14734 14734 \n Misses 3936 3936 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=footer). Last update [6faca88...4030892](https://codecov.io/gh/huggingface/transformers/pull/4063?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4063",
"html_url": "https://github.com/huggingface/transformers/pull/4063",
"diff_url": "https://github.com/huggingface/transformers/pull/4063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4063.patch",
"merged_at": 1588299339000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4062 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4062/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4062/comments | https://api.github.com/repos/huggingface/transformers/issues/4062/events | https://github.com/huggingface/transformers/pull/4062 | 608,753,047 | MDExOlB1bGxSZXF1ZXN0NDEwNDY5NDk1 | 4,062 | Create README.md | {
"login": "benjamin-mlr",
"id": 17753315,
"node_id": "MDQ6VXNlcjE3NzUzMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/17753315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjamin-mlr",
"html_url": "https://github.com/benjamin-mlr",
"followers_url": "https://api.github.com/users/benjamin-mlr/followers",
"following_url": "https://api.github.com/users/benjamin-mlr/following{/other_user}",
"gists_url": "https://api.github.com/users/benjamin-mlr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benjamin-mlr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjamin-mlr/subscriptions",
"organizations_url": "https://api.github.com/users/benjamin-mlr/orgs",
"repos_url": "https://api.github.com/users/benjamin-mlr/repos",
"events_url": "https://api.github.com/users/benjamin-mlr/events{/privacy}",
"received_events_url": "https://api.github.com/users/benjamin-mlr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=h1) Report\n> Merging [#4062](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6faca88ee0c472de8207e648b0999a1ee01ff127&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4062 +/- ##\n=======================================\n Coverage 78.91% 78.91% \n=======================================\n Files 114 114 \n Lines 18670 18670 \n=======================================\n Hits 14734 14734 \n Misses 3936 3936 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.04% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=footer). Last update [6faca88...6f20b16](https://codecov.io/gh/huggingface/transformers/pull/4062?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4062/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4062",
"html_url": "https://github.com/huggingface/transformers/pull/4062",
"diff_url": "https://github.com/huggingface/transformers/pull/4062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4062.patch",
"merged_at": 1588299324000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4061 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4061/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4061/comments | https://api.github.com/repos/huggingface/transformers/issues/4061/events | https://github.com/huggingface/transformers/issues/4061 | 608,746,404 | MDU6SXNzdWU2MDg3NDY0MDQ= | 4,061 | KeyError ' '- run_ner.py - Transformers 2.8.0 | {
"login": "calusbr",
"id": 25322394,
"node_id": "MDQ6VXNlcjI1MzIyMzk0",
"avatar_url": "https://avatars.githubusercontent.com/u/25322394?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calusbr",
"html_url": "https://github.com/calusbr",
"followers_url": "https://api.github.com/users/calusbr/followers",
"following_url": "https://api.github.com/users/calusbr/following{/other_user}",
"gists_url": "https://api.github.com/users/calusbr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calusbr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calusbr/subscriptions",
"organizations_url": "https://api.github.com/users/calusbr/orgs",
"repos_url": "https://api.github.com/users/calusbr/repos",
"events_url": "https://api.github.com/users/calusbr/events{/privacy}",
"received_events_url": "https://api.github.com/users/calusbr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"Have you got two blank lines consecutive? Look at the end of the file, maybe you got an extra blank line or an empty line at the end of the file.",
"Additionally, the dataset format must be in a `Token Label` (delimiter is a space) format.\r\n\r\nThe error message shows, that an unexpected label ( `\"`) was found, so I could imagine, that the dataset format is not consistent.\r\n\r\nTo check this, just do the following on your training, development and test set:\r\n\r\n```bash\r\ncat train.txt dev.txt test.txt | grep -v \"^$\" | cut -d \" \" -f 2 | sort | uniq\r\n```\r\n\r\nThis should give you all labels. If you see other tokens, then there's something wrong in the dataset.\r\n\r\n",
"This problem also happened to me when I trained with my dataset.\r\n\r\nPossible causes:\r\n1. There is an extra character in labels\r\n1. The word and label is not separated by space but tab space `\\t` <- my main culprit",
"Hi can I look into this issue. I am very interested to start contributing in this project",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,596 | 1,596 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): PT
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I'm having trouble running a dataset in Portuguese. It is in the CoNLL format.
```
Traceback (most recent call last):
  File "/home/lucasrodrigues/code/transformers-2.8.0/examples/ner/run_ner.py", line 292, in <module>
    main()
  File "/home/lucasrodrigues/code/transformers-2.8.0/examples/ner/run_ner.py", line 170, in main
    if training_args.do_train
  File "/home/lucasrodrigues/code/transformers-2.8.0/examples/ner/utils_ner.py", line 124, in __init__
    pad_token_label_id=self.pad_token_label_id,
  File "/home/lucasrodrigues/code/transformers-2.8.0/examples/ner/utils_ner.py", line 207, in convert_examples_to_features
    print([label_map[label]] + [pad_token_label_id] * (len(word_tokens) - 1))
KeyError: ''
```
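For anyone hitting this, the empty-string key suggests a malformed line in the dataset (a tab separator, a trailing space, or a missing label). As a quick sanity check — the sample lines and label set below are illustrative, not taken from the original dataset — something like this flags the offending lines before training:

```python
def check_conll_labels(lines, known_labels):
    """Return (line_number, line) pairs whose label would raise KeyError in label_map."""
    bad = []
    for lineno, line in enumerate(lines, 1):
        line = line.rstrip("\n")
        if not line.strip():
            continue  # blank line = sentence boundary, which is fine
        if "\t" in line:  # tab used instead of a single space separator
            bad.append((lineno, line))
            continue
        label = line.split(" ")[-1]
        if label not in known_labels:  # catches '' from trailing spaces too
            bad.append((lineno, line))
    return bad

sample = ["João B-PER", "mora O", "", "em\tO", "Lisboa B-LOC "]
print(check_conll_labels(sample, {"O", "B-PER", "I-PER", "B-LOC", "I-LOC"}))
# → [(4, 'em\tO'), (5, 'Lisboa B-LOC ')]
```

Run it over `train.txt`, `dev.txt`, and `test.txt`; any line it reports would produce exactly this `KeyError` in `convert_examples_to_features`.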
## Expected behavior
The NER script should run without raising a `KeyError`.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.3.1 (True)
- Tensorflow version (GPU?): 2.0.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4061/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4060 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4060/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4060/comments | https://api.github.com/repos/huggingface/transformers/issues/4060/events | https://github.com/huggingface/transformers/issues/4060 | 608,743,602 | MDU6SXNzdWU2MDg3NDM2MDI= | 4,060 | Cannot Load bert-base-japanese tokenizer | {
"login": "bayartsogt-ya",
"id": 43239645,
"node_id": "MDQ6VXNlcjQzMjM5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayartsogt-ya",
"html_url": "https://github.com/bayartsogt-ya",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions",
"organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs",
"repos_url": "https://api.github.com/users/bayartsogt-ya/repos",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I had the same issue and I solved it by downloading the required files locally with the steps below.\r\n1. Download [vocab.txt](https://github.com/huggingface/transformers/blob/455c6390938a5c737fa63e78396cedae41e4e87e/src/transformers/tokenization_bert_japanese.py#L33), [config.json](https://github.com/huggingface/transformers/blob/455c6390938a5c737fa63e78396cedae41e4e87e/src/transformers/configuration_bert.py#L42), [pytorch_model.bin](https://github.com/huggingface/transformers/blob/455c6390938a5c737fa63e78396cedae41e4e87e/src/transformers/modeling_bert.py#L51) from the source URL\r\n\r\n2. Enter the folder containing the three files in the from_pretrained method\r\ne.g.\r\n```\r\nmodel = BertModel.from_pretrained ('./models/bert-base-japanese/')\r\nconfig = BertConfig('./models/bert-base-japanese/')\r\ntokenizer = BertJapaneseTokenizer.from_pretrained('./models/bert-base-japanese/')\r\n```\r\nwhere\r\n```\r\n─ models\r\n └- bert-base-japanese\r\n ├- vocab.txt\r\n ├- config.json\r\n └- pytorch_model.bin\r\n```\r\nI think this is probably an obstacle caused by a change in the path on S3 due to this commit. The version of transformers installed by pip is old and you may be pointing to the wrong path.\r\nhttps://github.com/huggingface/transformers/commit/455c6390938a5c737fa63e78396cedae41e4e87e\r\n\r\nReinstall with the latest version of transformers and it should work.\r\n```\r\ngit clone [email protected]: huggingface/transformers.git\r\npip install ./transformers\r\n```",
"I apologize, it's my fault. I `mv`ed files around instead of `copy`ing them as we do usually, so I broke backward compatibility for the `bert-base-japanese` models.\r\n\r\nAs @reo11 said, you'll need to install from source for now. You can also do: \r\n`pip install git+git://github.com/huggingface/transformers.git`\r\n\r\nSorry about that.",
"@reo11 Thank you so much!\r\n@julien-c Thank you for your response. Since a lot of us trying to use transformers in production too, please consider having stable workflow. (Anyways you guys doing great!)"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using BertJapaneseTokenizer:
Language I am using the model on Japanese:
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is: just loading the tokenizer
## To reproduce
```
>>> from transformers import BertJapaneseTokenizer
>>> tokenizer = BertJapaneseTokenizer.from_pretrained('bert-base-japanese')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bayartsogtyadamsuren/DDAM-Projects/isid/myenv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 393, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/Users/bayartsogtyadamsuren/DDAM-Projects/isid/myenv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 496, in _from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name 'bert-base-japanese' was not found in tokenizers model name list (bert-base-japanese, bert-base-japanese-whole-word-masking, bert-base-japanese-char, bert-base-japanese-char-whole-word-masking). We assumed 'bert-base-japanese' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
## Expected behavior
To load
## Environment info
- `transformers` version: 2.7.0
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.4
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4060/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4059 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4059/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4059/comments | https://api.github.com/repos/huggingface/transformers/issues/4059/events | https://github.com/huggingface/transformers/issues/4059 | 608,736,647 | MDU6SXNzdWU2MDg3MzY2NDc= | 4,059 | Dropout training | {
"login": "weiarqq",
"id": 33859151,
"node_id": "MDQ6VXNlcjMzODU5MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/33859151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiarqq",
"html_url": "https://github.com/weiarqq",
"followers_url": "https://api.github.com/users/weiarqq/followers",
"following_url": "https://api.github.com/users/weiarqq/following{/other_user}",
"gists_url": "https://api.github.com/users/weiarqq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiarqq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiarqq/subscriptions",
"organizations_url": "https://api.github.com/users/weiarqq/orgs",
"repos_url": "https://api.github.com/users/weiarqq/repos",
"events_url": "https://api.github.com/users/weiarqq/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiarqq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @weiarqq, \r\n\r\nWhere does the code come from? What is the application? What do you mean by \"training of dropout\"? the `training=kwargs.get(\"training\", False)` is a flag for pytorch's dropout to switch it on and off depending on whether you are in training module \"training=True\" or not."
] | 1,588 | 1,588 | 1,588 | NONE | null | # ❓ Questions & Help
```python
outputs = self.bert(inputs, **kwargs)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output, training=kwargs.get("training", False))
logits = self.classifier(pooled_output)
outputs = (logits,) + outputs[2:]  # add hidden states and attention if they are here
```
I don't see any place where the `training` flag for dropout is being used in the examples, which puzzled me. Can someone explain this?
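For what it's worth, the `training` flag only toggles whether dropout is active: during training it randomly zeroes units, while at inference it is an identity op. A framework-free sketch of the semantics (inverted dropout, as Keras and PyTorch implement it; the function and argument names here are illustrative):

```python
import random

def dropout(x, rate=0.1, training=False, rng=None):
    """Inverted dropout: active only when training=True, identity otherwise."""
    if not training or rate == 0.0:
        return list(x)  # inference path: no units dropped, no scaling
    rng = rng or random.Random(0)
    keep = 1.0 - rate
    # Kept units are scaled by 1/keep so the expected activation is unchanged.
    return [v / keep if rng.random() < keep else 0.0 for v in x]

x = [1.0, 2.0, 3.0, 4.0]
print(dropout(x, rate=0.5, training=False))  # → [1.0, 2.0, 3.0, 4.0]
print(dropout(x, rate=0.5, training=True))   # some entries zeroed, rest doubled
```

This is why examples rarely set the flag explicitly: Keras passes `training=True` automatically inside `fit()`, so the kwarg only matters when calling the model directly.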
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4059/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4058 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4058/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4058/comments | https://api.github.com/repos/huggingface/transformers/issues/4058/events | https://github.com/huggingface/transformers/issues/4058 | 608,736,357 | MDU6SXNzdWU2MDg3MzYzNTc= | 4,058 | How to run squad for chinese dataset? | {
"login": "Lapis-Hong",
"id": 23524486,
"node_id": "MDQ6VXNlcjIzNTI0NDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/23524486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lapis-Hong",
"html_url": "https://github.com/Lapis-Hong",
"followers_url": "https://api.github.com/users/Lapis-Hong/followers",
"following_url": "https://api.github.com/users/Lapis-Hong/following{/other_user}",
"gists_url": "https://api.github.com/users/Lapis-Hong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lapis-Hong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lapis-Hong/subscriptions",
"organizations_url": "https://api.github.com/users/Lapis-Hong/orgs",
"repos_url": "https://api.github.com/users/Lapis-Hong/repos",
"events_url": "https://api.github.com/users/Lapis-Hong/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lapis-Hong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,593 | 1,593 | NONE | null | I already changed the SquadExample tokenization process, but it still does not work for Chinese.
I get really low results.
"url": "https://api.github.com/repos/huggingface/transformers/issues/4058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4058/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4057 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4057/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4057/comments | https://api.github.com/repos/huggingface/transformers/issues/4057/events | https://github.com/huggingface/transformers/pull/4057 | 608,604,378 | MDExOlB1bGxSZXF1ZXN0NDEwMzUyMDM2 | 4,057 | Add AlbertForPreTraining and TFAlbertForPreTraining models. | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=h1) Report\n> Merging [#4057](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c99fe0386be118bceaab1c85cdb8309eb8cb8208&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `96.87%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4057 +/- ##\n==========================================\n+ Coverage 78.31% 78.37% +0.06% \n==========================================\n Files 120 120 \n Lines 19854 19916 +62 \n==========================================\n+ Hits 15549 15610 +61 \n- Misses 4305 4306 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.10% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `77.23% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <ø> (ø)` | |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.20% <94.73%> (+1.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4057/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `87.19% <100.00%> (+0.93%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at 
Codecov](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=footer). Last update [c99fe03...0171b91](https://codecov.io/gh/huggingface/transformers/pull/4057?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi! This is really cool, thanks for taking the time to do that! A few notes on your remarks:\r\n\r\n> TFBertForPreTraining does not include an option to calculate the loss, but BertForPreTraining does. \r\n\r\nThis is not a bug, it's done on purpose. We try to respect as much as possible each library's paradigm. With PyTorch you can compute the loss both inside and outside the model, so we make it easy for the user to supply the labels and compute a standard loss.\r\n\r\nFor TensorFlow however, the loss is usually computed outside the model, especially with Keras. None of our TensorFlow models computes the loss for that very reason.\r\n\r\n> TFAlbertMLMHead adds two biases to the final output: self.decoder_bias and self.bias. TFBertMLMHead only adds one bias: self.bias.\r\n\r\nThat's, unfortunately, a mistake on my side. https://github.com/huggingface/transformers/pull/4076\r\n\r\n\r\nI'm taking a look at your PR, and will see if we can upload weights on S3 containing the SOP weights as well.",
"Hi @LysandreJik, any updates on your side? I know last week was ICLR. We're prepping a code release and blog post, so it would be great to have this merged in soon :)",
"Hi @jarednielsen, I took a look today, and it's in good shape! There's a few things to update, however, as this is an opportunity to update our checkpoints to include the SOP.\r\n\r\nIn order to do this, I had to do a few changes, especially removing the `AlbertPreTrainingHeads` class so that the masked lm layer (previously `AlbertForPreTraining.cls.predictions`) could be at the same level than the `AlbertForMaskedLM` class: `AlbertForMaskedLM.predictions`.\r\n\r\nThere's a few other changes that need to be done in the conversion utility. I'll look into doing TensorFlow tomorrow and will try merge tomorrow evening + upload the updated checkpoints on the S3; is this timeline okay with you?\r\n\r\nDo you mind if I push directly on your fork, so that all the changes are visible in this PR?",
"Thanks for the prompt response! Yes, that timeline sounds great; feel free to push directly onto my fork.",
"Cool, will do that. We're releasing transformers `v2.9` today so it won't be in it, but it will be in the next version which should be released sometimes next week.",
"Thanks a lot for your contribution! Will upload the weights on S3 in the coming days."
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | I mirrored the structure of `BertForPreTraining` and `TFBertForPreTraining`.
A few differences between the PyTorch and TensorFlow implementations of Bert I noticed along the way:
- TFBertForSequenceClassification has dropout in the classifier head, but TFBertForPreTraining does not. I included dropout in the TFAlbertForPreTraining classifier head.
- TFBertForPreTraining does not include an option to calculate the loss, but BertForPreTraining does. Fixing this will require more wrangling of the dataset and preprocessor, so I'll save that for a later PR. Mirrored the structure here.
- BertForPreTraining has attributes: `self.bert`, `self.cls.predictions`, `self.cls.seq_relationship`. TFBertForPreTraining has attributes: `self.bert`, `self.dropout`, `self.classifier`. I chose to unify these attributes in AlbertForPreTraining and TFAlbertForPreTraining under `self.albert`, `self.cls.predictions`, and `self.cls.sop_classifier`.
- TFAlbertMLMHead adds two biases to the final output: `self.decoder_bias` and `self.bias`. TFBertMLMHead only adds one bias: `self.bias`. | {
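As background on the SOP classifier discussed above: ALBERT's sentence-order prediction replaces BERT's next-sentence prediction by taking two consecutive segments and, for negatives, swapping their order. A minimal, framework-free sketch of how such training pairs are typically constructed — illustrative only, not this repository's actual data pipeline:

```python
import random

def make_sop_example(seg_a, seg_b, rng):
    """Return (first, second, label); label 1 = original order, 0 = swapped."""
    if rng.random() < 0.5:
        return seg_a, seg_b, 1
    return seg_b, seg_a, 0

rng = random.Random(0)
print(make_sop_example("the model is", "an albert variant", rng))
```

The SOP head itself is then just a binary classifier over the pooled output, which is why `self.cls.sop_classifier` can mirror BERT's `seq_relationship` head structurally.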
"url": "https://api.github.com/repos/huggingface/transformers/issues/4057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4057/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4057",
"html_url": "https://github.com/huggingface/transformers/pull/4057",
"diff_url": "https://github.com/huggingface/transformers/pull/4057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4057.patch",
"merged_at": 1588895092000
} |
https://api.github.com/repos/huggingface/transformers/issues/4056 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4056/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4056/comments | https://api.github.com/repos/huggingface/transformers/issues/4056/events | https://github.com/huggingface/transformers/pull/4056 | 608,575,978 | MDExOlB1bGxSZXF1ZXN0NDEwMzI4NTc0 | 4,056 | Minor Readme Fixes | {
"login": "MichalMalyska",
"id": 12971408,
"node_id": "MDQ6VXNlcjEyOTcxNDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/12971408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichalMalyska",
"html_url": "https://github.com/MichalMalyska",
"followers_url": "https://api.github.com/users/MichalMalyska/followers",
"following_url": "https://api.github.com/users/MichalMalyska/following{/other_user}",
"gists_url": "https://api.github.com/users/MichalMalyska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichalMalyska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichalMalyska/subscriptions",
"organizations_url": "https://api.github.com/users/MichalMalyska/orgs",
"repos_url": "https://api.github.com/users/MichalMalyska/repos",
"events_url": "https://api.github.com/users/MichalMalyska/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichalMalyska/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | Added contact info and fixed typos. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4056/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4056",
"html_url": "https://github.com/huggingface/transformers/pull/4056",
"diff_url": "https://github.com/huggingface/transformers/pull/4056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4056.patch",
"merged_at": 1588106536000
} |
https://api.github.com/repos/huggingface/transformers/issues/4055 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4055/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4055/comments | https://api.github.com/repos/huggingface/transformers/issues/4055/events | https://github.com/huggingface/transformers/pull/4055 | 608,553,503 | MDExOlB1bGxSZXF1ZXN0NDEwMzA5OTc4 | 4,055 | [Naming] lm_labels -> labels ; masked_lm_labels -> masked_labels | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah that's quite a big breaking change 🤣\r\nwe may want to be a little bit careful on backward compatibility",
"I think this would be a very welcome update. Maybe adding an alias to the method signature would be sufficient to maintain compatibility?\r\n\r\nWe could deprecate the old ones as well, with a warning and remove them in a future version.",
"Yes adding an alias would be good for me.\r\n\r\nOne possible option to deprecate progressively (that Keras use for instance) is to move the old parameters in `**kwargs` so they are not advocated anymore and add a deprecation warning later on."
] | 1,588 | 1,591 | 1,591 | MEMBER | null | This PR is a simple find and replace of the regex "lm_labels" into "labels" over all files.
This way we can use the Trainer class for all models.
It does break backward compatibility a bit though, since some people will have to rename the `lm_labels` argument in their code.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4055/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4055",
"html_url": "https://github.com/huggingface/transformers/pull/4055",
"diff_url": "https://github.com/huggingface/transformers/pull/4055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4055.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4054 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4054/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4054/comments | https://api.github.com/repos/huggingface/transformers/issues/4054/events | https://github.com/huggingface/transformers/pull/4054 | 608,533,355 | MDExOlB1bGxSZXF1ZXN0NDEwMjkzNTQ5 | 4,054 | Changes to fix working for transformer-xl | {
"login": "bajajahsaas",
"id": 11806556,
"node_id": "MDQ6VXNlcjExODA2NTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/11806556?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bajajahsaas",
"html_url": "https://github.com/bajajahsaas",
"followers_url": "https://api.github.com/users/bajajahsaas/followers",
"following_url": "https://api.github.com/users/bajajahsaas/following{/other_user}",
"gists_url": "https://api.github.com/users/bajajahsaas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bajajahsaas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bajajahsaas/subscriptions",
"organizations_url": "https://api.github.com/users/bajajahsaas/orgs",
"repos_url": "https://api.github.com/users/bajajahsaas/repos",
"events_url": "https://api.github.com/users/bajajahsaas/events{/privacy}",
"received_events_url": "https://api.github.com/users/bajajahsaas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, it's been a while, but this has since been fixed in #3716."
] | 1,588 | 1,592 | 1,592 | NONE | null | Update arguments (`lm_labels` to `labels`)
Use labels as data; labels are shifted inside the model (https://github.com/bajajahsaas/transformers/blob/master/src/transformers/modeling_transfo_xl.py#L855) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4054/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4054",
"html_url": "https://github.com/huggingface/transformers/pull/4054",
"diff_url": "https://github.com/huggingface/transformers/pull/4054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4054.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4053 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4053/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4053/comments | https://api.github.com/repos/huggingface/transformers/issues/4053/events | https://github.com/huggingface/transformers/pull/4053 | 608,495,737 | MDExOlB1bGxSZXF1ZXN0NDEwMjYzNjc2 | 4,053 | [isort] add known 3rd party to setup.cfg | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | Resolve local/circleci isort discrepancy by adding sacrebleu, rouge_score to `known_third_party`.
Previously, if you had either package installed on your system, they would be `known_third_party`. Now they are `known_third_party` on everybody's system.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4053/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4053",
"html_url": "https://github.com/huggingface/transformers/pull/4053",
"diff_url": "https://github.com/huggingface/transformers/pull/4053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4053.patch",
"merged_at": 1588108321000
} |
https://api.github.com/repos/huggingface/transformers/issues/4052 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4052/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4052/comments | https://api.github.com/repos/huggingface/transformers/issues/4052/events | https://github.com/huggingface/transformers/pull/4052 | 608,479,160 | MDExOlB1bGxSZXF1ZXN0NDEwMjUwMjUz | 4,052 | Checking new isort | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4052/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4052",
"html_url": "https://github.com/huggingface/transformers/pull/4052",
"diff_url": "https://github.com/huggingface/transformers/pull/4052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4052.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4051 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4051/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4051/comments | https://api.github.com/repos/huggingface/transformers/issues/4051/events | https://github.com/huggingface/transformers/pull/4051 | 608,467,064 | MDExOlB1bGxSZXF1ZXN0NDEwMjQwMzEz | 4,051 | Fix TF input docstrings to refer to tf.Tensor rather than torch.Float… | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @jarednielsen - thanks a lot for the clean-up :-) "
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | …Tensor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4051/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4051",
"html_url": "https://github.com/huggingface/transformers/pull/4051",
"diff_url": "https://github.com/huggingface/transformers/pull/4051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4051.patch",
"merged_at": 1588249736000
} |
https://api.github.com/repos/huggingface/transformers/issues/4050 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4050/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4050/comments | https://api.github.com/repos/huggingface/transformers/issues/4050/events | https://github.com/huggingface/transformers/pull/4050 | 608,457,395 | MDExOlB1bGxSZXF1ZXN0NDEwMjMyNDA4 | 4,050 | Remove jitted method so that our models are pickable. | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=h1) Report\n> Merging [#4050](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a0a8c1c6f4f2f0c80ff07d36713a3ada785eec5&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4050 +/- ##\n=======================================\n Coverage 78.87% 78.88% \n=======================================\n Files 111 111 \n Lines 18536 18533 -3 \n=======================================\n- Hits 14621 14620 -1 \n+ Misses 3915 3913 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/4050/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `88.88% <ø> (+7.93%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4050/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.02% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=footer). Last update [9a0a8c1...16e9e66](https://codecov.io/gh/huggingface/transformers/pull/4050?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,588 | 1,588 | 1,588 | MEMBER | null | closes https://github.com/huggingface/transformers/issues/4038 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4050/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4050",
"html_url": "https://github.com/huggingface/transformers/pull/4050",
"diff_url": "https://github.com/huggingface/transformers/pull/4050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4050.patch",
"merged_at": 1588168399000
} |
https://api.github.com/repos/huggingface/transformers/issues/4049 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4049/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4049/comments | https://api.github.com/repos/huggingface/transformers/issues/4049/events | https://github.com/huggingface/transformers/pull/4049 | 608,443,844 | MDExOlB1bGxSZXF1ZXN0NDEwMjIxNDYw | 4,049 | p_mask in SQuAD pre-processing | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,589 | 1,589 | MEMBER | null | In question answering, the `p_mask` is a mask with 1 for tokens than cannot be in the answer (0 for token which can be in an answer).
Currently the p_mask construction fails for RoBERTa, and is overall not very robust as it depends directly on the `token_type_ids`, while some models do not make use of them and therefore do not generate them (best example being RoBERTa).
This PR changes the way the p_mask is built, and slightly changes it as well. Up to now, here's what was done with the `p_mask`, I believe with the original implementation being google-research BERT's SQuAD script:
- Consider every token that is not the question to be a possible answer. This includes special tokens, and padding tokens.
This PR changes this by removing those special tokens and padding tokens from the possible answer pool. It does still keep the classification token as a possible answer, given that it's sometimes used as the `impossible` token.
closes https://github.com/huggingface/transformers/issues/2788 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4049/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4049/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4049",
"html_url": "https://github.com/huggingface/transformers/pull/4049",
"diff_url": "https://github.com/huggingface/transformers/pull/4049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4049.patch",
"merged_at": 1589490473000
} |
https://api.github.com/repos/huggingface/transformers/issues/4048 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4048/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4048/comments | https://api.github.com/repos/huggingface/transformers/issues/4048/events | https://github.com/huggingface/transformers/issues/4048 | 608,425,951 | MDU6SXNzdWU2MDg0MjU5NTE= | 4,048 | Why is the pooler output used for sequence classification (if it does not represent the input semantic well)? | {
"login": "mkaze",
"id": 8656825,
"node_id": "MDQ6VXNlcjg2NTY4MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8656825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkaze",
"html_url": "https://github.com/mkaze",
"followers_url": "https://api.github.com/users/mkaze/followers",
"following_url": "https://api.github.com/users/mkaze/following{/other_user}",
"gists_url": "https://api.github.com/users/mkaze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkaze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkaze/subscriptions",
"organizations_url": "https://api.github.com/users/mkaze/orgs",
"repos_url": "https://api.github.com/users/mkaze/repos",
"events_url": "https://api.github.com/users/mkaze/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkaze/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@mkaze hello, what's your tensorflow version and transformers version?",
"@etveritas When I was working on this, I was using the latest versions at the time, i.e. TF 2.1 and Transformers 2.8. ",
"@mkaze oh, thank you~",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> the concatenation of CLS token representation for all the last four layers (as suggested in BERT paper);\r\n\r\nWhy are you using this 4-layer implementation suggestion from the paper? The paper clearly recommends this logic only for feature-based version of the classifier (which is mentioned only for comparison purposes), and not for the primary fine-tuning version of the model. I don't really get it.\r\n"
] | 1,588 | 1,662 | 1,598 | NONE | null | # ❓ Questions & Help
## Details
In the [documentation](https://huggingface.co/transformers/model_doc/bert.html#tfbertmodel) of `TFBertModel`, it is stated that the `pooler_output` is not a good semantic representation of input (emphasis mine):
> `pooler_output (tf.Tensor of shape (batch_size, hidden_size))`:
> Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during Bert pretraining. **This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence.**
However, if we look at the [source code][1] of `TFBertForSequenceClassification`, we see that it's the pooler output which is used for classification of input sequence:
```python
outputs = self.bert(inputs, **kwargs)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output, training=kwargs.get("training", False))
logits = self.classifier(pooled_output)
```
**I was wondering why this is the case, and doesn't this contradict what is stated in the documentation?**
FYI, to try the approach suggested in the documentation (i.e. averaging or pooling hidden state), I also created two custom classifiers where one of them is using only the `CLS` token representation, i.e. `outputs[0][:,0]`, and the other is using the concatenation of `CLS` token representation and the average of last hidden state, i.e. `mean(outputs[0], axis=1)`. However, in the experiments on my dataset, both performed poorly (around 73% validation accuracy, with 99% train accuracy) compared to using the built-in `TFBertForSequenceClassification` (which achieved around 79% validation accuracy, with 99% train accuracy).
I even tried concatenating the `CLS` token representation for all the last four layers (as suggested in the BERT paper); however, it also performed poorly (around 70% validation accuracy, with 99% train accuracy).
I am (almost) certain that my implementation is correct; however, I include it here for reference:
```python
config = BertConfig.from_pretrained("dbmdz/bert-base-german-cased",
num_labels=len(le.classes_),
output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
bert_model = TFBertModel.from_pretrained("dbmdz/bert-base-german-cased",
config=config,
trainable=True)
seq_max_len = 128
input_ids_layer = tf.keras.layers.Input(shape=(seq_max_len,), dtype='int32')
attention_mask_layer = tf.keras.layers.Input(shape=(seq_max_len,), dtype='int32')
inputs = [input_ids_layer, attention_mask_layer]
outputs = bert_model(inputs)
hidden_states = outputs[2]
last_four_hidden_states = tf.keras.layers.concatenate(list(hidden_states[-4:]))
cls_vector = tf.keras.layers.Lambda(lambda x: x[:,0])(last_four_hidden_states)
sentence_vector = tf.keras.layers.Dropout(0.1)(cls_vector)
out = tf.keras.layers.Dense(len(le.classes_), name="classifier")(sentence_vector)
model = tf.keras.models.Model(inputs, out)
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy("accuracy")
model.compile(loss=loss, optimizer=opt, metrics=[metric])
history = model.fit([train_input_ids, train_attention_mask],
train_labels,
epochs=30,
batch_size=16,
                    validation_data=([test_input_ids, test_attention_mask], test_labels))
```
**A link to original question on Stack Overflow**: This question is more related to theoretical aspects of machine learning and therefore is considered as off-topic on SO.
[1]: https://github.com/huggingface/transformers/blob/fa49b9afeab5545f14b3661b35195b829fcf8ef5/src/transformers/modeling_tf_bert.py#L923-L926 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4048/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4048/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4047 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4047/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4047/comments | https://api.github.com/repos/huggingface/transformers/issues/4047/events | https://github.com/huggingface/transformers/issues/4047 | 608,366,343 | MDU6SXNzdWU2MDgzNjYzNDM= | 4,047 | GPT2LMHeadModel Documentation Mismatch for labels | {
"login": "cdpierse",
"id": 8831892,
"node_id": "MDQ6VXNlcjg4MzE4OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8831892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cdpierse",
"html_url": "https://github.com/cdpierse",
"followers_url": "https://api.github.com/users/cdpierse/followers",
"following_url": "https://api.github.com/users/cdpierse/following{/other_user}",
"gists_url": "https://api.github.com/users/cdpierse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cdpierse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cdpierse/subscriptions",
"organizations_url": "https://api.github.com/users/cdpierse/orgs",
"repos_url": "https://api.github.com/users/cdpierse/repos",
"events_url": "https://api.github.com/users/cdpierse/events{/privacy}",
"received_events_url": "https://api.github.com/users/cdpierse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I guess you are right. The same is true for transformer-xl as well. \r\nCode: https://github.com/bajajahsaas/transformers/blob/master/src/transformers/modeling_transfo_xl.py#L855",
"Yes, I think you guys are right! Thanks for pointing this out. I think we should do a general name cleaning here. \r\n\r\nSee linked PR.",
"This was fixed by #4711 "
] | 1,588 | 1,591 | 1,591 | CONTRIBUTOR | null | # 🐛 Bug
## Information
This is a very small discrepancy between the docs and code for the `GPT2LMHeadModel`.
Docs mention setting `lm_labels = input_ids` https://github.com/huggingface/transformers/blob/520e7f211926e07b2059bc8e21b668db4372e4db/src/transformers/modeling_gpt2.py#L547-L552
Should this read `labels = input_ids` instead?
Since this is the single-LM-head model, it only has one `labels` param, unlike the double-heads model?
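For reference, here is a tiny pure-Python sketch of what `labels = input_ids` ends up computing (an illustrative re-implementation of the shift-by-one loss around the cited lines — `lm_loss` and the toy numbers are invented for this example, not library code):

```python
import math

def lm_loss(logits, labels):
    # Mimic the causal-LM loss: position t predicts token t + 1, so the
    # last logit row and the first label are dropped before the
    # cross-entropy -- which is why labels can simply equal input_ids.
    shift_logits = logits[:-1]
    shift_labels = labels[1:]
    total = 0.0
    for row, target in zip(shift_logits, shift_labels):
        # log-softmax cross-entropy for one position
        log_z = math.log(sum(math.exp(x) for x in row))
        total += log_z - row[target]
    return total / len(shift_labels)

# Toy "sequence" of 3 positions over a vocab of size 3.
logits = [[10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]]
input_ids = [0, 0, 1]
loss = lm_loss(logits, labels=input_ids)  # near zero: shifted predictions match
```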
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4047/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4046 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4046/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4046/comments | https://api.github.com/repos/huggingface/transformers/issues/4046/events | https://github.com/huggingface/transformers/pull/4046 | 608,361,568 | MDExOlB1bGxSZXF1ZXN0NDEwMTU0MTEz | 4,046 | [EncoderDecoder Tests] Improve tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=h1) Report\n> Merging [#4046](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6af3306a1da0322f58861b1fbb62ce5223d97b8a&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4046 +/- ##\n==========================================\n+ Coverage 78.84% 78.85% +0.01% \n==========================================\n Files 114 114 \n Lines 18689 18689 \n==========================================\n+ Hits 14735 14737 +2 \n+ Misses 3954 3952 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4046/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4046/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.55% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=footer). Last update [6af3306...0ed64db](https://codecov.io/gh/huggingface/transformers/pull/4046?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"By the way, do you guys want to do this for all other ModelTester objects @patrickvonplaten and @sshleifer? And maybe (if relevant) define a common abstract class they inherit from?",
"> By the way, do you guys want to do this for all other ModelTester objects @patrickvonplaten and @sshleifer? And maybe (if relevant) define a common abstract class they inherit from?\r\n\r\nI think this would be a good idea. We can probably clean up the tests a lot this way! I will note it down on my To-Do-List",
"LMK if you want to split it up @patrickvonplaten ",
"> LMK if you want to split it up @patrickvonplaten\r\n\r\nI think I won't find time in the next 2 weeks to do this - feel free to start working on it if you want :-) "
] | 1,588 | 1,589 | 1,588 | MEMBER | null | Currently, when executing
``pytest tests/test_modeling_encoder_decoder.py``, all BERT tests are run as well because of problems with the import of `BertModelTester` (see PR #3383).
This PR builds on PR #4027 and makes some minimal changes to the encoder-decoder test file.
I think @sshleifer found a great solution to circumvent that by moving the `BertModelTester` class out of the `BertModelTest` class.
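To illustrate the pattern (names simplified; this is a sketch, not the actual test code): keeping the tester as a plain module-level class lets other test files import it without `pytest` also collecting BERT's own test cases:

```python
import unittest

class BertModelTester:
    """Plain helper class: importing it does not pull in any test cases."""
    def __init__(self, batch_size=2, seq_length=7):
        self.batch_size = batch_size
        self.seq_length = seq_length

    def prepare_config_and_inputs(self):
        # Stand-in for building a config plus dummy input tensors.
        return {"batch_size": self.batch_size, "seq_length": self.seq_length}

class EncoderDecoderModelTest(unittest.TestCase):
    def test_encoder_inputs(self):
        # Reuse the BERT tester without importing BertModelTest itself.
        inputs = BertModelTester().prepare_config_and_inputs()
        self.assertEqual(inputs["batch_size"], 2)
```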
What do you think, @julien-c, @LysandreJik? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4046/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4046",
"html_url": "https://github.com/huggingface/transformers/pull/4046",
"diff_url": "https://github.com/huggingface/transformers/pull/4046.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4046.patch",
"merged_at": 1588551517000
} |
https://api.github.com/repos/huggingface/transformers/issues/4045 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4045/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4045/comments | https://api.github.com/repos/huggingface/transformers/issues/4045/events | https://github.com/huggingface/transformers/pull/4045 | 608,358,264 | MDExOlB1bGxSZXF1ZXN0NDEwMTUxMzYz | 4,045 | [EncoderDecoder] Add working examples to docstring | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4045/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4045",
"html_url": "https://github.com/huggingface/transformers/pull/4045",
"diff_url": "https://github.com/huggingface/transformers/pull/4045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4045.patch",
"merged_at": 1588084404000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4044 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4044/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4044/comments | https://api.github.com/repos/huggingface/transformers/issues/4044/events | https://github.com/huggingface/transformers/issues/4044 | 608,356,898 | MDU6SXNzdWU2MDgzNTY4OTg= | 4,044 | Email Alerts: Run failed: Torch hub integration | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,593 | 1,593 | CONTRIBUTOR | null | Possible to disable @julien-c? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4044/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4043 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4043/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4043/comments | https://api.github.com/repos/huggingface/transformers/issues/4043/events | https://github.com/huggingface/transformers/pull/4043 | 608,297,756 | MDExOlB1bGxSZXF1ZXN0NDEwMTAxNTkz | 4,043 | daigo/bert-base-japanese-sentiment | {
"login": "ydaigo",
"id": 44220424,
"node_id": "MDQ6VXNlcjQ0MjIwNDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/44220424?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydaigo",
"html_url": "https://github.com/ydaigo",
"followers_url": "https://api.github.com/users/ydaigo/followers",
"following_url": "https://api.github.com/users/ydaigo/following{/other_user}",
"gists_url": "https://api.github.com/users/ydaigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydaigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydaigo/subscriptions",
"organizations_url": "https://api.github.com/users/ydaigo/orgs",
"repos_url": "https://api.github.com/users/ydaigo/repos",
"events_url": "https://api.github.com/users/ydaigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydaigo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | I created a Japanese binary sentiment classification model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4043/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4043",
"html_url": "https://github.com/huggingface/transformers/pull/4043",
"diff_url": "https://github.com/huggingface/transformers/pull/4043.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4043.patch",
"merged_at": 1588102531000
} |
https://api.github.com/repos/huggingface/transformers/issues/4042 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4042/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4042/comments | https://api.github.com/repos/huggingface/transformers/issues/4042/events | https://github.com/huggingface/transformers/pull/4042 | 608,282,142 | MDExOlB1bGxSZXF1ZXN0NDEwMDg5MjUz | 4,042 | info about loading file None is not informative | {
"login": "max-yue",
"id": 13486398,
"node_id": "MDQ6VXNlcjEzNDg2Mzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/13486398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/max-yue",
"html_url": "https://github.com/max-yue",
"followers_url": "https://api.github.com/users/max-yue/followers",
"following_url": "https://api.github.com/users/max-yue/following{/other_user}",
"gists_url": "https://api.github.com/users/max-yue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/max-yue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/max-yue/subscriptions",
"organizations_url": "https://api.github.com/users/max-yue/orgs",
"repos_url": "https://api.github.com/users/max-yue/repos",
"events_url": "https://api.github.com/users/max-yue/events{/privacy}",
"received_events_url": "https://api.github.com/users/max-yue/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=h1) Report\n> Merging [#4042](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/180585741cf3cdd6890cb99610923a8ae9691220&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4042 +/- ##\n=======================================\n Coverage 78.50% 78.50% \n=======================================\n Files 111 111 \n Lines 18492 18492 \n=======================================\n+ Hits 14517 14518 +1 \n+ Misses 3975 3974 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.72% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `72.61% <0.00%> (-0.42%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4042/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.34%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=footer). Last update [1805857...789019b](https://codecov.io/gh/huggingface/transformers/pull/4042?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | CONTRIBUTOR | null | When file_path is None do not say loading file None | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4042/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4042",
"html_url": "https://github.com/huggingface/transformers/pull/4042",
"diff_url": "https://github.com/huggingface/transformers/pull/4042.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4042.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/4041 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4041/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4041/comments | https://api.github.com/repos/huggingface/transformers/issues/4041/events | https://github.com/huggingface/transformers/pull/4041 | 608,273,212 | MDExOlB1bGxSZXF1ZXN0NDEwMDgxOTk4 | 4,041 | [wip] more fp16 test coverage | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,591 | 1,591 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4041/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4041",
"html_url": "https://github.com/huggingface/transformers/pull/4041",
"diff_url": "https://github.com/huggingface/transformers/pull/4041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4041.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4040 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4040/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4040/comments | https://api.github.com/repos/huggingface/transformers/issues/4040/events | https://github.com/huggingface/transformers/pull/4040 | 608,267,978 | MDExOlB1bGxSZXF1ZXN0NDEwMDc3NjMz | 4,040 | Small cosmetic changes to CamemBERT model card | {
"login": "louismartin",
"id": 12654189,
"node_id": "MDQ6VXNlcjEyNjU0MTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/12654189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louismartin",
"html_url": "https://github.com/louismartin",
"followers_url": "https://api.github.com/users/louismartin/followers",
"following_url": "https://api.github.com/users/louismartin/following{/other_user}",
"gists_url": "https://api.github.com/users/louismartin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louismartin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louismartin/subscriptions",
"organizations_url": "https://api.github.com/users/louismartin/orgs",
"repos_url": "https://api.github.com/users/louismartin/repos",
"events_url": "https://api.github.com/users/louismartin/events{/privacy}",
"received_events_url": "https://api.github.com/users/louismartin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Thanks @louismartin :)"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | Very minor cosmetic changes to the CamemBERT model card.
@benjamin-mlr will soon merge model cards for the other models :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4040/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4040",
"html_url": "https://github.com/huggingface/transformers/pull/4040",
"diff_url": "https://github.com/huggingface/transformers/pull/4040.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4040.patch",
"merged_at": 1588102376000
} |
https://api.github.com/repos/huggingface/transformers/issues/4039 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4039/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4039/comments | https://api.github.com/repos/huggingface/transformers/issues/4039/events | https://github.com/huggingface/transformers/issues/4039 | 608,222,931 | MDU6SXNzdWU2MDgyMjI5MzE= | 4,039 | Add BPE dropout to tokenizers | {
"login": "c00k1ez",
"id": 16941854,
"node_id": "MDQ6VXNlcjE2OTQxODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/16941854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c00k1ez",
"html_url": "https://github.com/c00k1ez",
"followers_url": "https://api.github.com/users/c00k1ez/followers",
"following_url": "https://api.github.com/users/c00k1ez/following{/other_user}",
"gists_url": "https://api.github.com/users/c00k1ez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c00k1ez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c00k1ez/subscriptions",
"organizations_url": "https://api.github.com/users/c00k1ez/orgs",
"repos_url": "https://api.github.com/users/c00k1ez/repos",
"events_url": "https://api.github.com/users/c00k1ez/events{/privacy}",
"received_events_url": "https://api.github.com/users/c00k1ez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | CONTRIBUTOR | null | # 🚀 Feature request
Hi!
Thank you for the great library!
Several days ago I was trying to fine-tune the GPT2 model with a small dataset. It was a bit hard because the model overfitted every time.
Afterwards, I found a great paper on a simple sub-word regularization (BPE-dropout) that outperforms classic BPE on text generation tasks (like NMT).
[Original paper](https://arxiv.org/pdf/1910.13267.pdf).
Unfortunately, I can't find any python implementation, only C++ [here](https://github.com/VKCOM/YouTokenToMe/blob/master/youtokentome/cpp/bpe.cpp).
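In the meantime, a toy pure-Python sketch of the idea could look like this (my reading of the paper's merge-dropout step — the tiny merge table and the early stop when every candidate is dropped are simplifications, not the reference algorithm):

```python
import random

def bpe_dropout_encode(word, merges, p=0.1, rng=None):
    # merges: ordered list of pairs, e.g. [("l", "l"), ("e", "ll"), ...];
    # lower index = higher priority, as in a learned BPE merge table.
    rng = rng or random.Random()
    ranks = {pair: i for i, pair in enumerate(merges)}
    tokens = list(word)
    while True:
        # Adjacent pairs that exist in the merge table; each candidate
        # merge is independently dropped with probability p.
        candidates = [
            (ranks[pair], i)
            for i, pair in enumerate(zip(tokens, tokens[1:]))
            if pair in ranks and rng.random() >= p
        ]
        if not candidates:
            return tokens
        _, i = min(candidates)  # apply the highest-priority surviving merge
        tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]

merges = [("l", "l"), ("e", "ll"), ("h", "ell"), ("hell", "o")]
print(bpe_dropout_encode("hello", merges, p=0.0))  # standard BPE: ['hello']
print(bpe_dropout_encode("hello", merges, p=1.0))  # all merges dropped: characters
```

With `0 < p < 1` the same word gets segmented differently across epochs, which is the regularization effect described in the paper.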
Metrics from the paper:
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4039/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4039/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4038/comments | https://api.github.com/repos/huggingface/transformers/issues/4038/events | https://github.com/huggingface/transformers/issues/4038 | 608,169,112 | MDU6SXNzdWU2MDgxNjkxMTI= | 4,038 | GPT-2 models are unpickable | {
"login": "Lawiss",
"id": 30115537,
"node_id": "MDQ6VXNlcjMwMTE1NTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/30115537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lawiss",
"html_url": "https://github.com/Lawiss",
"followers_url": "https://api.github.com/users/Lawiss/followers",
"following_url": "https://api.github.com/users/Lawiss/following{/other_user}",
"gists_url": "https://api.github.com/users/Lawiss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lawiss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lawiss/subscriptions",
"organizations_url": "https://api.github.com/users/Lawiss/orgs",
"repos_url": "https://api.github.com/users/Lawiss/repos",
"events_url": "https://api.github.com/users/Lawiss/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lawiss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I think this may come from the [activation functions](https://github.com/huggingface/transformers/blob/master/src/transformers/activations.py), some of which are jitted. This allows better performance.\r\n\r\nIn what situation would you use `pickle` rather than our API `save_pretrained` ?\r\n\r\n```py\r\nfrom transformers import GPT2DoubleHeadsModel\r\nimport pickle\r\nmodel = GPT2DoubleHeadsModel.from_pretrained(\"gpt2-medium\")\r\nmodel.save_pretrained(\"test\")\r\n```",
"Hi Lysandre,\r\n\r\nThanks for your answer !\r\nI use [Pytorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) which requires the model to be picklable (https://pytorch-lightning.readthedocs.io/en/stable/multi_gpu.html#make-model-picklable). I see from your link that if the PyTorch version is <1.4, the activation function is not jitted : \r\nhttps://github.com/huggingface/transformers/blob/fa49b9afeab5545f14b3661b35195b829fcf8ef5/src/transformers/activations.py#L32-L44\r\n\r\nI think i'll test again downgrading my PyTorch version to 1.3.1.",
"I understand, this is indeed an issue. We've had other several issues due to this, I think it would be better to revert to picklable option. Will open a PR with this objective in a bit.",
"Thank you for your responsiveness !",
"It should be fixed on master now. Could you let me know if running the code from the master branch solves your issue?",
"Yes, it works, thank you 👌",
"I am getting the same error trying to save AlbertForTokenClassification() model using `torch.save(model, 'temp.pt')`\r\nBut `torch.save(model.state_dict(), 'temp.pt')` works.",
"@paramansh Do you mind opening an issue with your specific problem, your software versions and code sample so that we may debug on our side? Thanks."
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts:
Hi,
I'm trying to train a GPT-2 Double Heads Model (based on your transfer-learning-conv-ai guide) using PyTorch Lightning. However, I have a problem when trying to train the model on the ddp distributed backend: the `GPT2DoubleHeadsModel` class seems to be unpicklable, and my training script fails with the following error:
`TypeError: can't pickle torch._C.ScriptFunction objects`
## To reproduce
Run:
```
from transformers import GPT2DoubleHeadsModel
import pickle
model = GPT2DoubleHeadsModel.from_pretrained("gpt2-medium")
pickle.dump(model, open("test.bin", "wb"))
```
The problem does not occur when using `bert-base-uncased`, for example. I tried to search for which part of the GPT-2 class contains `torch._C.ScriptFunction` objects, without success. Do you have an idea how to avoid this error?
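For what it's worth, here is a stdlib-only sketch of the failure mode (an illustrative analogy, not the actual `transformers` code — the lambda merely plays the role of a jitted `torch._C.ScriptFunction`): an object graph becomes unpicklable as soon as it contains something pickle can only serialize by reference.

```python
import pickle

def is_picklable(obj):
    try:
        pickle.dumps(obj)
        return True
    except (pickle.PicklingError, AttributeError, TypeError):
        return False

# Plain containers of numbers (think: weight tensors) pickle fine...
state = {"wte": [0.1, 0.2], "wpe": [0.3, 0.4]}
print(is_picklable(state))  # True

# ...but a model holding a function that pickle can only serialize by
# reference (here a lambda; in the issue, torch._C.ScriptFunction)
# makes the whole object graph unpicklable.
act = lambda x: 0.5 * x * (1 + x)  # stand-in for a jitted activation
print(is_picklable({"act": act, **state}))  # False
```

That is also why saving only the weights (e.g. `save_pretrained` or `torch.save(model.state_dict(), ...)`) keeps working: the jitted function is never part of what gets serialized.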
Thanks in advance.
- `transformers` version: 2.8
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.5 Cuda 10.2
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, ddp
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4038/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4037/comments | https://api.github.com/repos/huggingface/transformers/issues/4037/events | https://github.com/huggingface/transformers/issues/4037 | 608,122,227 | MDU6SXNzdWU2MDgxMjIyMjc= | 4,037 | torch num_samples=0 error on XLMnet @ run_language_modeling.py | {
"login": "ysig",
"id": 28439529,
"node_id": "MDQ6VXNlcjI4NDM5NTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/28439529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysig",
"html_url": "https://github.com/ysig",
"followers_url": "https://api.github.com/users/ysig/followers",
"following_url": "https://api.github.com/users/ysig/following{/other_user}",
"gists_url": "https://api.github.com/users/ysig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysig/subscriptions",
"organizations_url": "https://api.github.com/users/ysig/orgs",
"repos_url": "https://api.github.com/users/ysig/repos",
"events_url": "https://api.github.com/users/ysig/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysig/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi! Are you using a multi-GPU setup ?",
"No I am running it on a single GPU.",
"Several other threads discuss this issue: \r\n - Either your datasets are too small: https://github.com/kaushaltrivedi/fast-bert/issues/181#issuecomment-596462172 \r\n - Or you haven't set `--block_size`: https://github.com/huggingface/transformers/issues/2380#issuecomment-572762207 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
While fine-tuning an XLNet language model with:
```
python transformers/examples/run_language_modeling.py \
--output_dir=lms/xlnet_custom \
--model_type=xlnet \
--model_name_or_path=xlnet-base-cased \
--do_train \
--train_data_file=$TRAIN_FILE \
--per_gpu_train_batch_size 1 \
```
at the step after the features are produced, I get the error:
```
Traceback (most recent call last):
File "transformers/examples/run_language_modeling.py", line 284, in <module>
main()
File "transformers/examples/run_language_modeling.py", line 254, in main
trainer.train(model_path=model_path)
File "/home/ysig/miniconda3/envs/hug/lib/python3.7/site-packages/transformers/trainer.py", line 218, in train
train_dataloader = self.get_train_dataloader()
File "/home/ysig/miniconda3/envs/hug/lib/python3.7/site-packages/transformers/trainer.py", line 160, in get_train_dataloader
RandomSampler(self.train_dataset) if self.args.local_rank == -1 else DistributedSampler(self.train_dataset)
File "/home/ysig/miniconda3/envs/hug/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 94, in __init__
"value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0
```
My training file is raw text, a few MB in size.
I am running this with CUDA 10.2 in a Python 3.7.6 conda environment, with torch version 1.5.0.
Is there something I am possibly doing wrong?
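One plausible failure mode, sketched below with hypothetical token counts (the `num_examples` helper is an illustration, not the script's actual code): the script's `TextDataset` yields roughly one example per full `block_size` chunk and drops the remainder, and XLNet's tokenizer reports an effectively unbounded maximum length, so leaving `--block_size` unset can make a small corpus produce zero examples — exactly the `num_samples=0` the sampler complains about:

```python
# Simplified model of how run_language_modeling.py's TextDataset chunks a
# tokenized corpus. The token counts below are made up for illustration.
def num_examples(n_tokens, block_size):
    # roughly one example per full block; the trailing partial block is dropped
    return max(0, n_tokens // block_size)


corpus_tokens = 1000  # hypothetical: a small training file

# Without an explicit --block_size, the block can exceed the whole corpus:
print(num_examples(corpus_tokens, 100_000))  # 0 -> RandomSampler(num_samples=0)
print(num_examples(corpus_tokens, 128))      # 7 full blocks of 128 tokens
```

If that is the cause, passing an explicit `--block_size` (e.g. 128 or 512) when launching the script should avoid the empty dataset.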
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4037/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4036/comments | https://api.github.com/repos/huggingface/transformers/issues/4036/events | https://github.com/huggingface/transformers/issues/4036 | 608,104,069 | MDU6SXNzdWU2MDgxMDQwNjk= | 4,036 | BertForSequenceClassification producing same output during evaluation | {
"login": "drjosephliu",
"id": 22230085,
"node_id": "MDQ6VXNlcjIyMjMwMDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/22230085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drjosephliu",
"html_url": "https://github.com/drjosephliu",
"followers_url": "https://api.github.com/users/drjosephliu/followers",
"following_url": "https://api.github.com/users/drjosephliu/following{/other_user}",
"gists_url": "https://api.github.com/users/drjosephliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drjosephliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drjosephliu/subscriptions",
"organizations_url": "https://api.github.com/users/drjosephliu/orgs",
"repos_url": "https://api.github.com/users/drjosephliu/repos",
"events_url": "https://api.github.com/users/drjosephliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/drjosephliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, have you solved this problem?\r\n\r\nI have a similar problem. I'm doing a multi-classification task over 8 classes. My task is to classify the entity type of short texts(usually < 32, much shorter than yours). These classes are something like school name, company name, job title, etc. Besides, I'm using the tf2 version model `TFBertForSequenceClassification` for my task.\r\n\r\nHowever, at most of the time(99% just as you describe), my fine-tuned model gives a same output distribution over my 8 classes whatever text I feed.",
"Yes, turns out I was not constructing the model properly. All I had to do was call `\r\nmodel = BertForSequenceClassification.from_pretrained(config)` instead."
] | 1,588 | 1,590 | 1,590 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
I'm doing multi-class classification over 50 classes on the Reuters 5050 dataset. The task is to identify the author who wrote each text. The texts can be up to 1700 tokens long, but I'm truncating them down to 512.
I've noticed that during evaluation, the model 99% of the time outputs the same label regardless of the input. I say 99% because once in a while it outputs something different. This **does not** happen during training.
Things I've tried:
- Removed `~/.cache/torch/transformers/` directory
- Tested learning rates of `[2e-5, 1e-5, 5e-6, 1e-6, 1e-7, 1e-8]`
- Reducing batch size to 1
- Setting:
```
hidden_dropout_prob=0.5
'use_cached_eval_features': False,
'no_cache': True,
'overwrite_output_dir': True,
'reprocess_input_data': True,
```
**Tokenising and encoding:**
```
MAX_LEN = 512
def get_encodings(texts):
token_ids = []
for text in texts:
token_id = tokenizer.encode(text,
add_special_tokens=True,
max_length=MAX_LEN,
pad_to_max_length=True)
token_ids.append(token_id)
return token_ids
def get_attention_masks(padded_encodings):
attention_masks = []
for encoding in padded_encodings:
attention_mask = [int(token_id > 0) for token_id in encoding]
attention_masks.append(attention_mask)
return attention_masks
train_encodings = get_encodings(train_df.text.values)
train_attention_masks = get_attention_masks(train_encodings)
test_encodings = get_encodings(test_df.text.values)
test_attention_masks = get_attention_masks(test_encodings)
```
**Packing into datasets and dataloaders:**
```
batch_size = 1
train_data = TensorDataset(X_train, train_masks, y_train)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
validation_data = TensorDataset(X_test, test_masks, y_test)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
```
**Model setup:**
```
config = BertConfig.from_pretrained(
'bert-base-uncased',
num_labels = 50,
output_attentions = False,
output_hidden_states = False,
max_position_embeddings=MAX_LEN,
hidden_dropout_prob=0.5,
args={
'use_cached_eval_features': False,
'no_cache': True,
'overwrite_output_dir': True,
'reprocess_input_data': True,
}
)
model = BertForSequenceClassification(config)
model.to(device)
optimizer = AdamW(model.parameters(),
lr = 1e-5,
eps = 1e-8
)
epochs = 4
total_steps = len(train_dataloader) * epochs
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0,
num_training_steps = total_steps)
```
**Training:**
```
loss_values = []
print("Training...")
for epoch_i in range(0, epochs):
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
t0 = time.time()
total_loss = 0
model.train()
for step, batch in enumerate(train_dataloader):
if step % 40 == 0 and not step == 0:
elapsed = format_time(time.time() - t0)
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
b_texts = batch[0].to(device)
b_attention_masks = batch[1].to(device)
b_authors = batch[2].to(device)
model.zero_grad()
outputs = model(b_texts,
token_type_ids=None,
attention_mask=b_attention_masks,
labels=b_authors)
loss = outputs[0]
total_loss += loss.item()
loss.backward()
# Clip the norm of the gradients to 1.0.
# This is to help prevent the "exploding gradients" problem.
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
avg_train_loss = total_loss / len(train_dataloader)
# Store the loss value for plotting the learning curve.
loss_values.append(avg_train_loss)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epoch took: {:}".format(format_time(time.time() - t0)))
```
**Evaluation:**
```
print("")
print("Running Validation...")
t0 = time.time()
# Put the model in evaluation mode--the dropout layers behave differently
# during evaluation.
model.eval()
# Tracking variables
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
for batch in validation_dataloader:
b_texts = batch[0].to(device)
b_attention_masks = batch[1].to(device)
b_authors = batch[2].to(device)
# Telling the model not to compute or store gradients, saving memory and
# speeding up validation
with torch.no_grad():
# Forward pass, calculate logit predictions.
# This will return the logits rather than the loss because we have
# not provided labels.
# token_type_ids is the same as the "segment ids", which
# differentiates sentence 1 and 2 in 2-sentence tasks.
# The documentation for this `model` function is here:
# https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
outputs = model(b_texts,
token_type_ids=None,
attention_mask=b_attention_masks)
# Get the "logits" output by the model. The "logits" are the output
# values prior to applying an activation function like the softmax.
logits = outputs[0]
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
author_ids = b_authors.to('cpu').numpy()
print("Pred: {}, label: {}".format(np.argmax(logits).flatten(),
author_ids.flatten()))
# Calculate the accuracy for this batch of test sentences.
tmp_eval_accuracy = flat_accuracy(logits, author_ids)
# Accumulate the total accuracy.
eval_accuracy += tmp_eval_accuracy
# Track the number of batches
nb_eval_steps += 1
# Report the final accuracy for this validation run.
print(" Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps))
print(" Validation took: {:}".format(format_time(time.time() - t0)))
```
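The loop above calls a `flat_accuracy` helper that isn't defined in the snippet; a typical implementation — written here in plain Python rather than numpy, purely as an illustrative assumption about what that helper does — is:

```python
def flat_accuracy(logits, labels):
    """Fraction of rows whose argmax matches the label (plain-Python sketch)."""
    # argmax over each row of scores
    preds = [max(range(len(row)), key=row.__getitem__) for row in logits]
    correct = sum(int(p == l) for p, l in zip(preds, labels))
    return correct / len(labels)


print(flat_accuracy([[0.1, 0.9], [0.8, 0.2]], [1, 1]))  # 0.5
```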
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4036/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4035/comments | https://api.github.com/repos/huggingface/transformers/issues/4035/events | https://github.com/huggingface/transformers/pull/4035 | 608,102,922 | MDExOlB1bGxSZXF1ZXN0NDA5OTQ1MzU1 | 4,035 | Model card for roberta-base-squad2-covid | {
"login": "bogdankostic",
"id": 48713846,
"node_id": "MDQ6VXNlcjQ4NzEzODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/48713846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bogdankostic",
"html_url": "https://github.com/bogdankostic",
"followers_url": "https://api.github.com/users/bogdankostic/followers",
"following_url": "https://api.github.com/users/bogdankostic/following{/other_user}",
"gists_url": "https://api.github.com/users/bogdankostic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bogdankostic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bogdankostic/subscriptions",
"organizations_url": "https://api.github.com/users/bogdankostic/orgs",
"repos_url": "https://api.github.com/users/bogdankostic/repos",
"events_url": "https://api.github.com/users/bogdankostic/events{/privacy}",
"received_events_url": "https://api.github.com/users/bogdankostic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Great [dataset](https://github.com/deepset-ai/COVID-QA), thanks for sharing a model card\r\n\r\n[model card](https://huggingface.co/deepset/roberta-base-squad2-covid)"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4035/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4035",
"html_url": "https://github.com/huggingface/transformers/pull/4035",
"diff_url": "https://github.com/huggingface/transformers/pull/4035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4035.patch",
"merged_at": 1588102171000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4034/comments | https://api.github.com/repos/huggingface/transformers/issues/4034/events | https://github.com/huggingface/transformers/issues/4034 | 608,088,418 | MDU6SXNzdWU2MDgwODg0MTg= | 4,034 | 🐛 Saving TF model : Expected Operation, Variable, or Tensor, got None | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! We prefer using our own API for this, using `from_pretrained` which also takes care of the configuration:\r\n\r\n```py\r\nfrom transformers import TFElectraModel\r\n\r\nmodel = TFElectraModel.from_pretrained(\"google/electra-base-discriminator\")\r\n!mkdir swag\r\nmodel.save_pretrained(\"swag\")\r\n```\r\n\r\nIs there a reason why you would rather use `.save` rather than `.save_pretrained`?",
"I was using `.save()` because I made a custom model, with additional layers on top of Electra, and I want to save these layers as well.\r\n\r\nFinally, I simply sub-classed `TFElectraPreTrainedModel` and add my custom layers there, so I can use `.save_pretrained`.\r\n\r\nThanks for the help !",
"I am also facing similar issue.\r\nThe reason for using save method is i want to host the saved model using tensorflow model server",
"@nirajkale did you manage to solve this? I'm facing the same issue with other pre-trained."
] | 1,588 | 1,679 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
I'm training TFElectraModel. Training goes well, but when I try to save the model, I met the following error :
`TypeError: Expected Operation, Variable, or Tensor, got None`
## Minimal reproducible example
```
!pip install transformers
from transformers import TFElectraModel
model = TFElectraModel.from_pretrained("google/electra-base-discriminator")
!mkdir swag
model.save("swag")
```
## Related
#2336
https://github.com/tensorflow/tensorflow/issues/35432
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4034/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4033/comments | https://api.github.com/repos/huggingface/transformers/issues/4033/events | https://github.com/huggingface/transformers/pull/4033 | 608,077,939 | MDExOlB1bGxSZXF1ZXN0NDA5OTI1MjI3 | 4,033 | Model Card: gaochangkuan README.md | {
"login": "ScottishFold007",
"id": 36957508,
"node_id": "MDQ6VXNlcjM2OTU3NTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36957508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ScottishFold007",
"html_url": "https://github.com/ScottishFold007",
"followers_url": "https://api.github.com/users/ScottishFold007/followers",
"following_url": "https://api.github.com/users/ScottishFold007/following{/other_user}",
"gists_url": "https://api.github.com/users/ScottishFold007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ScottishFold007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ScottishFold007/subscriptions",
"organizations_url": "https://api.github.com/users/ScottishFold007/orgs",
"repos_url": "https://api.github.com/users/ScottishFold007/repos",
"events_url": "https://api.github.com/users/ScottishFold007/events{/privacy}",
"received_events_url": "https://api.github.com/users/ScottishFold007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"As discussed over email, can you move this file to https://huggingface.co/gaochangkuan/model_dir ?\r\n\r\nThanks!",
"> As discussed over email, can you move this file to https://huggingface.co/gaochangkuan/model_dir ?\r\n> \r\n> Thanks!\r\n\r\nOk, I will do it later~",
"I meant the file path of the README.md file :)\r\n\r\nI've done it on your branch directly. Thank you for sharing: [model page](https://huggingface.co/gaochangkuan/model_dir)."
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4033",
"html_url": "https://github.com/huggingface/transformers/pull/4033",
"diff_url": "https://github.com/huggingface/transformers/pull/4033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4033.patch",
"merged_at": 1588300019000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4032/comments | https://api.github.com/repos/huggingface/transformers/issues/4032/events | https://github.com/huggingface/transformers/pull/4032 | 607,972,768 | MDExOlB1bGxSZXF1ZXN0NDA5ODQyNTIy | 4,032 | [experimental] rename torch_device -> default_device | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4032/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4032",
"html_url": "https://github.com/huggingface/transformers/pull/4032",
"diff_url": "https://github.com/huggingface/transformers/pull/4032.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4032.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4031/comments | https://api.github.com/repos/huggingface/transformers/issues/4031/events | https://github.com/huggingface/transformers/pull/4031 | 607,963,374 | MDExOlB1bGxSZXF1ZXN0NDA5ODM1NDAz | 4,031 | [gitignore] fixtures created by unit tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=h1) Report\n> Merging [#4031](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4031 +/- ##\n=======================================\n Coverage 78.44% 78.44% \n=======================================\n Files 111 111 \n Lines 18518 18518 \n=======================================\n Hits 14527 14527 \n Misses 3991 3991 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4031/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=footer). Last update [4e817ff...361724e](https://codecov.io/gh/huggingface/transformers/pull/4031?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Which tests yield those fixtures? I just ran all the tests and I have none of those",
"It comes from adding a failing test so we don't need it. My bad."
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4031/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4031",
"html_url": "https://github.com/huggingface/transformers/pull/4031",
"diff_url": "https://github.com/huggingface/transformers/pull/4031.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4031.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4030/comments | https://api.github.com/repos/huggingface/transformers/issues/4030/events | https://github.com/huggingface/transformers/pull/4030 | 607,945,586 | MDExOlB1bGxSZXF1ZXN0NDA5ODIxNTY0 | 4,030 | CDN urls | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This also gives us a nice speedup on CI which is always nice"
] | 1,588 | 1,588 | 1,588 | MEMBER | null | Use CDN urls for weights (not for tokenizer or config files) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4030/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4030",
"html_url": "https://github.com/huggingface/transformers/pull/4030",
"diff_url": "https://github.com/huggingface/transformers/pull/4030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4030.patch",
"merged_at": 1588120034000
} |
https://api.github.com/repos/huggingface/transformers/issues/4029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4029/comments | https://api.github.com/repos/huggingface/transformers/issues/4029/events | https://github.com/huggingface/transformers/issues/4029 | 607,912,514 | MDU6SXNzdWU2MDc5MTI1MTQ= | 4,029 | TF2 - how to access intermediate layers of pre-trained bert model? | {
"login": "yagelardan",
"id": 30495788,
"node_id": "MDQ6VXNlcjMwNDk1Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/30495788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yagelardan",
"html_url": "https://github.com/yagelardan",
"followers_url": "https://api.github.com/users/yagelardan/followers",
"following_url": "https://api.github.com/users/yagelardan/following{/other_user}",
"gists_url": "https://api.github.com/users/yagelardan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yagelardan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yagelardan/subscriptions",
"organizations_url": "https://api.github.com/users/yagelardan/orgs",
"repos_url": "https://api.github.com/users/yagelardan/repos",
"events_url": "https://api.github.com/users/yagelardan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yagelardan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,593 | 1,593 | NONE | null | (I'm following [this](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) pytorch tutorial about BERT word embeddings, and in the tutorial the author is access the intermediate layers of the BERT model.)
What I want is to acess the last, lets say, 4 last layers of a single token input to the BERT model in tensorflow2 code. Because each layer output a vector of length 764 - so last 4 layers will have a shape of 4*768=3072 (for each token).
How can I implement in TF/keras/TF2, to get the intermediate layers of pretrained model for a token input? (later I will try to get the tokens for each token in a sentence, but for now one token is enough).
I'm using the huggingface's BERT model:
!pip install transformers
from transformers import (TFBertModel, BertTokenizer)
bert_model = TFBertModel.from_pretrained("bert-base-uncased") # Automatically loads the config
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
sentence_marked = "hello"
tokenized_text = bert_tokenizer.tokenize(sentence_marked)
indexed_tokens = bert_tokenizer.convert_tokens_to_ids(tokenized_text)
print (indexed_tokens)
>> prints [7592]
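To make the goal concrete, here is a minimal sketch of the concatenation step. It assumes the per-layer hidden states have already been obtained from the model (with hidden-state output enabled, e.g. via the model config) and uses NumPy stand-ins instead of real model outputs:

```python
import numpy as np

def concat_last_layers(hidden_states, num_layers=4):
    """Concatenate the last `num_layers` hidden states along the feature axis.

    `hidden_states` is a list of arrays shaped (batch, seq_len, hidden_size),
    in the order the model emits them (embeddings first, last layer last).
    """
    return np.concatenate(hidden_states[-num_layers:], axis=-1)

# Stand-in for BERT-base outputs: 13 states (embeddings + 12 encoder layers),
# batch of 1, a single token, hidden size 768.
fake_hidden_states = [np.random.rand(1, 1, 768) for _ in range(13)]
features = concat_last_layers(fake_hidden_states)
print(features.shape)  # (1, 1, 3072), i.e. 4 * 768 per token
```

The hypothetical `concat_last_layers` helper is only illustrative; with a real `TFBertModel` the list of hidden states would replace `fake_hidden_states`.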
The output is a token ([7592]), which should be the input for the BERT model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4029/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4028/comments | https://api.github.com/repos/huggingface/transformers/issues/4028/events | https://github.com/huggingface/transformers/issues/4028 | 607,912,405 | MDU6SXNzdWU2MDc5MTI0MDU= | 4,028 | CamembertForSequenceClassification not initialized from pretrained model | {
"login": "Hadjerkhd",
"id": 17832283,
"node_id": "MDQ6VXNlcjE3ODMyMjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/17832283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hadjerkhd",
"html_url": "https://github.com/Hadjerkhd",
"followers_url": "https://api.github.com/users/Hadjerkhd/followers",
"following_url": "https://api.github.com/users/Hadjerkhd/following{/other_user}",
"gists_url": "https://api.github.com/users/Hadjerkhd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hadjerkhd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hadjerkhd/subscriptions",
"organizations_url": "https://api.github.com/users/Hadjerkhd/orgs",
"repos_url": "https://api.github.com/users/Hadjerkhd/repos",
"events_url": "https://api.github.com/users/Hadjerkhd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hadjerkhd/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, do you mind showing how you're loading the model?",
"Hello, \r\nHere is how I'm loading the model : \r\n`model = CamembertForSequenceClassification.from_pretrained(\"https://s3.amazonaws.com/models.huggingface.co/bert/camembert-base-pytorch_model.bin\", config=bertconfig)`\r\n\r\nI'm giving the link to the model because when using the name 'camembert-base' it triggers an error saying it can't find the model at the following link : \r\n`https://s3.amazonaws.com/models.huggingface.co/bert/camembert-base/pytorch_model.bin`\r\n\r\nI just would like to point out that the class `CamembertForSequenceClassification` , I'm using, is a re-implementation for the HuggingFace one following [wang's](https://github.com/wang-h/bert-relation-classification) logic, to re-adapt if for relation extraction, as follow : \r\n```\r\n\r\nclass CamembertForSequenceClassification(RobertaForSequenceClassification):\r\n def __init__(self, config):\r\n super(CamembertForSequenceClassification, self).__init__(config)\r\n \r\n self.num_labels = config.num_labels\r\n self.l2_reg_lambda = config.l2_reg_lambda\r\n self.bert = CamembertModel(config)\r\n \r\n self.latent_entity_typing = config.latent_entity_typing\r\n self.latent_size = config.hidden_size\r\n self.latent_type = nn.Parameter(torch.FloatTensor(\r\n 3, config.hidden_size), requires_grad=True) \r\n \r\n if self.latent_entity_typing:\r\n classifier_size += config.hidden_size*2\r\n \r\n self.dropout = nn.Dropout(config.hidden_dropout_prob)\r\n \r\n classifier_size = config.hidden_size*3 \r\n self.classifier = nn.Linear(\r\n classifier_size, self.config.num_labels)\r\n\r\n self.init_weights()\r\n\r\n def forward(self, input_ids, token_type_ids=None, attention_mask=None, e1_mask=None, e2_mask=None, labels=None,\r\n position_ids=None, head_mask=None):\r\n \r\n outputs = self.bert(input_ids, attention_mask=attention_mask,token_type_ids=token_type_ids, position_ids=position_ids, \r\n head_mask=head_mask)\r\n\r\n pooled_output = outputs[1]\r\n sequence_output = outputs[0]\r\n\r\n def 
extract_entity(sequence_output, e_mask):\r\n extended_e_mask = e_mask.unsqueeze(1)\r\n extended_e_mask = torch.bmm(\r\n extended_e_mask.float(), sequence_output).squeeze(1)\r\n return extended_e_mask.float()\r\n \r\n e1_h = extract_entity(sequence_output, e1_mask)\r\n e2_h = extract_entity(sequence_output, e2_mask)\r\n \r\n context = self.dropout(pooled_output)\r\n pooled_output = torch.cat([context, e1_h, e2_h], dim=-1)\r\n\r\n logits = self.classifier(pooled_output)\r\n\r\n outputs = (logits,) + outputs[2:]\r\n\r\n device = logits.get_device()\r\n l2 = l2_loss(self.parameters())\r\n # print(l2)\r\n if device >= 0:\r\n l2 = l2.to(device)\r\n loss = l2 * self.l2_reg_lambda\r\n if labels is not None:\r\n if self.num_labels == 1:\r\n # We are doing regression\r\n loss_fct = MSELoss()\r\n loss += loss_fct(logits.view(-1), labels.view(-1))\r\n else:\r\n probabilities = F.softmax(logits, dim=-1)\r\n log_probs = F.log_softmax(logits, dim=-1)\r\n one_hot_labels = F.one_hot(labels, num_classes=self.num_labels)\r\n if device >= 0:\r\n one_hot_labels = one_hot_labels.to(device)\r\n\r\n dist = one_hot_labels[:, 1:].float() * log_probs[:, 1:]\r\n example_loss_except_other, _ = dist.min(dim=-1)\r\n per_example_loss = - example_loss_except_other.mean()\r\n\r\n rc_probabilities = probabilities - probabilities * one_hot_labels.float()\r\n second_pre, _ = rc_probabilities[:, 1:].max(dim=-1)\r\n rc_loss = - (1 - second_pre).log().mean()\r\n\r\n #print(loss, per_example_loss, rc_loss)\r\n loss += per_example_loss + 5 * rc_loss\r\n\r\n outputs = (loss,) + outputs\r\n\r\n return outputs \r\n````",
"any solution for my problem please ? am I loading the pre-trained model the wrong way ? ",
"That's because your `CamembertForSequenceClassification` does not conform to our model, regarding naming. \r\n\r\n`CamembertForSequenceClassification` inherits from `RobertaForSequenceClassification` and therefore the transformer model should be named `self.roberta`, and not `self.bert`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | Model I am using **CamembertForSequenceClassification** for a relation classification task.
I'm trying to adapt the model following the implementation proposed by [wang](https://github.com/wang-h/bert-relation-classification) for this [paper](https://arxiv.org/pdf/1905.08284.pdf)
When I try to load the pre trained model
`loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/camembert-base-pytorch_model.bin `
to fine-tune it on my task, I get the following INFO message. I find it weird because it seems that a lot of the model's params are not initialized (contrary to what I see for the BERT model):
`04/28/2020 00:56:31 - INFO - transformers.modeling_utils - Weights of CamembertForSequenceClassification not initialized from pretrained model: ['latent_type', 'classifier.weight', 'classifier.bias', 'bert.embeddings.word_embeddings.weight', 'bert.embeddings.position_embeddings.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert.embeddings.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.0.attention.self.query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.key.weight', 'bert.encoder.layer.0.attention.self.key.bias', 'bert.encoder.layer.0.attention.self.value.weight', 'bert.encoder.layer.0.attention.self.value.bias', 'bert.encoder.layer.0.attention.output.dense.weight', 'bert.encoder.layer.0.attention.output.dense.bias', 'bert.encoder.layer.0.attention.output.LayerNorm.weight', 'bert.encoder.layer.0.attention.output.LayerNorm.bias', 'bert.encoder.layer.0.intermediate.dense.weight', 'bert.encoder.layer.0.intermediate.dense.bias', 'bert.encoder.layer.0.output.dense.weight', 'bert.encoder.layer.0.output.dense.bias', 'bert.encoder.layer.0.output.LayerNorm.weight', 'bert.encoder.layer.0.output.LayerNorm.bias', 'bert.encoder.layer.1.attention.self.query.weight', 'bert.encoder.layer.1.attention.self.query.bias', 'bert.encoder.layer.1.attention.self.key.weight', 'bert.encoder.layer.1.attention.self.key.bias', 'bert.encoder.layer.1.attention.self.value.weight', 'bert.encoder.layer.1.attention.self.value.bias', 'bert.encoder.layer.1.attention.output.dense.weight', 'bert.encoder.layer.1.attention.output.dense.bias', 'bert.encoder.layer.1.attention.output.LayerNorm.weight', 'bert.encoder.layer.1.attention.output.LayerNorm.bias', 'bert.encoder.layer.1.intermediate.dense.weight', 'bert.encoder.layer.1.intermediate.dense.bias', 'bert.encoder.layer.1.output.dense.weight', 'bert.encoder.layer.1.output.dense.bias', 'bert.encoder.layer.1.output.LayerNorm.weight', 
'bert.encoder.layer.1.output.LayerNorm.bias', 'bert.encoder.layer.2.attention.self.query.weight', 'bert.encoder.layer.2.attention.self.query.bias', 'bert.encoder.layer.2.attention.self.key.weight', 'bert.encoder.layer.2.attention.self.key.bias', 'bert.encoder.layer.2.attention.self.value.weight', 'bert.encoder.layer.2.attention.self.value.bias', 'bert.encoder.layer.2.attention.output.dense.weight', 'bert.encoder.layer.2.attention.output.dense.bias', 'bert.encoder.layer.2.attention.output.LayerNorm.weight', 'bert.encoder.layer.2.attention.output.LayerNorm.bias', 'bert.encoder.layer.2.intermediate.dense.weight', 'bert.encoder.layer.2.intermediate.dense.bias', 'bert.encoder.layer.2.output.dense.weight', 'bert.encoder.layer.2.output.dense.bias', 'bert.encoder.layer.2.output.LayerNorm.weight', 'bert.encoder.layer.2.output.LayerNorm.bias', 'bert.encoder.layer.3.attention.self.query.weight', 'bert.encoder.layer.3.attention.self.query.bias', 'bert.encoder.layer.3.attention.self.key.weight', 'bert.encoder.layer.3.attention.self.key.bias', 'bert.encoder.layer.3.attention.self.value.weight', 'bert.encoder.layer.3.attention.self.value.bias', 'bert.encoder.layer.3.attention.output.dense.weight', 'bert.encoder.layer.3.attention.output.dense.bias', 'bert.encoder.layer.3.attention.output.LayerNorm.weight', 'bert.encoder.layer.3.attention.output.LayerNorm.bias', 'bert.encoder.layer.3.intermediate.dense.weight', 'bert.encoder.layer.3.intermediate.dense.bias', 'bert.encoder.layer.3.output.dense.weight', 'bert.encoder.layer.3.output.dense.bias', 'bert.encoder.layer.3.output.LayerNorm.weight', 'bert.encoder.layer.3.output.LayerNorm.bias', 'bert.encoder.layer.4.attention.self.query.weight', 'bert.encoder.layer.4.attention.self.query.bias', 'bert.encoder.layer.4.attention.self.key.weight', 'bert.encoder.layer.4.attention.self.key.bias', 'bert.encoder.layer.4.attention.self.value.weight', 'bert.encoder.layer.4.attention.self.value.bias', 
'bert.encoder.layer.4.attention.output.dense.weight', 'bert.encoder.layer.4.attention.output.dense.bias', 'bert.encoder.layer.4.attention.output.LayerNorm.weight', 'bert.encoder.layer.4.attention.output.LayerNorm.bias', 'bert.encoder.layer.4.intermediate.dense.weight', 'bert.encoder.layer.4.intermediate.dense.bias', 'bert.encoder.layer.4.output.dense.weight', 'bert.encoder.layer.4.output.dense.bias', 'bert.encoder.layer.4.output.LayerNorm.weight', 'bert.encoder.layer.4.output.LayerNorm.bias', 'bert.encoder.layer.5.attention.self.query.weight', 'bert.encoder.layer.5.attention.self.query.bias', 'bert.encoder.layer.5.attention.self.key.weight', 'bert.encoder.layer.5.attention.self.key.bias', 'bert.encoder.layer.5.attention.self.value.weight', 'bert.encoder.layer.5.attention.self.value.bias', 'bert.encoder.layer.5.attention.output.dense.weight', 'bert.encoder.layer.5.attention.output.dense.bias', 'bert.encoder.layer.5.attention.output.LayerNorm.weight', 'bert.encoder.layer.5.attention.output.LayerNorm.bias', 'bert.encoder.layer.5.intermediate.dense.weight', 'bert.encoder.layer.5.intermediate.dense.bias', 'bert.encoder.layer.5.output.dense.weight', 'bert.encoder.layer.5.output.dense.bias', 'bert.encoder.layer.5.output.LayerNorm.weight', 'bert.encoder.layer.5.output.LayerNorm.bias', 'bert.encoder.layer.6.attention.self.query.weight', 'bert.encoder.layer.6.attention.self.query.bias', 'bert.encoder.layer.6.attention.self.key.weight', 'bert.encoder.layer.6.attention.self.key.bias', 'bert.encoder.layer.6.attention.self.value.weight', 'bert.encoder.layer.6.attention.self.value.bias', 'bert.encoder.layer.6.attention.output.dense.weight', 'bert.encoder.layer.6.attention.output.dense.bias', 'bert.encoder.layer.6.attention.output.LayerNorm.weight', 'bert.encoder.layer.6.attention.output.LayerNorm.bias', 'bert.encoder.layer.6.intermediate.dense.weight', 'bert.encoder.layer.6.intermediate.dense.bias', 'bert.encoder.layer.6.output.dense.weight', 
'bert.encoder.layer.6.output.dense.bias', 'bert.encoder.layer.6.output.LayerNorm.weight', 'bert.encoder.layer.6.output.LayerNorm.bias', 'bert.encoder.layer.7.attention.self.query.weight', 'bert.encoder.layer.7.attention.self.query.bias', 'bert.encoder.layer.7.attention.self.key.weight', 'bert.encoder.layer.7.attention.self.key.bias', 'bert.encoder.layer.7.attention.self.value.weight', 'bert.encoder.layer.7.attention.self.value.bias', 'bert.encoder.layer.7.attention.output.dense.weight', 'bert.encoder.layer.7.attention.output.dense.bias', 'bert.encoder.layer.7.attention.output.LayerNorm.weight', 'bert.encoder.layer.7.attention.output.LayerNorm.bias', 'bert.encoder.layer.7.intermediate.dense.weight', 'bert.encoder.layer.7.intermediate.dense.bias', 'bert.encoder.layer.7.output.dense.weight', 'bert.encoder.layer.7.output.dense.bias', 'bert.encoder.layer.7.output.LayerNorm.weight', 'bert.encoder.layer.7.output.LayerNorm.bias', 'bert.encoder.layer.8.attention.self.query.weight', 'bert.encoder.layer.8.attention.self.query.bias', 'bert.encoder.layer.8.attention.self.key.weight', 'bert.encoder.layer.8.attention.self.key.bias', 'bert.encoder.layer.8.attention.self.value.weight', 'bert.encoder.layer.8.attention.self.value.bias', 'bert.encoder.layer.8.attention.output.dense.weight', 'bert.encoder.layer.8.attention.output.dense.bias', 'bert.encoder.layer.8.attention.output.LayerNorm.weight', 'bert.encoder.layer.8.attention.output.LayerNorm.bias', 'bert.encoder.layer.8.intermediate.dense.weight', 'bert.encoder.layer.8.intermediate.dense.bias', 'bert.encoder.layer.8.output.dense.weight', 'bert.encoder.layer.8.output.dense.bias', 'bert.encoder.layer.8.output.LayerNorm.weight', 'bert.encoder.layer.8.output.LayerNorm.bias', 'bert.encoder.layer.9.attention.self.query.weight', 'bert.encoder.layer.9.attention.self.query.bias', 'bert.encoder.layer.9.attention.self.key.weight', 'bert.encoder.layer.9.attention.self.key.bias', 'bert.encoder.layer.9.attention.self.value.weight', 
'bert.encoder.layer.9.attention.self.value.bias', 'bert.encoder.layer.9.attention.output.dense.weight', 'bert.encoder.layer.9.attention.output.dense.bias', 'bert.encoder.layer.9.attention.output.LayerNorm.weight', 'bert.encoder.layer.9.attention.output.LayerNorm.bias', 'bert.encoder.layer.9.intermediate.dense.weight', 'bert.encoder.layer.9.intermediate.dense.bias', 'bert.encoder.layer.9.output.dense.weight', 'bert.encoder.layer.9.output.dense.bias', 'bert.encoder.layer.9.output.LayerNorm.weight', 'bert.encoder.layer.9.output.LayerNorm.bias', 'bert.encoder.layer.10.attention.self.query.weight', 'bert.encoder.layer.10.attention.self.query.bias', 'bert.encoder.layer.10.attention.self.key.weight', 'bert.encoder.layer.10.attention.self.key.bias', 'bert.encoder.layer.10.attention.self.value.weight', 'bert.encoder.layer.10.attention.self.value.bias', 'bert.encoder.layer.10.attention.output.dense.weight', 'bert.encoder.layer.10.attention.output.dense.bias', 'bert.encoder.layer.10.attention.output.LayerNorm.weight', 'bert.encoder.layer.10.attention.output.LayerNorm.bias', 'bert.encoder.layer.10.intermediate.dense.weight', 'bert.encoder.layer.10.intermediate.dense.bias', 'bert.encoder.layer.10.output.dense.weight', 'bert.encoder.layer.10.output.dense.bias', 'bert.encoder.layer.10.output.LayerNorm.weight', 'bert.encoder.layer.10.output.LayerNorm.bias', 'bert.encoder.layer.11.attention.self.query.weight', 'bert.encoder.layer.11.attention.self.query.bias', 'bert.encoder.layer.11.attention.self.key.weight', 'bert.encoder.layer.11.attention.self.key.bias', 'bert.encoder.layer.11.attention.self.value.weight', 'bert.encoder.layer.11.attention.self.value.bias', 'bert.encoder.layer.11.attention.output.dense.weight', 'bert.encoder.layer.11.attention.output.dense.bias', 'bert.encoder.layer.11.attention.output.LayerNorm.weight', 'bert.encoder.layer.11.attention.output.LayerNorm.bias', 'bert.encoder.layer.11.intermediate.dense.weight', 'bert.encoder.layer.11.intermediate.dense.bias', 
'bert.encoder.layer.11.output.dense.weight', 'bert.encoder.layer.11.output.dense.bias', 'bert.encoder.layer.11.output.LayerN`
Any thoughts about that?
Thanks in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4028/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4027/comments | https://api.github.com/repos/huggingface/transformers/issues/4027/events | https://github.com/huggingface/transformers/pull/4027 | 607,905,442 | MDExOlB1bGxSZXF1ZXN0NDA5Nzg4OTI4 | 4,027 | Hoist bert model tester for patrick | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=h1) Report\n> Merging [#4027](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.07%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4027 +/- ##\n==========================================\n- Coverage 78.44% 78.37% -0.08% \n==========================================\n Files 111 111 \n Lines 18518 18518 \n==========================================\n- Hits 14527 14513 -14 \n- Misses 3991 4005 +14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `90.46% <0.00%> (-2.31%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=footer). Last update [4e817ff...7189fff](https://codecov.io/gh/huggingface/transformers/pull/4027?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4027/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4027",
"html_url": "https://github.com/huggingface/transformers/pull/4027",
"diff_url": "https://github.com/huggingface/transformers/pull/4027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4027.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4026/comments | https://api.github.com/repos/huggingface/transformers/issues/4026/events | https://github.com/huggingface/transformers/issues/4026 | 607,903,160 | MDU6SXNzdWU2MDc5MDMxNjA= | 4,026 | Training a new language model with custom loss and input representation | {
"login": "shenkev",
"id": 5405172,
"node_id": "MDQ6VXNlcjU0MDUxNzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5405172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shenkev",
"html_url": "https://github.com/shenkev",
"followers_url": "https://api.github.com/users/shenkev/followers",
"following_url": "https://api.github.com/users/shenkev/following{/other_user}",
"gists_url": "https://api.github.com/users/shenkev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shenkev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shenkev/subscriptions",
"organizations_url": "https://api.github.com/users/shenkev/orgs",
"repos_url": "https://api.github.com/users/shenkev/repos",
"events_url": "https://api.github.com/users/shenkev/events{/privacy}",
"received_events_url": "https://api.github.com/users/shenkev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We recently implemented a new `Trainer` which should allow to easily change the training loop. We don't have example scripts showing how to override this yet. Here's the [trainer](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) file.\r\n\r\nIf I wanted to modify the way the loss is handled, what I would do is create a specific trainer that inherits from `Trainer`. I would then simply override the `_training_step` method to handle the loss/losses.\r\n\r\nIn order to modify the input representation to build inputs made up of three sequences, the best would be for you to create a Dataset similar to [`TextDataset`](https://github.com/huggingface/transformers/blob/fa49b9afeab5545f14b3661b35195b829fcf8ef5/src/transformers/data/datasets/language_modeling.py#L16), which builds your inputs as you wish.\r\n\r\nYou can then modify the `run_language_modeling.py` file to use your dataset. Let me know if I can help further!",
"Hi, this is super useful advice for getting me started! After looking at the files you pointed out, it seems like in order for me to implement the input representation and custom loss function, I need to modify transformers.modeling_bert.py.\r\n\r\nI have 2 questions.\r\n\r\n1. If I implement my own local version of modeling_bert.py, how should I instantiate the BertForMaskedLM class? The way the example does it is with AutoModelWithLMHead.from_pretrained - this obscures how to actually instantiate a particular model class.\r\n\r\n2. For concatenating the 3 sequences in the input, how would I make sure a [SEP] token is inserted between each sequence? My line_by_line data file looks as follows:\r\n\r\nsequence 1 \\t sequence 2 \\t sequence 3 \\n\r\nsequence 1 \\t sequence 2 \\t sequence 3 \\n\r\nsequence 1 \\t sequence 2 \\t sequence 3 \\n\r\nsequence 1 \\t sequence 2 \\t sequence 3 \\n\r\n.\r\n.\r\n.\r\n\r\nI think my desired input looks like this:\r\n\r\n[sequence 1's tokens] [sep] [sequence 2's tokens] [sep] [sequence 3's tokens]\r\n\r\nand I'd like to apply position embedding to each sequence 1, 2, 3.",
"I don't think you would have to modify the `modeling_bert.py` file. You may be thinking that because the models like `BertForMaskedLM` can [compute the losses](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L947-L958) if you give them labels.\r\n\r\nHowever, all those classes only compute the losses if you hand them the `labels`, and the loss function cannot be changed. I don't know what is your specific use-case, but if you want to use a custom loss function you could retrieve the model's hidden states, and compute your loss then (outside the model). The tensor containing the model's last hidden states is the first value in the model's output tuple if you don't specify labels.\r\n\r\nSeeing as the way you construct your dataset is different from the \"traditional\" way of handling sequences, I think you would have to build your own encoding method relying on `encode`, `encode_plus` or `batch_encode_plus`. Unfortunately, the logic we have for building inputs is very specific to the model's pre-trainings and sequence classification, so these work only for two sequences.\r\n\r\nWe did discuss at one point handling more than 2 sequences when building inputs but never got to it.\r\n\r\nOur models handle creating position embeddings on their own so except if you use a different type of positional embeddings than the model's, you shouldn't have to do anything to handle those.",
"Thanks for the pointers. I'll have an attempt at implementing encode/encode_plus/batch_encode_plus. One question before I do so.\r\n\r\nIt seems like a lot of changes have been made in the previous 3 weeks since 2.8.0 came out. These changes seem to be affecting the files I want to manipulate. Specifically, I don't think trainer.py was even used by run_language_modeling.py 3 weeks ago.\r\n\r\nDo you recommend moving forward with my project using the latest code changes on the master branch, or using the March 02 2020 snapshot (which I'm guessing is the 2.8.0 release snapshot)?\r\n\r\nThe files you referred me to were all on master. It seems like you can't run them unless transformer is installed from source (pip install version isn't compatible). I'm a bit concerned with using master - I tried training a tokenizer on it and it seemed slower which impleis the latest changes don't seem to have gone through thorough testing.",
"Hi, I've thought over your advice a bit more and I think there's an easier solution. Suppose the 3 sequences of my input have disjoint vocabulary (I think this is a decent assumption for my particular dataset/usecase).\r\n\r\nE.g. each line is (seq1, seq2, seq3). seq1 is english, seq2 is french, seq3 is spanish.\r\n\r\nCould I just train 3 different tokenizers and tell BertForMaskedLM the total vocab size is the sum of the 3 tokenizer's vocab sizes?\r\n\r\nI realized there's the token_type_ids parameter in the BertEmbeddings which has been implemented for an arbitrary value of config.type_vocab_size. It seems like I can then just set config.type_vocab_size=3 and pass in token_type_ids=[0, 0, ... 1, 1, ... 2, 2].\r\n\r\nDoes this seem reasonable?\r\n\r\nThanks so much for your help!",
"Indeed, there has been a lot of changes in the last few weeks! Since the last release, we've moved to a new `Trainer` class, which abstracts most of the code that isn't especially important for users (fp16, multi-GPU, gradient accumulation etc). We've also completely moved to using `tokenizers`, which uses rust as a backend for tokenizing. Until now the support was ok, now it's fully supported.\r\n\r\nYou would use the previous snapshot only if you want to have a script that does not rely on any abstraction. The previous scripts were mostly standalone, which means that it would be a good learning experience to understand every small detail regarding training. However, it may be lengthy to adapt to your particular usecase. That's what we're trying to fix with the new `Trainer`, where re-implementing a few methods is simple.\r\n\r\nI find it weird that you tried to train a tokenizer on `master` and it was slower. We **do** test thoroughly, and the new tokenizers have undergone a lot of trial and error and a lot of tests to be in the position they are now.\r\n\r\nI think training three different tokenizers would work, but keep in mind that this would require a lot of memory. The embeddings take up a big portion of your model's memory, so beware of the total vocabulary size. \r\n\r\nYou would also need to think about how you want to separate the three tokenizers, and especially how to make sure you have no overlap. Using separate tokenizers for each sequence, and then shifting the token indices would probably be the most robust (cc @n1t0).",
"Yup, that's the implementation I had in mind for separating the tokenizers!\r\n\r\nAs a new user, I think the new abstractions that were introduced makes calling the API/running the script a lot easier but it obscures some of the underlying code - especially someone who doesn't have experience with the abstraction libraries you are using. I think I will go with the previous snapshot and probably switch over to master if I get stuck.\r\n\r\nPlease disregard the comment on training the tokenizer is slower, I did something wrong on my end.",
"Indeed, we plan to add examples showing how to use the `Trainer` to custom tasks down the road!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | # ❓ Questions & Help
I'm following https://huggingface.co/blog/how-to-train, which describes an overview of training a new language model. However, the file the guide points to, run_language_modeling.py, abstracts away a lot of things. It's not clear if it's possible to train with a custom loss/input representation.
For example, what if I want to train using 3 sequences concatenated together instead of 2 as in the original Bert paper? (e.g. [context, context, question] or [sent1, sent2, sent3] where the task is whether sentences sent1, sent2, sent3 are 3 consecutive sentences or not.)
Do I need to modify the source code to achieve this? Is there any documentation to modify the underlying model or loss functions? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4026/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4026/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4025/comments | https://api.github.com/repos/huggingface/transformers/issues/4025/events | https://github.com/huggingface/transformers/pull/4025 | 607,903,107 | MDExOlB1bGxSZXF1ZXN0NDA5Nzg3MDY3 | 4,025 | Minor fix in Transformers Notebook | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=h1) Report\n> Merging [#4025](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2fade302ac55a289f00c61b0947a8e00dadf41fe&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4025 +/- ##\n=======================================\n Coverage 78.51% 78.51% \n=======================================\n Files 111 111 \n Lines 18486 18486 \n=======================================\n+ Hits 14514 14515 +1 \n+ Misses 3972 3971 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4025/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.02% <0.00%> (+0.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=footer). 
Last update [2fade30...3a3c460](https://codecov.io/gh/huggingface/transformers/pull/4025?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Perfekt, vielen Dank haha :-) "
] | 1,588 | 1,588 | 1,588 | COLLABORATOR | null | Minor fix for the German example sentence in notebook :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4025/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4025",
"html_url": "https://github.com/huggingface/transformers/pull/4025",
"diff_url": "https://github.com/huggingface/transformers/pull/4025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4025.patch",
"merged_at": 1588057945000
} |
https://api.github.com/repos/huggingface/transformers/issues/4024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4024/comments | https://api.github.com/repos/huggingface/transformers/issues/4024/events | https://github.com/huggingface/transformers/pull/4024 | 607,899,604 | MDExOlB1bGxSZXF1ZXN0NDA5Nzg0MjMw | 4,024 | Fix for #3846 | {
"login": "lukovnikov",
"id": 1732910,
"node_id": "MDQ6VXNlcjE3MzI5MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukovnikov",
"html_url": "https://github.com/lukovnikov",
"followers_url": "https://api.github.com/users/lukovnikov/followers",
"following_url": "https://api.github.com/users/lukovnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions",
"organizations_url": "https://api.github.com/users/lukovnikov/orgs",
"repos_url": "https://api.github.com/users/lukovnikov/repos",
"events_url": "https://api.github.com/users/lukovnikov/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukovnikov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=h1) Report\n> Merging [#4024](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ab90353f1abfd15f8d21f99395658d060679a08c&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4024 +/- ##\n==========================================\n- Coverage 78.01% 77.93% -0.09% \n==========================================\n Files 114 114 \n Lines 18671 18671 \n==========================================\n- Hits 14566 14551 -15 \n- Misses 4105 4120 +15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.36% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `90.47% <0.00%> (-2.47%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=footer). Last update [ab90353...333778c](https://codecov.io/gh/huggingface/transformers/pull/4024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This looks good to me :-) Thanks for the PR! What do you think @thomwolf @n1t0 ?",
"Hi @lukovnikov,\r\n\r\nwas the last commit by accident? I don't think that this one is needed to fix the issue no?",
"Thanks for pointing that out, should be good now."
] | 1,588 | 1,589 | 1,589 | CONTRIBUTOR | null | Fix for #3846 . PretrainedTokenizer mapped " do not" to " don't" when .decode(...) is called. Removed the " do not" --> " don't" mapping from clean_up_tokenization(...). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4024/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4024",
"html_url": "https://github.com/huggingface/transformers/pull/4024",
"diff_url": "https://github.com/huggingface/transformers/pull/4024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4024.patch",
"merged_at": 1589373178000
} |
https://api.github.com/repos/huggingface/transformers/issues/4023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4023/comments | https://api.github.com/repos/huggingface/transformers/issues/4023/events | https://github.com/huggingface/transformers/issues/4023 | 607,861,323 | MDU6SXNzdWU2MDc4NjEzMjM= | 4,023 | TFBert: Out of memory error when acting on a strided slice of input. | {
"login": "codeninja",
"id": 14914,
"node_id": "MDQ6VXNlcjE0OTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/14914?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codeninja",
"html_url": "https://github.com/codeninja",
"followers_url": "https://api.github.com/users/codeninja/followers",
"following_url": "https://api.github.com/users/codeninja/following{/other_user}",
"gists_url": "https://api.github.com/users/codeninja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codeninja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codeninja/subscriptions",
"organizations_url": "https://api.github.com/users/codeninja/orgs",
"repos_url": "https://api.github.com/users/codeninja/repos",
"events_url": "https://api.github.com/users/codeninja/events{/privacy}",
"received_events_url": "https://api.github.com/users/codeninja/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): TFBertModel & TFBertForSequenceClassification
Language I am using the model on (English, Chinese ...): EN
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## Code To reproduce
```
batch_size = 10
window = 50
# seq_len = 512 # not good to override unless the sequence truncating is disabled.
from transformers import TFBertModel, TFBertForSequenceClassification, BertTokenizer
# configuration = BertConfig()
def build_model(num_labels = 10, max_seq_len=seq_len, use_logits=True, batch_size = batch_size, label_size = None):
print(f"Number of labels: {num_labels}")
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-06, clipnorm=1.0)
loss = tf.keras.losses.CategoricalCrossentropy(from_logits=use_logits)
macc = tf.keras.metrics.CategoricalAccuracy('accuracy')
print(f"Optimizer used: {optimizer}")
print(f"Loss used: {loss}")
print(f"Acc used: {macc}")
num_labels = len(unique_classes) # 9 unique classes
bert_config = BertConfig.from_pretrained("bert-base-cased",
num_labels=len(unique_classes),
output_hidden_states=False,
output_attentions=True)
bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", config=bert_config)
# [ ENCODINGS, ATTN_MASK]
agent_input = Input(shape=[2, seq_len], batch_size=batch_size, name='agent_input', dtype='int32')
print("Agent Input:",agent_input)
print('Stacking Inputs')
agent_encodings = agent_input[...,0,:]
agent_attn = agent_input[...,1,:]
print('Agent Encodings:', agent_encodings)
print('Agent Attn: ', agent_attn)
print("Building Bert Model")
agent_outputs = bert_model([agent_encodings, agent_attn])
agent_predictions = agent_outputs[0]
agent_attn = agent_outputs[1]
print('agent_outputs', agent_outputs)
model = Model(inputs=agent_input, outputs=agent_predictions)
model.compile(optimizer=optimizer, loss=loss, metrics=[macc])
return model
#_________________________
print('shape for train data', agent_encodings_train.shape)
batch_size = 100
bert_model = build_model(len(unique_classes), seq_len, True, batch_size)
bert_model.summary()
#_________________________
bert_model.evaluate(agent_encodings_train[:100], agent_class_train[:100], batch_size=batch_size)
```


agent_encodings_train contains a (batch_size, 2, sequence_len) shape of tensors containing seq_len encoded bert tokens and seq_len bert attention tokens.
I am slicing them inside the model into the encoding and attention-mask parts and passing them to BERT. You will notice in the summary that there are 2 strided slices being passed to BERT. The model compiles and evaluates, but when calling model.fit I constantly run into memory problems, as seen below.
```
bert_model.summary()
history = bert_model.fit(agent_encodings_train[:100], agent_class_train[:100],
epochs=1,
batch_size=batch_size,
class_weights=class_weights,
shuffle=False)
```
```
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-115-712b9a078eba> in <module>
10 # validation_data=(agent_encodings_test, agent_class_test),
11 class_weights=class_weights,
---> 12 shuffle=False)
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
63 def _method_wrapper(self, *args, **kwargs):
64 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 65 return method(self, *args, **kwargs)
66
67 # Running inside `run_distribute_coordinator` already.
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
781 batch_size=batch_size):
782 callbacks.on_train_batch_begin(step)
--> 783 tmp_logs = train_function(iterator)
784 # Catch OutOfRangeError for Datasets of unknown size.
785 # This blocks until the batch has finished executing.
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\def_function.py in __call__(self, *args, **kwds)
577 xla_context.Exit()
578 else:
--> 579 result = self._call(*args, **kwds)
580
581 if tracing_count == self._get_tracing_count():
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
641 # Lifting succeeded, so variables are initialized and we can run the
642 # stateless function.
--> 643 return self._stateless_fn(*args, **kwds)
644 else:
645 canon_args, canon_kwds = \
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\function.py in __call__(self, *args, **kwargs)
2418 with self._lock:
2419 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 2420 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
2421
2422 @property
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\function.py in _filtered_call(self, args, kwargs)
1663 if isinstance(t, (ops.Tensor,
1664 resource_variable_ops.BaseResourceVariable))),
-> 1665 self.captured_inputs)
1666
1667 def _call_flat(self, args, captured_inputs, cancellation_manager=None):
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
1744 # No tape is watching; skip to running the function.
1745 return self._build_call_outputs(self._inference_function.call(
-> 1746 ctx, args, cancellation_manager=cancellation_manager))
1747 forward_backward = self._select_forward_and_backward_functions(
1748 args,
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\function.py in call(self, ctx, args, cancellation_manager)
596 inputs=args,
597 attrs=attrs,
--> 598 ctx=ctx)
599 else:
600 outputs = execute.execute_with_cancellation(
~\Anaconda3\envs\cat2_nightly\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:
ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[100,86,3072] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/activation/truediv (defined at C:\Users\codeninja\Anaconda3\envs\cat2_nightly\lib\site-packages\transformers\modeling_tf_bert.py:63) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[clip_by_norm_2/truediv/_30]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[100,86,3072] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/activation/truediv (defined at C:\Users\codeninja\Anaconda3\envs\cat2_nightly\lib\site-packages\transformers\modeling_tf_bert.py:63) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_253267]
Errors may have originated from an input operation.
Input Source operations connected to node model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/activation/truediv:
model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/dense/BiasAdd (defined at C:\Users\codeninja\Anaconda3\envs\cat2_nightly\lib\site-packages\transformers\modeling_tf_bert.py:319)
Input Source operations connected to node model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/activation/truediv:
model_11/tf_bert_for_sequence_classification_31/bert/encoder/layer_._0/intermediate/dense/BiasAdd (defined at C:\Users\codeninja\Anaconda3\envs\cat2_nightly\lib\site-packages\transformers\modeling_tf_bert.py:319)
Function call stack:
train_function -> train_function
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env
` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
[env.txt](https://github.com/huggingface/transformers/files/4542124/env.txt)
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4023/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4022/comments | https://api.github.com/repos/huggingface/transformers/issues/4022/events | https://github.com/huggingface/transformers/issues/4022 | 607,836,126 | MDU6SXNzdWU2MDc4MzYxMjY= | 4,022 | NER Pipeline with CamemBERT not showing entities | {
"login": "laubil",
"id": 64003665,
"node_id": "MDQ6VXNlcjY0MDAzNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/64003665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laubil",
"html_url": "https://github.com/laubil",
"followers_url": "https://api.github.com/users/laubil/followers",
"following_url": "https://api.github.com/users/laubil/following{/other_user}",
"gists_url": "https://api.github.com/users/laubil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laubil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laubil/subscriptions",
"organizations_url": "https://api.github.com/users/laubil/orgs",
"repos_url": "https://api.github.com/users/laubil/repos",
"events_url": "https://api.github.com/users/laubil/events{/privacy}",
"received_events_url": "https://api.github.com/users/laubil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this is because Camembert has not been fine-tuned on a NER task yet. \r\n\r\nIn the documentation they specify : \"The models that this pipeline (ie. ner) can use are models that have been fine-tuned on a token classification task. See the up-to-date list of available models on huggingface.co/models.\"\r\n\r\nIf you need to use Camembert for a named entity recognition task you must wait for a model to be uploaded or you can fine-tune it on your own/open data. ",
"I had missed that point. Thanks a lot!",
"@ColleterVi Do you have any resources on how to fine-tune CamemBERT for a NER task on a specific domain dataset?",
"@MouadBH I don't have a link but you can just follow the three steps\r\n\r\n\r\n## Step 1: Create config object according to dataset labels\r\nfrom transformers import AutoConfig, AutoTokenizer, AutoModelForTokenClassification\r\n\r\nconfig = AutoConfig.from_pretrained(\"Jean-Baptiste/camembert-base\") \r\nlabel2id = {\r\n \"I-LOC\": 1,\r\n \"I-MISC\": 3,\r\n \"I-ORG\": 4,\r\n \"I-PER\": 2,\r\n \"O\": 0,\r\n}\r\nid2label = {y:x for x,y in label2id.items()}\r\nconfig.__setattr__(\"id2label\", id2label)\r\nconfig.__setattr__(\"label2id\", label2id)\r\nconfig.__setattr__(\"num_labels\", 5)\r\n\r\n## Step 2: Change the tokenizer config attribute\r\ntokenizer = AutoTokenizer.from_pretrained(\"camembert-base\")\r\ntokenizer.config = config\r\n\r\n## Step 3: Change the model config attribute\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"Jean-Baptiste/camembert-base\", num_labels=6)\r\nmodel.config = config"
] | 1,588 | 1,643 | 1,588 | NONE | null | Hi,
Using the NER pipeline with CamemBERT, I am only getting "LABEL_0" and "LABEL_1" as entities.
```
from transformers import pipeline
import torch
nlp = pipeline(task='ner', model="camembert-base", tokenizer="camembert-base", framework='pt', device=0)
sequence = "Lundi, David ira au magasin Printemps de Lille pour acheter du vin."
print(nlp(sequence))
```
output:
`[{'word': '<s>', 'score': 0.5307311415672302, 'entity': 'LABEL_0'}, {'word': 'Lundi', 'score': 0.537407636642456, 'entity': 'LABEL_1'}, {'word': ',', 'score': 0.5144394040107727, 'entity': 'LABEL_1'}, {'word': 'David', 'score': 0.5270906090736389, 'entity': 'LABEL_1'}, {'word': 'ira', 'score': 0.5355848073959351, 'entity': 'LABEL_1'}, {'word': 'au', 'score': 0.5498790740966797, 'entity': 'LABEL_1'}, {'word': 'magasin', 'score': 0.5076472163200378, 'entity': 'LABEL_1'}, {'word': 'Printemps', 'score': 0.530289351940155, 'entity': 'LABEL_1'}, {'word': 'de', 'score': 0.5026782155036926, 'entity': 'LABEL_0'}, {'word': 'Lille', 'score': 0.5144190192222595, 'entity': 'LABEL_1'}, {'word': 'pour', 'score': 0.5344067215919495, 'entity': 'LABEL_1'}, {'word': 'acheter', 'score': 0.550661563873291, 'entity': 'LABEL_1'}, {'word': 'du', 'score': 0.5307605266571045, 'entity': 'LABEL_1'}, {'word': 'vin', 'score': 0.5279666781425476, 'entity': 'LABEL_1'}, {'word': '.', 'score': 0.5378196835517883, 'entity': 'LABEL_0'}, {'word': '</s>', 'score': 0.523155927658081, 'entity': 'LABEL_0'}]
`
Entity only takes the LABEL_0 and LABEL_1 values. I would have expected entity to be: "Person", "Location", "Organisation" etc or something along those lines.
set up:
transformers: 2.5.0 (similar outcomes with versions 2.7 and 2.8, but for some reason the word tagging is cleaner in 2.5, a "_" is added in front of the words in 2.7 and 2.8)
Python: 3.6.9
torch: 1.5.0
Linux Ubuntu 18.04
Many thanks for your help! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4022/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4021/comments | https://api.github.com/repos/huggingface/transformers/issues/4021/events | https://github.com/huggingface/transformers/issues/4021 | 607,804,668 | MDU6SXNzdWU2MDc4MDQ2Njg= | 4,021 | T5 Tokenization of unique masked tokens (<extra_id_1>) is incorrect | {
"login": "mansimov",
"id": 1727860,
"node_id": "MDQ6VXNlcjE3Mjc4NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1727860?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansimov",
"html_url": "https://github.com/mansimov",
"followers_url": "https://api.github.com/users/mansimov/followers",
"following_url": "https://api.github.com/users/mansimov/following{/other_user}",
"gists_url": "https://api.github.com/users/mansimov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansimov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansimov/subscriptions",
"organizations_url": "https://api.github.com/users/mansimov/orgs",
"repos_url": "https://api.github.com/users/mansimov/repos",
"events_url": "https://api.github.com/users/mansimov/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansimov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I tried the following code:\r\n```python\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-base\")\r\n\r\ntext = \"The dog <extra_id_1> in the park\"\r\ntokenized_text = tokenizer.tokenize(text)\r\nprint (tokenized_text)\r\n```\r\n\r\nWith the same settings, it works fine with the following output:\r\n\r\n```python\r\n['▁The', '▁dog', '<extra_id_1>', '▁in', '▁the', '▁park']\r\n```\r\n\r\nFYI, I just installed transformers on Google Colab using `pip install transformers`.",
"@girishponkiya thanks for comment!\r\n\r\nwhich version of tokenizers and transformers are you using ?",
"torch: 1.5.0+cu101\r\ntransformers: 2.8.0\r\ntokenizers: 0.7.0",
"I tried with the following environment as well:\r\n\r\n- torch: 1.5.0+cu101\r\n- transformers: 2.8.0\r\n- tokenizers: **0.5.2**\r\n\r\n..and got the following output:\r\n```\r\n['▁The', '▁dog', '<extra_id_1>', '▁in', '▁the', '▁park']\r\n```",
"We found the same problem.\r\n\r\nBy running git bisect, the following commit seems to raise this:\r\n\r\n```\r\n* commit 96ab75b8dd48a9384a74ba4307a4ebfb197343cd\r\n| Author: Funtowicz Morgan <[email protected]>\r\n| Date: Mon Apr 6 22:29:15 2020 +0000\r\n(Pull Request # 3185)\r\n```\r\n\r\nIn src/transformers/tokenization_utils.py # 272 (at the current master, Line 500),\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L500\r\nDid you forget \"setattr(self, key, value)\" after the assert?\r\n\r\nSee:\r\nhttps://github.com/huggingface/transformers/commit/96ab75b8dd48a9384a74ba4307a4ebfb197343cd#diff-66606105b55d0ec62ae34112ea3e3d20R272\r\n(please scroll down tokenization_utils.py and \"Load diff\")\r\n\r\nBest,\r\n\r\nP.S.) bisect results\r\n```\r\n$ git bisect log\r\ngit bisect start\r\n# bad: [b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8] Fix bug in examples: double wrap into DataParallel during eval\r\ngit bisect bad b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8\r\n# good: [e52d1258e010b88d3507a4f527c6201616c119ad] Fix RoBERTa/XLNet Pad Token in run_multiple_choice.py (#3631)\r\ngit bisect good e52d1258e010b88d3507a4f527c6201616c119ad\r\n# bad: [c59b1e682d6ebaf7295c63418d4570228904e690] [examples] unit test for run_bart_sum (#3544)\r\ngit bisect bad c59b1e682d6ebaf7295c63418d4570228904e690\r\n# bad: [31baeed614bf7f65aafde545f20a95e84cd293b4] Update quotes\r\ngit bisect bad 31baeed614bf7f65aafde545f20a95e84cd293b4\r\n# bad: [e344e3d4021421ec0d631d076daf17f8a4e82e69] [examples] SummarizationDataset cleanup (#3451)\r\ngit bisect bad e344e3d4021421ec0d631d076daf17f8a4e82e69\r\n# bad: [5aa8a278a3f13b8f83a0deb9b6d743f159cea23c] Fix roberta checkpoint conversion script (#3642)\r\ngit bisect bad 5aa8a278a3f13b8f83a0deb9b6d743f159cea23c\r\n# bad: [0a9d09b42a9c7c1ccc00da48486a1188078e8594] fixed TransfoXLLMHeadModel documentation (#3661)\r\ngit bisect bad 0a9d09b42a9c7c1ccc00da48486a1188078e8594\r\n# bad: [96ab75b8dd48a9384a74ba4307a4ebfb197343cd] Tokenizers v3.0.0 (#3185)\r\ngit bisect bad 96ab75b8dd48a9384a74ba4307a4ebfb197343cd\r\n# first bad commit: [96ab75b8dd48a9384a74ba4307a4ebfb197343cd] Tokenizers v3.0.0 (#3185)\r\n```",
"@takahiro971 \r\n\r\nHere is the output of me running `git bisect log`\r\n```\r\ngit bisect start\r\n\r\n# bad: [b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8] Fix bug in examples: double wrap into DataParallel during eval\r\ngit bisect bad b1ff0b2ae7d368b7db3a8a8472a29cc195d278d8\r\n# good: [e52d1258e010b88d3507a4f527c6201616c119ad] Fix RoBERTa/XLNet Pad Token in run_multiple_choice.py (#3631)\r\ngit bisect good e52d1258e010b88d3507a4f527c6201616c119ad\r\n```\r\n\r\nCan you point out how git bisect solves this issue?\r\nThanks!",
"@mansimov\r\n\r\nHi!\r\nThe 'git bisect' is a tool for determine when (which commit) contains bugs.\r\n\r\nThis issue is not still solved.\r\n\r\nOld version (before at Apr 6) is right.\r\nBut the commit 96ab75b8dd48a9384a74ba4307a4ebfb197343cd have a bug and raises this issue.\r\n\r\nAlso, the current master (latest source) have the same problem.\r\n\r\nBTW, How did you installed transformer?\r\nclone from GitHub and 'pip install --editable .' ?\r\nor 'pip install transformers==2.8.0' ?\r\n\r\nIf first one (and use newer version), it may have the bug.\r\n\r\n\r\nWe can three choices:\r\n\r\n1) Use older version (before above commit on Apr 6).\r\n or 'pip install transformers==2.8.0' is also OK, because it seems older than the above commit.\r\n2) Clone repo to local and, Hack the transformers code.\r\n3) Wait until somebody fix this bug, and release next version.\r\n\r\nBest,\r\n",
"FYI; my hack is like:\r\n\r\n```\r\n$ g log -p\r\n(snip)\r\nDate: Fri May 8 10:52:13 2020 +0900\r\n\r\n 不具合修正\r\n\r\ndiff --git a/src/transformers/tokenization_utils.py b/src/transformers/tokenization_utils.py\r\nindex a2d258a..91f345b 100644\r\n--- a/src/transformers/tokenization_utils.py\r\n+++ b/src/transformers/tokenization_utils.py\r\n@@ -498,6 +498,7 @@ class SpecialTokensMixin:\r\n if key in self.SPECIAL_TOKENS_ATTRIBUTES:\r\n if key == \"additional_special_tokens\":\r\n assert isinstance(value, (list, tuple)) and all(isinstance(t, str) for t in value)\r\n+ setattr(self, key, value)\r\n elif isinstance(value, AddedTokenFast):\r\n setattr(self, key, str(value))\r\n elif isinstance(value, str):\r\n\r\n```\r\n\r\nBe careful, the line number may be changed in the different revision.",
"Thanks @takahiro971 this solved my issue!",
"I think this issue is solved with #4353 "
] | 1,588 | 1,591 | 1,591 | CONTRIBUTOR | null | Hi!
Thanks for the awesome library!
I am trying to tokenize the following text using the T5Tokenizer `tokenizer = T5Tokenizer.from_pretrained("t5-base")`
`text = "The dog <extra_id_1> in the park"`
`tokenized_text = tokenizer.tokenize(text)`
`print (tokenized_text)`
And get the following output:
`['▁The', '▁dog', '▁', '<', 'extra', '_', 'i', 'd', '_', '1', '>', '▁in', '▁the', '▁park']`
The `<extra_id_1>` is tokenized incorrectly. Any idea on how to solve this issue?
I am using transformers version 2.8.0 and tokenizers version 0.7.0
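For illustration only (a hypothetical `split_on_special` helper, not the library's actual code): a sentinel such as `<extra_id_1>` survives tokenization only if a pretokenization pass splits it out as a single piece before the subword model runs; otherwise the subword model shreds it into `<`, `extra`, `_`, `i`, `d`, … pieces as shown above. A minimal pure-Python sketch of that pass:

```python
import re

def split_on_special(text, special_tokens):
    """Split text so that registered special tokens survive as single pieces.

    The non-special pieces would normally be handed on to the subword model;
    the special tokens themselves map directly to reserved ids.
    """
    pattern = "(" + "|".join(re.escape(t) for t in special_tokens) + ")"
    return [piece for piece in re.split(pattern, text) if piece]

pieces = split_on_special("The dog <extra_id_1> in the park", ["<extra_id_1>"])
print(pieces)  # ['The dog ', '<extra_id_1>', ' in the park']
```

If the token is never registered with the tokenizer (or, as discussed in the comments, registration silently fails), no such split happens and the raw string falls through to the subword vocabulary.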
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4021/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4020/comments | https://api.github.com/repos/huggingface/transformers/issues/4020/events | https://github.com/huggingface/transformers/pull/4020 | 607,798,309 | MDExOlB1bGxSZXF1ZXN0NDA5NzAxOTI1 | 4,020 | Pass existing tensorboard SummaryWriter to Trainer PR (#4019) | {
"login": "jaymody",
"id": 26451316,
"node_id": "MDQ6VXNlcjI2NDUxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/26451316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaymody",
"html_url": "https://github.com/jaymody",
"followers_url": "https://api.github.com/users/jaymody/followers",
"following_url": "https://api.github.com/users/jaymody/following{/other_user}",
"gists_url": "https://api.github.com/users/jaymody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaymody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaymody/subscriptions",
"organizations_url": "https://api.github.com/users/jaymody/orgs",
"repos_url": "https://api.github.com/users/jaymody/repos",
"events_url": "https://api.github.com/users/jaymody/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaymody/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=h1) Report\n> Merging [#4020](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4020 +/- ##\n=======================================\n Coverage 78.44% 78.45% \n=======================================\n Files 111 111 \n Lines 18518 18520 +2 \n=======================================\n+ Hits 14527 14529 +2 \n Misses 3991 3991 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.94% <66.66%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.76% <0.00%> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4020/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=footer). 
Last update [4e817ff...62dbe2d](https://codecov.io/gh/huggingface/transformers/pull/4020?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Another consideration is that the user might want to add extra details to the log after the training has finished. Right now, the end of the training loop closes the `tb_writer`, preventing any further logging. Not sure what a reasonable workaround for this is, maybe add a param for the train function that if set to True, will close `tb_writer` and if False, will keep it open (default: True).",
"Looks good, thanks"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | Implements feature request for issue #4019.
You should be able to pass in your own `SummaryWriter`s to `Trainer` via the `tb_writer` parameter of the `__init__` function:
```
tb_writer = SummaryWriter(log_dir="my_log_dir")
tb_writer.add_hparams(my_hparams_dict, my_metrics_dict)
trainer = Trainer(
model = model,
args = training_args,
train_dataset = train_dataset,
tb_writer = tb_writer
)
trainer.train()
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4020/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4020",
"html_url": "https://github.com/huggingface/transformers/pull/4020",
"diff_url": "https://github.com/huggingface/transformers/pull/4020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4020.patch",
"merged_at": 1588636705000
} |
https://api.github.com/repos/huggingface/transformers/issues/4019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4019/comments | https://api.github.com/repos/huggingface/transformers/issues/4019/events | https://github.com/huggingface/transformers/issues/4019 | 607,795,719 | MDU6SXNzdWU2MDc3OTU3MTk= | 4,019 | Pass existing tensorboard SummaryWriter to Trainer. | {
"login": "jaymody",
"id": 26451316,
"node_id": "MDQ6VXNlcjI2NDUxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/26451316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaymody",
"html_url": "https://github.com/jaymody",
"followers_url": "https://api.github.com/users/jaymody/followers",
"following_url": "https://api.github.com/users/jaymody/following{/other_user}",
"gists_url": "https://api.github.com/users/jaymody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaymody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaymody/subscriptions",
"organizations_url": "https://api.github.com/users/jaymody/orgs",
"repos_url": "https://api.github.com/users/jaymody/repos",
"events_url": "https://api.github.com/users/jaymody/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaymody/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | # Feature request
The [`Trainer`](https://github.com/huggingface/transformers/blob/41750a6cff55e401364568868d619747de3db037/src/transformers/trainer.py#L102) class should let us pass our own `tb_writer` (currently, a new `tb_writer` is created using the `args.logging_dir` argument if it exists).
## Motivation
This lets us have more detailed logs since we can add things like hyper-params, histograms, and whatnot if we choose to. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4019/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4019/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4018/comments | https://api.github.com/repos/huggingface/transformers/issues/4018/events | https://github.com/huggingface/transformers/issues/4018 | 607,703,328 | MDU6SXNzdWU2MDc3MDMzMjg= | 4,018 | String Format should be more pythonic | {
"login": "Liangtaiwan",
"id": 20909894,
"node_id": "MDQ6VXNlcjIwOTA5ODk0",
"avatar_url": "https://avatars.githubusercontent.com/u/20909894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Liangtaiwan",
"html_url": "https://github.com/Liangtaiwan",
"followers_url": "https://api.github.com/users/Liangtaiwan/followers",
"following_url": "https://api.github.com/users/Liangtaiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Liangtaiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Liangtaiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Liangtaiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Liangtaiwan/orgs",
"repos_url": "https://api.github.com/users/Liangtaiwan/repos",
"events_url": "https://api.github.com/users/Liangtaiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Liangtaiwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Until very recently, we were supporting Python >= 3.5, which is why we were not using f-strings. We're slowly but surely replacing them now!"
] | 1,588 | 1,588 | 1,588 | CONTRIBUTOR | null | Instead of using %s, %d, I think that we should use f-string or format. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4018/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4017/comments | https://api.github.com/repos/huggingface/transformers/issues/4017/events | https://github.com/huggingface/transformers/pull/4017 | 607,664,913 | MDExOlB1bGxSZXF1ZXN0NDA5NTkzODgw | 4,017 | TF version of the trainer | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for these reviews :) I'm currently running the trainer over all the GLUE tasks and few NER datasets. I still have few bugs solved that will be pushed soon, that will include your reviews.\r\n\r\nI do have some logs in Tensorboard yes, I can share them but I suggest you do some test on your side as well to be sure it is ok.",
"@julien-c As asked here few Tensorboards:\r\n\r\n- Sequence Classification example with [GLUE mrpc](https://tensorboard.dev/experiment/xJt1nogwRDO1RkebX6Qspg/#scalars) \r\n- Regression example with [GLUE sts-b](https://tensorboard.dev/experiment/szdpv9BzTVq8euJKBZpuAQ/#scalars)\r\n- Token Classification example with [Germeval](https://tensorboard.dev/experiment/JMNV2qqbRRamq9EbPLo3Pg/#scalars)\r\n\r\nThe final commit will arrive during the weekend 😄 ",
"@julien-c I'm now ok with this PR, the trainer has been tested over all the GLUE tasks + Germeval + CoNLL2002 and 2003. There is stil a non blocking minor bug for regression tasks (a raised exception that cannot be catched each time the graph is computed) that I do not really understand, so I will wrap it up into a small piece of code and open an issue in the TF repo. \r\n\r\nDo you want to do a last check?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=h1) Report\n> Merging [#4017](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1cdd2ad2afb73f6af185aafecb7dd7941a90c4d1&el=desc) will **decrease** coverage by `0.70%`.\n> The diff coverage is `34.36%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #4017 +/- ##\n==========================================\n- Coverage 78.85% 78.15% -0.71% \n==========================================\n Files 114 117 +3 \n Lines 18688 18938 +250 \n==========================================\n+ Hits 14737 14801 +64 \n- Misses 3951 4137 +186 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `18.26% <18.26%> (ø)` | |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `58.53% <58.53%> (ø)` | |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `79.24% <83.33%> (-0.04%)` | :arrow_down: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.08% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.87% <100.00%> (-2.03%)` | :arrow_down: |\n| 
[src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.61% <0.00%> (-0.17%)` | :arrow_down: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/4017/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=footer). Last update [1cdd2ad...456a024](https://codecov.io/gh/huggingface/transformers/pull/4017?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Awesome work @jplu! 💪💪💪"
] | 1,588 | 1,589 | 1,588 | CONTRIBUTOR | null | TensorFlow version of the PyTorch Trainer found [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py). It follows the same API, usage, and internal method names as much as possible.
An example can be found [here](https://github.com/jplu/transformers/blob/tf-trainer/examples/ner/run_tf_ner.py). The code contains some hacks, mostly for handling datasets; see, for example, the [`TFDataset`](https://github.com/jplu/transformers/blob/tf-trainer/examples/ner/utils_ner.py#L141) class, which tries to imitate the `Dataset` type from PyTorch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4017/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4017/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4017",
"html_url": "https://github.com/huggingface/transformers/pull/4017",
"diff_url": "https://github.com/huggingface/transformers/pull/4017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4017.patch",
"merged_at": 1588784212000
} |
https://api.github.com/repos/huggingface/transformers/issues/4016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4016/comments | https://api.github.com/repos/huggingface/transformers/issues/4016/events | https://github.com/huggingface/transformers/issues/4016 | 607,647,969 | MDU6SXNzdWU2MDc2NDc5Njk= | 4,016 | Text Generation generated <|endoftext|> | {
"login": "chuanhhoang",
"id": 19351207,
"node_id": "MDQ6VXNlcjE5MzUxMjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19351207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chuanhhoang",
"html_url": "https://github.com/chuanhhoang",
"followers_url": "https://api.github.com/users/chuanhhoang/followers",
"following_url": "https://api.github.com/users/chuanhhoang/following{/other_user}",
"gists_url": "https://api.github.com/users/chuanhhoang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chuanhhoang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chuanhhoang/subscriptions",
"organizations_url": "https://api.github.com/users/chuanhhoang/orgs",
"repos_url": "https://api.github.com/users/chuanhhoang/repos",
"events_url": "https://api.github.com/users/chuanhhoang/events{/privacy}",
"received_events_url": "https://api.github.com/users/chuanhhoang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @chuanhhoang, \r\n\r\nRunning the same command you did actually gives me quite decent results, which are different each time I run the script (`do_sample=True` in the script which means that the models samples every time differently). \r\n\r\nAn output I got was: \r\n\r\n```\r\nF=== GENERATED SEQUENCE 1 ===\r\n<|endoftext|>NOW REBOOTING ONCE EVERY 5 MINUTES\r\n\r\nBefore posting a rant, feel free to read the rest of this article\r\n\r\nIt's been on this strange path for a long time now: from an ostensibly well meaning project to an out of control piece of crap.\r\n\r\nWhen the whole thing started back in 2012, I started down the path I'm currently on with calling it Icarus.\r\n\r\nBeing a fan of NGE, I felt like I should do something with the FEMINIST archive I'd be putting online, so I took the ideas I was getting from NGE and combined them into a short movie.\r\n\r\nWhen I was finished, I needed a way to get it on to some sort of an internet scene to spread the word, so I wrote it up as a 30-minute film. That became Icarus, basically.\r\n\r\nIt's probably the easiest project I've done so far, the easiest to get some sales for, and the easiest to get friends involved with.\r\n\r\nThing is, with IGE and all its tropes, you can already see the script is being shown at cons and getting more and more ridiculous with each iteration. But like, two days after I made the video, I was doing 25-hour and 30-hour runs at Cineplex in Halifax that I was writing the screenplay for.\r\n\r\nThis isn't easy to do; we've all seen it, can recognize it, and just kind of go back to your normal routine for a little while. It's a marathon, but it's a marathon.\r\n\r\nFor those who've read the script, there's a lot of filler story \"I don't care about you\", \"You'll get what's coming to you\", \"Listen to this I don't care if you're scared\", and more exposition like that.\r\nIt's not pretty at all.\r\n\r\nOf course, theres always the fic awards, well, that guy. 
The one that gives the light at the end of the tunnel.\r\n\r\nIt's fine; the fan base on here and elsewhere know this, so the fans are going to let the judges decide.\r\n\r\nBut yeah, for now, I'm playing that game. Any other reviews are written and will be under embargo. I've removed most and all of my reviews because all I've done is label each review \"a movie review\". That\r\n```\r\n\r\nNote that we now also have generation pipeline, so it shoud be easy to play around with the parameters there. Also here is a blog that might be useful to see how you can influence the generation: \r\nhttps://huggingface.co/blog/how-to-generate",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,588 | 1,594 | 1,594 | NONE | null | The generated text includes a lot of <|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|><|endoftext|>
Model I am using: GPT2
Language I am using the model on: English
The problem arises when using:
`python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2-large --length=500 --num_return_sequences=1 `
I am using the latest transformers built from source from GitHub, with PyTorch.
Also, is there any way to generate different text each run? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4016/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4015/comments | https://api.github.com/repos/huggingface/transformers/issues/4015/events | https://github.com/huggingface/transformers/issues/4015 | 607,606,571 | MDU6SXNzdWU2MDc2MDY1NzE= | 4,015 | Fast Tokenizers: `batch_encode_plus` error | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue should be submitted in another repo.\r\nhttps://github.com/huggingface/tokenizers."
] | 1,587 | 1,592 | 1,592 | COLLABORATOR | null | Hi,
I just want to report the following error when using fast tokenizers, for better tracking.
The combination of:
* `batch_encode_plus` method
* `is_pretokenized=True` and
* `return_tensors=True`
returns the following error message:
```bash
~/.venvs/flair-2/lib/python3.8/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, is_pretokenized, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_lengths, **kwargs)
2501 stack = tf.stack(stack, axis=0)
2502 elif return_tensors == "pt":
-> 2503 stack = torch.stack(stack, dim=0)
2504 # elif not return_tensors and len(stack) == 1:
2505 # stack = stack[0]
RuntimeError: stack expects each tensor to be equal size, but got [34] at entry 0 and [8] at entry 1
```
The code for reproducing this error is:
```python
from transformers import BertTokenizerFast
model_name = "bert-base-cased"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
sentences = ["Schloß Nymphenburg is a nice castle in Munich".split(),
"Berlin and Munich are cool .".split()]
output = tokenizer.batch_encode_plus(batch_text_or_text_pairs=sentences,
is_pretokenized=True,
return_tensors="pt",
)
```
Notice: This bug only occurs when input sentences have different lengths and `return_tensors="pt"` is used!
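The `stack expects each tensor to be equal size` failure is exactly this unequal-length case: without padding, the two sentences encode to id lists of different lengths (34 and 8 in the traceback), which cannot be stacked into one tensor. A minimal pure-Python sketch (hypothetical `pad_batch` helper, no torch) of the padding that makes stacking possible:

```python
def pad_batch(sequences, pad_id=0):
    """Right-pad variable-length token-id sequences to a common length."""
    max_len = max(len(seq) for seq in sequences)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]

batch = [[101, 7, 8, 9, 102], [101, 5, 102]]
padded = pad_batch(batch)
print(padded)  # [[101, 7, 8, 9, 102], [101, 5, 102, 0, 0]]
```

Passing `pad_to_max_length=True` (visible in the `batch_encode_plus` signature of the traceback above) requests the equivalent padding from the tokenizer, which is one way to sidestep the crash while the bug stands.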
----
Versions that I've installed:
* Latest Transformers version from `master`, 4e817ff41885063e08bb3bcd63e5adfd835b9911
* Tokenizers in version *0.7.0*
* PyTorch *1.5.0* | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4015/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4015/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/4014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4014/comments | https://api.github.com/repos/huggingface/transformers/issues/4014/events | https://github.com/huggingface/transformers/pull/4014 | 607,581,832 | MDExOlB1bGxSZXF1ZXN0NDA5NTI2NjI5 | 4,014 | [Fix common tests on GPU] send model, ids to torch_device | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"~@patrickvonplaten \r\n any idea why tf tests dont run? \r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/5734/workflows/7986a344-3a5f-4f4f-a5e7-f8fd82ef7a2d/jobs/33381/steps\r\nseems like test_modeling_common.py only used by torch.~\r\n\r\nresolved: had a `PretrainedConfig` type hint 😲 ",
"Great, thanks for fixing this :-) "
] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4014/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/4014",
"html_url": "https://github.com/huggingface/transformers/pull/4014",
"diff_url": "https://github.com/huggingface/transformers/pull/4014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/4014.patch",
"merged_at": 1588168041000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/4013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/4013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/4013/comments | https://api.github.com/repos/huggingface/transformers/issues/4013/events | https://github.com/huggingface/transformers/issues/4013 | 607,579,786 | MDU6SXNzdWU2MDc1Nzk3ODY= | 4,013 | test_lm_head_model_random_*_generate fail on GPU | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,587 | 1,588 | 1,588 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/4013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/4013/timeline | completed | null | null |